May 14 17:58:49.099946 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] May 14 17:58:49.099989 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed May 14 16:42:23 -00 2025 May 14 17:58:49.100076 kernel: KASLR disabled due to lack of seed May 14 17:58:49.100093 kernel: efi: EFI v2.7 by EDK II May 14 17:58:49.100109 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a733a98 MEMRESERVE=0x78557598 May 14 17:58:49.100124 kernel: secureboot: Secure boot disabled May 14 17:58:49.100141 kernel: ACPI: Early table checksum verification disabled May 14 17:58:49.100156 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) May 14 17:58:49.100171 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) May 14 17:58:49.100186 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) May 14 17:58:49.100205 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) May 14 17:58:49.100220 kernel: ACPI: FACS 0x0000000078630000 000040 May 14 17:58:49.100235 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) May 14 17:58:49.100249 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) May 14 17:58:49.100267 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) May 14 17:58:49.100282 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) May 14 17:58:49.100302 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) May 14 17:58:49.100318 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) May 14 17:58:49.100334 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) May 14 17:58:49.100349 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 May 14 17:58:49.100365 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') May 14 17:58:49.100380 kernel: printk: legacy bootconsole [uart0] enabled May 14 17:58:49.100396 kernel: ACPI: Use ACPI SPCR as default console: Yes May 14 17:58:49.100412 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] May 14 17:58:49.100427 kernel: NODE_DATA(0) allocated [mem 0x4b584cdc0-0x4b5853fff] May 14 17:58:49.100443 kernel: Zone ranges: May 14 17:58:49.100462 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] May 14 17:58:49.100478 kernel: DMA32 empty May 14 17:58:49.100494 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] May 14 17:58:49.100509 kernel: Device empty May 14 17:58:49.100524 kernel: Movable zone start for each node May 14 17:58:49.100540 kernel: Early memory node ranges May 14 17:58:49.100555 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] May 14 17:58:49.100571 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] May 14 17:58:49.100586 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] May 14 17:58:49.100602 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] May 14 17:58:49.100617 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] May 14 17:58:49.100632 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] May 14 17:58:49.100652 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] May 14 17:58:49.100668 
kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] May 14 17:58:49.100690 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] May 14 17:58:49.100706 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges May 14 17:58:49.100723 kernel: psci: probing for conduit method from ACPI. May 14 17:58:49.100743 kernel: psci: PSCIv1.0 detected in firmware. May 14 17:58:49.100760 kernel: psci: Using standard PSCI v0.2 function IDs May 14 17:58:49.100776 kernel: psci: Trusted OS migration not required May 14 17:58:49.100792 kernel: psci: SMC Calling Convention v1.1 May 14 17:58:49.100808 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168 May 14 17:58:49.100825 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096 May 14 17:58:49.100841 kernel: pcpu-alloc: [0] 0 [0] 1 May 14 17:58:49.100858 kernel: Detected PIPT I-cache on CPU0 May 14 17:58:49.100874 kernel: CPU features: detected: GIC system register CPU interface May 14 17:58:49.100890 kernel: CPU features: detected: Spectre-v2 May 14 17:58:49.100906 kernel: CPU features: detected: Spectre-v3a May 14 17:58:49.100922 kernel: CPU features: detected: Spectre-BHB May 14 17:58:49.100942 kernel: CPU features: detected: ARM erratum 1742098 May 14 17:58:49.100959 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 May 14 17:58:49.100975 kernel: alternatives: applying boot alternatives May 14 17:58:49.100994 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=fb5d39925446c9958629410eadbe2d2aa0566996d55f4385bdd8a5ce4ad5f562 May 14 17:58:49.101033 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 17:58:49.101051 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 17:58:49.101068 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 17:58:49.101085 kernel: Fallback order for Node 0: 0 May 14 17:58:49.101101 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 May 14 17:58:49.101118 kernel: Policy zone: Normal May 14 17:58:49.101140 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 17:58:49.101156 kernel: software IO TLB: area num 2. May 14 17:58:49.101173 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) May 14 17:58:49.101189 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 14 17:58:49.101205 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 17:58:49.103241 kernel: rcu: RCU event tracing is enabled. May 14 17:58:49.103531 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 14 17:58:49.103553 kernel: Trampoline variant of Tasks RCU enabled. May 14 17:58:49.103570 kernel: Tracing variant of Tasks RCU enabled. May 14 17:58:49.103587 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 14 17:58:49.103603 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 14 17:58:49.103620 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
May 14 17:58:49.103647 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 17:58:49.103664 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 14 17:58:49.103680 kernel: GICv3: 96 SPIs implemented May 14 17:58:49.103696 kernel: GICv3: 0 Extended SPIs implemented May 14 17:58:49.103713 kernel: Root IRQ handler: gic_handle_irq May 14 17:58:49.103729 kernel: GICv3: GICv3 features: 16 PPIs May 14 17:58:49.103746 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 May 14 17:58:49.103762 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 May 14 17:58:49.103778 kernel: ITS [mem 0x10080000-0x1009ffff] May 14 17:58:49.103795 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1) May 14 17:58:49.103812 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1) May 14 17:58:49.103833 kernel: GICv3: using LPI property table @0x00000004000e0000 May 14 17:58:49.103849 kernel: ITS: Using hypervisor restricted LPI range [128] May 14 17:58:49.103866 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000 May 14 17:58:49.103882 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 14 17:58:49.103899 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). May 14 17:58:49.103916 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns May 14 17:58:49.103933 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns May 14 17:58:49.103949 kernel: Console: colour dummy device 80x25 May 14 17:58:49.103967 kernel: printk: legacy console [tty1] enabled May 14 17:58:49.103984 kernel: ACPI: Core revision 20240827 May 14 17:58:49.104389 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) May 14 17:58:49.104429 kernel: pid_max: default: 32768 minimum: 301 May 14 17:58:49.104447 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 14 17:58:49.104465 kernel: landlock: Up and running. May 14 17:58:49.104481 kernel: SELinux: Initializing. May 14 17:58:49.104498 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 17:58:49.104515 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 17:58:49.104532 kernel: rcu: Hierarchical SRCU implementation. May 14 17:58:49.104549 kernel: rcu: Max phase no-delay instances is 400. May 14 17:58:49.104566 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level May 14 17:58:49.104587 kernel: Remapping and enabling EFI services. May 14 17:58:49.104604 kernel: smp: Bringing up secondary CPUs ... May 14 17:58:49.104620 kernel: Detected PIPT I-cache on CPU1 May 14 17:58:49.104637 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 May 14 17:58:49.104654 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000 May 14 17:58:49.104670 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] May 14 17:58:49.104687 kernel: smp: Brought up 1 node, 2 CPUs May 14 17:58:49.104704 kernel: SMP: Total of 2 processors activated. 
May 14 17:58:49.104720 kernel: CPU: All CPU(s) started at EL1 May 14 17:58:49.104741 kernel: CPU features: detected: 32-bit EL0 Support May 14 17:58:49.104769 kernel: CPU features: detected: 32-bit EL1 Support May 14 17:58:49.104787 kernel: CPU features: detected: CRC32 instructions May 14 17:58:49.104808 kernel: alternatives: applying system-wide alternatives May 14 17:58:49.104826 kernel: Memory: 3813532K/4030464K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 212156K reserved, 0K cma-reserved) May 14 17:58:49.104844 kernel: devtmpfs: initialized May 14 17:58:49.104861 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 17:58:49.104879 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 14 17:58:49.104901 kernel: 17024 pages in range for non-PLT usage May 14 17:58:49.104919 kernel: 508544 pages in range for PLT usage May 14 17:58:49.104936 kernel: pinctrl core: initialized pinctrl subsystem May 14 17:58:49.104953 kernel: SMBIOS 3.0.0 present. May 14 17:58:49.104970 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 May 14 17:58:49.104988 kernel: DMI: Memory slots populated: 0/0 May 14 17:58:49.105029 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 17:58:49.105051 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 14 17:58:49.105069 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 14 17:58:49.105093 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 14 17:58:49.105111 kernel: audit: initializing netlink subsys (disabled) May 14 17:58:49.105128 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1 May 14 17:58:49.105145 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 17:58:49.105163 kernel: cpuidle: using governor menu May 14 17:58:49.105180 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
May 14 17:58:49.105198 kernel: ASID allocator initialised with 65536 entries May 14 17:58:49.105215 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 17:58:49.105237 kernel: Serial: AMBA PL011 UART driver May 14 17:58:49.105254 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 14 17:58:49.105272 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 14 17:58:49.105289 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 14 17:58:49.105307 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 14 17:58:49.105324 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 14 17:58:49.105342 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 14 17:58:49.105379 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 14 17:58:49.105397 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 14 17:58:49.105420 kernel: ACPI: Added _OSI(Module Device) May 14 17:58:49.105437 kernel: ACPI: Added _OSI(Processor Device) May 14 17:58:49.105455 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 17:58:49.105472 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 17:58:49.105490 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 17:58:49.105507 kernel: ACPI: Interpreter enabled May 14 17:58:49.105524 kernel: ACPI: Using GIC for interrupt routing May 14 17:58:49.105542 kernel: ACPI: MCFG table detected, 1 entries May 14 17:58:49.105559 kernel: ACPI: CPU0 has been hot-added May 14 17:58:49.105576 kernel: ACPI: CPU1 has been hot-added May 14 17:58:49.105599 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) May 14 17:58:49.105884 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 17:58:49.108280 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 14 17:58:49.108491 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 14 17:58:49.108774 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 May 14 17:58:49.108966 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] May 14 17:58:49.108990 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] May 14 17:58:49.109040 kernel: acpiphp: Slot [1] registered May 14 17:58:49.109059 kernel: acpiphp: Slot [2] registered May 14 17:58:49.109077 kernel: acpiphp: Slot [3] registered May 14 17:58:49.109094 kernel: acpiphp: Slot [4] registered May 14 17:58:49.109111 kernel: acpiphp: Slot [5] registered May 14 17:58:49.109129 kernel: acpiphp: Slot [6] registered May 14 17:58:49.109146 kernel: acpiphp: Slot [7] registered May 14 17:58:49.109163 kernel: acpiphp: Slot [8] registered May 14 17:58:49.109180 kernel: acpiphp: Slot [9] registered May 14 17:58:49.109203 kernel: acpiphp: Slot [10] registered May 14 17:58:49.109221 kernel: acpiphp: Slot [11] registered May 14 17:58:49.109238 kernel: acpiphp: Slot [12] registered May 14 17:58:49.109255 kernel: acpiphp: Slot [13] registered May 14 17:58:49.109272 kernel: acpiphp: Slot [14] registered May 14 17:58:49.109289 kernel: acpiphp: Slot [15] registered May 14 17:58:49.109307 kernel: acpiphp: Slot [16] registered May 14 17:58:49.109324 kernel: acpiphp: Slot [17] registered May 14 17:58:49.109341 kernel: acpiphp: Slot [18] registered May 14 17:58:49.109376 kernel: acpiphp: Slot [19] registered May 14 17:58:49.109400 kernel: acpiphp: Slot [20] registered May 14 
17:58:49.109417 kernel: acpiphp: Slot [21] registered May 14 17:58:49.109435 kernel: acpiphp: Slot [22] registered May 14 17:58:49.109452 kernel: acpiphp: Slot [23] registered May 14 17:58:49.109469 kernel: acpiphp: Slot [24] registered May 14 17:58:49.109487 kernel: acpiphp: Slot [25] registered May 14 17:58:49.109504 kernel: acpiphp: Slot [26] registered May 14 17:58:49.109521 kernel: acpiphp: Slot [27] registered May 14 17:58:49.109539 kernel: acpiphp: Slot [28] registered May 14 17:58:49.109560 kernel: acpiphp: Slot [29] registered May 14 17:58:49.109578 kernel: acpiphp: Slot [30] registered May 14 17:58:49.109595 kernel: acpiphp: Slot [31] registered May 14 17:58:49.109612 kernel: PCI host bridge to bus 0000:00 May 14 17:58:49.109851 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] May 14 17:58:49.110407 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 14 17:58:49.110911 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] May 14 17:58:49.112219 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] May 14 17:58:49.112453 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint May 14 17:58:49.112669 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint May 14 17:58:49.112873 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] May 14 17:58:49.113167 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint May 14 17:58:49.113412 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] May 14 17:58:49.113616 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold May 14 17:58:49.113845 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint May 14 17:58:49.116192 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] May 14 17:58:49.116445 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] May 14 17:58:49.116641 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] May 14 17:58:49.116833 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold May 14 17:58:49.117054 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned May 14 17:58:49.117250 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned May 14 17:58:49.117480 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned May 14 17:58:49.117695 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned May 14 17:58:49.117894 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned May 14 17:58:49.118131 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] May 14 17:58:49.118756 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 14 17:58:49.118927 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] May 14 17:58:49.118951 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 14 17:58:49.118978 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 14 17:58:49.118996 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 14 17:58:49.119801 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 14 17:58:49.120196 kernel: iommu: Default domain type: Translated May 14 17:58:49.120219 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 14 17:58:49.120238 kernel: efivars: Registered efivars operations May 14 17:58:49.120257 kernel: vgaarb: loaded May 14 17:58:49.120275 
kernel: clocksource: Switched to clocksource arch_sys_counter May 14 17:58:49.120293 kernel: VFS: Disk quotas dquot_6.6.0 May 14 17:58:49.120318 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 17:58:49.120336 kernel: pnp: PnP ACPI init May 14 17:58:49.120577 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved May 14 17:58:49.120606 kernel: pnp: PnP ACPI: found 1 devices May 14 17:58:49.120624 kernel: NET: Registered PF_INET protocol family May 14 17:58:49.120641 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 14 17:58:49.120659 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 14 17:58:49.120677 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 17:58:49.120700 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 14 17:58:49.120718 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 14 17:58:49.120736 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 14 17:58:49.120753 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 17:58:49.120771 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 17:58:49.120788 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 17:58:49.120806 kernel: PCI: CLS 0 bytes, default 64 May 14 17:58:49.120823 kernel: kvm [1]: HYP mode not available May 14 17:58:49.120840 kernel: Initialise system trusted keyrings May 14 17:58:49.120862 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 14 17:58:49.120879 kernel: Key type asymmetric registered May 14 17:58:49.120896 kernel: Asymmetric key parser 'x509' registered May 14 17:58:49.120914 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 14 17:58:49.120931 kernel: io scheduler mq-deadline registered May 14 17:58:49.120948 kernel: io scheduler kyber registered May 14 17:58:49.120966 kernel: io scheduler bfq registered May 14 17:58:49.121214 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered May 14 17:58:49.121241 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 14 17:58:49.121266 kernel: ACPI: button: Power Button [PWRB] May 14 17:58:49.121283 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 May 14 17:58:49.121301 kernel: ACPI: button: Sleep Button [SLPB] May 14 17:58:49.121318 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 17:58:49.121336 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 May 14 17:58:49.121547 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) May 14 17:58:49.121572 kernel: printk: legacy console [ttyS0] disabled May 14 17:58:49.121591 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A May 14 17:58:49.121615 kernel: printk: legacy console [ttyS0] enabled May 14 17:58:49.121633 kernel: printk: legacy bootconsole [uart0] disabled May 14 17:58:49.121651 kernel: thunder_xcv, ver 1.0 May 14 17:58:49.121668 kernel: thunder_bgx, ver 1.0 May 14 17:58:49.121686 kernel: nicpf, ver 1.0 May 14 17:58:49.121703 kernel: nicvf, ver 1.0 May 14 17:58:49.121894 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 14 17:58:49.122139 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T17:58:48 UTC (1747245528) May 14 17:58:49.122177 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 17:58:49.122204 
kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available May 14 17:58:49.122222 kernel: NET: Registered PF_INET6 protocol family May 14 17:58:49.122240 kernel: watchdog: NMI not fully supported May 14 17:58:49.122258 kernel: watchdog: Hard watchdog permanently disabled May 14 17:58:49.122275 kernel: Segment Routing with IPv6 May 14 17:58:49.122293 kernel: In-situ OAM (IOAM) with IPv6 May 14 17:58:49.122311 kernel: NET: Registered PF_PACKET protocol family May 14 17:58:49.122328 kernel: Key type dns_resolver registered May 14 17:58:49.122346 kernel: registered taskstats version 1 May 14 17:58:49.122368 kernel: Loading compiled-in X.509 certificates May 14 17:58:49.122385 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: c0c250ba312a1bb9bceb2432c486db6e5999df1a' May 14 17:58:49.122403 kernel: Demotion targets for Node 0: null May 14 17:58:49.122421 kernel: Key type .fscrypt registered May 14 17:58:49.122438 kernel: Key type fscrypt-provisioning registered May 14 17:58:49.122456 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 17:58:49.122473 kernel: ima: Allocated hash algorithm: sha1 May 14 17:58:49.122491 kernel: ima: No architecture policies found May 14 17:58:49.122509 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 14 17:58:49.122531 kernel: clk: Disabling unused clocks May 14 17:58:49.122548 kernel: PM: genpd: Disabling unused power domains May 14 17:58:49.122566 kernel: Warning: unable to open an initial console. May 14 17:58:49.122584 kernel: Freeing unused kernel memory: 39424K May 14 17:58:49.122602 kernel: Run /init as init process May 14 17:58:49.122620 kernel: with arguments: May 14 17:58:49.122639 kernel: /init May 14 17:58:49.122656 kernel: with environment: May 14 17:58:49.122672 kernel: HOME=/ May 14 17:58:49.122695 kernel: TERM=linux May 14 17:58:49.122712 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 17:58:49.122731 systemd[1]: Successfully made /usr/ read-only. May 14 17:58:49.122756 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 17:58:49.122776 systemd[1]: Detected virtualization amazon. May 14 17:58:49.122794 systemd[1]: Detected architecture arm64. May 14 17:58:49.122813 systemd[1]: Running in initrd. May 14 17:58:49.122836 systemd[1]: No hostname configured, using default hostname. May 14 17:58:49.122856 systemd[1]: Hostname set to . May 14 17:58:49.122875 systemd[1]: Initializing machine ID from VM UUID. May 14 17:58:49.122894 systemd[1]: Queued start job for default target initrd.target. May 14 17:58:49.122913 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 17:58:49.122932 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 17:58:49.122952 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 17:58:49.122972 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 17:58:49.122995 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
May 14 17:58:49.123043 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 14 17:58:49.123066 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 17:58:49.123086 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 17:58:49.123105 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 17:58:49.123124 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 17:58:49.123143 systemd[1]: Reached target paths.target - Path Units. May 14 17:58:49.123168 systemd[1]: Reached target slices.target - Slice Units. May 14 17:58:49.123188 systemd[1]: Reached target swap.target - Swaps. May 14 17:58:49.123207 systemd[1]: Reached target timers.target - Timer Units. May 14 17:58:49.123226 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 17:58:49.123245 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 17:58:49.123264 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 17:58:49.123283 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 17:58:49.123303 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 17:58:49.123322 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 17:58:49.123345 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 17:58:49.123364 systemd[1]: Reached target sockets.target - Socket Units. May 14 17:58:49.123383 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 17:58:49.123403 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 17:58:49.123422 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 17:58:49.123441 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 14 17:58:49.123461 systemd[1]: Starting systemd-fsck-usr.service... May 14 17:58:49.123480 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 17:58:49.123503 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 17:58:49.123522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 17:58:49.123542 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 17:58:49.123562 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 17:58:49.123581 systemd[1]: Finished systemd-fsck-usr.service. May 14 17:58:49.123605 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 17:58:49.123662 systemd-journald[257]: Collecting audit messages is disabled. May 14 17:58:49.123705 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 17:58:49.123729 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 17:58:49.123749 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
May 14 17:58:49.123781 kernel: Bridge firewalling registered May 14 17:58:49.123804 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 17:58:49.123824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 17:58:49.123845 systemd-journald[257]: Journal started May 14 17:58:49.123884 systemd-journald[257]: Runtime Journal (/run/log/journal/ec251a4efa33ccacbd06cf0b90a1431e) is 8M, max 75.3M, 67.3M free. May 14 17:58:49.068936 systemd-modules-load[259]: Inserted module 'overlay' May 14 17:58:49.128656 systemd[1]: Started systemd-journald.service - Journal Service. May 14 17:58:49.109906 systemd-modules-load[259]: Inserted module 'br_netfilter' May 14 17:58:49.140320 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 17:58:49.146640 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 17:58:49.153297 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 17:58:49.163953 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 17:58:49.189137 systemd-tmpfiles[281]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 14 17:58:49.199280 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 17:58:49.205498 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 17:58:49.217085 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 17:58:49.223079 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 17:58:49.233230 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 17:58:49.257649 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=fb5d39925446c9958629410eadbe2d2aa0566996d55f4385bdd8a5ce4ad5f562 May 14 17:58:49.328466 systemd-resolved[299]: Positive Trust Anchors: May 14 17:58:49.328493 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 17:58:49.328556 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 17:58:49.428040 kernel: SCSI subsystem initialized May 14 17:58:49.436037 kernel: Loading iSCSI transport class v2.0-870. 
May 14 17:58:49.448064 kernel: iscsi: registered transport (tcp) May 14 17:58:49.469387 kernel: iscsi: registered transport (qla4xxx) May 14 17:58:49.469462 kernel: QLogic iSCSI HBA Driver May 14 17:58:49.503171 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 17:58:49.534608 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 17:58:49.552675 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 17:58:49.613392 kernel: random: crng init done May 14 17:58:49.613366 systemd-resolved[299]: Defaulting to hostname 'linux'. May 14 17:58:49.616749 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 17:58:49.620948 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 17:58:49.641623 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 14 17:58:49.648113 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 17:58:49.735049 kernel: raid6: neonx8 gen() 6442 MB/s May 14 17:58:49.752036 kernel: raid6: neonx4 gen() 6436 MB/s May 14 17:58:49.769035 kernel: raid6: neonx2 gen() 5352 MB/s May 14 17:58:49.786035 kernel: raid6: neonx1 gen() 3924 MB/s May 14 17:58:49.803035 kernel: raid6: int64x8 gen() 3634 MB/s May 14 17:58:49.820035 kernel: raid6: int64x4 gen() 3682 MB/s May 14 17:58:49.837034 kernel: raid6: int64x2 gen() 3556 MB/s May 14 17:58:49.854859 kernel: raid6: int64x1 gen() 2765 MB/s May 14 17:58:49.854890 kernel: raid6: using algorithm neonx8 gen() 6442 MB/s May 14 17:58:49.872823 kernel: raid6: .... xor() 4762 MB/s, rmw enabled May 14 17:58:49.872856 kernel: raid6: using neon recovery algorithm May 14 17:58:49.880042 kernel: xor: measuring software checksum speed May 14 17:58:49.881035 kernel: 8regs : 11769 MB/sec May 14 17:58:49.883215 kernel: 32regs : 11970 MB/sec May 14 17:58:49.883246 kernel: arm64_neon : 9196 MB/sec May 14 17:58:49.883270 kernel: xor: using function: 32regs (11970 MB/sec) May 14 17:58:49.976064 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 17:58:49.988087 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 17:58:49.994457 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 17:58:50.037501 systemd-udevd[507]: Using default interface naming scheme 'v255'. May 14 17:58:50.049029 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 17:58:50.054252 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 17:58:50.102371 dracut-pre-trigger[512]: rd.md=0: removing MD RAID activation May 14 17:58:50.148089 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 17:58:50.154972 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 17:58:50.298250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 17:58:50.321059 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
May 14 17:58:50.481463 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 14 17:58:50.481547 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) May 14 17:58:50.501819 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 14 17:58:50.501853 kernel: nvme nvme0: pci function 0000:00:04.0 May 14 17:58:50.503154 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 14 17:58:50.503652 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 14 17:58:50.503872 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:fa:b5:65:70:1f May 14 17:58:50.504204 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 14 17:58:50.492510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 17:58:50.492747 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 17:58:50.511321 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 17:58:50.511369 kernel: GPT:9289727 != 16777215 May 14 17:58:50.495277 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 17:58:50.515351 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 17:58:50.503157 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 17:58:50.519342 kernel: GPT:9289727 != 16777215 May 14 17:58:50.519374 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 17:58:50.520630 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 14 17:58:50.519824 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 17:58:50.530218 (udev-worker)[554]: Network interface NamePolicy= disabled on kernel command line. May 14 17:58:50.556575 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 17:58:50.579072 kernel: nvme nvme0: using unchecked data buffer May 14 17:58:50.738361 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 14 17:58:50.785837 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 14 17:58:50.791385 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 17:58:50.813147 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 14 17:58:50.818522 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 14 17:58:50.843398 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 14 17:58:50.848560 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 17:58:50.851070 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 17:58:50.851663 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 17:58:50.858083 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 17:58:50.865404 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 17:58:50.897085 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 17:58:50.904419 disk-uuid[690]: Primary Header is updated. May 14 17:58:50.904419 disk-uuid[690]: Secondary Entries is updated. May 14 17:58:50.904419 disk-uuid[690]: Secondary Header is updated. 
May 14 17:58:50.916055 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 14 17:58:51.934041 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 14 17:58:51.934830 disk-uuid[693]: The operation has completed successfully. May 14 17:58:52.142719 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 17:58:52.142939 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 17:58:52.205673 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 17:58:52.223790 sh[954]: Success May 14 17:58:52.245608 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 17:58:52.245681 kernel: device-mapper: uevent: version 1.0.3 May 14 17:58:52.247510 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 14 17:58:52.260039 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 14 17:58:52.369725 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 17:58:52.383170 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 17:58:52.389582 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 17:58:52.434196 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 14 17:58:52.434273 kernel: BTRFS: device fsid e21bbf34-4c71-4257-bd6f-908a2b81e5ab devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (977) May 14 17:58:52.439156 kernel: BTRFS info (device dm-0): first mount of filesystem e21bbf34-4c71-4257-bd6f-908a2b81e5ab May 14 17:58:52.439207 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 14 17:58:52.440354 kernel: BTRFS info (device dm-0): using free-space-tree May 14 17:58:52.613664 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 17:58:52.617365 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 14 17:58:52.630742 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 17:58:52.632197 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 17:58:52.639625 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 17:58:52.694041 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 (259:5) scanned by mount (1011) May 14 17:58:52.698371 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 17:58:52.698438 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 14 17:58:52.698464 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 14 17:58:52.722056 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 17:58:52.724310 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 17:58:52.730049 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 17:58:52.805653 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 17:58:52.813232 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 14 17:58:52.888488 systemd-networkd[1146]: lo: Link UP May 14 17:58:52.888513 systemd-networkd[1146]: lo: Gained carrier May 14 17:58:52.892273 systemd-networkd[1146]: Enumeration completed May 14 17:58:52.894250 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 17:58:52.894285 systemd-networkd[1146]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 17:58:52.894292 systemd-networkd[1146]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 17:58:52.896930 systemd[1]: Reached target network.target - Network. May 14 17:58:52.911581 systemd-networkd[1146]: eth0: Link UP May 14 17:58:52.911599 systemd-networkd[1146]: eth0: Gained carrier May 14 17:58:52.911620 systemd-networkd[1146]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 17:58:52.930074 systemd-networkd[1146]: eth0: DHCPv4 address 172.31.31.64/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 14 17:58:53.174836 ignition[1080]: Ignition 2.21.0 May 14 17:58:53.174871 ignition[1080]: Stage: fetch-offline May 14 17:58:53.175313 ignition[1080]: no configs at "/usr/lib/ignition/base.d" May 14 17:58:53.175337 ignition[1080]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 14 17:58:53.181874 ignition[1080]: Ignition finished successfully May 14 17:58:53.183945 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 17:58:53.190408 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 14 17:58:53.230744 ignition[1157]: Ignition 2.21.0 May 14 17:58:53.231075 ignition[1157]: Stage: fetch May 14 17:58:53.231547 ignition[1157]: no configs at "/usr/lib/ignition/base.d" May 14 17:58:53.231570 ignition[1157]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 14 17:58:53.231732 ignition[1157]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 14 17:58:53.242535 ignition[1157]: PUT result: OK May 14 17:58:53.245276 ignition[1157]: parsed url from cmdline: "" May 14 17:58:53.245423 ignition[1157]: no config URL provided May 14 17:58:53.245443 ignition[1157]: reading system config file "/usr/lib/ignition/user.ign" May 14 17:58:53.245468 ignition[1157]: no config at "/usr/lib/ignition/user.ign" May 14 17:58:53.245500 ignition[1157]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 14 17:58:53.249443 ignition[1157]: PUT result: OK May 14 17:58:53.249521 ignition[1157]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 14 17:58:53.253466 ignition[1157]: GET result: OK May 14 17:58:53.255430 ignition[1157]: parsing config with SHA512: e998c1784b75ec8a2152f66f4f69fb67cec713901a8c92136ad39bcdede4638d5569ea89dccf04dbf90b488026001157a27ed8ed5e98f221f47f0cff7910160a May 14 17:58:53.269657 unknown[1157]: fetched base config from "system" May 14 17:58:53.270549 unknown[1157]: fetched base config from "system" May 14 17:58:53.271513 ignition[1157]: fetch: fetch complete May 14 17:58:53.270567 unknown[1157]: fetched user config from "aws" May 14 17:58:53.271526 ignition[1157]: fetch: fetch passed May 14 17:58:53.271614 ignition[1157]: Ignition finished successfully May 14 17:58:53.282882 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 14 17:58:53.289267 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 14 17:58:53.334892 ignition[1163]: Ignition 2.21.0 May 14 17:58:53.334925 ignition[1163]: Stage: kargs May 14 17:58:53.336071 ignition[1163]: no configs at "/usr/lib/ignition/base.d" May 14 17:58:53.336099 ignition[1163]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 14 17:58:53.336247 ignition[1163]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 14 17:58:53.346138 ignition[1163]: PUT result: OK May 14 17:58:53.356694 ignition[1163]: kargs: kargs passed May 14 17:58:53.358257 ignition[1163]: Ignition finished successfully May 14 17:58:53.361934 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 17:58:53.368139 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 17:58:53.404440 ignition[1169]: Ignition 2.21.0 May 14 17:58:53.404469 ignition[1169]: Stage: disks May 14 17:58:53.405180 ignition[1169]: no configs at "/usr/lib/ignition/base.d" May 14 17:58:53.405205 ignition[1169]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 14 17:58:53.405434 ignition[1169]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 14 17:58:53.415355 ignition[1169]: PUT result: OK May 14 17:58:53.419665 ignition[1169]: disks: disks passed May 14 17:58:53.419952 ignition[1169]: Ignition finished successfully May 14 17:58:53.425083 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 17:58:53.430675 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 17:58:53.434949 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 17:58:53.437974 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 17:58:53.444156 systemd[1]: Reached target sysinit.target - System Initialization. May 14 17:58:53.446515 systemd[1]: Reached target basic.target - Basic System. May 14 17:58:53.453974 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 17:58:53.513673 systemd-fsck[1178]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 14 17:58:53.519777 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 17:58:53.527287 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 17:58:53.647042 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a9c1ea72-ce96-48c1-8c16-d7102e51beed r/w with ordered data mode. Quota mode: none. May 14 17:58:53.648728 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 17:58:53.652375 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 17:58:53.658332 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 17:58:53.673165 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 17:58:53.677492 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 14 17:58:53.679408 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 17:58:53.679460 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
May 14 17:58:53.703031 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 (259:5) scanned by mount (1197) May 14 17:58:53.707308 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 17:58:53.707360 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 14 17:58:53.710348 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 14 17:58:53.713799 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 17:58:53.720489 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 17:58:53.733253 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 17:58:54.171056 initrd-setup-root[1221]: cut: /sysroot/etc/passwd: No such file or directory May 14 17:58:54.201955 initrd-setup-root[1228]: cut: /sysroot/etc/group: No such file or directory May 14 17:58:54.210437 initrd-setup-root[1235]: cut: /sysroot/etc/shadow: No such file or directory May 14 17:58:54.218039 initrd-setup-root[1242]: cut: /sysroot/etc/gshadow: No such file or directory May 14 17:58:54.250193 systemd-networkd[1146]: eth0: Gained IPv6LL May 14 17:58:54.618077 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 17:58:54.624105 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 17:58:54.629391 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 17:58:54.656241 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 17:58:54.660155 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 17:58:54.692890 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 17:58:54.709034 ignition[1309]: INFO : Ignition 2.21.0 May 14 17:58:54.711659 ignition[1309]: INFO : Stage: mount May 14 17:58:54.711659 ignition[1309]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 17:58:54.711659 ignition[1309]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 14 17:58:54.711659 ignition[1309]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 14 17:58:54.728669 ignition[1309]: INFO : PUT result: OK May 14 17:58:54.733226 ignition[1309]: INFO : mount: mount passed May 14 17:58:54.736114 ignition[1309]: INFO : Ignition finished successfully May 14 17:58:54.738501 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 17:58:54.744567 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 17:58:54.775300 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 17:58:54.813028 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 (259:5) scanned by mount (1322) May 14 17:58:54.816928 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6d47052f-e956-47a0-903a-525ae08a05f2 May 14 17:58:54.816975 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 14 17:58:54.818127 kernel: BTRFS info (device nvme0n1p6): using free-space-tree May 14 17:58:54.840499 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 17:58:54.881824 ignition[1339]: INFO : Ignition 2.21.0 May 14 17:58:54.881824 ignition[1339]: INFO : Stage: files May 14 17:58:54.885175 ignition[1339]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 17:58:54.885175 ignition[1339]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 14 17:58:54.885175 ignition[1339]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 14 17:58:54.904669 ignition[1339]: INFO : PUT result: OK May 14 17:58:54.909633 ignition[1339]: DEBUG : files: compiled without relabeling support, skipping May 14 17:58:54.912428 ignition[1339]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 17:58:54.912428 ignition[1339]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 17:58:54.920666 ignition[1339]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 17:58:54.923583 ignition[1339]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 17:58:54.926416 unknown[1339]: wrote ssh authorized keys file for user: core May 14 17:58:54.928558 ignition[1339]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 17:58:54.941951 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 17:58:54.945654 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 14 17:58:55.026606 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 17:58:55.173086 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 14 17:58:55.173086 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 14 17:58:55.180595 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 14 17:58:55.180595 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 17:58:55.180595 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 17:58:55.180595 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 17:58:55.180595 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 17:58:55.180595 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 17:58:55.180595 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 17:58:55.203431 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 17:58:55.203431 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 17:58:55.203431 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 17:58:55.215700 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 17:58:55.215700 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 17:58:55.224563 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 14 17:58:55.679731 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 14 17:58:56.061234 ignition[1339]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 14 17:58:56.061234 ignition[1339]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 14 17:58:56.068168 ignition[1339]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 17:58:56.074680 ignition[1339]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 17:58:56.074680 ignition[1339]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 14 17:58:56.074680 ignition[1339]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 14 17:58:56.074680 ignition[1339]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 14 17:58:56.086343 ignition[1339]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 17:58:56.086343 ignition[1339]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 17:58:56.086343 ignition[1339]: INFO : files: files passed May 14 17:58:56.086343 ignition[1339]: INFO : Ignition finished successfully May 14 17:58:56.098172 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 17:58:56.104425 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 17:58:56.106879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 17:58:56.137873 initrd-setup-root-after-ignition[1366]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 17:58:56.137873 initrd-setup-root-after-ignition[1366]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 17:58:56.145991 initrd-setup-root-after-ignition[1371]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 17:58:56.150910 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 17:58:56.153171 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 17:58:56.156233 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 17:58:56.164508 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 17:58:56.169300 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
May 14 17:58:56.255301 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 17:58:56.255875 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 17:58:56.263925 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 17:58:56.267319 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 17:58:56.272082 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 17:58:56.275499 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 17:58:56.322951 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 17:58:56.330251 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 17:58:56.368115 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 17:58:56.373144 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 17:58:56.376119 systemd[1]: Stopped target timers.target - Timer Units. May 14 17:58:56.382316 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 17:58:56.382987 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 17:58:56.389538 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 17:58:56.392637 systemd[1]: Stopped target basic.target - Basic System. May 14 17:58:56.398024 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 17:58:56.400575 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 17:58:56.406983 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 17:58:56.409687 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 14 17:58:56.413516 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 17:58:56.419323 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 17:58:56.421889 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 17:58:56.427997 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 17:58:56.432065 systemd[1]: Stopped target swap.target - Swaps. May 14 17:58:56.434446 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 17:58:56.434677 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 17:58:56.441451 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 17:58:56.446469 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 17:58:56.449178 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 17:58:56.452815 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 17:58:56.459830 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 17:58:56.460259 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 17:58:56.466554 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 17:58:56.466966 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 17:58:56.474024 systemd[1]: ignition-files.service: Deactivated successfully. May 14 17:58:56.474234 systemd[1]: Stopped ignition-files.service - Ignition (files). 
May 14 17:58:56.480295 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 17:58:56.489253 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 17:58:56.489824 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 17:58:56.509246 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 17:58:56.513139 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 17:58:56.516669 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 17:58:56.522083 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 17:58:56.524077 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 17:58:56.536830 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 17:58:56.540158 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 17:58:56.563593 ignition[1392]: INFO : Ignition 2.21.0 May 14 17:58:56.566167 ignition[1392]: INFO : Stage: umount May 14 17:58:56.566167 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 17:58:56.566167 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 14 17:58:56.572358 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 14 17:58:56.572358 ignition[1392]: INFO : PUT result: OK May 14 17:58:56.579621 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 17:58:56.585864 ignition[1392]: INFO : umount: umount passed May 14 17:58:56.588183 ignition[1392]: INFO : Ignition finished successfully May 14 17:58:56.592615 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 17:58:56.593169 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 17:58:56.599052 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 17:58:56.599149 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 17:58:56.601697 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 17:58:56.601783 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 17:58:56.605423 systemd[1]: ignition-fetch.service: Deactivated successfully. May 14 17:58:56.605509 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 14 17:58:56.608434 systemd[1]: Stopped target network.target - Network. May 14 17:58:56.610597 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 17:58:56.610680 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 17:58:56.614282 systemd[1]: Stopped target paths.target - Path Units. May 14 17:58:56.616884 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 17:58:56.617846 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 17:58:56.620492 systemd[1]: Stopped target slices.target - Slice Units. May 14 17:58:56.622434 systemd[1]: Stopped target sockets.target - Socket Units. May 14 17:58:56.627782 systemd[1]: iscsid.socket: Deactivated successfully. May 14 17:58:56.627859 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 17:58:56.629829 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 17:58:56.629894 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 17:58:56.632357 systemd[1]: ignition-setup.service: Deactivated successfully. 
May 14 17:58:56.632467 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 17:58:56.635708 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 17:58:56.635791 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 17:58:56.639490 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 17:58:56.645922 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 17:58:56.680695 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 17:58:56.680914 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 17:58:56.717412 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 17:58:56.720640 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 14 17:58:56.724288 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 17:58:56.724366 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 17:58:56.746499 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 17:58:56.754774 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 17:58:56.754908 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 17:58:56.763307 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 17:58:56.766059 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 17:58:56.769106 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 17:58:56.783556 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 17:58:56.786943 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 17:58:56.787193 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 17:58:56.794217 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 17:58:56.794322 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 17:58:56.801446 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 17:58:56.801559 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 17:58:56.822812 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 17:58:56.839412 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 17:58:56.842134 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 17:58:56.842419 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 17:58:56.857727 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 17:58:56.857853 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 17:58:56.873259 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 17:58:56.873939 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 17:58:56.876034 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 17:58:56.876133 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 17:58:56.878423 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 17:58:56.878514 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 17:58:56.880660 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 14 17:58:56.880768 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 17:58:56.895263 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 17:58:56.902270 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 14 17:58:56.903833 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 14 17:58:56.910689 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 17:58:56.910797 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 17:58:56.917542 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 17:58:56.917648 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 17:58:56.934843 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 14 17:58:56.934988 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 17:58:56.935159 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 17:58:56.935980 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 17:58:56.936261 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 17:58:56.953256 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 17:58:56.953453 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 17:58:56.962034 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 17:58:56.962205 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 17:58:56.968902 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 17:58:56.978143 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 17:58:56.978309 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 17:58:56.980173 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 17:58:57.025439 systemd[1]: Switching root. May 14 17:58:57.058489 systemd-journald[257]: Journal stopped May 14 17:58:59.451315 systemd-journald[257]: Received SIGTERM from PID 1 (systemd). May 14 17:58:59.451445 kernel: SELinux: policy capability network_peer_controls=1 May 14 17:58:59.451488 kernel: SELinux: policy capability open_perms=1 May 14 17:58:59.451519 kernel: SELinux: policy capability extended_socket_class=1 May 14 17:58:59.451548 kernel: SELinux: policy capability always_check_network=0 May 14 17:58:59.451578 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 17:58:59.451606 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 17:58:59.451635 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 17:58:59.451670 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 17:58:59.451708 kernel: SELinux: policy capability userspace_initial_context=0 May 14 17:58:59.451736 kernel: audit: type=1403 audit(1747245537.393:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 17:58:59.451773 systemd[1]: Successfully loaded SELinux policy in 93.568ms. May 14 17:58:59.451822 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.782ms. 
May 14 17:58:59.451857 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 17:58:59.451889 systemd[1]: Detected virtualization amazon. May 14 17:58:59.451919 systemd[1]: Detected architecture arm64. May 14 17:58:59.451949 systemd[1]: Detected first boot. May 14 17:58:59.451982 systemd[1]: Initializing machine ID from VM UUID. May 14 17:58:59.459107 kernel: NET: Registered PF_VSOCK protocol family May 14 17:58:59.459187 zram_generator::config[1437]: No configuration found. May 14 17:58:59.459224 systemd[1]: Populated /etc with preset unit settings. May 14 17:58:59.459259 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 17:58:59.459292 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 17:58:59.459322 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 17:58:59.459356 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 17:58:59.459408 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 17:58:59.459440 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 17:58:59.459469 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 17:58:59.459508 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 17:58:59.459548 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 17:58:59.459577 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 17:58:59.459607 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 17:58:59.459636 systemd[1]: Created slice user.slice - User and Session Slice. May 14 17:58:59.459665 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 17:58:59.459699 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 17:58:59.459730 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 17:58:59.459767 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 17:58:59.459802 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 17:58:59.459837 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 17:58:59.459870 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 17:58:59.459904 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 17:58:59.459935 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 17:58:59.459968 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 17:58:59.459998 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 17:58:59.463118 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 17:58:59.463163 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
May 14 17:58:59.463196 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 17:58:59.463228 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 17:58:59.463260 systemd[1]: Reached target slices.target - Slice Units. May 14 17:58:59.463290 systemd[1]: Reached target swap.target - Swaps. May 14 17:58:59.463319 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 17:58:59.463356 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 17:58:59.463387 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 17:58:59.463417 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 17:58:59.463445 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 17:58:59.463476 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 17:58:59.463506 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 17:58:59.463537 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 17:58:59.463567 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 17:58:59.463598 systemd[1]: Mounting media.mount - External Media Directory... May 14 17:58:59.463633 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 17:58:59.463664 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 17:58:59.463694 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 17:58:59.463723 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 17:58:59.463753 systemd[1]: Reached target machines.target - Containers. May 14 17:58:59.463785 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 17:58:59.463814 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 17:58:59.463842 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 17:58:59.463874 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 17:58:59.463903 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 17:58:59.463933 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 17:58:59.463965 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 17:58:59.463998 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 17:58:59.467091 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 17:58:59.467127 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 17:58:59.467156 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 17:58:59.467193 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 17:58:59.467229 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 17:58:59.467257 systemd[1]: Stopped systemd-fsck-usr.service. 
May 14 17:58:59.467290 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 17:58:59.467321 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 17:58:59.467349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 17:58:59.467378 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 17:58:59.467413 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 17:58:59.467445 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 17:58:59.467476 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 17:58:59.467533 systemd[1]: verity-setup.service: Deactivated successfully. May 14 17:58:59.467566 systemd[1]: Stopped verity-setup.service. May 14 17:58:59.467597 kernel: loop: module loaded May 14 17:58:59.467625 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 17:58:59.467659 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 17:58:59.467694 systemd[1]: Mounted media.mount - External Media Directory. May 14 17:58:59.467726 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 17:58:59.467755 kernel: fuse: init (API version 7.41) May 14 17:58:59.467782 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 17:58:59.467810 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 17:58:59.467842 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 17:58:59.467871 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 17:58:59.467900 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 17:58:59.467928 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 17:58:59.467957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 17:58:59.467986 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 17:58:59.468040 kernel: ACPI: bus type drm_connector registered May 14 17:58:59.468072 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 17:58:59.468107 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 17:58:59.468142 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 17:58:59.470050 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 17:58:59.470093 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 17:58:59.470123 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 17:58:59.470163 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 17:58:59.470192 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 17:58:59.470224 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 17:58:59.470255 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 17:58:59.470287 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 17:58:59.470316 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
May 14 17:58:59.470349 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 17:58:59.470381 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 17:58:59.470420 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 17:58:59.470454 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 17:58:59.470492 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 17:58:59.470521 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 17:58:59.470600 systemd-journald[1516]: Collecting audit messages is disabled. May 14 17:58:59.470660 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 17:58:59.470691 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 17:58:59.470720 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 17:58:59.470758 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 17:58:59.470786 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 17:58:59.470817 systemd-journald[1516]: Journal started May 14 17:58:59.470865 systemd-journald[1516]: Runtime Journal (/run/log/journal/ec251a4efa33ccacbd06cf0b90a1431e) is 8M, max 75.3M, 67.3M free. May 14 17:58:58.769131 systemd[1]: Queued start job for default target multi-user.target. May 14 17:58:58.784391 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 14 17:58:58.785223 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 17:58:59.487104 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 17:58:59.487182 systemd[1]: Started systemd-journald.service - Journal Service. May 14 17:58:59.491076 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 17:58:59.496259 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 17:58:59.500809 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 17:58:59.503660 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 17:58:59.521580 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 17:58:59.560165 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 17:58:59.570385 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 17:58:59.580158 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 17:58:59.591342 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 17:58:59.625066 kernel: loop0: detected capacity change from 0 to 107312 May 14 17:58:59.665184 systemd-journald[1516]: Time spent on flushing to /var/log/journal/ec251a4efa33ccacbd06cf0b90a1431e is 44.725ms for 930 entries. May 14 17:58:59.665184 systemd-journald[1516]: System Journal (/var/log/journal/ec251a4efa33ccacbd06cf0b90a1431e) is 8M, max 195.6M, 187.6M free. May 14 17:58:59.727419 systemd-journald[1516]: Received client request to flush runtime journal. 
May 14 17:58:59.667087 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 17:58:59.676254 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 17:58:59.718091 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 17:58:59.729710 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 17:58:59.770665 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 17:58:59.774464 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 17:58:59.781524 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 17:58:59.788822 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 17:58:59.798052 kernel: loop1: detected capacity change from 0 to 56568 May 14 17:58:59.847045 kernel: loop2: detected capacity change from 0 to 189592 May 14 17:58:59.847999 systemd-tmpfiles[1588]: ACLs are not supported, ignoring. May 14 17:58:59.848083 systemd-tmpfiles[1588]: ACLs are not supported, ignoring. May 14 17:58:59.862242 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 17:58:59.978413 kernel: loop3: detected capacity change from 0 to 138376 May 14 17:59:00.101399 kernel: loop4: detected capacity change from 0 to 107312 May 14 17:59:00.114054 kernel: loop5: detected capacity change from 0 to 56568 May 14 17:59:00.135042 kernel: loop6: detected capacity change from 0 to 189592 May 14 17:59:00.171043 kernel: loop7: detected capacity change from 0 to 138376 May 14 17:59:00.185389 (sd-merge)[1594]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. May 14 17:59:00.186433 (sd-merge)[1594]: Merged extensions into '/usr'. May 14 17:59:00.195289 systemd[1]: Reload requested from client PID 1552 ('systemd-sysext') (unit systemd-sysext.service)... May 14 17:59:00.195319 systemd[1]: Reloading... May 14 17:59:00.375049 zram_generator::config[1618]: No configuration found. May 14 17:59:00.621881 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 17:59:00.830772 systemd[1]: Reloading finished in 633 ms. May 14 17:59:00.852744 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 17:59:00.855851 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 17:59:00.871370 systemd[1]: Starting ensure-sysext.service... May 14 17:59:00.876339 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 17:59:00.882830 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 17:59:00.920523 systemd[1]: Reload requested from client PID 1672 ('systemctl') (unit ensure-sysext.service)... May 14 17:59:00.920693 systemd[1]: Reloading... May 14 17:59:00.960176 systemd-udevd[1674]: Using default interface naming scheme 'v255'. May 14 17:59:00.985118 systemd-tmpfiles[1673]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 14 17:59:00.985224 systemd-tmpfiles[1673]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
May 14 17:59:00.985843 systemd-tmpfiles[1673]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 17:59:00.987126 systemd-tmpfiles[1673]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 17:59:00.989999 systemd-tmpfiles[1673]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 17:59:00.990625 systemd-tmpfiles[1673]: ACLs are not supported, ignoring. May 14 17:59:00.990774 systemd-tmpfiles[1673]: ACLs are not supported, ignoring. May 14 17:59:01.003345 systemd-tmpfiles[1673]: Detected autofs mount point /boot during canonicalization of boot. May 14 17:59:01.003365 systemd-tmpfiles[1673]: Skipping /boot May 14 17:59:01.043433 systemd-tmpfiles[1673]: Detected autofs mount point /boot during canonicalization of boot. May 14 17:59:01.043600 systemd-tmpfiles[1673]: Skipping /boot May 14 17:59:01.148050 zram_generator::config[1705]: No configuration found. May 14 17:59:01.299296 ldconfig[1548]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 17:59:01.458564 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 17:59:01.473114 (udev-worker)[1750]: Network interface NamePolicy= disabled on kernel command line. May 14 17:59:01.755401 systemd[1]: Reloading finished in 831 ms. May 14 17:59:01.843508 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 17:59:01.846789 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 17:59:01.870117 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 17:59:01.896919 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 14 17:59:01.946937 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 17:59:01.954353 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 17:59:01.956880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 17:59:01.959165 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 17:59:01.965236 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 17:59:01.969569 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 17:59:01.972378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 17:59:01.972634 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 17:59:01.977540 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 17:59:01.984453 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 17:59:01.991953 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 17:59:01.997344 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
May 14 17:59:02.007829 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 17:59:02.008237 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 17:59:02.008447 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 17:59:02.017563 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 17:59:02.023579 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 17:59:02.025636 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 17:59:02.025876 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 17:59:02.026246 systemd[1]: Reached target time-set.target - System Time Set. May 14 17:59:02.061024 systemd[1]: Finished ensure-sysext.service. May 14 17:59:02.087420 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 17:59:02.104538 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 17:59:02.110376 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 17:59:02.114353 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 17:59:02.121909 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 17:59:02.123795 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 17:59:02.136590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 17:59:02.138181 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 17:59:02.146909 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 17:59:02.161558 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 17:59:02.170907 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 17:59:02.172212 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 17:59:02.214568 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 17:59:02.237154 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 17:59:02.253544 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 17:59:02.285995 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 14 17:59:02.290563 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 17:59:02.359082 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 17:59:02.362138 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
May 14 17:59:02.366321 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 17:59:02.366696 augenrules[1933]: No rules May 14 17:59:02.371114 systemd[1]: audit-rules.service: Deactivated successfully. May 14 17:59:02.371814 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 17:59:02.390326 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 17:59:02.491094 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 17:59:02.493721 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 17:59:02.649375 systemd-networkd[1893]: lo: Link UP May 14 17:59:02.649391 systemd-networkd[1893]: lo: Gained carrier May 14 17:59:02.652972 systemd-networkd[1893]: Enumeration completed May 14 17:59:02.653337 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 17:59:02.654607 systemd-resolved[1894]: Positive Trust Anchors: May 14 17:59:02.654629 systemd-resolved[1894]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 17:59:02.654692 systemd-resolved[1894]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 17:59:02.658421 systemd-networkd[1893]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 17:59:02.658448 systemd-networkd[1893]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 17:59:02.659638 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 17:59:02.665573 systemd-networkd[1893]: eth0: Link UP May 14 17:59:02.665862 systemd-networkd[1893]: eth0: Gained carrier May 14 17:59:02.665912 systemd-networkd[1893]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 17:59:02.666168 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 17:59:02.670053 systemd-resolved[1894]: Defaulting to hostname 'linux'. May 14 17:59:02.677569 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 17:59:02.680267 systemd[1]: Reached target network.target - Network. May 14 17:59:02.682185 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 17:59:02.686137 systemd[1]: Reached target sysinit.target - System Initialization. May 14 17:59:02.688254 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 17:59:02.690631 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
May 14 17:59:02.693028 systemd-networkd[1893]: eth0: DHCPv4 address 172.31.31.64/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 14 17:59:02.693426 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 17:59:02.695663 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 17:59:02.698153 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 17:59:02.700507 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 17:59:02.700558 systemd[1]: Reached target paths.target - Path Units. May 14 17:59:02.702323 systemd[1]: Reached target timers.target - Timer Units. May 14 17:59:02.705776 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 17:59:02.710667 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 17:59:02.719583 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 17:59:02.723259 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 17:59:02.726427 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 17:59:02.740231 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 17:59:02.743667 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 17:59:02.747268 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 17:59:02.749952 systemd[1]: Reached target sockets.target - Socket Units. May 14 17:59:02.752320 systemd[1]: Reached target basic.target - Basic System. May 14 17:59:02.754146 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 17:59:02.754195 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 17:59:02.764221 systemd[1]: Starting containerd.service - containerd container runtime... May 14 17:59:02.772340 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 14 17:59:02.778359 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 17:59:02.785516 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 17:59:02.798529 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 17:59:02.803437 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 17:59:02.805460 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 17:59:02.810647 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 17:59:02.819760 systemd[1]: Started ntpd.service - Network Time Service. May 14 17:59:02.824354 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 17:59:02.832986 systemd[1]: Starting setup-oem.service - Setup OEM... May 14 17:59:02.843177 jq[1961]: false May 14 17:59:02.850519 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 17:59:02.855128 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 17:59:02.864276 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 14 17:59:02.870080 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 17:59:02.870956 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 17:59:02.880195 systemd[1]: Starting update-engine.service - Update Engine... May 14 17:59:02.889849 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 17:59:02.899391 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 17:59:02.910092 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 17:59:02.913139 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 17:59:02.916109 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 17:59:02.985959 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 17:59:02.986503 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 17:59:03.013033 jq[1973]: true May 14 17:59:03.034951 systemd[1]: motdgen.service: Deactivated successfully. May 14 17:59:03.035428 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 17:59:03.050064 extend-filesystems[1962]: Found loop4 May 14 17:59:03.050064 extend-filesystems[1962]: Found loop5 May 14 17:59:03.050064 extend-filesystems[1962]: Found loop6 May 14 17:59:03.050064 extend-filesystems[1962]: Found loop7 May 14 17:59:03.050064 extend-filesystems[1962]: Found nvme0n1 May 14 17:59:03.050064 extend-filesystems[1962]: Found nvme0n1p1 May 14 17:59:03.050064 extend-filesystems[1962]: Found nvme0n1p2 May 14 17:59:03.050064 extend-filesystems[1962]: Found nvme0n1p3 May 14 17:59:03.050064 extend-filesystems[1962]: Found usr May 14 17:59:03.050064 extend-filesystems[1962]: Found nvme0n1p4 May 14 17:59:03.050064 extend-filesystems[1962]: Found nvme0n1p6 May 14 17:59:03.089230 extend-filesystems[1962]: Found nvme0n1p7 May 14 17:59:03.089230 extend-filesystems[1962]: Found nvme0n1p9 May 14 17:59:03.089230 extend-filesystems[1962]: Checking size of /dev/nvme0n1p9 May 14 17:59:03.094280 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: ntpd 4.2.8p17@1.4004-o Wed May 14 16:08:03 UTC 2025 (1): Starting May 14 17:59:03.094280 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 14 17:59:03.094280 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: ---------------------------------------------------- May 14 17:59:03.094280 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: ntp-4 is maintained by Network Time Foundation, May 14 17:59:03.094280 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 14 17:59:03.094280 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: corporation. 
Support and training for ntp-4 are May 14 17:59:03.094280 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: available at https://www.nwtime.org/support May 14 17:59:03.094280 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: ---------------------------------------------------- May 14 17:59:03.082868 (ntainerd)[1983]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 17:59:03.086554 ntpd[1964]: ntpd 4.2.8p17@1.4004-o Wed May 14 16:08:03 UTC 2025 (1): Starting May 14 17:59:03.095554 update_engine[1970]: I20250514 17:59:03.090258 1970 main.cc:92] Flatcar Update Engine starting May 14 17:59:03.086601 ntpd[1964]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 14 17:59:03.086620 ntpd[1964]: ---------------------------------------------------- May 14 17:59:03.086637 ntpd[1964]: ntp-4 is maintained by Network Time Foundation, May 14 17:59:03.086653 ntpd[1964]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 14 17:59:03.102216 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: proto: precision = 0.096 usec (-23) May 14 17:59:03.098675 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 17:59:03.086673 ntpd[1964]: corporation. Support and training for ntp-4 are May 14 17:59:03.086689 ntpd[1964]: available at https://www.nwtime.org/support May 14 17:59:03.086705 ntpd[1964]: ---------------------------------------------------- May 14 17:59:03.098224 dbus-daemon[1959]: [system] SELinux support is enabled May 14 17:59:03.101977 ntpd[1964]: proto: precision = 0.096 usec (-23) May 14 17:59:03.107291 ntpd[1964]: basedate set to 2025-05-02 May 14 17:59:03.109166 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: basedate set to 2025-05-02 May 14 17:59:03.109166 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: gps base set to 2025-05-04 (week 2365) May 14 17:59:03.107515 dbus-daemon[1959]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1893 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 14 17:59:03.107547 ntpd[1964]: gps base set to 2025-05-04 (week 2365) May 14 17:59:03.110058 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 17:59:03.110378 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 17:59:03.114072 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 17:59:03.114114 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 14 17:59:03.122537 ntpd[1964]: Listen and drop on 0 v6wildcard [::]:123 May 14 17:59:03.125269 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: Listen and drop on 0 v6wildcard [::]:123 May 14 17:59:03.125269 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 14 17:59:03.125269 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: Listen normally on 2 lo 127.0.0.1:123 May 14 17:59:03.125269 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: Listen normally on 3 eth0 172.31.31.64:123 May 14 17:59:03.122625 ntpd[1964]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 14 17:59:03.122879 ntpd[1964]: Listen normally on 2 lo 127.0.0.1:123 May 14 17:59:03.122938 ntpd[1964]: Listen normally on 3 eth0 172.31.31.64:123 May 14 17:59:03.129112 ntpd[1964]: Listen normally on 4 lo [::1]:123 May 14 17:59:03.136332 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: Listen normally on 4 lo [::1]:123 May 14 17:59:03.136332 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: bind(21) AF_INET6 fe80::4fa:b5ff:fe65:701f%2#123 flags 0x11 failed: Cannot assign requested address May 14 17:59:03.136332 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: unable to create socket on eth0 (5) for fe80::4fa:b5ff:fe65:701f%2#123 May 14 17:59:03.136332 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: failed to init interface for address fe80::4fa:b5ff:fe65:701f%2 May 14 17:59:03.136332 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: Listening on routing socket on fd #21 for interface updates May 14 17:59:03.129284 ntpd[1964]: bind(21) AF_INET6 fe80::4fa:b5ff:fe65:701f%2#123 flags 0x11 failed: Cannot assign requested address May 14 17:59:03.129324 ntpd[1964]: unable to create socket on eth0 (5) for fe80::4fa:b5ff:fe65:701f%2#123 May 14 17:59:03.129348 ntpd[1964]: failed to init interface for address fe80::4fa:b5ff:fe65:701f%2 May 14 17:59:03.129406 ntpd[1964]: Listening on routing socket on fd #21 for interface updates May 14 17:59:03.133062 dbus-daemon[1959]: [system] Successfully activated service 'org.freedesktop.systemd1' May 14 17:59:03.146096 update_engine[1970]: I20250514 17:59:03.138286 1970 update_check_scheduler.cc:74] Next update check in 3m59s May 14 17:59:03.147914 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 14 17:59:03.153109 systemd[1]: Finished setup-oem.service - Setup OEM. May 14 17:59:03.155375 systemd[1]: Started update-engine.service - Update Engine. May 14 17:59:03.157930 jq[1998]: true May 14 17:59:03.159765 ntpd[1964]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 14 17:59:03.160565 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 17:59:03.171171 tar[1991]: linux-arm64/helm May 14 17:59:03.168916 ntpd[1964]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 14 17:59:03.171718 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 14 17:59:03.171718 ntpd[1964]: 14 May 17:59:03 ntpd[1964]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 14 17:59:03.201763 extend-filesystems[1962]: Resized partition /dev/nvme0n1p9 May 14 17:59:03.231984 extend-filesystems[2018]: resize2fs 1.47.2 (1-Jan-2025) May 14 17:59:03.229945 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
May 14 17:59:03.244462 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 14 17:59:03.323204 coreos-metadata[1958]: May 14 17:59:03.323 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 14 17:59:03.326219 coreos-metadata[1958]: May 14 17:59:03.325 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 14 17:59:03.332075 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 14 17:59:03.336338 coreos-metadata[1958]: May 14 17:59:03.336 INFO Fetch successful May 14 17:59:03.336338 coreos-metadata[1958]: May 14 17:59:03.336 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 14 17:59:03.342479 coreos-metadata[1958]: May 14 17:59:03.342 INFO Fetch successful May 14 17:59:03.342479 coreos-metadata[1958]: May 14 17:59:03.342 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 14 17:59:03.346027 coreos-metadata[1958]: May 14 17:59:03.344 INFO Fetch successful May 14 17:59:03.346027 coreos-metadata[1958]: May 14 17:59:03.344 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 14 17:59:03.353060 coreos-metadata[1958]: May 14 17:59:03.351 INFO Fetch successful May 14 17:59:03.353060 coreos-metadata[1958]: May 14 17:59:03.351 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 14 17:59:03.354389 coreos-metadata[1958]: May 14 17:59:03.354 INFO Fetch failed with 404: resource not found May 14 17:59:03.354389 coreos-metadata[1958]: May 14 17:59:03.354 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 14 17:59:03.355226 coreos-metadata[1958]: May 14 17:59:03.355 INFO Fetch successful May 14 17:59:03.355226 coreos-metadata[1958]: May 14 17:59:03.355 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 14 17:59:03.356896 extend-filesystems[2018]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 14 17:59:03.356896 extend-filesystems[2018]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 17:59:03.356896 extend-filesystems[2018]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 14 17:59:03.376760 extend-filesystems[1962]: Resized filesystem in /dev/nvme0n1p9 May 14 17:59:03.380793 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 17:59:03.383297 coreos-metadata[1958]: May 14 17:59:03.383 INFO Fetch successful May 14 17:59:03.383297 coreos-metadata[1958]: May 14 17:59:03.383 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 14 17:59:03.387066 bash[2033]: Updated "/home/core/.ssh/authorized_keys" May 14 17:59:03.387306 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 17:59:03.397239 coreos-metadata[1958]: May 14 17:59:03.397 INFO Fetch successful May 14 17:59:03.397239 coreos-metadata[1958]: May 14 17:59:03.397 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 14 17:59:03.400105 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 17:59:03.404159 coreos-metadata[1958]: May 14 17:59:03.404 INFO Fetch successful May 14 17:59:03.404159 coreos-metadata[1958]: May 14 17:59:03.404 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 14 17:59:03.412306 systemd[1]: Starting sshkeys.service... 
May 14 17:59:03.420077 coreos-metadata[1958]: May 14 17:59:03.418 INFO Fetch successful May 14 17:59:03.477918 systemd-logind[1969]: Watching system buttons on /dev/input/event0 (Power Button) May 14 17:59:03.477975 systemd-logind[1969]: Watching system buttons on /dev/input/event1 (Sleep Button) May 14 17:59:03.478519 systemd-logind[1969]: New seat seat0. May 14 17:59:03.484856 systemd[1]: Started systemd-logind.service - User Login Management. May 14 17:59:03.502940 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 14 17:59:03.513721 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 14 17:59:03.613822 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 14 17:59:03.619760 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 17:59:03.895752 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 14 17:59:03.909395 locksmithd[2011]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 17:59:03.935037 coreos-metadata[2053]: May 14 17:59:03.926 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 14 17:59:03.951551 dbus-daemon[1959]: [system] Successfully activated service 'org.freedesktop.hostname1' May 14 17:59:03.954643 coreos-metadata[2053]: May 14 17:59:03.954 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 14 17:59:03.965889 coreos-metadata[2053]: May 14 17:59:03.959 INFO Fetch successful May 14 17:59:03.965889 coreos-metadata[2053]: May 14 17:59:03.960 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 14 17:59:03.970310 coreos-metadata[2053]: May 14 17:59:03.968 INFO Fetch successful May 14 17:59:03.973073 unknown[2053]: wrote ssh authorized keys file for user: core May 14 17:59:03.978584 dbus-daemon[1959]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2010 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 14 17:59:03.987261 containerd[1983]: time="2025-05-14T17:59:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 17:59:03.990309 containerd[1983]: time="2025-05-14T17:59:03.989879844Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 14 17:59:03.992244 systemd[1]: Starting polkit.service - Authorization Manager... 
May 14 17:59:04.068805 containerd[1983]: time="2025-05-14T17:59:04.068710029Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.424µs" May 14 17:59:04.068805 containerd[1983]: time="2025-05-14T17:59:04.068780409Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 17:59:04.068953 containerd[1983]: time="2025-05-14T17:59:04.068819193Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 17:59:04.072335 update-ssh-keys[2147]: Updated "/home/core/.ssh/authorized_keys" May 14 17:59:04.075349 containerd[1983]: time="2025-05-14T17:59:04.075282585Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 17:59:04.075478 containerd[1983]: time="2025-05-14T17:59:04.075348225Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 17:59:04.075478 containerd[1983]: time="2025-05-14T17:59:04.075409845Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 17:59:04.075561 containerd[1983]: time="2025-05-14T17:59:04.075539529Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 17:59:04.075605 containerd[1983]: time="2025-05-14T17:59:04.075566073Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 17:59:04.086107 containerd[1983]: time="2025-05-14T17:59:04.075945021Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 17:59:04.086107 containerd[1983]: time="2025-05-14T17:59:04.075992985Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 17:59:04.086107 containerd[1983]: time="2025-05-14T17:59:04.076044477Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 17:59:04.086107 containerd[1983]: time="2025-05-14T17:59:04.076067061Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 17:59:04.086107 containerd[1983]: time="2025-05-14T17:59:04.076245261Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 17:59:04.086107 containerd[1983]: time="2025-05-14T17:59:04.076615569Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 17:59:04.086107 containerd[1983]: time="2025-05-14T17:59:04.076682445Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 17:59:04.086107 containerd[1983]: time="2025-05-14T17:59:04.076705953Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 17:59:04.076741 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
May 14 17:59:04.081609 systemd[1]: Finished sshkeys.service. May 14 17:59:04.087927 ntpd[1964]: bind(24) AF_INET6 fe80::4fa:b5ff:fe65:701f%2#123 flags 0x11 failed: Cannot assign requested address May 14 17:59:04.092093 containerd[1983]: time="2025-05-14T17:59:04.088208853Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 17:59:04.092093 containerd[1983]: time="2025-05-14T17:59:04.088875093Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 17:59:04.092093 containerd[1983]: time="2025-05-14T17:59:04.089061201Z" level=info msg="metadata content store policy set" policy=shared May 14 17:59:04.092960 ntpd[1964]: 14 May 17:59:04 ntpd[1964]: bind(24) AF_INET6 fe80::4fa:b5ff:fe65:701f%2#123 flags 0x11 failed: Cannot assign requested address May 14 17:59:04.092960 ntpd[1964]: 14 May 17:59:04 ntpd[1964]: unable to create socket on eth0 (6) for fe80::4fa:b5ff:fe65:701f%2#123 May 14 17:59:04.092960 ntpd[1964]: 14 May 17:59:04 ntpd[1964]: failed to init interface for address fe80::4fa:b5ff:fe65:701f%2 May 14 17:59:04.087987 ntpd[1964]: unable to create socket on eth0 (6) for fe80::4fa:b5ff:fe65:701f%2#123 May 14 17:59:04.089568 ntpd[1964]: failed to init interface for address fe80::4fa:b5ff:fe65:701f%2 May 14 17:59:04.096381 containerd[1983]: time="2025-05-14T17:59:04.096034569Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 17:59:04.096381 containerd[1983]: time="2025-05-14T17:59:04.096192465Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 17:59:04.096381 containerd[1983]: time="2025-05-14T17:59:04.096289197Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 17:59:04.096381 containerd[1983]: time="2025-05-14T17:59:04.096323505Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 17:59:04.096381 containerd[1983]: time="2025-05-14T17:59:04.096354873Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 17:59:04.096607 containerd[1983]: time="2025-05-14T17:59:04.096387165Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 17:59:04.096607 containerd[1983]: time="2025-05-14T17:59:04.096414945Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 17:59:04.096607 containerd[1983]: time="2025-05-14T17:59:04.096446373Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 17:59:04.096607 containerd[1983]: time="2025-05-14T17:59:04.096492105Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 17:59:04.096607 containerd[1983]: time="2025-05-14T17:59:04.096518769Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 17:59:04.096607 containerd[1983]: time="2025-05-14T17:59:04.096542877Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 17:59:04.096607 containerd[1983]: time="2025-05-14T17:59:04.096572181Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 17:59:04.096868 containerd[1983]: 
time="2025-05-14T17:59:04.096798141Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 17:59:04.096868 containerd[1983]: time="2025-05-14T17:59:04.096836289Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 17:59:04.096946 containerd[1983]: time="2025-05-14T17:59:04.096870189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 17:59:04.096946 containerd[1983]: time="2025-05-14T17:59:04.096899193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 17:59:04.096946 containerd[1983]: time="2025-05-14T17:59:04.096925953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 17:59:04.097101 containerd[1983]: time="2025-05-14T17:59:04.096951969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 17:59:04.097101 containerd[1983]: time="2025-05-14T17:59:04.096979017Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 17:59:04.097987 containerd[1983]: time="2025-05-14T17:59:04.097931049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 17:59:04.098083 containerd[1983]: time="2025-05-14T17:59:04.098048049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 17:59:04.098132 containerd[1983]: time="2025-05-14T17:59:04.098082357Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 17:59:04.098132 containerd[1983]: time="2025-05-14T17:59:04.098109285Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 17:59:04.098710 containerd[1983]: time="2025-05-14T17:59:04.098263821Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 17:59:04.098710 containerd[1983]: time="2025-05-14T17:59:04.098306037Z" level=info msg="Start snapshots syncer" May 14 17:59:04.101032 containerd[1983]: time="2025-05-14T17:59:04.099085329Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 17:59:04.101032 containerd[1983]: time="2025-05-14T17:59:04.099496017Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 17:59:04.101335 containerd[1983]: time="2025-05-14T17:59:04.099594249Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 17:59:04.101335 containerd[1983]: time="2025-05-14T17:59:04.099751329Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 17:59:04.101335 containerd[1983]: time="2025-05-14T17:59:04.099995889Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 17:59:04.101335 containerd[1983]: time="2025-05-14T17:59:04.100071849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 17:59:04.101335 containerd[1983]: time="2025-05-14T17:59:04.100118061Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 17:59:04.101335 containerd[1983]: time="2025-05-14T17:59:04.100148913Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 17:59:04.101335 containerd[1983]: time="2025-05-14T17:59:04.100195629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 17:59:04.101335 containerd[1983]: time="2025-05-14T17:59:04.100226361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 17:59:04.101335 containerd[1983]: time="2025-05-14T17:59:04.100260645Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 17:59:04.101335 containerd[1983]: time="2025-05-14T17:59:04.100322973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 17:59:04.101335 containerd[1983]: 
time="2025-05-14T17:59:04.100352397Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 17:59:04.101335 containerd[1983]: time="2025-05-14T17:59:04.100387473Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 17:59:04.101335 containerd[1983]: time="2025-05-14T17:59:04.100456941Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 17:59:04.101335 containerd[1983]: time="2025-05-14T17:59:04.100491141Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 17:59:04.101900 containerd[1983]: time="2025-05-14T17:59:04.100513917Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 17:59:04.101900 containerd[1983]: time="2025-05-14T17:59:04.100539393Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 17:59:04.101900 containerd[1983]: time="2025-05-14T17:59:04.100560801Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 17:59:04.101900 containerd[1983]: time="2025-05-14T17:59:04.100585365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 17:59:04.101900 containerd[1983]: time="2025-05-14T17:59:04.100611285Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 17:59:04.101900 containerd[1983]: time="2025-05-14T17:59:04.100667949Z" level=info msg="runtime interface created" May 14 17:59:04.101900 containerd[1983]: time="2025-05-14T17:59:04.100683729Z" level=info msg="created NRI interface" May 14 17:59:04.101900 containerd[1983]: time="2025-05-14T17:59:04.100710057Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 17:59:04.101900 containerd[1983]: time="2025-05-14T17:59:04.100739505Z" level=info msg="Connect containerd service" May 14 17:59:04.101900 containerd[1983]: time="2025-05-14T17:59:04.100793145Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 17:59:04.129627 containerd[1983]: time="2025-05-14T17:59:04.129144945Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 17:59:04.370821 polkitd[2145]: Started polkitd version 126 May 14 17:59:04.388847 polkitd[2145]: Loading rules from directory /etc/polkit-1/rules.d May 14 17:59:04.389506 polkitd[2145]: Loading rules from directory /run/polkit-1/rules.d May 14 17:59:04.391848 polkitd[2145]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 14 17:59:04.392433 polkitd[2145]: Loading rules from directory /usr/local/share/polkit-1/rules.d May 14 17:59:04.392506 polkitd[2145]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 14 17:59:04.392591 polkitd[2145]: Loading rules from directory /usr/share/polkit-1/rules.d May 14 17:59:04.394430 polkitd[2145]: 
Finished loading, compiling and executing 2 rules May 14 17:59:04.398918 systemd[1]: Started polkit.service - Authorization Manager. May 14 17:59:04.402896 dbus-daemon[1959]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 14 17:59:04.404565 polkitd[2145]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 14 17:59:04.435116 systemd-hostnamed[2010]: Hostname set to <ip-172-31-31-64> (transient) May 14 17:59:04.435517 systemd-resolved[1894]: System hostname changed to 'ip-172-31-31-64'. May 14 17:59:04.490191 systemd-networkd[1893]: eth0: Gained IPv6LL May 14 17:59:04.501383 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 17:59:04.504949 systemd[1]: Reached target network-online.target - Network is Online. May 14 17:59:04.512752 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 14 17:59:04.519449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 17:59:04.532173 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 17:59:04.619357 containerd[1983]: time="2025-05-14T17:59:04.619303452Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 17:59:04.624123 containerd[1983]: time="2025-05-14T17:59:04.619862832Z" level=info msg="Start subscribing containerd event" May 14 17:59:04.624123 containerd[1983]: time="2025-05-14T17:59:04.619928028Z" level=info msg="Start recovering state" May 14 17:59:04.624123 containerd[1983]: time="2025-05-14T17:59:04.620076012Z" level=info msg="Start event monitor" May 14 17:59:04.624123 containerd[1983]: time="2025-05-14T17:59:04.620102700Z" level=info msg="Start cni network conf syncer for default" May 14 17:59:04.624123 containerd[1983]: time="2025-05-14T17:59:04.620120064Z" level=info msg="Start streaming server" May 14 17:59:04.624123 containerd[1983]: time="2025-05-14T17:59:04.620143200Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 17:59:04.624123 containerd[1983]: time="2025-05-14T17:59:04.620161524Z" level=info msg="runtime interface starting up..." May 14 17:59:04.624123 containerd[1983]: time="2025-05-14T17:59:04.620176788Z" level=info msg="starting plugins..." May 14 17:59:04.624123 containerd[1983]: time="2025-05-14T17:59:04.620201652Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 17:59:04.624123 containerd[1983]: time="2025-05-14T17:59:04.621063192Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 17:59:04.621362 systemd[1]: Started containerd.service - containerd container runtime. May 14 17:59:04.625706 containerd[1983]: time="2025-05-14T17:59:04.625625172Z" level=info msg="containerd successfully booted in 0.641340s" May 14 17:59:04.689754 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 17:59:04.769485 amazon-ssm-agent[2175]: Initializing new seelog logger May 14 17:59:04.770650 amazon-ssm-agent[2175]: New Seelog Logger Creation Complete May 14 17:59:04.770650 amazon-ssm-agent[2175]: 2025/05/14 17:59:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 14 17:59:04.770650 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 14 17:59:04.772288 amazon-ssm-agent[2175]: 2025/05/14 17:59:04 processing appconfig overrides May 14 17:59:04.774558 amazon-ssm-agent[2175]: 2025/05/14 17:59:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
May 14 17:59:04.775046 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 14 17:59:04.775046 amazon-ssm-agent[2175]: 2025/05/14 17:59:04 processing appconfig overrides May 14 17:59:04.776614 amazon-ssm-agent[2175]: 2025/05/14 17:59:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 14 17:59:04.776614 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 14 17:59:04.776614 amazon-ssm-agent[2175]: 2025/05/14 17:59:04 processing appconfig overrides May 14 17:59:04.776614 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO Proxy environment variables: May 14 17:59:04.779928 amazon-ssm-agent[2175]: 2025/05/14 17:59:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 14 17:59:04.780086 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 14 17:59:04.780315 amazon-ssm-agent[2175]: 2025/05/14 17:59:04 processing appconfig overrides May 14 17:59:04.877029 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO https_proxy: May 14 17:59:04.978122 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO http_proxy: May 14 17:59:05.081037 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO no_proxy: May 14 17:59:05.123732 tar[1991]: linux-arm64/LICENSE May 14 17:59:05.123732 tar[1991]: linux-arm64/README.md May 14 17:59:05.157130 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 17:59:05.180305 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO Checking if agent identity type OnPrem can be assumed May 14 17:59:05.279108 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO Checking if agent identity type EC2 can be assumed May 14 17:59:05.381415 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO Agent will take identity from EC2 May 14 17:59:05.478111 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO [amazon-ssm-agent] using named pipe channel for IPC May 14 17:59:05.577672 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO [amazon-ssm-agent] using named pipe channel for IPC May 14 17:59:05.642120 sshd_keygen[2004]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 17:59:05.677133 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO [amazon-ssm-agent] using named pipe channel for IPC May 14 17:59:05.687715 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 17:59:05.696215 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 17:59:05.701467 systemd[1]: Started sshd@0-172.31.31.64:22-139.178.89.65:43438.service - OpenSSH per-connection server daemon (139.178.89.65:43438). May 14 17:59:05.731529 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 14 17:59:05.731529 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 May 14 17:59:05.731529 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO [amazon-ssm-agent] Starting Core Agent May 14 17:59:05.731529 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO [amazon-ssm-agent] registrar detected. Attempting registration May 14 17:59:05.731529 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO [Registrar] Starting registrar module May 14 17:59:05.731529 amazon-ssm-agent[2175]: 2025-05-14 17:59:04 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 14 17:59:05.731529 amazon-ssm-agent[2175]: 2025-05-14 17:59:05 INFO [EC2Identity] EC2 registration was successful. 
May 14 17:59:05.731529 amazon-ssm-agent[2175]: 2025-05-14 17:59:05 INFO [CredentialRefresher] credentialRefresher has started May 14 17:59:05.731529 amazon-ssm-agent[2175]: 2025-05-14 17:59:05 INFO [CredentialRefresher] Starting credentials refresher loop May 14 17:59:05.731529 amazon-ssm-agent[2175]: 2025-05-14 17:59:05 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 14 17:59:05.734077 systemd[1]: issuegen.service: Deactivated successfully. May 14 17:59:05.735696 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 17:59:05.744170 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 17:59:05.777281 amazon-ssm-agent[2175]: 2025-05-14 17:59:05 INFO [CredentialRefresher] Next credential rotation will be in 31.2499664344 minutes May 14 17:59:05.797162 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 17:59:05.804249 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 17:59:05.810036 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 17:59:05.813537 systemd[1]: Reached target getty.target - Login Prompts. May 14 17:59:05.935811 sshd[2211]: Accepted publickey for core from 139.178.89.65 port 43438 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 17:59:05.940222 sshd-session[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:59:05.954319 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 17:59:05.958403 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 17:59:05.977349 systemd-logind[1969]: New session 1 of user core. May 14 17:59:06.006224 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 17:59:06.014758 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 17:59:06.038332 (systemd)[2222]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 17:59:06.045205 systemd-logind[1969]: New session c1 of user core. May 14 17:59:06.242732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 17:59:06.246122 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 17:59:06.260686 (kubelet)[2233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 17:59:06.352257 systemd[2222]: Queued start job for default target default.target. May 14 17:59:06.359723 systemd[2222]: Created slice app.slice - User Application Slice. May 14 17:59:06.359786 systemd[2222]: Reached target paths.target - Paths. May 14 17:59:06.359875 systemd[2222]: Reached target timers.target - Timers. May 14 17:59:06.364180 systemd[2222]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 17:59:06.388747 systemd[2222]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 17:59:06.388867 systemd[2222]: Reached target sockets.target - Sockets. May 14 17:59:06.388948 systemd[2222]: Reached target basic.target - Basic System. May 14 17:59:06.389062 systemd[2222]: Reached target default.target - Main User Target. May 14 17:59:06.389124 systemd[2222]: Startup finished in 331ms. May 14 17:59:06.390109 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 17:59:06.399355 systemd[1]: Started session-1.scope - Session 1 of User core. 
May 14 17:59:06.401862 systemd[1]: Startup finished in 3.809s (kernel) + 8.683s (initrd) + 9.098s (userspace) = 21.591s. May 14 17:59:06.563563 systemd[1]: Started sshd@1-172.31.31.64:22-139.178.89.65:50734.service - OpenSSH per-connection server daemon (139.178.89.65:50734). May 14 17:59:06.762705 amazon-ssm-agent[2175]: 2025-05-14 17:59:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 14 17:59:06.773047 sshd[2247]: Accepted publickey for core from 139.178.89.65 port 50734 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 17:59:06.775486 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:59:06.792848 systemd-logind[1969]: New session 2 of user core. May 14 17:59:06.795504 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 17:59:06.947679 amazon-ssm-agent[2175]: 2025-05-14 17:59:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2251) started May 14 17:59:07.013497 sshd[2253]: Connection closed by 139.178.89.65 port 50734 May 14 17:59:07.014406 sshd-session[2247]: pam_unix(sshd:session): session closed for user core May 14 17:59:07.025705 systemd[1]: sshd@1-172.31.31.64:22-139.178.89.65:50734.service: Deactivated successfully. May 14 17:59:07.035462 systemd[1]: session-2.scope: Deactivated successfully. May 14 17:59:07.045377 systemd-logind[1969]: Session 2 logged out. Waiting for processes to exit. May 14 17:59:07.049952 amazon-ssm-agent[2175]: 2025-05-14 17:59:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 14 17:59:07.062438 systemd[1]: Started sshd@2-172.31.31.64:22-139.178.89.65:50748.service - OpenSSH per-connection server daemon (139.178.89.65:50748). May 14 17:59:07.066346 systemd-logind[1969]: Removed session 2. May 14 17:59:07.087989 ntpd[1964]: Listen normally on 7 eth0 [fe80::4fa:b5ff:fe65:701f%2]:123 May 14 17:59:07.088494 ntpd[1964]: 14 May 17:59:07 ntpd[1964]: Listen normally on 7 eth0 [fe80::4fa:b5ff:fe65:701f%2]:123 May 14 17:59:07.266214 sshd[2263]: Accepted publickey for core from 139.178.89.65 port 50748 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 17:59:07.269551 sshd-session[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:59:07.280452 systemd-logind[1969]: New session 3 of user core. May 14 17:59:07.289368 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 17:59:07.377450 kubelet[2233]: E0514 17:59:07.377368 2233 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 17:59:07.381925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 17:59:07.382323 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 17:59:07.384238 systemd[1]: kubelet.service: Consumed 1.337s CPU time, 230M memory peak. May 14 17:59:07.410268 sshd[2270]: Connection closed by 139.178.89.65 port 50748 May 14 17:59:07.411281 sshd-session[2263]: pam_unix(sshd:session): session closed for user core May 14 17:59:07.416183 systemd[1]: sshd@2-172.31.31.64:22-139.178.89.65:50748.service: Deactivated successfully. 
May 14 17:59:07.419635 systemd[1]: session-3.scope: Deactivated successfully. May 14 17:59:07.423768 systemd-logind[1969]: Session 3 logged out. Waiting for processes to exit. May 14 17:59:07.425614 systemd-logind[1969]: Removed session 3. May 14 17:59:07.459207 systemd[1]: Started sshd@3-172.31.31.64:22-139.178.89.65:50756.service - OpenSSH per-connection server daemon (139.178.89.65:50756). May 14 17:59:07.656577 sshd[2277]: Accepted publickey for core from 139.178.89.65 port 50756 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 17:59:07.659049 sshd-session[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:59:07.666921 systemd-logind[1969]: New session 4 of user core. May 14 17:59:07.676267 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 17:59:07.802034 sshd[2280]: Connection closed by 139.178.89.65 port 50756 May 14 17:59:07.802205 sshd-session[2277]: pam_unix(sshd:session): session closed for user core May 14 17:59:07.808465 systemd[1]: sshd@3-172.31.31.64:22-139.178.89.65:50756.service: Deactivated successfully. May 14 17:59:07.812791 systemd[1]: session-4.scope: Deactivated successfully. May 14 17:59:07.816136 systemd-logind[1969]: Session 4 logged out. Waiting for processes to exit. May 14 17:59:07.818676 systemd-logind[1969]: Removed session 4. May 14 17:59:07.840469 systemd[1]: Started sshd@4-172.31.31.64:22-139.178.89.65:50766.service - OpenSSH per-connection server daemon (139.178.89.65:50766). May 14 17:59:08.050084 sshd[2286]: Accepted publickey for core from 139.178.89.65 port 50766 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 17:59:08.053335 sshd-session[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:59:08.061281 systemd-logind[1969]: New session 5 of user core. May 14 17:59:08.066234 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 17:59:08.226526 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 17:59:08.227213 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 17:59:08.256579 sudo[2289]: pam_unix(sudo:session): session closed for user root May 14 17:59:08.278894 sshd[2288]: Connection closed by 139.178.89.65 port 50766 May 14 17:59:08.280071 sshd-session[2286]: pam_unix(sshd:session): session closed for user core May 14 17:59:08.286624 systemd-logind[1969]: Session 5 logged out. Waiting for processes to exit. May 14 17:59:08.286748 systemd[1]: sshd@4-172.31.31.64:22-139.178.89.65:50766.service: Deactivated successfully. May 14 17:59:08.290296 systemd[1]: session-5.scope: Deactivated successfully. May 14 17:59:08.296132 systemd-logind[1969]: Removed session 5. May 14 17:59:08.317925 systemd[1]: Started sshd@5-172.31.31.64:22-139.178.89.65:50774.service - OpenSSH per-connection server daemon (139.178.89.65:50774). May 14 17:59:08.519761 sshd[2295]: Accepted publickey for core from 139.178.89.65 port 50774 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 17:59:08.522242 sshd-session[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:59:08.531709 systemd-logind[1969]: New session 6 of user core. May 14 17:59:08.536266 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 14 17:59:08.639801 sudo[2299]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 17:59:08.640953 sudo[2299]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 17:59:08.650609 sudo[2299]: pam_unix(sudo:session): session closed for user root May 14 17:59:08.660203 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 17:59:08.660818 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 17:59:08.676905 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 17:59:08.738255 augenrules[2321]: No rules May 14 17:59:08.740366 systemd[1]: audit-rules.service: Deactivated successfully. May 14 17:59:08.742097 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 17:59:08.743650 sudo[2298]: pam_unix(sudo:session): session closed for user root May 14 17:59:08.768699 sshd[2297]: Connection closed by 139.178.89.65 port 50774 May 14 17:59:08.767848 sshd-session[2295]: pam_unix(sshd:session): session closed for user core May 14 17:59:08.774273 systemd[1]: session-6.scope: Deactivated successfully. May 14 17:59:08.777294 systemd[1]: sshd@5-172.31.31.64:22-139.178.89.65:50774.service: Deactivated successfully. May 14 17:59:08.781791 systemd-logind[1969]: Session 6 logged out. Waiting for processes to exit. May 14 17:59:08.785125 systemd-logind[1969]: Removed session 6. May 14 17:59:08.806446 systemd[1]: Started sshd@6-172.31.31.64:22-139.178.89.65:50776.service - OpenSSH per-connection server daemon (139.178.89.65:50776). May 14 17:59:09.004107 sshd[2330]: Accepted publickey for core from 139.178.89.65 port 50776 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 17:59:09.005918 sshd-session[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 17:59:09.013397 systemd-logind[1969]: New session 7 of user core. May 14 17:59:09.021245 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 17:59:09.123159 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 17:59:09.123775 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 17:59:09.808377 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 17:59:09.821518 (dockerd)[2350]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 17:59:10.443579 systemd-resolved[1894]: Clock change detected. Flushing caches. May 14 17:59:10.687426 dockerd[2350]: time="2025-05-14T17:59:10.687327725Z" level=info msg="Starting up" May 14 17:59:10.691912 dockerd[2350]: time="2025-05-14T17:59:10.690987785Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 17:59:10.742394 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3469600664-merged.mount: Deactivated successfully. May 14 17:59:10.876427 systemd[1]: var-lib-docker-metacopy\x2dcheck3142560836-merged.mount: Deactivated successfully. May 14 17:59:10.896357 dockerd[2350]: time="2025-05-14T17:59:10.895953690Z" level=info msg="Loading containers: start." May 14 17:59:10.919572 kernel: Initializing XFRM netlink socket May 14 17:59:11.259811 (udev-worker)[2373]: Network interface NamePolicy= disabled on kernel command line. 
May 14 17:59:11.332205 systemd-networkd[1893]: docker0: Link UP May 14 17:59:11.338042 dockerd[2350]: time="2025-05-14T17:59:11.337860808Z" level=info msg="Loading containers: done." May 14 17:59:11.363146 dockerd[2350]: time="2025-05-14T17:59:11.363069136Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 17:59:11.363345 dockerd[2350]: time="2025-05-14T17:59:11.363193192Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 17:59:11.363401 dockerd[2350]: time="2025-05-14T17:59:11.363374632Z" level=info msg="Initializing buildkit" May 14 17:59:11.401783 dockerd[2350]: time="2025-05-14T17:59:11.401690764Z" level=info msg="Completed buildkit initialization" May 14 17:59:11.418211 dockerd[2350]: time="2025-05-14T17:59:11.418133956Z" level=info msg="Daemon has completed initialization" May 14 17:59:11.418356 dockerd[2350]: time="2025-05-14T17:59:11.418236028Z" level=info msg="API listen on /run/docker.sock" May 14 17:59:11.418663 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 17:59:11.731724 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3926343077-merged.mount: Deactivated successfully. May 14 17:59:12.566068 containerd[1983]: time="2025-05-14T17:59:12.566014974Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 17:59:13.204011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1036926525.mount: Deactivated successfully. May 14 17:59:14.418491 containerd[1983]: time="2025-05-14T17:59:14.418406995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:14.420405 containerd[1983]: time="2025-05-14T17:59:14.420347311Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554608" May 14 17:59:14.422235 containerd[1983]: time="2025-05-14T17:59:14.422180935Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:14.427382 containerd[1983]: time="2025-05-14T17:59:14.427295539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:14.429913 containerd[1983]: time="2025-05-14T17:59:14.429605731Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.863509913s" May 14 17:59:14.429913 containerd[1983]: time="2025-05-14T17:59:14.429688339Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 14 17:59:14.430994 containerd[1983]: time="2025-05-14T17:59:14.430873195Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 17:59:15.761554 containerd[1983]: 
time="2025-05-14T17:59:15.761127010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:15.764075 containerd[1983]: time="2025-05-14T17:59:15.764022190Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458978" May 14 17:59:15.766048 containerd[1983]: time="2025-05-14T17:59:15.765805222Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:15.771882 containerd[1983]: time="2025-05-14T17:59:15.771831826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:15.773246 containerd[1983]: time="2025-05-14T17:59:15.773034478Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.342086619s" May 14 17:59:15.773246 containerd[1983]: time="2025-05-14T17:59:15.773085202Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 14 17:59:15.773883 containerd[1983]: time="2025-05-14T17:59:15.773830630Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 17:59:16.989560 containerd[1983]: time="2025-05-14T17:59:16.989473656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:16.991973 containerd[1983]: time="2025-05-14T17:59:16.991915680Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125813" May 14 17:59:16.993639 containerd[1983]: time="2025-05-14T17:59:16.993579456Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:16.997590 containerd[1983]: time="2025-05-14T17:59:16.997540992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:16.999660 containerd[1983]: time="2025-05-14T17:59:16.999442272Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.225415478s" May 14 17:59:16.999660 containerd[1983]: time="2025-05-14T17:59:16.999493212Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 14 17:59:17.000872 containerd[1983]: 
time="2025-05-14T17:59:17.000818960Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 17:59:17.980805 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 17:59:17.984874 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 17:59:18.299633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198819942.mount: Deactivated successfully. May 14 17:59:18.344743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 17:59:18.357732 (kubelet)[2628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 17:59:18.459581 kubelet[2628]: E0514 17:59:18.459487 2628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 17:59:18.469798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 17:59:18.470139 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 17:59:18.470859 systemd[1]: kubelet.service: Consumed 304ms CPU time, 93M memory peak. May 14 17:59:18.914865 containerd[1983]: time="2025-05-14T17:59:18.914810690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:18.917073 containerd[1983]: time="2025-05-14T17:59:18.917021966Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871917" May 14 17:59:18.918555 containerd[1983]: time="2025-05-14T17:59:18.918261050Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:18.920909 containerd[1983]: time="2025-05-14T17:59:18.920849546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:18.922346 containerd[1983]: time="2025-05-14T17:59:18.922289090Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.921413754s" May 14 17:59:18.923312 containerd[1983]: time="2025-05-14T17:59:18.922345814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 14 17:59:18.923312 containerd[1983]: time="2025-05-14T17:59:18.922902302Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 17:59:19.446034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212346152.mount: Deactivated successfully. 
May 14 17:59:20.438100 containerd[1983]: time="2025-05-14T17:59:20.438042589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:20.440397 containerd[1983]: time="2025-05-14T17:59:20.440353117Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" May 14 17:59:20.441548 containerd[1983]: time="2025-05-14T17:59:20.441489037Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:20.447636 containerd[1983]: time="2025-05-14T17:59:20.447583345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:20.448832 containerd[1983]: time="2025-05-14T17:59:20.448592065Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.525614175s" May 14 17:59:20.448832 containerd[1983]: time="2025-05-14T17:59:20.448656421Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 17:59:20.449361 containerd[1983]: time="2025-05-14T17:59:20.449313133Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 17:59:20.915450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2128249845.mount: Deactivated successfully. 
May 14 17:59:20.922569 containerd[1983]: time="2025-05-14T17:59:20.922453576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 17:59:20.924474 containerd[1983]: time="2025-05-14T17:59:20.924410992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 14 17:59:20.926063 containerd[1983]: time="2025-05-14T17:59:20.925985824Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 17:59:20.929897 containerd[1983]: time="2025-05-14T17:59:20.929804980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 17:59:20.931638 containerd[1983]: time="2025-05-14T17:59:20.931251952Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 481.885287ms" May 14 17:59:20.931638 containerd[1983]: time="2025-05-14T17:59:20.931306540Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 14 17:59:20.932208 containerd[1983]: time="2025-05-14T17:59:20.932158432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 14 17:59:21.457397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3325418980.mount: Deactivated successfully. 
May 14 17:59:23.350557 containerd[1983]: time="2025-05-14T17:59:23.349828792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:23.352277 containerd[1983]: time="2025-05-14T17:59:23.352234144Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" May 14 17:59:23.353272 containerd[1983]: time="2025-05-14T17:59:23.353233372Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:23.358191 containerd[1983]: time="2025-05-14T17:59:23.358143208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:23.360434 containerd[1983]: time="2025-05-14T17:59:23.360389032Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.428178436s" May 14 17:59:23.360795 containerd[1983]: time="2025-05-14T17:59:23.360580924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 14 17:59:28.481806 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 17:59:28.487677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 17:59:28.861824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 17:59:28.874979 (kubelet)[2769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 17:59:28.961726 kubelet[2769]: E0514 17:59:28.961667 2769 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 17:59:28.967684 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 17:59:28.968897 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 17:59:28.970204 systemd[1]: kubelet.service: Consumed 269ms CPU time, 93.9M memory peak. May 14 17:59:33.364875 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 17:59:33.365784 systemd[1]: kubelet.service: Consumed 269ms CPU time, 93.9M memory peak. May 14 17:59:33.369562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 17:59:33.416542 systemd[1]: Reload requested from client PID 2784 ('systemctl') (unit session-7.scope)... May 14 17:59:33.416575 systemd[1]: Reloading... May 14 17:59:33.659005 zram_generator::config[2832]: No configuration found. May 14 17:59:33.851641 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 17:59:34.104976 systemd[1]: Reloading finished in 687 ms. 
May 14 17:59:34.203740 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 17:59:34.203919 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 17:59:34.204497 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 17:59:34.204611 systemd[1]: kubelet.service: Consumed 193ms CPU time, 82.3M memory peak. May 14 17:59:34.207841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 17:59:34.819922 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 14 17:59:34.956407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 17:59:34.972984 (kubelet)[2895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 17:59:35.041754 kubelet[2895]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 17:59:35.041754 kubelet[2895]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 17:59:35.041754 kubelet[2895]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 17:59:35.042300 kubelet[2895]: I0514 17:59:35.041901 2895 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 17:59:35.826760 kubelet[2895]: I0514 17:59:35.826693 2895 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 17:59:35.826760 kubelet[2895]: I0514 17:59:35.826744 2895 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 17:59:35.827189 kubelet[2895]: I0514 17:59:35.827148 2895 server.go:929] "Client rotation is on, will bootstrap in background" May 14 17:59:35.883421 kubelet[2895]: E0514 17:59:35.883354 2895 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.64:6443: connect: connection refused" logger="UnhandledError" May 14 17:59:35.886567 kubelet[2895]: I0514 17:59:35.886342 2895 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 17:59:35.906420 kubelet[2895]: I0514 17:59:35.906387 2895 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 17:59:35.913126 kubelet[2895]: I0514 17:59:35.913091 2895 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 17:59:35.915430 kubelet[2895]: I0514 17:59:35.915306 2895 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 17:59:35.915658 kubelet[2895]: I0514 17:59:35.915599 2895 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 17:59:35.915956 kubelet[2895]: I0514 17:59:35.915648 2895 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-64","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 17:59:35.916141 kubelet[2895]: I0514 17:59:35.916000 2895 topology_manager.go:138] "Creating topology manager with none policy" May 14 17:59:35.916141 kubelet[2895]: I0514 17:59:35.916021 2895 container_manager_linux.go:300] "Creating device plugin manager" May 14 17:59:35.916300 kubelet[2895]: I0514 17:59:35.916222 2895 state_mem.go:36] "Initialized new in-memory state store" May 14 17:59:35.920544 kubelet[2895]: I0514 17:59:35.920115 2895 kubelet.go:408] "Attempting to sync node with API server" May 14 17:59:35.920544 kubelet[2895]: I0514 17:59:35.920161 2895 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 17:59:35.920544 kubelet[2895]: I0514 17:59:35.920221 2895 kubelet.go:314] "Adding apiserver pod source" May 14 17:59:35.920544 kubelet[2895]: I0514 17:59:35.920240 2895 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 17:59:35.927476 kubelet[2895]: W0514 17:59:35.927407 2895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-64&limit=500&resourceVersion=0": dial tcp 172.31.31.64:6443: connect: connection refused May 14 17:59:35.928126 kubelet[2895]: E0514 17:59:35.928089 2895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.31.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-64&limit=500&resourceVersion=0\": dial tcp 172.31.31.64:6443: connect: connection refused" logger="UnhandledError" May 14 17:59:35.928406 kubelet[2895]: I0514 17:59:35.928361 2895 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 17:59:35.932572 kubelet[2895]: I0514 17:59:35.932534 2895 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 17:59:35.934756 kubelet[2895]: W0514 17:59:35.934013 2895 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 17:59:35.937334 kubelet[2895]: I0514 17:59:35.937295 2895 server.go:1269] "Started kubelet" May 14 17:59:35.940300 kubelet[2895]: W0514 17:59:35.940217 2895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.64:6443: connect: connection refused May 14 17:59:35.940430 kubelet[2895]: E0514 17:59:35.940312 2895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.64:6443: connect: connection refused" logger="UnhandledError" May 14 17:59:35.940625 kubelet[2895]: I0514 17:59:35.940574 2895 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 17:59:35.942746 kubelet[2895]: I0514 17:59:35.942668 2895 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 17:59:35.943387 kubelet[2895]: I0514 17:59:35.943356 2895 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 17:59:35.944208 kubelet[2895]: I0514 17:59:35.943926 2895 server.go:460] "Adding debug handlers to kubelet server" May 14 17:59:35.946877 kubelet[2895]: I0514 17:59:35.946823 2895 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 17:59:35.948799 kubelet[2895]: E0514 17:59:35.945399 2895 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.64:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.64:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-64.183f7698111ccfda default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-64,UID:ip-172-31-31-64,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-64,},FirstTimestamp:2025-05-14 17:59:35.937249242 +0000 UTC m=+0.958779498,LastTimestamp:2025-05-14 17:59:35.937249242 +0000 UTC m=+0.958779498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-64,}" May 14 17:59:35.949573 kubelet[2895]: I0514 17:59:35.949510 2895 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 17:59:35.958149 kubelet[2895]: E0514 17:59:35.958113 2895 kubelet_node_status.go:453] "Error getting the current node 
from lister" err="node \"ip-172-31-31-64\" not found" May 14 17:59:35.958949 kubelet[2895]: I0514 17:59:35.958865 2895 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 17:59:35.959862 kubelet[2895]: I0514 17:59:35.959287 2895 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 17:59:35.959862 kubelet[2895]: I0514 17:59:35.959379 2895 reconciler.go:26] "Reconciler: start to sync state" May 14 17:59:35.960784 kubelet[2895]: W0514 17:59:35.960714 2895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.64:6443: connect: connection refused May 14 17:59:35.960991 kubelet[2895]: E0514 17:59:35.960961 2895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.64:6443: connect: connection refused" logger="UnhandledError" May 14 17:59:35.961258 kubelet[2895]: E0514 17:59:35.961232 2895 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 17:59:35.961564 kubelet[2895]: E0514 17:59:35.961490 2895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-64?timeout=10s\": dial tcp 172.31.31.64:6443: connect: connection refused" interval="200ms" May 14 17:59:35.962198 kubelet[2895]: I0514 17:59:35.962160 2895 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 17:59:35.965478 kubelet[2895]: I0514 17:59:35.965443 2895 factory.go:221] Registration of the containerd container factory successfully May 14 17:59:35.965679 kubelet[2895]: I0514 17:59:35.965659 2895 factory.go:221] Registration of the systemd container factory successfully May 14 17:59:35.980505 kubelet[2895]: I0514 17:59:35.980217 2895 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 17:59:35.984206 kubelet[2895]: I0514 17:59:35.983629 2895 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 17:59:35.984206 kubelet[2895]: I0514 17:59:35.983678 2895 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 17:59:35.984206 kubelet[2895]: I0514 17:59:35.983713 2895 kubelet.go:2321] "Starting kubelet main sync loop" May 14 17:59:35.984206 kubelet[2895]: E0514 17:59:35.983784 2895 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 17:59:36.003716 kubelet[2895]: W0514 17:59:36.003630 2895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.64:6443: connect: connection refused May 14 17:59:36.004164 kubelet[2895]: E0514 17:59:36.003729 2895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.64:6443: connect: connection refused" logger="UnhandledError" May 14 17:59:36.013602 kubelet[2895]: I0514 17:59:36.013541 2895 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 17:59:36.013602 kubelet[2895]: I0514 17:59:36.013573 2895 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 17:59:36.013602 kubelet[2895]: I0514 17:59:36.013605 2895 state_mem.go:36] "Initialized new in-memory state store" May 14 17:59:36.016642 kubelet[2895]: I0514 17:59:36.016592 2895 policy_none.go:49] "None policy: Start" May 14 17:59:36.018091 kubelet[2895]: I0514 17:59:36.017948 2895 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 17:59:36.018091 kubelet[2895]: I0514 17:59:36.017992 2895 state_mem.go:35] "Initializing new in-memory state store" May 14 17:59:36.028173 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 17:59:36.045910 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 17:59:36.053483 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 17:59:36.059432 kubelet[2895]: E0514 17:59:36.059386 2895 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-64\" not found" May 14 17:59:36.062544 kubelet[2895]: I0514 17:59:36.062476 2895 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 17:59:36.063849 kubelet[2895]: I0514 17:59:36.063686 2895 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 17:59:36.063849 kubelet[2895]: I0514 17:59:36.063785 2895 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 17:59:36.064681 kubelet[2895]: I0514 17:59:36.064586 2895 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 17:59:36.069009 kubelet[2895]: E0514 17:59:36.068962 2895 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-64\" not found" May 14 17:59:36.106983 systemd[1]: Created slice kubepods-burstable-pod01de09a1fd86c8a40d11da47689a6daa.slice - libcontainer container kubepods-burstable-pod01de09a1fd86c8a40d11da47689a6daa.slice. 
May 14 17:59:36.123963 systemd[1]: Created slice kubepods-burstable-podc8d37826e3786501bb173a59eb30aa35.slice - libcontainer container kubepods-burstable-podc8d37826e3786501bb173a59eb30aa35.slice. May 14 17:59:36.137580 systemd[1]: Created slice kubepods-burstable-podfa3a4a61b9a7eb6bdecec8857f46b012.slice - libcontainer container kubepods-burstable-podfa3a4a61b9a7eb6bdecec8857f46b012.slice. May 14 17:59:36.163067 kubelet[2895]: E0514 17:59:36.163006 2895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-64?timeout=10s\": dial tcp 172.31.31.64:6443: connect: connection refused" interval="400ms" May 14 17:59:36.169558 kubelet[2895]: I0514 17:59:36.169304 2895 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-64" May 14 17:59:36.170151 kubelet[2895]: E0514 17:59:36.170071 2895 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.64:6443/api/v1/nodes\": dial tcp 172.31.31.64:6443: connect: connection refused" node="ip-172-31-31-64" May 14 17:59:36.260336 kubelet[2895]: I0514 17:59:36.260243 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c8d37826e3786501bb173a59eb30aa35-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-64\" (UID: \"c8d37826e3786501bb173a59eb30aa35\") " pod="kube-system/kube-controller-manager-ip-172-31-31-64" May 14 17:59:36.260336 kubelet[2895]: I0514 17:59:36.260297 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01de09a1fd86c8a40d11da47689a6daa-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-64\" (UID: \"01de09a1fd86c8a40d11da47689a6daa\") " pod="kube-system/kube-apiserver-ip-172-31-31-64" May 14 17:59:36.260336 kubelet[2895]: I0514 17:59:36.260352 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa3a4a61b9a7eb6bdecec8857f46b012-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-64\" (UID: \"fa3a4a61b9a7eb6bdecec8857f46b012\") " pod="kube-system/kube-scheduler-ip-172-31-31-64" May 14 17:59:36.260732 kubelet[2895]: I0514 17:59:36.260409 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01de09a1fd86c8a40d11da47689a6daa-ca-certs\") pod \"kube-apiserver-ip-172-31-31-64\" (UID: \"01de09a1fd86c8a40d11da47689a6daa\") " pod="kube-system/kube-apiserver-ip-172-31-31-64" May 14 17:59:36.260732 kubelet[2895]: I0514 17:59:36.260449 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01de09a1fd86c8a40d11da47689a6daa-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-64\" (UID: \"01de09a1fd86c8a40d11da47689a6daa\") " pod="kube-system/kube-apiserver-ip-172-31-31-64" May 14 17:59:36.260732 kubelet[2895]: I0514 17:59:36.260491 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8d37826e3786501bb173a59eb30aa35-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-64\" (UID: \"c8d37826e3786501bb173a59eb30aa35\") " 
pod="kube-system/kube-controller-manager-ip-172-31-31-64" May 14 17:59:36.260732 kubelet[2895]: I0514 17:59:36.260550 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8d37826e3786501bb173a59eb30aa35-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-64\" (UID: \"c8d37826e3786501bb173a59eb30aa35\") " pod="kube-system/kube-controller-manager-ip-172-31-31-64" May 14 17:59:36.260732 kubelet[2895]: I0514 17:59:36.260587 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c8d37826e3786501bb173a59eb30aa35-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-64\" (UID: \"c8d37826e3786501bb173a59eb30aa35\") " pod="kube-system/kube-controller-manager-ip-172-31-31-64" May 14 17:59:36.260979 kubelet[2895]: I0514 17:59:36.260625 2895 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8d37826e3786501bb173a59eb30aa35-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-64\" (UID: \"c8d37826e3786501bb173a59eb30aa35\") " pod="kube-system/kube-controller-manager-ip-172-31-31-64" May 14 17:59:36.372714 kubelet[2895]: I0514 17:59:36.372592 2895 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-64" May 14 17:59:36.373099 kubelet[2895]: E0514 17:59:36.373046 2895 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.64:6443/api/v1/nodes\": dial tcp 172.31.31.64:6443: connect: connection refused" node="ip-172-31-31-64" May 14 17:59:36.421330 containerd[1983]: time="2025-05-14T17:59:36.421251485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-64,Uid:01de09a1fd86c8a40d11da47689a6daa,Namespace:kube-system,Attempt:0,}" May 14 17:59:36.432076 containerd[1983]: time="2025-05-14T17:59:36.431992685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-64,Uid:c8d37826e3786501bb173a59eb30aa35,Namespace:kube-system,Attempt:0,}" May 14 17:59:36.449181 containerd[1983]: time="2025-05-14T17:59:36.448576397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-64,Uid:fa3a4a61b9a7eb6bdecec8857f46b012,Namespace:kube-system,Attempt:0,}" May 14 17:59:36.461927 containerd[1983]: time="2025-05-14T17:59:36.461701769Z" level=info msg="connecting to shim 122e5a7f214ca7e7cd200410b52c547d78915d11009d4462bc7451fac156eec3" address="unix:///run/containerd/s/1b371fab1c58d02c9ba96e7b8095a8f1f4c9b0bdd580ebfc81bf78dbdcb20a68" namespace=k8s.io protocol=ttrpc version=3 May 14 17:59:36.506924 containerd[1983]: time="2025-05-14T17:59:36.506588381Z" level=info msg="connecting to shim 3f20f3f80fe6e943579aac782ec14955b101d4242143d67b91e99015c277eec3" address="unix:///run/containerd/s/dd1c59231ebeeb4bc0fae1e4cb93b27d51e0eadacfa523de3ea3f3272a429e5a" namespace=k8s.io protocol=ttrpc version=3 May 14 17:59:36.541383 systemd[1]: Started cri-containerd-122e5a7f214ca7e7cd200410b52c547d78915d11009d4462bc7451fac156eec3.scope - libcontainer container 122e5a7f214ca7e7cd200410b52c547d78915d11009d4462bc7451fac156eec3. 
May 14 17:59:36.556954 containerd[1983]: time="2025-05-14T17:59:36.556868213Z" level=info msg="connecting to shim 6dd421eb719997fa4a5daa6ec7593fb7cbfd8758ed25b59b8d99f04a20a1c7b6" address="unix:///run/containerd/s/e9ba4bbd8c7528e8f88f96c6e232e0879b8b0421956eeea788a384a1644967f7" namespace=k8s.io protocol=ttrpc version=3 May 14 17:59:36.566145 kubelet[2895]: E0514 17:59:36.566070 2895 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-64?timeout=10s\": dial tcp 172.31.31.64:6443: connect: connection refused" interval="800ms" May 14 17:59:36.618818 systemd[1]: Started cri-containerd-3f20f3f80fe6e943579aac782ec14955b101d4242143d67b91e99015c277eec3.scope - libcontainer container 3f20f3f80fe6e943579aac782ec14955b101d4242143d67b91e99015c277eec3. May 14 17:59:36.621363 systemd[1]: Started cri-containerd-6dd421eb719997fa4a5daa6ec7593fb7cbfd8758ed25b59b8d99f04a20a1c7b6.scope - libcontainer container 6dd421eb719997fa4a5daa6ec7593fb7cbfd8758ed25b59b8d99f04a20a1c7b6. May 14 17:59:36.697428 containerd[1983]: time="2025-05-14T17:59:36.697268886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-64,Uid:01de09a1fd86c8a40d11da47689a6daa,Namespace:kube-system,Attempt:0,} returns sandbox id \"122e5a7f214ca7e7cd200410b52c547d78915d11009d4462bc7451fac156eec3\"" May 14 17:59:36.710160 containerd[1983]: time="2025-05-14T17:59:36.710094222Z" level=info msg="CreateContainer within sandbox \"122e5a7f214ca7e7cd200410b52c547d78915d11009d4462bc7451fac156eec3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 17:59:36.736607 containerd[1983]: time="2025-05-14T17:59:36.735954846Z" level=info msg="Container a7572f1325410eae15ac0c8bbb6a849dd7730ee34f5958d6265190cd60b5013e: CDI devices from CRI Config.CDIDevices: []" May 14 17:59:36.746989 containerd[1983]: time="2025-05-14T17:59:36.746912442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-64,Uid:c8d37826e3786501bb173a59eb30aa35,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f20f3f80fe6e943579aac782ec14955b101d4242143d67b91e99015c277eec3\"" May 14 17:59:36.750944 containerd[1983]: time="2025-05-14T17:59:36.750881562Z" level=info msg="CreateContainer within sandbox \"3f20f3f80fe6e943579aac782ec14955b101d4242143d67b91e99015c277eec3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 17:59:36.766276 containerd[1983]: time="2025-05-14T17:59:36.766209954Z" level=info msg="Container b7fbae9c48ca18075c2094823d96989ab3cbb9b8d352aa0f3c440015c22ee39b: CDI devices from CRI Config.CDIDevices: []" May 14 17:59:36.767773 containerd[1983]: time="2025-05-14T17:59:36.767695482Z" level=info msg="CreateContainer within sandbox \"122e5a7f214ca7e7cd200410b52c547d78915d11009d4462bc7451fac156eec3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a7572f1325410eae15ac0c8bbb6a849dd7730ee34f5958d6265190cd60b5013e\"" May 14 17:59:36.769313 containerd[1983]: time="2025-05-14T17:59:36.769260534Z" level=info msg="StartContainer for \"a7572f1325410eae15ac0c8bbb6a849dd7730ee34f5958d6265190cd60b5013e\"" May 14 17:59:36.773456 containerd[1983]: time="2025-05-14T17:59:36.773369070Z" level=info msg="connecting to shim a7572f1325410eae15ac0c8bbb6a849dd7730ee34f5958d6265190cd60b5013e" address="unix:///run/containerd/s/1b371fab1c58d02c9ba96e7b8095a8f1f4c9b0bdd580ebfc81bf78dbdcb20a68" protocol=ttrpc version=3 
May 14 17:59:36.779299 kubelet[2895]: I0514 17:59:36.779244 2895 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-64" May 14 17:59:36.779829 kubelet[2895]: E0514 17:59:36.779764 2895 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.64:6443/api/v1/nodes\": dial tcp 172.31.31.64:6443: connect: connection refused" node="ip-172-31-31-64" May 14 17:59:36.799453 containerd[1983]: time="2025-05-14T17:59:36.799378494Z" level=info msg="CreateContainer within sandbox \"3f20f3f80fe6e943579aac782ec14955b101d4242143d67b91e99015c277eec3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b7fbae9c48ca18075c2094823d96989ab3cbb9b8d352aa0f3c440015c22ee39b\"" May 14 17:59:36.801122 containerd[1983]: time="2025-05-14T17:59:36.801030858Z" level=info msg="StartContainer for \"b7fbae9c48ca18075c2094823d96989ab3cbb9b8d352aa0f3c440015c22ee39b\"" May 14 17:59:36.804438 containerd[1983]: time="2025-05-14T17:59:36.804366210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-64,Uid:fa3a4a61b9a7eb6bdecec8857f46b012,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dd421eb719997fa4a5daa6ec7593fb7cbfd8758ed25b59b8d99f04a20a1c7b6\"" May 14 17:59:36.804904 containerd[1983]: time="2025-05-14T17:59:36.804840858Z" level=info msg="connecting to shim b7fbae9c48ca18075c2094823d96989ab3cbb9b8d352aa0f3c440015c22ee39b" address="unix:///run/containerd/s/dd1c59231ebeeb4bc0fae1e4cb93b27d51e0eadacfa523de3ea3f3272a429e5a" protocol=ttrpc version=3 May 14 17:59:36.810630 containerd[1983]: time="2025-05-14T17:59:36.810458862Z" level=info msg="CreateContainer within sandbox \"6dd421eb719997fa4a5daa6ec7593fb7cbfd8758ed25b59b8d99f04a20a1c7b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 17:59:36.819232 kubelet[2895]: W0514 17:59:36.819066 2895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-64&limit=500&resourceVersion=0": dial tcp 172.31.31.64:6443: connect: connection refused May 14 17:59:36.819232 kubelet[2895]: E0514 17:59:36.819167 2895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-64&limit=500&resourceVersion=0\": dial tcp 172.31.31.64:6443: connect: connection refused" logger="UnhandledError" May 14 17:59:36.831350 containerd[1983]: time="2025-05-14T17:59:36.830282587Z" level=info msg="Container 167208dd0ab050b3d11986aa934ed059ac0de5496e67dad817e9c3c16a9ffab9: CDI devices from CRI Config.CDIDevices: []" May 14 17:59:36.838844 systemd[1]: Started cri-containerd-a7572f1325410eae15ac0c8bbb6a849dd7730ee34f5958d6265190cd60b5013e.scope - libcontainer container a7572f1325410eae15ac0c8bbb6a849dd7730ee34f5958d6265190cd60b5013e. 
May 14 17:59:36.845347 containerd[1983]: time="2025-05-14T17:59:36.845266411Z" level=info msg="CreateContainer within sandbox \"6dd421eb719997fa4a5daa6ec7593fb7cbfd8758ed25b59b8d99f04a20a1c7b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"167208dd0ab050b3d11986aa934ed059ac0de5496e67dad817e9c3c16a9ffab9\"" May 14 17:59:36.846096 containerd[1983]: time="2025-05-14T17:59:36.846007831Z" level=info msg="StartContainer for \"167208dd0ab050b3d11986aa934ed059ac0de5496e67dad817e9c3c16a9ffab9\"" May 14 17:59:36.849571 containerd[1983]: time="2025-05-14T17:59:36.848397211Z" level=info msg="connecting to shim 167208dd0ab050b3d11986aa934ed059ac0de5496e67dad817e9c3c16a9ffab9" address="unix:///run/containerd/s/e9ba4bbd8c7528e8f88f96c6e232e0879b8b0421956eeea788a384a1644967f7" protocol=ttrpc version=3 May 14 17:59:36.870940 systemd[1]: Started cri-containerd-b7fbae9c48ca18075c2094823d96989ab3cbb9b8d352aa0f3c440015c22ee39b.scope - libcontainer container b7fbae9c48ca18075c2094823d96989ab3cbb9b8d352aa0f3c440015c22ee39b. May 14 17:59:36.902819 systemd[1]: Started cri-containerd-167208dd0ab050b3d11986aa934ed059ac0de5496e67dad817e9c3c16a9ffab9.scope - libcontainer container 167208dd0ab050b3d11986aa934ed059ac0de5496e67dad817e9c3c16a9ffab9. May 14 17:59:37.029351 kubelet[2895]: W0514 17:59:37.029185 2895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.64:6443: connect: connection refused May 14 17:59:37.029351 kubelet[2895]: E0514 17:59:37.029284 2895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.64:6443: connect: connection refused" logger="UnhandledError" May 14 17:59:37.033177 containerd[1983]: time="2025-05-14T17:59:37.032782564Z" level=info msg="StartContainer for \"a7572f1325410eae15ac0c8bbb6a849dd7730ee34f5958d6265190cd60b5013e\" returns successfully" May 14 17:59:37.087108 containerd[1983]: time="2025-05-14T17:59:37.087026548Z" level=info msg="StartContainer for \"b7fbae9c48ca18075c2094823d96989ab3cbb9b8d352aa0f3c440015c22ee39b\" returns successfully" May 14 17:59:37.097659 containerd[1983]: time="2025-05-14T17:59:37.097407856Z" level=info msg="StartContainer for \"167208dd0ab050b3d11986aa934ed059ac0de5496e67dad817e9c3c16a9ffab9\" returns successfully" May 14 17:59:37.099492 kubelet[2895]: W0514 17:59:37.098844 2895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.64:6443: connect: connection refused May 14 17:59:37.101139 kubelet[2895]: E0514 17:59:37.099606 2895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.64:6443: connect: connection refused" logger="UnhandledError" May 14 17:59:37.127122 kubelet[2895]: W0514 17:59:37.125994 2895 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.31.31.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.64:6443: connect: connection refused May 14 17:59:37.127446 kubelet[2895]: E0514 17:59:37.127400 2895 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.64:6443: connect: connection refused" logger="UnhandledError" May 14 17:59:37.585548 kubelet[2895]: I0514 17:59:37.583409 2895 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-64" May 14 17:59:40.740609 kubelet[2895]: I0514 17:59:40.738497 2895 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-64" May 14 17:59:40.740609 kubelet[2895]: E0514 17:59:40.740587 2895 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-31-64\": node \"ip-172-31-31-64\" not found" May 14 17:59:40.873675 kubelet[2895]: E0514 17:59:40.873618 2895 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s" May 14 17:59:40.940600 kubelet[2895]: I0514 17:59:40.940540 2895 apiserver.go:52] "Watching apiserver" May 14 17:59:40.959859 kubelet[2895]: I0514 17:59:40.959797 2895 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 17:59:42.570188 systemd[1]: Reload requested from client PID 3160 ('systemctl') (unit session-7.scope)... May 14 17:59:42.570701 systemd[1]: Reloading... May 14 17:59:42.757572 zram_generator::config[3204]: No configuration found. May 14 17:59:42.951233 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 17:59:43.238353 systemd[1]: Reloading finished in 667 ms. May 14 17:59:43.271815 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 17:59:43.272619 kubelet[2895]: I0514 17:59:43.271983 2895 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 17:59:43.293344 systemd[1]: kubelet.service: Deactivated successfully. May 14 17:59:43.293836 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 17:59:43.293920 systemd[1]: kubelet.service: Consumed 1.637s CPU time, 115.5M memory peak. May 14 17:59:43.298407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 17:59:43.601500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 17:59:43.619130 (kubelet)[3264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 17:59:43.724706 kubelet[3264]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 17:59:43.725810 kubelet[3264]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 14 17:59:43.725810 kubelet[3264]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 17:59:43.726197 kubelet[3264]: I0514 17:59:43.725936 3264 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 17:59:43.745315 kubelet[3264]: I0514 17:59:43.745270 3264 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 17:59:43.746566 kubelet[3264]: I0514 17:59:43.745460 3264 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 17:59:43.746566 kubelet[3264]: I0514 17:59:43.745919 3264 server.go:929] "Client rotation is on, will bootstrap in background" May 14 17:59:43.748804 kubelet[3264]: I0514 17:59:43.748763 3264 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 17:59:43.754097 kubelet[3264]: I0514 17:59:43.754055 3264 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 17:59:43.766839 kubelet[3264]: I0514 17:59:43.766805 3264 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 17:59:43.772410 kubelet[3264]: I0514 17:59:43.772366 3264 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 17:59:43.772822 kubelet[3264]: I0514 17:59:43.772800 3264 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 17:59:43.773222 kubelet[3264]: I0514 17:59:43.773170 3264 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 17:59:43.773850 kubelet[3264]: I0514 17:59:43.773355 3264 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-31-64","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 17:59:43.774935 kubelet[3264]: I0514 17:59:43.774897 3264 topology_manager.go:138] "Creating topology manager with none policy" May 14 17:59:43.775158 kubelet[3264]: I0514 17:59:43.775136 3264 container_manager_linux.go:300] "Creating device plugin manager" May 14 17:59:43.775325 kubelet[3264]: I0514 17:59:43.775306 3264 state_mem.go:36] "Initialized new in-memory state store" May 14 17:59:43.775640 kubelet[3264]: I0514 17:59:43.775619 3264 kubelet.go:408] "Attempting to sync node with API server" May 14 17:59:43.775752 kubelet[3264]: I0514 17:59:43.775734 3264 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 17:59:43.775912 kubelet[3264]: I0514 17:59:43.775892 3264 kubelet.go:314] "Adding apiserver pod source" May 14 17:59:43.776012 kubelet[3264]: I0514 17:59:43.775994 3264 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 17:59:43.782538 kubelet[3264]: I0514 17:59:43.780377 3264 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 17:59:43.788541 kubelet[3264]: I0514 17:59:43.788042 3264 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 17:59:43.789061 kubelet[3264]: I0514 17:59:43.789035 3264 server.go:1269] "Started kubelet" May 14 17:59:43.793845 kubelet[3264]: I0514 17:59:43.793776 3264 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 17:59:43.795223 kubelet[3264]: I0514 17:59:43.795168 3264 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 17:59:43.797729 kubelet[3264]: I0514 17:59:43.797640 3264 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 17:59:43.798264 kubelet[3264]: I0514 17:59:43.798239 3264 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 17:59:43.798802 kubelet[3264]: I0514 17:59:43.798773 3264 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 17:59:43.804368 kubelet[3264]: I0514 17:59:43.804331 3264 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 17:59:43.806423 kubelet[3264]: I0514 17:59:43.806379 3264 server.go:460] "Adding debug handlers to kubelet server" May 14 17:59:43.841545 kubelet[3264]: E0514 17:59:43.806898 3264 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-64\" not found" May 14 17:59:43.841838 kubelet[3264]: I0514 17:59:43.811354 3264 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 17:59:43.841956 kubelet[3264]: I0514 17:59:43.811631 3264 reconciler.go:26] "Reconciler: start to sync state" May 14 17:59:43.860538 kubelet[3264]: I0514 17:59:43.859821 3264 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 17:59:43.861894 kubelet[3264]: I0514 17:59:43.861841 3264 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 17:59:43.861894 kubelet[3264]: I0514 17:59:43.861887 3264 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 17:59:43.862076 kubelet[3264]: I0514 17:59:43.861919 3264 kubelet.go:2321] "Starting kubelet main sync loop" May 14 17:59:43.862076 kubelet[3264]: E0514 17:59:43.861994 3264 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 17:59:43.898982 kubelet[3264]: E0514 17:59:43.897970 3264 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 17:59:43.905057 kubelet[3264]: I0514 17:59:43.904898 3264 factory.go:221] Registration of the containerd container factory successfully May 14 17:59:43.905057 kubelet[3264]: I0514 17:59:43.904936 3264 factory.go:221] Registration of the systemd container factory successfully May 14 17:59:43.905686 kubelet[3264]: I0514 17:59:43.905105 3264 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 17:59:43.962879 kubelet[3264]: E0514 17:59:43.962810 3264 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 17:59:44.010076 kubelet[3264]: I0514 17:59:44.010033 3264 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 17:59:44.010076 kubelet[3264]: I0514 17:59:44.010064 3264 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 17:59:44.010283 kubelet[3264]: I0514 17:59:44.010099 3264 state_mem.go:36] "Initialized new in-memory state store" May 14 17:59:44.011313 kubelet[3264]: I0514 17:59:44.010348 3264 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 17:59:44.011313 kubelet[3264]: I0514 17:59:44.010378 3264 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 17:59:44.011313 kubelet[3264]: I0514 17:59:44.010413 3264 policy_none.go:49] "None policy: Start" May 14 17:59:44.011565 kubelet[3264]: I0514 17:59:44.011456 3264 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 17:59:44.011565 kubelet[3264]: I0514 17:59:44.011489 3264 state_mem.go:35] "Initializing new in-memory state store" May 14 17:59:44.013260 kubelet[3264]: I0514 17:59:44.011924 
3264 state_mem.go:75] "Updated machine memory state" May 14 17:59:44.023703 kubelet[3264]: I0514 17:59:44.022791 3264 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 17:59:44.023703 kubelet[3264]: I0514 17:59:44.023056 3264 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 17:59:44.023703 kubelet[3264]: I0514 17:59:44.023073 3264 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 17:59:44.025016 kubelet[3264]: I0514 17:59:44.024986 3264 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 17:59:44.147671 kubelet[3264]: I0514 17:59:44.147278 3264 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-64" May 14 17:59:44.174006 kubelet[3264]: I0514 17:59:44.173933 3264 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-31-64" May 14 17:59:44.174151 kubelet[3264]: I0514 17:59:44.174101 3264 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-64" May 14 17:59:44.244132 kubelet[3264]: I0514 17:59:44.244067 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01de09a1fd86c8a40d11da47689a6daa-ca-certs\") pod \"kube-apiserver-ip-172-31-31-64\" (UID: \"01de09a1fd86c8a40d11da47689a6daa\") " pod="kube-system/kube-apiserver-ip-172-31-31-64" May 14 17:59:44.244263 kubelet[3264]: I0514 17:59:44.244136 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01de09a1fd86c8a40d11da47689a6daa-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-64\" (UID: \"01de09a1fd86c8a40d11da47689a6daa\") " pod="kube-system/kube-apiserver-ip-172-31-31-64" May 14 17:59:44.244263 kubelet[3264]: I0514 17:59:44.244191 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c8d37826e3786501bb173a59eb30aa35-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-64\" (UID: \"c8d37826e3786501bb173a59eb30aa35\") " pod="kube-system/kube-controller-manager-ip-172-31-31-64" May 14 17:59:44.244263 kubelet[3264]: I0514 17:59:44.244226 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c8d37826e3786501bb173a59eb30aa35-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-64\" (UID: \"c8d37826e3786501bb173a59eb30aa35\") " pod="kube-system/kube-controller-manager-ip-172-31-31-64" May 14 17:59:44.244455 kubelet[3264]: I0514 17:59:44.244265 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8d37826e3786501bb173a59eb30aa35-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-64\" (UID: \"c8d37826e3786501bb173a59eb30aa35\") " pod="kube-system/kube-controller-manager-ip-172-31-31-64" May 14 17:59:44.244455 kubelet[3264]: I0514 17:59:44.244321 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01de09a1fd86c8a40d11da47689a6daa-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-64\" (UID: \"01de09a1fd86c8a40d11da47689a6daa\") " 
pod="kube-system/kube-apiserver-ip-172-31-31-64" May 14 17:59:44.244455 kubelet[3264]: I0514 17:59:44.244360 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8d37826e3786501bb173a59eb30aa35-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-64\" (UID: \"c8d37826e3786501bb173a59eb30aa35\") " pod="kube-system/kube-controller-manager-ip-172-31-31-64" May 14 17:59:44.244455 kubelet[3264]: I0514 17:59:44.244394 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8d37826e3786501bb173a59eb30aa35-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-64\" (UID: \"c8d37826e3786501bb173a59eb30aa35\") " pod="kube-system/kube-controller-manager-ip-172-31-31-64" May 14 17:59:44.244455 kubelet[3264]: I0514 17:59:44.244429 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa3a4a61b9a7eb6bdecec8857f46b012-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-64\" (UID: \"fa3a4a61b9a7eb6bdecec8857f46b012\") " pod="kube-system/kube-scheduler-ip-172-31-31-64" May 14 17:59:44.778551 kubelet[3264]: I0514 17:59:44.778461 3264 apiserver.go:52] "Watching apiserver" May 14 17:59:44.842300 kubelet[3264]: I0514 17:59:44.842239 3264 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 17:59:44.982532 kubelet[3264]: E0514 17:59:44.979213 3264 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-64\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-64" May 14 17:59:45.064360 kubelet[3264]: I0514 17:59:45.064168 3264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-64" podStartSLOduration=1.064144271 podStartE2EDuration="1.064144271s" podCreationTimestamp="2025-05-14 17:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 17:59:45.051334811 +0000 UTC m=+1.422202232" watchObservedRunningTime="2025-05-14 17:59:45.064144271 +0000 UTC m=+1.435011680" May 14 17:59:45.065654 kubelet[3264]: I0514 17:59:45.065002 3264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-64" podStartSLOduration=1.064980803 podStartE2EDuration="1.064980803s" podCreationTimestamp="2025-05-14 17:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 17:59:45.019945811 +0000 UTC m=+1.390813244" watchObservedRunningTime="2025-05-14 17:59:45.064980803 +0000 UTC m=+1.435848212" May 14 17:59:47.912244 kubelet[3264]: I0514 17:59:47.912142 3264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-64" podStartSLOduration=3.91211907 podStartE2EDuration="3.91211907s" podCreationTimestamp="2025-05-14 17:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 17:59:45.10535208 +0000 UTC m=+1.476219501" watchObservedRunningTime="2025-05-14 17:59:47.91211907 +0000 UTC m=+4.282986491" May 14 17:59:49.166686 update_engine[1970]: I20250514 17:59:49.165637 1970 update_attempter.cc:509] 
Updating boot flags... May 14 17:59:50.310774 kubelet[3264]: I0514 17:59:50.310722 3264 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 17:59:50.317047 containerd[1983]: time="2025-05-14T17:59:50.316959618Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 17:59:50.324819 kubelet[3264]: I0514 17:59:50.324762 3264 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 17:59:50.883622 sudo[2333]: pam_unix(sudo:session): session closed for user root May 14 17:59:50.908568 sshd[2332]: Connection closed by 139.178.89.65 port 50776 May 14 17:59:50.908833 sshd-session[2330]: pam_unix(sshd:session): session closed for user core May 14 17:59:50.915949 systemd[1]: sshd@6-172.31.31.64:22-139.178.89.65:50776.service: Deactivated successfully. May 14 17:59:50.920442 systemd[1]: session-7.scope: Deactivated successfully. May 14 17:59:50.921289 systemd[1]: session-7.scope: Consumed 13.197s CPU time, 236.3M memory peak. May 14 17:59:50.925308 systemd-logind[1969]: Session 7 logged out. Waiting for processes to exit. May 14 17:59:50.929626 systemd-logind[1969]: Removed session 7. May 14 17:59:51.119440 systemd[1]: Created slice kubepods-besteffort-pod57d0e022_74ba_4086_9d4b_d1af35cb1940.slice - libcontainer container kubepods-besteffort-pod57d0e022_74ba_4086_9d4b_d1af35cb1940.slice. May 14 17:59:51.298570 kubelet[3264]: I0514 17:59:51.297872 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57d0e022-74ba-4086-9d4b-d1af35cb1940-xtables-lock\") pod \"kube-proxy-zkd6k\" (UID: \"57d0e022-74ba-4086-9d4b-d1af35cb1940\") " pod="kube-system/kube-proxy-zkd6k" May 14 17:59:51.299166 kubelet[3264]: I0514 17:59:51.299043 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/57d0e022-74ba-4086-9d4b-d1af35cb1940-kube-proxy\") pod \"kube-proxy-zkd6k\" (UID: \"57d0e022-74ba-4086-9d4b-d1af35cb1940\") " pod="kube-system/kube-proxy-zkd6k" May 14 17:59:51.299355 kubelet[3264]: I0514 17:59:51.299137 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57d0e022-74ba-4086-9d4b-d1af35cb1940-lib-modules\") pod \"kube-proxy-zkd6k\" (UID: \"57d0e022-74ba-4086-9d4b-d1af35cb1940\") " pod="kube-system/kube-proxy-zkd6k" May 14 17:59:51.299476 kubelet[3264]: I0514 17:59:51.299321 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4zf7\" (UniqueName: \"kubernetes.io/projected/57d0e022-74ba-4086-9d4b-d1af35cb1940-kube-api-access-n4zf7\") pod \"kube-proxy-zkd6k\" (UID: \"57d0e022-74ba-4086-9d4b-d1af35cb1940\") " pod="kube-system/kube-proxy-zkd6k" May 14 17:59:51.370618 systemd[1]: Created slice kubepods-besteffort-podff5deede_b8f5_4dc0_a311_22ad123d2502.slice - libcontainer container kubepods-besteffort-podff5deede_b8f5_4dc0_a311_22ad123d2502.slice. 
May 14 17:59:51.437850 containerd[1983]: time="2025-05-14T17:59:51.437787991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zkd6k,Uid:57d0e022-74ba-4086-9d4b-d1af35cb1940,Namespace:kube-system,Attempt:0,}" May 14 17:59:51.491069 containerd[1983]: time="2025-05-14T17:59:51.490918687Z" level=info msg="connecting to shim 5c3398e1258ef488368a22d2ca1660ad82ee834ca12028db3998925d9096f30b" address="unix:///run/containerd/s/1ec1475ecead2071bcb2042e73e0b97b6a019fdf745d000f545bcb3950ee2d41" namespace=k8s.io protocol=ttrpc version=3 May 14 17:59:51.501394 kubelet[3264]: I0514 17:59:51.501065 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ff5deede-b8f5-4dc0-a311-22ad123d2502-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-98mz4\" (UID: \"ff5deede-b8f5-4dc0-a311-22ad123d2502\") " pod="tigera-operator/tigera-operator-6f6897fdc5-98mz4" May 14 17:59:51.501394 kubelet[3264]: I0514 17:59:51.501187 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhlss\" (UniqueName: \"kubernetes.io/projected/ff5deede-b8f5-4dc0-a311-22ad123d2502-kube-api-access-bhlss\") pod \"tigera-operator-6f6897fdc5-98mz4\" (UID: \"ff5deede-b8f5-4dc0-a311-22ad123d2502\") " pod="tigera-operator/tigera-operator-6f6897fdc5-98mz4" May 14 17:59:51.572257 systemd[1]: Started cri-containerd-5c3398e1258ef488368a22d2ca1660ad82ee834ca12028db3998925d9096f30b.scope - libcontainer container 5c3398e1258ef488368a22d2ca1660ad82ee834ca12028db3998925d9096f30b. May 14 17:59:51.654673 containerd[1983]: time="2025-05-14T17:59:51.654612620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zkd6k,Uid:57d0e022-74ba-4086-9d4b-d1af35cb1940,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c3398e1258ef488368a22d2ca1660ad82ee834ca12028db3998925d9096f30b\"" May 14 17:59:51.672066 containerd[1983]: time="2025-05-14T17:59:51.669969824Z" level=info msg="CreateContainer within sandbox \"5c3398e1258ef488368a22d2ca1660ad82ee834ca12028db3998925d9096f30b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 17:59:51.679877 containerd[1983]: time="2025-05-14T17:59:51.679813832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-98mz4,Uid:ff5deede-b8f5-4dc0-a311-22ad123d2502,Namespace:tigera-operator,Attempt:0,}" May 14 17:59:51.692338 containerd[1983]: time="2025-05-14T17:59:51.692279264Z" level=info msg="Container d20cb7a97d8907070d7d878054f1953a26755dc9d9eda0777024339023b5d380: CDI devices from CRI Config.CDIDevices: []" May 14 17:59:51.719508 containerd[1983]: time="2025-05-14T17:59:51.719340429Z" level=info msg="CreateContainer within sandbox \"5c3398e1258ef488368a22d2ca1660ad82ee834ca12028db3998925d9096f30b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d20cb7a97d8907070d7d878054f1953a26755dc9d9eda0777024339023b5d380\"" May 14 17:59:51.723107 containerd[1983]: time="2025-05-14T17:59:51.723051393Z" level=info msg="StartContainer for \"d20cb7a97d8907070d7d878054f1953a26755dc9d9eda0777024339023b5d380\"" May 14 17:59:51.731730 containerd[1983]: time="2025-05-14T17:59:51.731659737Z" level=info msg="connecting to shim d20cb7a97d8907070d7d878054f1953a26755dc9d9eda0777024339023b5d380" address="unix:///run/containerd/s/1ec1475ecead2071bcb2042e73e0b97b6a019fdf745d000f545bcb3950ee2d41" protocol=ttrpc version=3 May 14 17:59:51.738892 containerd[1983]: 
time="2025-05-14T17:59:51.738810117Z" level=info msg="connecting to shim ad99a7bb223b4a45b517763f0157638bd47c9633ca8d31248aeba7604bac841b" address="unix:///run/containerd/s/968e0b086356b1bdef9c5a5c2c87a449fa92f24e6bcf2e2336dd87a46687e202" namespace=k8s.io protocol=ttrpc version=3 May 14 17:59:51.767146 systemd[1]: Started cri-containerd-d20cb7a97d8907070d7d878054f1953a26755dc9d9eda0777024339023b5d380.scope - libcontainer container d20cb7a97d8907070d7d878054f1953a26755dc9d9eda0777024339023b5d380. May 14 17:59:51.799818 systemd[1]: Started cri-containerd-ad99a7bb223b4a45b517763f0157638bd47c9633ca8d31248aeba7604bac841b.scope - libcontainer container ad99a7bb223b4a45b517763f0157638bd47c9633ca8d31248aeba7604bac841b. May 14 17:59:51.906965 containerd[1983]: time="2025-05-14T17:59:51.906417201Z" level=info msg="StartContainer for \"d20cb7a97d8907070d7d878054f1953a26755dc9d9eda0777024339023b5d380\" returns successfully" May 14 17:59:51.922676 containerd[1983]: time="2025-05-14T17:59:51.922612978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-98mz4,Uid:ff5deede-b8f5-4dc0-a311-22ad123d2502,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ad99a7bb223b4a45b517763f0157638bd47c9633ca8d31248aeba7604bac841b\"" May 14 17:59:51.927673 containerd[1983]: time="2025-05-14T17:59:51.927487714Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 14 17:59:52.351128 kubelet[3264]: I0514 17:59:52.351031 3264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zkd6k" podStartSLOduration=1.351006008 podStartE2EDuration="1.351006008s" podCreationTimestamp="2025-05-14 17:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 17:59:52.001228542 +0000 UTC m=+8.372095975" watchObservedRunningTime="2025-05-14 17:59:52.351006008 +0000 UTC m=+8.721873417" May 14 17:59:53.241091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1058255744.mount: Deactivated successfully. 
May 14 17:59:53.991831 containerd[1983]: time="2025-05-14T17:59:53.991777968Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:53.994221 containerd[1983]: time="2025-05-14T17:59:53.993487560Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 14 17:59:53.994221 containerd[1983]: time="2025-05-14T17:59:53.993630792Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:53.997497 containerd[1983]: time="2025-05-14T17:59:53.997447344Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 17:59:53.998772 containerd[1983]: time="2025-05-14T17:59:53.998719308Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 2.071094302s" May 14 17:59:53.998873 containerd[1983]: time="2025-05-14T17:59:53.998772096Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 14 17:59:54.004839 containerd[1983]: time="2025-05-14T17:59:54.004370732Z" level=info msg="CreateContainer within sandbox \"ad99a7bb223b4a45b517763f0157638bd47c9633ca8d31248aeba7604bac841b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 17:59:54.018545 containerd[1983]: time="2025-05-14T17:59:54.016979564Z" level=info msg="Container 016f2a60a392fe0d3575eb9b7b4725ac3ed843201619e9b0b0ddd84f64cb50a2: CDI devices from CRI Config.CDIDevices: []" May 14 17:59:54.026209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3697671302.mount: Deactivated successfully. May 14 17:59:54.036916 containerd[1983]: time="2025-05-14T17:59:54.036865976Z" level=info msg="CreateContainer within sandbox \"ad99a7bb223b4a45b517763f0157638bd47c9633ca8d31248aeba7604bac841b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"016f2a60a392fe0d3575eb9b7b4725ac3ed843201619e9b0b0ddd84f64cb50a2\"" May 14 17:59:54.038847 containerd[1983]: time="2025-05-14T17:59:54.038124800Z" level=info msg="StartContainer for \"016f2a60a392fe0d3575eb9b7b4725ac3ed843201619e9b0b0ddd84f64cb50a2\"" May 14 17:59:54.042455 containerd[1983]: time="2025-05-14T17:59:54.042385088Z" level=info msg="connecting to shim 016f2a60a392fe0d3575eb9b7b4725ac3ed843201619e9b0b0ddd84f64cb50a2" address="unix:///run/containerd/s/968e0b086356b1bdef9c5a5c2c87a449fa92f24e6bcf2e2336dd87a46687e202" protocol=ttrpc version=3 May 14 17:59:54.085927 systemd[1]: Started cri-containerd-016f2a60a392fe0d3575eb9b7b4725ac3ed843201619e9b0b0ddd84f64cb50a2.scope - libcontainer container 016f2a60a392fe0d3575eb9b7b4725ac3ed843201619e9b0b0ddd84f64cb50a2. 
May 14 17:59:54.161491 containerd[1983]: time="2025-05-14T17:59:54.161342037Z" level=info msg="StartContainer for \"016f2a60a392fe0d3575eb9b7b4725ac3ed843201619e9b0b0ddd84f64cb50a2\" returns successfully" May 14 17:59:55.009671 kubelet[3264]: I0514 17:59:55.009579 3264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-98mz4" podStartSLOduration=1.934522571 podStartE2EDuration="4.009558537s" podCreationTimestamp="2025-05-14 17:59:51 +0000 UTC" firstStartedPulling="2025-05-14 17:59:51.925936894 +0000 UTC m=+8.296804303" lastFinishedPulling="2025-05-14 17:59:54.00097286 +0000 UTC m=+10.371840269" observedRunningTime="2025-05-14 17:59:55.008583129 +0000 UTC m=+11.379450550" watchObservedRunningTime="2025-05-14 17:59:55.009558537 +0000 UTC m=+11.380425958" May 14 17:59:58.966425 systemd[1]: Created slice kubepods-besteffort-pod2a302da7_267c_44da_b393_f22460b28dbb.slice - libcontainer container kubepods-besteffort-pod2a302da7_267c_44da_b393_f22460b28dbb.slice. May 14 17:59:59.055166 kubelet[3264]: I0514 17:59:59.054887 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a302da7-267c-44da-b393-f22460b28dbb-tigera-ca-bundle\") pod \"calico-typha-f79bff7d-v84hv\" (UID: \"2a302da7-267c-44da-b393-f22460b28dbb\") " pod="calico-system/calico-typha-f79bff7d-v84hv" May 14 17:59:59.055166 kubelet[3264]: I0514 17:59:59.055109 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9rvl\" (UniqueName: \"kubernetes.io/projected/2a302da7-267c-44da-b393-f22460b28dbb-kube-api-access-k9rvl\") pod \"calico-typha-f79bff7d-v84hv\" (UID: \"2a302da7-267c-44da-b393-f22460b28dbb\") " pod="calico-system/calico-typha-f79bff7d-v84hv" May 14 17:59:59.055166 kubelet[3264]: I0514 17:59:59.055190 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2a302da7-267c-44da-b393-f22460b28dbb-typha-certs\") pod \"calico-typha-f79bff7d-v84hv\" (UID: \"2a302da7-267c-44da-b393-f22460b28dbb\") " pod="calico-system/calico-typha-f79bff7d-v84hv" May 14 17:59:59.167319 systemd[1]: Created slice kubepods-besteffort-pod8c4acc98_eec7_4b8d_8f02_137af25887f5.slice - libcontainer container kubepods-besteffort-pod8c4acc98_eec7_4b8d_8f02_137af25887f5.slice. 
May 14 17:59:59.258290 kubelet[3264]: I0514 17:59:59.258189 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8c4acc98-eec7-4b8d-8f02-137af25887f5-node-certs\") pod \"calico-node-tzstj\" (UID: \"8c4acc98-eec7-4b8d-8f02-137af25887f5\") " pod="calico-system/calico-node-tzstj" May 14 17:59:59.258290 kubelet[3264]: I0514 17:59:59.258257 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c4acc98-eec7-4b8d-8f02-137af25887f5-lib-modules\") pod \"calico-node-tzstj\" (UID: \"8c4acc98-eec7-4b8d-8f02-137af25887f5\") " pod="calico-system/calico-node-tzstj" May 14 17:59:59.258947 kubelet[3264]: I0514 17:59:59.258569 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8c4acc98-eec7-4b8d-8f02-137af25887f5-flexvol-driver-host\") pod \"calico-node-tzstj\" (UID: \"8c4acc98-eec7-4b8d-8f02-137af25887f5\") " pod="calico-system/calico-node-tzstj" May 14 17:59:59.259110 kubelet[3264]: I0514 17:59:59.258785 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c4acc98-eec7-4b8d-8f02-137af25887f5-tigera-ca-bundle\") pod \"calico-node-tzstj\" (UID: \"8c4acc98-eec7-4b8d-8f02-137af25887f5\") " pod="calico-system/calico-node-tzstj" May 14 17:59:59.259110 kubelet[3264]: I0514 17:59:59.259072 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8c4acc98-eec7-4b8d-8f02-137af25887f5-policysync\") pod \"calico-node-tzstj\" (UID: \"8c4acc98-eec7-4b8d-8f02-137af25887f5\") " pod="calico-system/calico-node-tzstj" May 14 17:59:59.259442 kubelet[3264]: I0514 17:59:59.259310 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8c4acc98-eec7-4b8d-8f02-137af25887f5-var-run-calico\") pod \"calico-node-tzstj\" (UID: \"8c4acc98-eec7-4b8d-8f02-137af25887f5\") " pod="calico-system/calico-node-tzstj" May 14 17:59:59.259442 kubelet[3264]: I0514 17:59:59.259383 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8c4acc98-eec7-4b8d-8f02-137af25887f5-cni-bin-dir\") pod \"calico-node-tzstj\" (UID: \"8c4acc98-eec7-4b8d-8f02-137af25887f5\") " pod="calico-system/calico-node-tzstj" May 14 17:59:59.259719 kubelet[3264]: I0514 17:59:59.259627 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8c4acc98-eec7-4b8d-8f02-137af25887f5-cni-net-dir\") pod \"calico-node-tzstj\" (UID: \"8c4acc98-eec7-4b8d-8f02-137af25887f5\") " pod="calico-system/calico-node-tzstj" May 14 17:59:59.259932 kubelet[3264]: I0514 17:59:59.259794 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhh4z\" (UniqueName: \"kubernetes.io/projected/8c4acc98-eec7-4b8d-8f02-137af25887f5-kube-api-access-rhh4z\") pod \"calico-node-tzstj\" (UID: \"8c4acc98-eec7-4b8d-8f02-137af25887f5\") " pod="calico-system/calico-node-tzstj" May 14 17:59:59.260099 kubelet[3264]: I0514 17:59:59.260070 3264 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c4acc98-eec7-4b8d-8f02-137af25887f5-xtables-lock\") pod \"calico-node-tzstj\" (UID: \"8c4acc98-eec7-4b8d-8f02-137af25887f5\") " pod="calico-system/calico-node-tzstj" May 14 17:59:59.260444 kubelet[3264]: I0514 17:59:59.260222 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c4acc98-eec7-4b8d-8f02-137af25887f5-var-lib-calico\") pod \"calico-node-tzstj\" (UID: \"8c4acc98-eec7-4b8d-8f02-137af25887f5\") " pod="calico-system/calico-node-tzstj" May 14 17:59:59.260621 kubelet[3264]: I0514 17:59:59.260407 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8c4acc98-eec7-4b8d-8f02-137af25887f5-cni-log-dir\") pod \"calico-node-tzstj\" (UID: \"8c4acc98-eec7-4b8d-8f02-137af25887f5\") " pod="calico-system/calico-node-tzstj" May 14 17:59:59.276735 containerd[1983]: time="2025-05-14T17:59:59.276640466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f79bff7d-v84hv,Uid:2a302da7-267c-44da-b393-f22460b28dbb,Namespace:calico-system,Attempt:0,}" May 14 17:59:59.319551 containerd[1983]: time="2025-05-14T17:59:59.319188818Z" level=info msg="connecting to shim 4b277b23757caf4219e94a5408fd2638e0c9c84bf651a656254a0d5021ddc47d" address="unix:///run/containerd/s/333d3ad1055a08b905a263c4e45f9aa934840018b3e6016930fd67260f136610" namespace=k8s.io protocol=ttrpc version=3 May 14 17:59:59.369773 kubelet[3264]: E0514 17:59:59.369710 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.369938 kubelet[3264]: W0514 17:59:59.369783 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.370886 kubelet[3264]: E0514 17:59:59.370786 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.371350 kubelet[3264]: E0514 17:59:59.370940 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.371350 kubelet[3264]: W0514 17:59:59.370862 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.371873 kubelet[3264]: E0514 17:59:59.371167 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 17:59:59.373110 kubelet[3264]: E0514 17:59:59.372973 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.373662 kubelet[3264]: W0514 17:59:59.373484 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.374572 kubelet[3264]: E0514 17:59:59.374218 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.378370 kubelet[3264]: E0514 17:59:59.378170 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.378370 kubelet[3264]: W0514 17:59:59.378207 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.378370 kubelet[3264]: E0514 17:59:59.378241 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.382633 kubelet[3264]: E0514 17:59:59.382411 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.382633 kubelet[3264]: W0514 17:59:59.382450 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.382633 kubelet[3264]: E0514 17:59:59.382483 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.386581 kubelet[3264]: E0514 17:59:59.385700 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.386581 kubelet[3264]: W0514 17:59:59.385739 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.386581 kubelet[3264]: E0514 17:59:59.385774 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.397429 kubelet[3264]: E0514 17:59:59.395960 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.397429 kubelet[3264]: W0514 17:59:59.396001 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.397429 kubelet[3264]: E0514 17:59:59.396035 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 17:59:59.402940 systemd[1]: Started cri-containerd-4b277b23757caf4219e94a5408fd2638e0c9c84bf651a656254a0d5021ddc47d.scope - libcontainer container 4b277b23757caf4219e94a5408fd2638e0c9c84bf651a656254a0d5021ddc47d. May 14 17:59:59.424742 kubelet[3264]: E0514 17:59:59.424615 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.425095 kubelet[3264]: W0514 17:59:59.424900 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.425095 kubelet[3264]: E0514 17:59:59.424942 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.464557 kubelet[3264]: E0514 17:59:59.464169 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qz84k" podUID="055a695d-11dc-4f62-b437-308224c2bdff" May 14 17:59:59.493149 containerd[1983]: time="2025-05-14T17:59:59.493089783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tzstj,Uid:8c4acc98-eec7-4b8d-8f02-137af25887f5,Namespace:calico-system,Attempt:0,}" May 14 17:59:59.538846 containerd[1983]: time="2025-05-14T17:59:59.538148379Z" level=info msg="connecting to shim 210ec751e9f4296e334cfabd80fbc3a70db8b674d302b4d3a381d98c70d665ea" address="unix:///run/containerd/s/6b563b5440c1d977556238cacc77151418280c1bb4a23b975e63af40ea654a74" namespace=k8s.io protocol=ttrpc version=3 May 14 17:59:59.565833 kubelet[3264]: E0514 17:59:59.565796 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.565833 kubelet[3264]: W0514 17:59:59.565883 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.565833 kubelet[3264]: E0514 17:59:59.565917 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.568445 kubelet[3264]: E0514 17:59:59.568304 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.568445 kubelet[3264]: W0514 17:59:59.568366 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.568445 kubelet[3264]: E0514 17:59:59.568401 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 17:59:59.571009 kubelet[3264]: E0514 17:59:59.570758 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.572558 kubelet[3264]: W0514 17:59:59.571649 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.572558 kubelet[3264]: E0514 17:59:59.571696 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.574198 kubelet[3264]: E0514 17:59:59.574050 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.574749 kubelet[3264]: W0514 17:59:59.574390 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.575455 kubelet[3264]: E0514 17:59:59.574428 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.581695 kubelet[3264]: E0514 17:59:59.580721 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.581695 kubelet[3264]: W0514 17:59:59.580759 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.581695 kubelet[3264]: E0514 17:59:59.581571 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.583800 kubelet[3264]: E0514 17:59:59.583673 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.584958 kubelet[3264]: W0514 17:59:59.584583 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.585552 kubelet[3264]: E0514 17:59:59.585136 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.589140 kubelet[3264]: E0514 17:59:59.588589 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.590408 kubelet[3264]: W0514 17:59:59.589463 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.593432 kubelet[3264]: E0514 17:59:59.592868 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 17:59:59.596550 kubelet[3264]: E0514 17:59:59.596424 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.596550 kubelet[3264]: W0514 17:59:59.596482 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.596837 kubelet[3264]: E0514 17:59:59.596755 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.603798 kubelet[3264]: E0514 17:59:59.603759 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.604352 kubelet[3264]: W0514 17:59:59.603956 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.604352 kubelet[3264]: E0514 17:59:59.603995 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.606113 kubelet[3264]: E0514 17:59:59.605876 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.606113 kubelet[3264]: W0514 17:59:59.605911 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.606113 kubelet[3264]: E0514 17:59:59.605944 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.606701 kubelet[3264]: E0514 17:59:59.606606 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.607009 kubelet[3264]: W0514 17:59:59.606870 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.607553 kubelet[3264]: E0514 17:59:59.606911 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.609307 kubelet[3264]: E0514 17:59:59.608201 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.608903 systemd[1]: Started cri-containerd-210ec751e9f4296e334cfabd80fbc3a70db8b674d302b4d3a381d98c70d665ea.scope - libcontainer container 210ec751e9f4296e334cfabd80fbc3a70db8b674d302b4d3a381d98c70d665ea. 
May 14 17:59:59.610990 kubelet[3264]: W0514 17:59:59.610854 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.610990 kubelet[3264]: E0514 17:59:59.610944 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.615733 kubelet[3264]: E0514 17:59:59.615581 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.615733 kubelet[3264]: W0514 17:59:59.615621 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.615733 kubelet[3264]: E0514 17:59:59.615679 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.617746 kubelet[3264]: E0514 17:59:59.617624 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.617746 kubelet[3264]: W0514 17:59:59.617677 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.618282 kubelet[3264]: E0514 17:59:59.618003 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.619023 kubelet[3264]: E0514 17:59:59.618828 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.619023 kubelet[3264]: W0514 17:59:59.618857 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.619023 kubelet[3264]: E0514 17:59:59.618885 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.619709 kubelet[3264]: E0514 17:59:59.619597 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.619709 kubelet[3264]: W0514 17:59:59.619640 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.619709 kubelet[3264]: E0514 17:59:59.619671 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 17:59:59.620805 kubelet[3264]: E0514 17:59:59.620576 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.620805 kubelet[3264]: W0514 17:59:59.620606 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.620805 kubelet[3264]: E0514 17:59:59.620632 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.621863 kubelet[3264]: E0514 17:59:59.621682 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.621863 kubelet[3264]: W0514 17:59:59.621715 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.621863 kubelet[3264]: E0514 17:59:59.621765 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.622455 kubelet[3264]: E0514 17:59:59.622343 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.622455 kubelet[3264]: W0514 17:59:59.622366 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.622455 kubelet[3264]: E0514 17:59:59.622385 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.623162 kubelet[3264]: E0514 17:59:59.623044 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.623162 kubelet[3264]: W0514 17:59:59.623073 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.623162 kubelet[3264]: E0514 17:59:59.623100 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.624206 kubelet[3264]: E0514 17:59:59.624076 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.624206 kubelet[3264]: W0514 17:59:59.624102 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.624206 kubelet[3264]: E0514 17:59:59.624163 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 17:59:59.624640 kubelet[3264]: I0514 17:59:59.624469 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/055a695d-11dc-4f62-b437-308224c2bdff-kubelet-dir\") pod \"csi-node-driver-qz84k\" (UID: \"055a695d-11dc-4f62-b437-308224c2bdff\") " pod="calico-system/csi-node-driver-qz84k" May 14 17:59:59.625144 kubelet[3264]: E0514 17:59:59.625056 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.625144 kubelet[3264]: W0514 17:59:59.625116 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.625402 kubelet[3264]: E0514 17:59:59.625279 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.625739 kubelet[3264]: I0514 17:59:59.625668 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/055a695d-11dc-4f62-b437-308224c2bdff-socket-dir\") pod \"csi-node-driver-qz84k\" (UID: \"055a695d-11dc-4f62-b437-308224c2bdff\") " pod="calico-system/csi-node-driver-qz84k" May 14 17:59:59.625926 kubelet[3264]: E0514 17:59:59.625892 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.626092 kubelet[3264]: W0514 17:59:59.626056 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.626278 kubelet[3264]: E0514 17:59:59.626199 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.626796 kubelet[3264]: E0514 17:59:59.626742 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.626796 kubelet[3264]: W0514 17:59:59.626766 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.627116 kubelet[3264]: E0514 17:59:59.626992 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.627565 kubelet[3264]: E0514 17:59:59.627491 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.627861 kubelet[3264]: W0514 17:59:59.627685 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.627861 kubelet[3264]: E0514 17:59:59.627727 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 17:59:59.627861 kubelet[3264]: I0514 17:59:59.627770 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/055a695d-11dc-4f62-b437-308224c2bdff-registration-dir\") pod \"csi-node-driver-qz84k\" (UID: \"055a695d-11dc-4f62-b437-308224c2bdff\") " pod="calico-system/csi-node-driver-qz84k" May 14 17:59:59.628549 kubelet[3264]: E0514 17:59:59.628421 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.628549 kubelet[3264]: W0514 17:59:59.628447 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.629363 kubelet[3264]: E0514 17:59:59.629327 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.630719 kubelet[3264]: E0514 17:59:59.630605 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.630719 kubelet[3264]: W0514 17:59:59.630642 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.631163 kubelet[3264]: E0514 17:59:59.630937 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.631163 kubelet[3264]: I0514 17:59:59.630993 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcs5m\" (UniqueName: \"kubernetes.io/projected/055a695d-11dc-4f62-b437-308224c2bdff-kube-api-access-wcs5m\") pod \"csi-node-driver-qz84k\" (UID: \"055a695d-11dc-4f62-b437-308224c2bdff\") " pod="calico-system/csi-node-driver-qz84k" May 14 17:59:59.631656 kubelet[3264]: E0514 17:59:59.631588 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.631656 kubelet[3264]: W0514 17:59:59.631621 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.632080 kubelet[3264]: E0514 17:59:59.631907 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.632451 kubelet[3264]: E0514 17:59:59.632367 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.632451 kubelet[3264]: W0514 17:59:59.632393 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.632451 kubelet[3264]: E0514 17:59:59.632421 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 17:59:59.633158 kubelet[3264]: E0514 17:59:59.633101 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.633158 kubelet[3264]: W0514 17:59:59.633127 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.634140 kubelet[3264]: E0514 17:59:59.633686 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.634493 kubelet[3264]: E0514 17:59:59.634444 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.634689 kubelet[3264]: W0514 17:59:59.634564 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.634913 kubelet[3264]: E0514 17:59:59.634749 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.635276 kubelet[3264]: I0514 17:59:59.635231 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/055a695d-11dc-4f62-b437-308224c2bdff-varrun\") pod \"csi-node-driver-qz84k\" (UID: \"055a695d-11dc-4f62-b437-308224c2bdff\") " pod="calico-system/csi-node-driver-qz84k" May 14 17:59:59.635614 kubelet[3264]: E0514 17:59:59.635513 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.635614 kubelet[3264]: W0514 17:59:59.635561 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.635614 kubelet[3264]: E0514 17:59:59.635585 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.636996 kubelet[3264]: E0514 17:59:59.636929 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.636996 kubelet[3264]: W0514 17:59:59.636959 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.637372 kubelet[3264]: E0514 17:59:59.637229 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 17:59:59.637827 kubelet[3264]: E0514 17:59:59.637741 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.637827 kubelet[3264]: W0514 17:59:59.637768 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.637827 kubelet[3264]: E0514 17:59:59.637795 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.638551 kubelet[3264]: E0514 17:59:59.638458 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.638551 kubelet[3264]: W0514 17:59:59.638485 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.638824 kubelet[3264]: E0514 17:59:59.638511 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.714933 containerd[1983]: time="2025-05-14T17:59:59.714848680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f79bff7d-v84hv,Uid:2a302da7-267c-44da-b393-f22460b28dbb,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b277b23757caf4219e94a5408fd2638e0c9c84bf651a656254a0d5021ddc47d\"" May 14 17:59:59.720680 containerd[1983]: time="2025-05-14T17:59:59.720608980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 17:59:59.739499 kubelet[3264]: E0514 17:59:59.739451 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.739499 kubelet[3264]: W0514 17:59:59.739488 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.739499 kubelet[3264]: E0514 17:59:59.739539 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.740492 kubelet[3264]: E0514 17:59:59.739858 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.740492 kubelet[3264]: W0514 17:59:59.739882 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.740492 kubelet[3264]: E0514 17:59:59.739904 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 17:59:59.740492 kubelet[3264]: E0514 17:59:59.740206 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.740492 kubelet[3264]: W0514 17:59:59.740223 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.740492 kubelet[3264]: E0514 17:59:59.740245 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.740492 kubelet[3264]: E0514 17:59:59.740540 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.740492 kubelet[3264]: W0514 17:59:59.740581 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.740492 kubelet[3264]: E0514 17:59:59.740615 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.743085 kubelet[3264]: E0514 17:59:59.740890 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.743085 kubelet[3264]: W0514 17:59:59.740907 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.743085 kubelet[3264]: E0514 17:59:59.740942 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.743085 kubelet[3264]: E0514 17:59:59.741250 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.743085 kubelet[3264]: W0514 17:59:59.741268 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.743085 kubelet[3264]: E0514 17:59:59.741301 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.743085 kubelet[3264]: E0514 17:59:59.741618 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.743085 kubelet[3264]: W0514 17:59:59.741638 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.743085 kubelet[3264]: E0514 17:59:59.741710 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 17:59:59.743085 kubelet[3264]: E0514 17:59:59.741887 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.745425 kubelet[3264]: W0514 17:59:59.741903 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.745425 kubelet[3264]: E0514 17:59:59.742127 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.745425 kubelet[3264]: E0514 17:59:59.742182 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.745425 kubelet[3264]: W0514 17:59:59.742196 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.745425 kubelet[3264]: E0514 17:59:59.742420 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.745425 kubelet[3264]: W0514 17:59:59.742434 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.745425 kubelet[3264]: E0514 17:59:59.742699 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.745425 kubelet[3264]: W0514 17:59:59.742715 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.745425 kubelet[3264]: E0514 17:59:59.742807 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.745425 kubelet[3264]: E0514 17:59:59.742845 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.748824 kubelet[3264]: E0514 17:59:59.742921 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.748824 kubelet[3264]: E0514 17:59:59.744419 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.748824 kubelet[3264]: W0514 17:59:59.744445 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.748824 kubelet[3264]: E0514 17:59:59.744489 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 17:59:59.748824 kubelet[3264]: E0514 17:59:59.744897 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.748824 kubelet[3264]: W0514 17:59:59.744916 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.748824 kubelet[3264]: E0514 17:59:59.745152 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.748824 kubelet[3264]: E0514 17:59:59.745170 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.748824 kubelet[3264]: W0514 17:59:59.745185 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.748824 kubelet[3264]: E0514 17:59:59.745206 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.749309 kubelet[3264]: E0514 17:59:59.745396 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.749309 kubelet[3264]: W0514 17:59:59.745410 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.749309 kubelet[3264]: E0514 17:59:59.746608 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.749309 kubelet[3264]: E0514 17:59:59.746761 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.749309 kubelet[3264]: W0514 17:59:59.746780 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.749309 kubelet[3264]: E0514 17:59:59.747162 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.749309 kubelet[3264]: E0514 17:59:59.747173 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.749309 kubelet[3264]: W0514 17:59:59.747213 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.749309 kubelet[3264]: E0514 17:59:59.747262 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 17:59:59.751478 kubelet[3264]: E0514 17:59:59.748652 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.751478 kubelet[3264]: W0514 17:59:59.750318 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.751478 kubelet[3264]: E0514 17:59:59.750489 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.751783 kubelet[3264]: E0514 17:59:59.751606 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.751783 kubelet[3264]: W0514 17:59:59.751771 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.752469 kubelet[3264]: E0514 17:59:59.751980 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.753323 kubelet[3264]: E0514 17:59:59.752942 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.753323 kubelet[3264]: W0514 17:59:59.752972 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.753323 kubelet[3264]: E0514 17:59:59.753040 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.754289 kubelet[3264]: E0514 17:59:59.754167 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.754654 kubelet[3264]: W0514 17:59:59.754534 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.755050 kubelet[3264]: E0514 17:59:59.754860 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.757028 kubelet[3264]: E0514 17:59:59.756752 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.757028 kubelet[3264]: W0514 17:59:59.756786 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.757028 kubelet[3264]: E0514 17:59:59.756953 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 17:59:59.758779 kubelet[3264]: E0514 17:59:59.758119 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.758779 kubelet[3264]: W0514 17:59:59.758151 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.758779 kubelet[3264]: E0514 17:59:59.758215 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.759798 kubelet[3264]: E0514 17:59:59.759676 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.760440 kubelet[3264]: W0514 17:59:59.760004 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.760440 kubelet[3264]: E0514 17:59:59.760214 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.761410 kubelet[3264]: E0514 17:59:59.761257 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.761748 kubelet[3264]: W0514 17:59:59.761705 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.762171 kubelet[3264]: E0514 17:59:59.761899 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 17:59:59.799435 kubelet[3264]: E0514 17:59:59.799010 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 17:59:59.799435 kubelet[3264]: W0514 17:59:59.799076 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 17:59:59.799435 kubelet[3264]: E0514 17:59:59.799122 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 17:59:59.809930 containerd[1983]: time="2025-05-14T17:59:59.809775653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tzstj,Uid:8c4acc98-eec7-4b8d-8f02-137af25887f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"210ec751e9f4296e334cfabd80fbc3a70db8b674d302b4d3a381d98c70d665ea\"" May 14 18:00:00.863144 kubelet[3264]: E0514 18:00:00.863071 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qz84k" podUID="055a695d-11dc-4f62-b437-308224c2bdff" May 14 18:00:02.863562 kubelet[3264]: E0514 18:00:02.863077 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qz84k" podUID="055a695d-11dc-4f62-b437-308224c2bdff" May 14 18:00:04.813013 containerd[1983]: time="2025-05-14T18:00:04.812910334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:04.815375 containerd[1983]: time="2025-05-14T18:00:04.815331322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 14 18:00:04.816463 containerd[1983]: time="2025-05-14T18:00:04.816410530Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:04.819466 containerd[1983]: time="2025-05-14T18:00:04.819388882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:04.820973 containerd[1983]: time="2025-05-14T18:00:04.820779742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 5.100097622s" May 14 18:00:04.820973 containerd[1983]: time="2025-05-14T18:00:04.820832026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 14 18:00:04.825284 containerd[1983]: time="2025-05-14T18:00:04.824815978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 18:00:04.848635 containerd[1983]: time="2025-05-14T18:00:04.848497966Z" level=info msg="CreateContainer within sandbox \"4b277b23757caf4219e94a5408fd2638e0c9c84bf651a656254a0d5021ddc47d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 18:00:04.863572 kubelet[3264]: E0514 18:00:04.862344 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qz84k" 
podUID="055a695d-11dc-4f62-b437-308224c2bdff" May 14 18:00:04.867119 containerd[1983]: time="2025-05-14T18:00:04.867056326Z" level=info msg="Container 599782a4df1aeb67e948918e502c57147b506a35573df8970c3c30914aba8d63: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:04.872972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1013361128.mount: Deactivated successfully. May 14 18:00:04.886612 containerd[1983]: time="2025-05-14T18:00:04.886481026Z" level=info msg="CreateContainer within sandbox \"4b277b23757caf4219e94a5408fd2638e0c9c84bf651a656254a0d5021ddc47d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"599782a4df1aeb67e948918e502c57147b506a35573df8970c3c30914aba8d63\"" May 14 18:00:04.888119 containerd[1983]: time="2025-05-14T18:00:04.887887642Z" level=info msg="StartContainer for \"599782a4df1aeb67e948918e502c57147b506a35573df8970c3c30914aba8d63\"" May 14 18:00:04.890175 containerd[1983]: time="2025-05-14T18:00:04.890118034Z" level=info msg="connecting to shim 599782a4df1aeb67e948918e502c57147b506a35573df8970c3c30914aba8d63" address="unix:///run/containerd/s/333d3ad1055a08b905a263c4e45f9aa934840018b3e6016930fd67260f136610" protocol=ttrpc version=3 May 14 18:00:04.926822 systemd[1]: Started cri-containerd-599782a4df1aeb67e948918e502c57147b506a35573df8970c3c30914aba8d63.scope - libcontainer container 599782a4df1aeb67e948918e502c57147b506a35573df8970c3c30914aba8d63. May 14 18:00:05.007263 containerd[1983]: time="2025-05-14T18:00:05.007118287Z" level=info msg="StartContainer for \"599782a4df1aeb67e948918e502c57147b506a35573df8970c3c30914aba8d63\" returns successfully" May 14 18:00:05.068161 kubelet[3264]: E0514 18:00:05.068017 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.068285 kubelet[3264]: W0514 18:00:05.068059 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.068341 kubelet[3264]: E0514 18:00:05.068279 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.070719 kubelet[3264]: E0514 18:00:05.070291 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.070719 kubelet[3264]: W0514 18:00:05.070326 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.070719 kubelet[3264]: E0514 18:00:05.070379 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:00:05.071194 kubelet[3264]: E0514 18:00:05.071154 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.071351 kubelet[3264]: W0514 18:00:05.071186 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.071424 kubelet[3264]: E0514 18:00:05.071360 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.072743 kubelet[3264]: E0514 18:00:05.072640 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.072743 kubelet[3264]: W0514 18:00:05.072671 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.072954 kubelet[3264]: E0514 18:00:05.072699 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.073440 kubelet[3264]: E0514 18:00:05.073405 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.074548 kubelet[3264]: W0514 18:00:05.073591 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.074548 kubelet[3264]: E0514 18:00:05.073630 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.074827 kubelet[3264]: E0514 18:00:05.074802 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.074926 kubelet[3264]: W0514 18:00:05.074903 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.075055 kubelet[3264]: E0514 18:00:05.075031 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.076987 kubelet[3264]: E0514 18:00:05.076766 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.076987 kubelet[3264]: W0514 18:00:05.076798 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.076987 kubelet[3264]: E0514 18:00:05.076828 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:00:05.077366 kubelet[3264]: E0514 18:00:05.077342 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.077548 kubelet[3264]: W0514 18:00:05.077459 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.077548 kubelet[3264]: E0514 18:00:05.077492 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.079138 kubelet[3264]: E0514 18:00:05.078920 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.079138 kubelet[3264]: W0514 18:00:05.078952 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.079138 kubelet[3264]: E0514 18:00:05.078980 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.079548 kubelet[3264]: E0514 18:00:05.079492 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.080118 kubelet[3264]: W0514 18:00:05.079880 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.080118 kubelet[3264]: E0514 18:00:05.079927 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.080901 kubelet[3264]: E0514 18:00:05.080761 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.081141 kubelet[3264]: W0514 18:00:05.081094 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.081483 kubelet[3264]: E0514 18:00:05.081451 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.082150 kubelet[3264]: E0514 18:00:05.081969 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.082591 kubelet[3264]: W0514 18:00:05.082301 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.082591 kubelet[3264]: E0514 18:00:05.082340 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:00:05.082960 kubelet[3264]: E0514 18:00:05.082935 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.083090 kubelet[3264]: W0514 18:00:05.083065 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.083194 kubelet[3264]: E0514 18:00:05.083172 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.083601 kubelet[3264]: E0514 18:00:05.083572 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.083881 kubelet[3264]: W0514 18:00:05.083702 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.083881 kubelet[3264]: E0514 18:00:05.083734 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.084229 kubelet[3264]: E0514 18:00:05.084206 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.084443 kubelet[3264]: W0514 18:00:05.084326 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.084443 kubelet[3264]: E0514 18:00:05.084360 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.085117 kubelet[3264]: E0514 18:00:05.085036 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.085117 kubelet[3264]: W0514 18:00:05.085062 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.085117 kubelet[3264]: E0514 18:00:05.085086 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.086008 kubelet[3264]: E0514 18:00:05.085852 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.086008 kubelet[3264]: W0514 18:00:05.085880 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.086008 kubelet[3264]: E0514 18:00:05.085917 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:00:05.086788 kubelet[3264]: E0514 18:00:05.086746 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.086788 kubelet[3264]: W0514 18:00:05.086779 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.087978 kubelet[3264]: E0514 18:00:05.086824 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.088205 kubelet[3264]: E0514 18:00:05.088047 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.088205 kubelet[3264]: W0514 18:00:05.088089 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.088205 kubelet[3264]: E0514 18:00:05.088151 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.088433 kubelet[3264]: E0514 18:00:05.088402 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.088433 kubelet[3264]: W0514 18:00:05.088426 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.088670 kubelet[3264]: E0514 18:00:05.088473 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.088722 kubelet[3264]: E0514 18:00:05.088709 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.088771 kubelet[3264]: W0514 18:00:05.088724 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.089655 kubelet[3264]: E0514 18:00:05.089175 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.089655 kubelet[3264]: W0514 18:00:05.089203 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.089655 kubelet[3264]: E0514 18:00:05.089230 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.089655 kubelet[3264]: E0514 18:00:05.089285 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:00:05.091731 kubelet[3264]: E0514 18:00:05.091690 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.091907 kubelet[3264]: W0514 18:00:05.091881 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.092029 kubelet[3264]: E0514 18:00:05.092006 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.092502 kubelet[3264]: E0514 18:00:05.092475 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.094834 kubelet[3264]: W0514 18:00:05.093069 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.094834 kubelet[3264]: E0514 18:00:05.093128 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.095892 kubelet[3264]: E0514 18:00:05.095646 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.095892 kubelet[3264]: W0514 18:00:05.095684 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.095892 kubelet[3264]: E0514 18:00:05.095718 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.097499 kubelet[3264]: E0514 18:00:05.097443 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.099880 kubelet[3264]: W0514 18:00:05.099575 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.099880 kubelet[3264]: E0514 18:00:05.099636 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.104268 kubelet[3264]: E0514 18:00:05.103700 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.104268 kubelet[3264]: W0514 18:00:05.103738 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.104268 kubelet[3264]: E0514 18:00:05.103785 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:00:05.105247 kubelet[3264]: E0514 18:00:05.105202 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.106639 kubelet[3264]: W0514 18:00:05.106587 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.106844 kubelet[3264]: E0514 18:00:05.106819 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.110000 kubelet[3264]: E0514 18:00:05.108449 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.110000 kubelet[3264]: W0514 18:00:05.108493 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.110000 kubelet[3264]: E0514 18:00:05.108649 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.110000 kubelet[3264]: E0514 18:00:05.109625 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.110000 kubelet[3264]: W0514 18:00:05.109652 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.110000 kubelet[3264]: E0514 18:00:05.109682 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.110385 kubelet[3264]: E0514 18:00:05.110044 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.110385 kubelet[3264]: W0514 18:00:05.110063 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.110385 kubelet[3264]: E0514 18:00:05.110087 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:05.112362 kubelet[3264]: E0514 18:00:05.111858 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.112362 kubelet[3264]: W0514 18:00:05.112021 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.112362 kubelet[3264]: E0514 18:00:05.112056 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:00:05.113952 kubelet[3264]: E0514 18:00:05.113902 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:05.113952 kubelet[3264]: W0514 18:00:05.113944 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:05.114070 kubelet[3264]: E0514 18:00:05.113976 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.052923 kubelet[3264]: I0514 18:00:06.052883 3264 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:00:06.091924 kubelet[3264]: E0514 18:00:06.091885 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.091924 kubelet[3264]: W0514 18:00:06.091917 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.092141 kubelet[3264]: E0514 18:00:06.091947 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.092245 kubelet[3264]: E0514 18:00:06.092217 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.092312 kubelet[3264]: W0514 18:00:06.092242 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.092312 kubelet[3264]: E0514 18:00:06.092264 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.092525 kubelet[3264]: E0514 18:00:06.092496 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.092594 kubelet[3264]: W0514 18:00:06.092542 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.092594 kubelet[3264]: E0514 18:00:06.092564 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.092845 kubelet[3264]: E0514 18:00:06.092819 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.092907 kubelet[3264]: W0514 18:00:06.092843 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.092907 kubelet[3264]: E0514 18:00:06.092864 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:00:06.093138 kubelet[3264]: E0514 18:00:06.093113 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.093196 kubelet[3264]: W0514 18:00:06.093135 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.093196 kubelet[3264]: E0514 18:00:06.093158 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.093420 kubelet[3264]: E0514 18:00:06.093395 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.093476 kubelet[3264]: W0514 18:00:06.093418 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.093476 kubelet[3264]: E0514 18:00:06.093438 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.093929 kubelet[3264]: E0514 18:00:06.093898 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.094002 kubelet[3264]: W0514 18:00:06.093927 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.094002 kubelet[3264]: E0514 18:00:06.093955 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.094239 kubelet[3264]: E0514 18:00:06.094214 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.094298 kubelet[3264]: W0514 18:00:06.094238 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.094298 kubelet[3264]: E0514 18:00:06.094258 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.094580 kubelet[3264]: E0514 18:00:06.094549 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.094580 kubelet[3264]: W0514 18:00:06.094573 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.094720 kubelet[3264]: E0514 18:00:06.094593 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:00:06.094852 kubelet[3264]: E0514 18:00:06.094826 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.094852 kubelet[3264]: W0514 18:00:06.094849 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.094965 kubelet[3264]: E0514 18:00:06.094869 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.095124 kubelet[3264]: E0514 18:00:06.095099 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.095124 kubelet[3264]: W0514 18:00:06.095121 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.095236 kubelet[3264]: E0514 18:00:06.095141 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.095391 kubelet[3264]: E0514 18:00:06.095360 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.095391 kubelet[3264]: W0514 18:00:06.095384 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.095492 kubelet[3264]: E0514 18:00:06.095405 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.095732 kubelet[3264]: E0514 18:00:06.095679 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.095732 kubelet[3264]: W0514 18:00:06.095705 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.095732 kubelet[3264]: E0514 18:00:06.095726 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.095982 kubelet[3264]: E0514 18:00:06.095957 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.095982 kubelet[3264]: W0514 18:00:06.095979 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.096106 kubelet[3264]: E0514 18:00:06.095998 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:00:06.096274 kubelet[3264]: E0514 18:00:06.096248 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.096344 kubelet[3264]: W0514 18:00:06.096271 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.096344 kubelet[3264]: E0514 18:00:06.096291 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.096736 kubelet[3264]: E0514 18:00:06.096707 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.096816 kubelet[3264]: W0514 18:00:06.096734 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.096816 kubelet[3264]: E0514 18:00:06.096756 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.097077 kubelet[3264]: E0514 18:00:06.097064 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.097163 kubelet[3264]: W0514 18:00:06.097080 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.097163 kubelet[3264]: E0514 18:00:06.097111 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.097431 kubelet[3264]: E0514 18:00:06.097404 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.097508 kubelet[3264]: W0514 18:00:06.097429 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.097508 kubelet[3264]: E0514 18:00:06.097461 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.097847 kubelet[3264]: E0514 18:00:06.097820 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.097925 kubelet[3264]: W0514 18:00:06.097846 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.097925 kubelet[3264]: E0514 18:00:06.097879 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:00:06.098190 kubelet[3264]: E0514 18:00:06.098143 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.098190 kubelet[3264]: W0514 18:00:06.098177 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.098375 kubelet[3264]: E0514 18:00:06.098275 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.098590 kubelet[3264]: E0514 18:00:06.098419 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.098590 kubelet[3264]: W0514 18:00:06.098433 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.098590 kubelet[3264]: E0514 18:00:06.098477 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.098886 kubelet[3264]: E0514 18:00:06.098706 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.098886 kubelet[3264]: W0514 18:00:06.098720 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.098886 kubelet[3264]: E0514 18:00:06.098765 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.099221 kubelet[3264]: E0514 18:00:06.098949 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.099221 kubelet[3264]: W0514 18:00:06.098963 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.099221 kubelet[3264]: E0514 18:00:06.098992 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.099709 kubelet[3264]: E0514 18:00:06.099572 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.099709 kubelet[3264]: W0514 18:00:06.099597 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.099709 kubelet[3264]: E0514 18:00:06.099632 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:00:06.100181 kubelet[3264]: E0514 18:00:06.100160 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.100283 kubelet[3264]: W0514 18:00:06.100261 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.100394 kubelet[3264]: E0514 18:00:06.100372 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.100686 kubelet[3264]: E0514 18:00:06.100659 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.100760 kubelet[3264]: W0514 18:00:06.100684 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.100760 kubelet[3264]: E0514 18:00:06.100717 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.100989 kubelet[3264]: E0514 18:00:06.100961 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.100989 kubelet[3264]: W0514 18:00:06.100985 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.101222 kubelet[3264]: E0514 18:00:06.101032 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.101222 kubelet[3264]: E0514 18:00:06.101218 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.101640 kubelet[3264]: W0514 18:00:06.101233 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.101640 kubelet[3264]: E0514 18:00:06.101365 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.101744 kubelet[3264]: E0514 18:00:06.101685 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.101744 kubelet[3264]: W0514 18:00:06.101701 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.101744 kubelet[3264]: E0514 18:00:06.101723 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:00:06.102325 kubelet[3264]: E0514 18:00:06.102301 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.102437 kubelet[3264]: W0514 18:00:06.102415 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.102578 kubelet[3264]: E0514 18:00:06.102556 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.102930 kubelet[3264]: E0514 18:00:06.102903 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.103036 kubelet[3264]: W0514 18:00:06.102928 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.103036 kubelet[3264]: E0514 18:00:06.102961 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.103246 kubelet[3264]: E0514 18:00:06.103221 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.103313 kubelet[3264]: W0514 18:00:06.103247 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.103313 kubelet[3264]: E0514 18:00:06.103267 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 18:00:06.104125 kubelet[3264]: E0514 18:00:06.104088 3264 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 18:00:06.104125 kubelet[3264]: W0514 18:00:06.104120 3264 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 18:00:06.104304 kubelet[3264]: E0514 18:00:06.104147 3264 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 18:00:06.862982 kubelet[3264]: E0514 18:00:06.862910 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qz84k" podUID="055a695d-11dc-4f62-b437-308224c2bdff" May 14 18:00:08.862287 kubelet[3264]: E0514 18:00:08.862205 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qz84k" podUID="055a695d-11dc-4f62-b437-308224c2bdff" May 14 18:00:10.862335 kubelet[3264]: E0514 18:00:10.862276 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qz84k" podUID="055a695d-11dc-4f62-b437-308224c2bdff" May 14 18:00:12.089250 containerd[1983]: time="2025-05-14T18:00:12.089098826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:12.090953 containerd[1983]: time="2025-05-14T18:00:12.090906878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 14 18:00:12.091273 containerd[1983]: time="2025-05-14T18:00:12.091237922Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:12.100080 containerd[1983]: time="2025-05-14T18:00:12.099981494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:12.102903 containerd[1983]: time="2025-05-14T18:00:12.102843878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 7.277860104s" May 14 18:00:12.103242 containerd[1983]: time="2025-05-14T18:00:12.103207250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 14 18:00:12.110002 containerd[1983]: time="2025-05-14T18:00:12.109901642Z" level=info msg="CreateContainer within sandbox \"210ec751e9f4296e334cfabd80fbc3a70db8b674d302b4d3a381d98c70d665ea\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 18:00:12.124915 containerd[1983]: time="2025-05-14T18:00:12.124844198Z" level=info msg="Container 847360ea886869e9c978393e97683de078fb9e1648f137321d07aed0e09142c9: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:12.133541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4054251226.mount: Deactivated successfully. 
May 14 18:00:12.142687 containerd[1983]: time="2025-05-14T18:00:12.142603226Z" level=info msg="CreateContainer within sandbox \"210ec751e9f4296e334cfabd80fbc3a70db8b674d302b4d3a381d98c70d665ea\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"847360ea886869e9c978393e97683de078fb9e1648f137321d07aed0e09142c9\"" May 14 18:00:12.144988 containerd[1983]: time="2025-05-14T18:00:12.144927638Z" level=info msg="StartContainer for \"847360ea886869e9c978393e97683de078fb9e1648f137321d07aed0e09142c9\"" May 14 18:00:12.149269 containerd[1983]: time="2025-05-14T18:00:12.148871822Z" level=info msg="connecting to shim 847360ea886869e9c978393e97683de078fb9e1648f137321d07aed0e09142c9" address="unix:///run/containerd/s/6b563b5440c1d977556238cacc77151418280c1bb4a23b975e63af40ea654a74" protocol=ttrpc version=3 May 14 18:00:12.192823 systemd[1]: Started cri-containerd-847360ea886869e9c978393e97683de078fb9e1648f137321d07aed0e09142c9.scope - libcontainer container 847360ea886869e9c978393e97683de078fb9e1648f137321d07aed0e09142c9. May 14 18:00:12.267139 containerd[1983]: time="2025-05-14T18:00:12.267006831Z" level=info msg="StartContainer for \"847360ea886869e9c978393e97683de078fb9e1648f137321d07aed0e09142c9\" returns successfully" May 14 18:00:12.294072 systemd[1]: cri-containerd-847360ea886869e9c978393e97683de078fb9e1648f137321d07aed0e09142c9.scope: Deactivated successfully. May 14 18:00:12.301394 containerd[1983]: time="2025-05-14T18:00:12.301092879Z" level=info msg="received exit event container_id:\"847360ea886869e9c978393e97683de078fb9e1648f137321d07aed0e09142c9\" id:\"847360ea886869e9c978393e97683de078fb9e1648f137321d07aed0e09142c9\" pid:4217 exited_at:{seconds:1747245612 nanos:300575499}" May 14 18:00:12.302874 containerd[1983]: time="2025-05-14T18:00:12.302830467Z" level=info msg="TaskExit event in podsandbox handler container_id:\"847360ea886869e9c978393e97683de078fb9e1648f137321d07aed0e09142c9\" id:\"847360ea886869e9c978393e97683de078fb9e1648f137321d07aed0e09142c9\" pid:4217 exited_at:{seconds:1747245612 nanos:300575499}" May 14 18:00:12.342049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-847360ea886869e9c978393e97683de078fb9e1648f137321d07aed0e09142c9-rootfs.mount: Deactivated successfully. 
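The exited_at field in the TaskExit events above is a plain Unix timestamp (seconds plus nanoseconds), so it can be converted back to wall-clock time to correlate containerd events with journal lines. A small Go check using the values from the event above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at:{seconds:1747245612 nanos:300575499} from the TaskExit event above.
	t := time.Unix(1747245612, 300575499).UTC()
	fmt.Println(t.Format(time.RFC3339Nano)) // 2025-05-14T18:00:12.300575499Z, matching the 18:00:12 journal entries
}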
May 14 18:00:12.863081 kubelet[3264]: E0514 18:00:12.863013 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qz84k" podUID="055a695d-11dc-4f62-b437-308224c2bdff" May 14 18:00:13.076962 containerd[1983]: time="2025-05-14T18:00:13.076788159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 18:00:13.106589 kubelet[3264]: I0514 18:00:13.106452 3264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f79bff7d-v84hv" podStartSLOduration=10.002957277 podStartE2EDuration="15.106430943s" podCreationTimestamp="2025-05-14 17:59:58 +0000 UTC" firstStartedPulling="2025-05-14 17:59:59.718949272 +0000 UTC m=+16.089816669" lastFinishedPulling="2025-05-14 18:00:04.822422926 +0000 UTC m=+21.193290335" observedRunningTime="2025-05-14 18:00:05.107755423 +0000 UTC m=+21.478622856" watchObservedRunningTime="2025-05-14 18:00:13.106430943 +0000 UTC m=+29.477298340" May 14 18:00:14.542772 kubelet[3264]: I0514 18:00:14.542693 3264 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:00:14.863182 kubelet[3264]: E0514 18:00:14.863016 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qz84k" podUID="055a695d-11dc-4f62-b437-308224c2bdff" May 14 18:00:16.863332 kubelet[3264]: E0514 18:00:16.862999 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qz84k" podUID="055a695d-11dc-4f62-b437-308224c2bdff" May 14 18:00:18.862287 kubelet[3264]: E0514 18:00:18.862214 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qz84k" podUID="055a695d-11dc-4f62-b437-308224c2bdff" May 14 18:00:20.862977 kubelet[3264]: E0514 18:00:20.862923 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qz84k" podUID="055a695d-11dc-4f62-b437-308224c2bdff" May 14 18:00:22.862653 kubelet[3264]: E0514 18:00:22.862584 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qz84k" podUID="055a695d-11dc-4f62-b437-308224c2bdff" May 14 18:00:22.985068 containerd[1983]: time="2025-05-14T18:00:22.984992752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:22.986977 containerd[1983]: time="2025-05-14T18:00:22.986888992Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 14 18:00:22.987729 containerd[1983]: time="2025-05-14T18:00:22.987658156Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:22.991109 containerd[1983]: time="2025-05-14T18:00:22.991031344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:22.992396 containerd[1983]: time="2025-05-14T18:00:22.992229232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 9.915382285s" May 14 18:00:22.992396 containerd[1983]: time="2025-05-14T18:00:22.992277580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 14 18:00:22.998261 containerd[1983]: time="2025-05-14T18:00:22.998130256Z" level=info msg="CreateContainer within sandbox \"210ec751e9f4296e334cfabd80fbc3a70db8b674d302b4d3a381d98c70d665ea\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 18:00:23.012800 containerd[1983]: time="2025-05-14T18:00:23.012672828Z" level=info msg="Container 8cf7cb8465dbf2ca615acd8800f4a59bb4c3d9ce9b47903a172004e015a382a6: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:23.027190 containerd[1983]: time="2025-05-14T18:00:23.027118392Z" level=info msg="CreateContainer within sandbox \"210ec751e9f4296e334cfabd80fbc3a70db8b674d302b4d3a381d98c70d665ea\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8cf7cb8465dbf2ca615acd8800f4a59bb4c3d9ce9b47903a172004e015a382a6\"" May 14 18:00:23.028592 containerd[1983]: time="2025-05-14T18:00:23.028375560Z" level=info msg="StartContainer for \"8cf7cb8465dbf2ca615acd8800f4a59bb4c3d9ce9b47903a172004e015a382a6\"" May 14 18:00:23.034313 containerd[1983]: time="2025-05-14T18:00:23.034140912Z" level=info msg="connecting to shim 8cf7cb8465dbf2ca615acd8800f4a59bb4c3d9ce9b47903a172004e015a382a6" address="unix:///run/containerd/s/6b563b5440c1d977556238cacc77151418280c1bb4a23b975e63af40ea654a74" protocol=ttrpc version=3 May 14 18:00:23.073832 systemd[1]: Started cri-containerd-8cf7cb8465dbf2ca615acd8800f4a59bb4c3d9ce9b47903a172004e015a382a6.scope - libcontainer container 8cf7cb8465dbf2ca615acd8800f4a59bb4c3d9ce9b47903a172004e015a382a6. May 14 18:00:23.163325 containerd[1983]: time="2025-05-14T18:00:23.163084357Z" level=info msg="StartContainer for \"8cf7cb8465dbf2ca615acd8800f4a59bb4c3d9ce9b47903a172004e015a382a6\" returns successfully" May 14 18:00:24.018328 systemd[1]: cri-containerd-8cf7cb8465dbf2ca615acd8800f4a59bb4c3d9ce9b47903a172004e015a382a6.scope: Deactivated successfully. May 14 18:00:24.019663 systemd[1]: cri-containerd-8cf7cb8465dbf2ca615acd8800f4a59bb4c3d9ce9b47903a172004e015a382a6.scope: Consumed 843ms CPU time, 170M memory peak, 150.3M written to disk. 
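The pull duration containerd reports for the cni image can be sanity-checked against the journal itself: the PullImage request was logged at 18:00:13.076 and the Pulled message at 18:00:22.992, roughly the 9.915382285s containerd measured internally. A short Go check using those two log timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the two containerd log lines above.
	start, _ := time.Parse(time.RFC3339Nano, "2025-05-14T18:00:13.076788159Z") // PullImage "ghcr.io/flatcar/calico/cni:v3.29.3"
	done, _ := time.Parse(time.RFC3339Nano, "2025-05-14T18:00:22.992229232Z")  // Pulled image ... returns image reference
	fmt.Println(done.Sub(start)) // ~9.915s, consistent with the "in 9.915382285s" reported above
}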
May 14 18:00:24.025481 containerd[1983]: time="2025-05-14T18:00:24.025417741Z" level=info msg="received exit event container_id:\"8cf7cb8465dbf2ca615acd8800f4a59bb4c3d9ce9b47903a172004e015a382a6\" id:\"8cf7cb8465dbf2ca615acd8800f4a59bb4c3d9ce9b47903a172004e015a382a6\" pid:4280 exited_at:{seconds:1747245624 nanos:24932941}" May 14 18:00:24.026211 containerd[1983]: time="2025-05-14T18:00:24.025801261Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8cf7cb8465dbf2ca615acd8800f4a59bb4c3d9ce9b47903a172004e015a382a6\" id:\"8cf7cb8465dbf2ca615acd8800f4a59bb4c3d9ce9b47903a172004e015a382a6\" pid:4280 exited_at:{seconds:1747245624 nanos:24932941}" May 14 18:00:24.036081 kubelet[3264]: I0514 18:00:24.034936 3264 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 14 18:00:24.113790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cf7cb8465dbf2ca615acd8800f4a59bb4c3d9ce9b47903a172004e015a382a6-rootfs.mount: Deactivated successfully. May 14 18:00:24.115593 kubelet[3264]: W0514 18:00:24.115195 3264 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-31-64" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-64' and this object May 14 18:00:24.115593 kubelet[3264]: E0514 18:00:24.115262 3264 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ip-172-31-31-64\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-64' and this object" logger="UnhandledError" May 14 18:00:24.165255 systemd[1]: Created slice kubepods-burstable-podcdfb56ba_6f85_4f01_b0f2_e5b9882e1236.slice - libcontainer container kubepods-burstable-podcdfb56ba_6f85_4f01_b0f2_e5b9882e1236.slice. May 14 18:00:24.196125 systemd[1]: Created slice kubepods-burstable-pod478c3253_ad52_4254_9fd9_c79e7fba6d23.slice - libcontainer container kubepods-burstable-pod478c3253_ad52_4254_9fd9_c79e7fba6d23.slice. May 14 18:00:24.218881 systemd[1]: Created slice kubepods-besteffort-pod32252210_5347_484d_8a6f_b3e8a7512e46.slice - libcontainer container kubepods-besteffort-pod32252210_5347_484d_8a6f_b3e8a7512e46.slice. 
May 14 18:00:24.231840 kubelet[3264]: I0514 18:00:24.229723 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h78mc\" (UniqueName: \"kubernetes.io/projected/cdfb56ba-6f85-4f01-b0f2-e5b9882e1236-kube-api-access-h78mc\") pod \"coredns-6f6b679f8f-h78r8\" (UID: \"cdfb56ba-6f85-4f01-b0f2-e5b9882e1236\") " pod="kube-system/coredns-6f6b679f8f-h78r8" May 14 18:00:24.231840 kubelet[3264]: I0514 18:00:24.229828 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpstp\" (UniqueName: \"kubernetes.io/projected/478c3253-ad52-4254-9fd9-c79e7fba6d23-kube-api-access-rpstp\") pod \"coredns-6f6b679f8f-pwg6j\" (UID: \"478c3253-ad52-4254-9fd9-c79e7fba6d23\") " pod="kube-system/coredns-6f6b679f8f-pwg6j" May 14 18:00:24.231840 kubelet[3264]: I0514 18:00:24.229900 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch9bf\" (UniqueName: \"kubernetes.io/projected/32252210-5347-484d-8a6f-b3e8a7512e46-kube-api-access-ch9bf\") pod \"calico-apiserver-6964b45d66-nxqh5\" (UID: \"32252210-5347-484d-8a6f-b3e8a7512e46\") " pod="calico-apiserver/calico-apiserver-6964b45d66-nxqh5" May 14 18:00:24.231840 kubelet[3264]: I0514 18:00:24.229976 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdfb56ba-6f85-4f01-b0f2-e5b9882e1236-config-volume\") pod \"coredns-6f6b679f8f-h78r8\" (UID: \"cdfb56ba-6f85-4f01-b0f2-e5b9882e1236\") " pod="kube-system/coredns-6f6b679f8f-h78r8" May 14 18:00:24.231840 kubelet[3264]: I0514 18:00:24.230065 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/32252210-5347-484d-8a6f-b3e8a7512e46-calico-apiserver-certs\") pod \"calico-apiserver-6964b45d66-nxqh5\" (UID: \"32252210-5347-484d-8a6f-b3e8a7512e46\") " pod="calico-apiserver/calico-apiserver-6964b45d66-nxqh5" May 14 18:00:24.237603 kubelet[3264]: I0514 18:00:24.230156 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/478c3253-ad52-4254-9fd9-c79e7fba6d23-config-volume\") pod \"coredns-6f6b679f8f-pwg6j\" (UID: \"478c3253-ad52-4254-9fd9-c79e7fba6d23\") " pod="kube-system/coredns-6f6b679f8f-pwg6j" May 14 18:00:24.236227 systemd[1]: Created slice kubepods-besteffort-pod9f9b5b99_510b_4ac5_9618_f81c7735abe4.slice - libcontainer container kubepods-besteffort-pod9f9b5b99_510b_4ac5_9618_f81c7735abe4.slice. May 14 18:00:24.256250 systemd[1]: Created slice kubepods-besteffort-pod63810593_4b4a_4661_87f0_484cfc23b0e5.slice - libcontainer container kubepods-besteffort-pod63810593_4b4a_4661_87f0_484cfc23b0e5.slice. 
May 14 18:00:24.330584 kubelet[3264]: I0514 18:00:24.330357 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9f9b5b99-510b-4ac5-9618-f81c7735abe4-calico-apiserver-certs\") pod \"calico-apiserver-6964b45d66-h745v\" (UID: \"9f9b5b99-510b-4ac5-9618-f81c7735abe4\") " pod="calico-apiserver/calico-apiserver-6964b45d66-h745v" May 14 18:00:24.330909 kubelet[3264]: I0514 18:00:24.330832 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63810593-4b4a-4661-87f0-484cfc23b0e5-tigera-ca-bundle\") pod \"calico-kube-controllers-9b5f67457-mkjwp\" (UID: \"63810593-4b4a-4661-87f0-484cfc23b0e5\") " pod="calico-system/calico-kube-controllers-9b5f67457-mkjwp" May 14 18:00:24.330997 kubelet[3264]: I0514 18:00:24.330915 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tljqg\" (UniqueName: \"kubernetes.io/projected/9f9b5b99-510b-4ac5-9618-f81c7735abe4-kube-api-access-tljqg\") pod \"calico-apiserver-6964b45d66-h745v\" (UID: \"9f9b5b99-510b-4ac5-9618-f81c7735abe4\") " pod="calico-apiserver/calico-apiserver-6964b45d66-h745v" May 14 18:00:24.330997 kubelet[3264]: I0514 18:00:24.330954 3264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sglch\" (UniqueName: \"kubernetes.io/projected/63810593-4b4a-4661-87f0-484cfc23b0e5-kube-api-access-sglch\") pod \"calico-kube-controllers-9b5f67457-mkjwp\" (UID: \"63810593-4b4a-4661-87f0-484cfc23b0e5\") " pod="calico-system/calico-kube-controllers-9b5f67457-mkjwp" May 14 18:00:24.542700 containerd[1983]: time="2025-05-14T18:00:24.542652928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6964b45d66-nxqh5,Uid:32252210-5347-484d-8a6f-b3e8a7512e46,Namespace:calico-apiserver,Attempt:0,}" May 14 18:00:24.551291 containerd[1983]: time="2025-05-14T18:00:24.551051776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6964b45d66-h745v,Uid:9f9b5b99-510b-4ac5-9618-f81c7735abe4,Namespace:calico-apiserver,Attempt:0,}" May 14 18:00:24.569147 containerd[1983]: time="2025-05-14T18:00:24.569091112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b5f67457-mkjwp,Uid:63810593-4b4a-4661-87f0-484cfc23b0e5,Namespace:calico-system,Attempt:0,}" May 14 18:00:24.876620 systemd[1]: Created slice kubepods-besteffort-pod055a695d_11dc_4f62_b437_308224c2bdff.slice - libcontainer container kubepods-besteffort-pod055a695d_11dc_4f62_b437_308224c2bdff.slice. 
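The slice names systemd creates here are derived mechanically from the pod UIDs: the kubelet's systemd cgroup driver prefixes the QoS class and replaces the dashes in the UID with underscores, which is why kubepods-besteffort-pod055a695d_11dc_4f62_b437_308224c2bdff.slice corresponds to the csi-node-driver pod whose podUID appears in the earlier error lines. A rough sketch of that mapping; sliceName is an illustrative helper, not the kubelet's actual code:

package main

import (
	"fmt"
	"strings"
)

// sliceName builds the leaf slice name the journal shows for a pod,
// assuming the systemd cgroup driver and the given QoS class.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "055a695d-11dc-4f62-b437-308224c2bdff"))
	// kubepods-besteffort-pod055a695d_11dc_4f62_b437_308224c2bdff.slice
}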
May 14 18:00:24.884074 containerd[1983]: time="2025-05-14T18:00:24.883957589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qz84k,Uid:055a695d-11dc-4f62-b437-308224c2bdff,Namespace:calico-system,Attempt:0,}" May 14 18:00:24.966433 containerd[1983]: time="2025-05-14T18:00:24.966313194Z" level=error msg="Failed to destroy network for sandbox \"09aca54d27ce7960d6be174167b86ff154f6475afc723a684756a734a439eb07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:25.044027 containerd[1983]: time="2025-05-14T18:00:25.043927622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6964b45d66-nxqh5,Uid:32252210-5347-484d-8a6f-b3e8a7512e46,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09aca54d27ce7960d6be174167b86ff154f6475afc723a684756a734a439eb07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:25.047945 kubelet[3264]: E0514 18:00:25.047304 3264 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09aca54d27ce7960d6be174167b86ff154f6475afc723a684756a734a439eb07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:25.047945 kubelet[3264]: E0514 18:00:25.047407 3264 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09aca54d27ce7960d6be174167b86ff154f6475afc723a684756a734a439eb07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6964b45d66-nxqh5" May 14 18:00:25.047945 kubelet[3264]: E0514 18:00:25.047440 3264 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09aca54d27ce7960d6be174167b86ff154f6475afc723a684756a734a439eb07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6964b45d66-nxqh5" May 14 18:00:25.048626 kubelet[3264]: E0514 18:00:25.047511 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6964b45d66-nxqh5_calico-apiserver(32252210-5347-484d-8a6f-b3e8a7512e46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6964b45d66-nxqh5_calico-apiserver(32252210-5347-484d-8a6f-b3e8a7512e46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09aca54d27ce7960d6be174167b86ff154f6475afc723a684756a734a439eb07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6964b45d66-nxqh5" podUID="32252210-5347-484d-8a6f-b3e8a7512e46" May 14 18:00:25.076829 containerd[1983]: 
time="2025-05-14T18:00:25.076737662Z" level=error msg="Failed to destroy network for sandbox \"057de14384c2f4920cbb3dfc2317c989ce33ad48db6ea6e22a91f5e0d3b9cacf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:25.079304 containerd[1983]: time="2025-05-14T18:00:25.079115990Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6964b45d66-h745v,Uid:9f9b5b99-510b-4ac5-9618-f81c7735abe4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"057de14384c2f4920cbb3dfc2317c989ce33ad48db6ea6e22a91f5e0d3b9cacf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:25.079998 kubelet[3264]: E0514 18:00:25.079953 3264 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057de14384c2f4920cbb3dfc2317c989ce33ad48db6ea6e22a91f5e0d3b9cacf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:25.081066 kubelet[3264]: E0514 18:00:25.080221 3264 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057de14384c2f4920cbb3dfc2317c989ce33ad48db6ea6e22a91f5e0d3b9cacf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6964b45d66-h745v" May 14 18:00:25.081066 kubelet[3264]: E0514 18:00:25.080263 3264 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057de14384c2f4920cbb3dfc2317c989ce33ad48db6ea6e22a91f5e0d3b9cacf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6964b45d66-h745v" May 14 18:00:25.081066 kubelet[3264]: E0514 18:00:25.080334 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6964b45d66-h745v_calico-apiserver(9f9b5b99-510b-4ac5-9618-f81c7735abe4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6964b45d66-h745v_calico-apiserver(9f9b5b99-510b-4ac5-9618-f81c7735abe4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"057de14384c2f4920cbb3dfc2317c989ce33ad48db6ea6e22a91f5e0d3b9cacf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6964b45d66-h745v" podUID="9f9b5b99-510b-4ac5-9618-f81c7735abe4" May 14 18:00:25.187836 containerd[1983]: time="2025-05-14T18:00:25.187484739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 18:00:25.197003 containerd[1983]: time="2025-05-14T18:00:25.196760391Z" level=error msg="Failed to destroy network for sandbox 
\"a510563fa38664f3ab2c31d3cc16eac0ba25a0aa56e3ec731d9b78829c2af18e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:25.199205 containerd[1983]: time="2025-05-14T18:00:25.199104087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qz84k,Uid:055a695d-11dc-4f62-b437-308224c2bdff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a510563fa38664f3ab2c31d3cc16eac0ba25a0aa56e3ec731d9b78829c2af18e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:25.201545 kubelet[3264]: E0514 18:00:25.200826 3264 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a510563fa38664f3ab2c31d3cc16eac0ba25a0aa56e3ec731d9b78829c2af18e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:25.201545 kubelet[3264]: E0514 18:00:25.200908 3264 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a510563fa38664f3ab2c31d3cc16eac0ba25a0aa56e3ec731d9b78829c2af18e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qz84k" May 14 18:00:25.201545 kubelet[3264]: E0514 18:00:25.200940 3264 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a510563fa38664f3ab2c31d3cc16eac0ba25a0aa56e3ec731d9b78829c2af18e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qz84k" May 14 18:00:25.201812 kubelet[3264]: E0514 18:00:25.201000 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qz84k_calico-system(055a695d-11dc-4f62-b437-308224c2bdff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qz84k_calico-system(055a695d-11dc-4f62-b437-308224c2bdff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a510563fa38664f3ab2c31d3cc16eac0ba25a0aa56e3ec731d9b78829c2af18e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qz84k" podUID="055a695d-11dc-4f62-b437-308224c2bdff" May 14 18:00:25.206427 systemd[1]: run-netns-cni\x2d48f81e60\x2d2cdf\x2decc1\x2d3ee2\x2de57356b9a000.mount: Deactivated successfully. 
May 14 18:00:25.252547 containerd[1983]: time="2025-05-14T18:00:25.252426063Z" level=error msg="Failed to destroy network for sandbox \"77b25ed991fc77f2380c701329b939518b1634a0f4fc721526ecd7dd1b312dc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:25.254573 containerd[1983]: time="2025-05-14T18:00:25.253848903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b5f67457-mkjwp,Uid:63810593-4b4a-4661-87f0-484cfc23b0e5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77b25ed991fc77f2380c701329b939518b1634a0f4fc721526ecd7dd1b312dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:25.256768 kubelet[3264]: E0514 18:00:25.256702 3264 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77b25ed991fc77f2380c701329b939518b1634a0f4fc721526ecd7dd1b312dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:25.256881 kubelet[3264]: E0514 18:00:25.256787 3264 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77b25ed991fc77f2380c701329b939518b1634a0f4fc721526ecd7dd1b312dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b5f67457-mkjwp" May 14 18:00:25.256881 kubelet[3264]: E0514 18:00:25.256821 3264 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77b25ed991fc77f2380c701329b939518b1634a0f4fc721526ecd7dd1b312dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9b5f67457-mkjwp" May 14 18:00:25.257029 kubelet[3264]: E0514 18:00:25.256891 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9b5f67457-mkjwp_calico-system(63810593-4b4a-4661-87f0-484cfc23b0e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9b5f67457-mkjwp_calico-system(63810593-4b4a-4661-87f0-484cfc23b0e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77b25ed991fc77f2380c701329b939518b1634a0f4fc721526ecd7dd1b312dc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9b5f67457-mkjwp" podUID="63810593-4b4a-4661-87f0-484cfc23b0e5" May 14 18:00:25.259347 systemd[1]: run-netns-cni\x2d2f7c5b51\x2d111b\x2d435b\x2d31a5\x2dfd5e1a725630.mount: Deactivated successfully. 
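Every sandbox failure in this stretch has the same root cause: the Calico CNI plugin needs the node name that the calico/node container writes to /var/lib/calico/nodename, and that container has not started yet (its image is still being pulled), so each ADD and DELETE fails on the missing file. A rough Go sketch of the check involved, not Calico's actual code:

package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // written by calico/node once it is running

func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Until calico-node has started, this is the "stat /var/lib/calico/nodename:
		// no such file or directory" failure seen in every RunPodSandbox error above.
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Println("calico CNI cannot determine the node name yet:", err)
		return
	}
	fmt.Println("node:", name)
}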
May 14 18:00:25.331947 kubelet[3264]: E0514 18:00:25.331462 3264 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 14 18:00:25.331947 kubelet[3264]: E0514 18:00:25.331618 3264 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/478c3253-ad52-4254-9fd9-c79e7fba6d23-config-volume podName:478c3253-ad52-4254-9fd9-c79e7fba6d23 nodeName:}" failed. No retries permitted until 2025-05-14 18:00:25.831580631 +0000 UTC m=+42.202448040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/478c3253-ad52-4254-9fd9-c79e7fba6d23-config-volume") pod "coredns-6f6b679f8f-pwg6j" (UID: "478c3253-ad52-4254-9fd9-c79e7fba6d23") : failed to sync configmap cache: timed out waiting for the condition May 14 18:00:25.342474 kubelet[3264]: E0514 18:00:25.342435 3264 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 14 18:00:25.343078 kubelet[3264]: E0514 18:00:25.342777 3264 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cdfb56ba-6f85-4f01-b0f2-e5b9882e1236-config-volume podName:cdfb56ba-6f85-4f01-b0f2-e5b9882e1236 nodeName:}" failed. No retries permitted until 2025-05-14 18:00:25.842748696 +0000 UTC m=+42.213616093 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/cdfb56ba-6f85-4f01-b0f2-e5b9882e1236-config-volume") pod "coredns-6f6b679f8f-h78r8" (UID: "cdfb56ba-6f85-4f01-b0f2-e5b9882e1236") : failed to sync configmap cache: timed out waiting for the condition May 14 18:00:25.983818 containerd[1983]: time="2025-05-14T18:00:25.983725447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h78r8,Uid:cdfb56ba-6f85-4f01-b0f2-e5b9882e1236,Namespace:kube-system,Attempt:0,}" May 14 18:00:26.009235 containerd[1983]: time="2025-05-14T18:00:26.009142887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pwg6j,Uid:478c3253-ad52-4254-9fd9-c79e7fba6d23,Namespace:kube-system,Attempt:0,}" May 14 18:00:26.107628 containerd[1983]: time="2025-05-14T18:00:26.107503539Z" level=error msg="Failed to destroy network for sandbox \"ddcbd4c921c8a45ee5cff9df4c30fe77b67ff5cad2d9ac487f46a2c734a4acc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:26.112543 containerd[1983]: time="2025-05-14T18:00:26.108928431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h78r8,Uid:cdfb56ba-6f85-4f01-b0f2-e5b9882e1236,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddcbd4c921c8a45ee5cff9df4c30fe77b67ff5cad2d9ac487f46a2c734a4acc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:26.112740 kubelet[3264]: E0514 18:00:26.112453 3264 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddcbd4c921c8a45ee5cff9df4c30fe77b67ff5cad2d9ac487f46a2c734a4acc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:26.116232 kubelet[3264]: E0514 18:00:26.114956 3264 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddcbd4c921c8a45ee5cff9df4c30fe77b67ff5cad2d9ac487f46a2c734a4acc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-h78r8" May 14 18:00:26.116232 kubelet[3264]: E0514 18:00:26.115018 3264 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddcbd4c921c8a45ee5cff9df4c30fe77b67ff5cad2d9ac487f46a2c734a4acc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-h78r8" May 14 18:00:26.116232 kubelet[3264]: E0514 18:00:26.115111 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-h78r8_kube-system(cdfb56ba-6f85-4f01-b0f2-e5b9882e1236)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-h78r8_kube-system(cdfb56ba-6f85-4f01-b0f2-e5b9882e1236)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddcbd4c921c8a45ee5cff9df4c30fe77b67ff5cad2d9ac487f46a2c734a4acc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-h78r8" podUID="cdfb56ba-6f85-4f01-b0f2-e5b9882e1236" May 14 18:00:26.116654 systemd[1]: run-netns-cni\x2d52b59696\x2dbac4\x2d5c28\x2dd075\x2d8b9da20408bd.mount: Deactivated successfully. 
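The coredns config-volume failures are a separate, transient issue: the kubelet's configmap watch was initially forbidden (the reflector errors further up, before the node-to-pod relationship was established), so the configmap cache had not synced when the mount was first attempted, and the volume manager schedules a retry after a backoff, 500ms here and growing on later failures. A generic sketch of that kind of retry loop; the doubling and the cap are illustrative assumptions, not the kubelet's exact constants:

package main

import (
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the wait between attempts.
// The 500ms start mirrors the durationBeforeRetry above; everything else is illustrative.
func retryWithBackoff(op func() error, initial, maxWait time.Duration, attempts int) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; no retries permitted for %s\n", i+1, err, wait)
		time.Sleep(wait)
		wait *= 2
		if wait > maxWait {
			wait = maxWait
		}
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("failed to sync configmap cache: timed out waiting for the condition")
		}
		return nil
	}, 500*time.Millisecond, 10*time.Second, 5)
	fmt.Println("final error:", err)
}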
May 14 18:00:26.140773 containerd[1983]: time="2025-05-14T18:00:26.140708451Z" level=error msg="Failed to destroy network for sandbox \"c6e6ef44c1030fa4cec5a32c0cf555f3aab31c06cf08d02a42d6f61bd8f214ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:26.142572 containerd[1983]: time="2025-05-14T18:00:26.142211284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pwg6j,Uid:478c3253-ad52-4254-9fd9-c79e7fba6d23,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6e6ef44c1030fa4cec5a32c0cf555f3aab31c06cf08d02a42d6f61bd8f214ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:26.143763 kubelet[3264]: E0514 18:00:26.143683 3264 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6e6ef44c1030fa4cec5a32c0cf555f3aab31c06cf08d02a42d6f61bd8f214ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 18:00:26.143976 kubelet[3264]: E0514 18:00:26.143945 3264 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6e6ef44c1030fa4cec5a32c0cf555f3aab31c06cf08d02a42d6f61bd8f214ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pwg6j" May 14 18:00:26.146574 kubelet[3264]: E0514 18:00:26.144104 3264 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6e6ef44c1030fa4cec5a32c0cf555f3aab31c06cf08d02a42d6f61bd8f214ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pwg6j" May 14 18:00:26.146574 kubelet[3264]: E0514 18:00:26.144674 3264 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-pwg6j_kube-system(478c3253-ad52-4254-9fd9-c79e7fba6d23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-pwg6j_kube-system(478c3253-ad52-4254-9fd9-c79e7fba6d23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6e6ef44c1030fa4cec5a32c0cf555f3aab31c06cf08d02a42d6f61bd8f214ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pwg6j" podUID="478c3253-ad52-4254-9fd9-c79e7fba6d23" May 14 18:00:26.147239 systemd[1]: run-netns-cni\x2d5d80837b\x2d93cc\x2d4033\x2d90ab\x2de844cba818ce.mount: Deactivated successfully. May 14 18:00:28.148234 systemd[1]: Started sshd@7-172.31.31.64:22-139.178.89.65:59770.service - OpenSSH per-connection server daemon (139.178.89.65:59770). 
May 14 18:00:28.369065 sshd[4503]: Accepted publickey for core from 139.178.89.65 port 59770 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:00:28.375726 sshd-session[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:28.392131 systemd-logind[1969]: New session 8 of user core. May 14 18:00:28.399116 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 18:00:28.769247 sshd[4505]: Connection closed by 139.178.89.65 port 59770 May 14 18:00:28.770579 sshd-session[4503]: pam_unix(sshd:session): session closed for user core May 14 18:00:28.785946 systemd[1]: sshd@7-172.31.31.64:22-139.178.89.65:59770.service: Deactivated successfully. May 14 18:00:28.796042 systemd[1]: session-8.scope: Deactivated successfully. May 14 18:00:28.800827 systemd-logind[1969]: Session 8 logged out. Waiting for processes to exit. May 14 18:00:28.808581 systemd-logind[1969]: Removed session 8. May 14 18:00:32.437137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1013240319.mount: Deactivated successfully. May 14 18:00:32.511844 containerd[1983]: time="2025-05-14T18:00:32.511762535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:32.513145 containerd[1983]: time="2025-05-14T18:00:32.512825483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 14 18:00:32.514617 containerd[1983]: time="2025-05-14T18:00:32.514509515Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:32.517649 containerd[1983]: time="2025-05-14T18:00:32.517562555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:32.519008 containerd[1983]: time="2025-05-14T18:00:32.518802419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 7.331231088s" May 14 18:00:32.519008 containerd[1983]: time="2025-05-14T18:00:32.518862215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 14 18:00:32.555573 containerd[1983]: time="2025-05-14T18:00:32.555140675Z" level=info msg="CreateContainer within sandbox \"210ec751e9f4296e334cfabd80fbc3a70db8b674d302b4d3a381d98c70d665ea\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 18:00:32.572846 containerd[1983]: time="2025-05-14T18:00:32.572764787Z" level=info msg="Container c8e34a6e9f5071a19b7bd12c7e259266bd710893792d452ab3287f78c64f1e03: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:32.595999 containerd[1983]: time="2025-05-14T18:00:32.595831872Z" level=info msg="CreateContainer within sandbox \"210ec751e9f4296e334cfabd80fbc3a70db8b674d302b4d3a381d98c70d665ea\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c8e34a6e9f5071a19b7bd12c7e259266bd710893792d452ab3287f78c64f1e03\"" May 14 
18:00:32.597289 containerd[1983]: time="2025-05-14T18:00:32.596975556Z" level=info msg="StartContainer for \"c8e34a6e9f5071a19b7bd12c7e259266bd710893792d452ab3287f78c64f1e03\"" May 14 18:00:32.603337 containerd[1983]: time="2025-05-14T18:00:32.603281472Z" level=info msg="connecting to shim c8e34a6e9f5071a19b7bd12c7e259266bd710893792d452ab3287f78c64f1e03" address="unix:///run/containerd/s/6b563b5440c1d977556238cacc77151418280c1bb4a23b975e63af40ea654a74" protocol=ttrpc version=3 May 14 18:00:32.639869 systemd[1]: Started cri-containerd-c8e34a6e9f5071a19b7bd12c7e259266bd710893792d452ab3287f78c64f1e03.scope - libcontainer container c8e34a6e9f5071a19b7bd12c7e259266bd710893792d452ab3287f78c64f1e03. May 14 18:00:32.735358 containerd[1983]: time="2025-05-14T18:00:32.735239784Z" level=info msg="StartContainer for \"c8e34a6e9f5071a19b7bd12c7e259266bd710893792d452ab3287f78c64f1e03\" returns successfully" May 14 18:00:32.890995 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 18:00:32.891168 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 14 18:00:33.420483 containerd[1983]: time="2025-05-14T18:00:33.420412812Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8e34a6e9f5071a19b7bd12c7e259266bd710893792d452ab3287f78c64f1e03\" id:\"d6dfa4a5eedd1ca1e554299ed304e585e61ad84d7e79e78f84dbce7123908f5a\" pid:4583 exit_status:1 exited_at:{seconds:1747245633 nanos:419448060}" May 14 18:00:33.810721 systemd[1]: Started sshd@8-172.31.31.64:22-139.178.89.65:59778.service - OpenSSH per-connection server daemon (139.178.89.65:59778). May 14 18:00:34.009861 sshd[4607]: Accepted publickey for core from 139.178.89.65 port 59778 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:00:34.012872 sshd-session[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:34.021496 systemd-logind[1969]: New session 9 of user core. May 14 18:00:34.033880 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 18:00:34.305182 sshd[4609]: Connection closed by 139.178.89.65 port 59778 May 14 18:00:34.304726 sshd-session[4607]: pam_unix(sshd:session): session closed for user core May 14 18:00:34.315069 systemd-logind[1969]: Session 9 logged out. Waiting for processes to exit. May 14 18:00:34.316861 systemd[1]: sshd@8-172.31.31.64:22-139.178.89.65:59778.service: Deactivated successfully. May 14 18:00:34.324418 systemd[1]: session-9.scope: Deactivated successfully. May 14 18:00:34.329637 systemd-logind[1969]: Removed session 9. May 14 18:00:34.364400 containerd[1983]: time="2025-05-14T18:00:34.364335132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8e34a6e9f5071a19b7bd12c7e259266bd710893792d452ab3287f78c64f1e03\" id:\"1a39412be2784256509e3485adaf8cfb8cd536a93e90badbc449b422b63bbc38\" pid:4630 exit_status:1 exited_at:{seconds:1747245634 nanos:363901056}" May 14 18:00:35.353951 containerd[1983]: time="2025-05-14T18:00:35.353889229Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8e34a6e9f5071a19b7bd12c7e259266bd710893792d452ab3287f78c64f1e03\" id:\"1db0003e136db64ac093f62e67847f6f9bedfbc68fe33752c363e627e2084f5f\" pid:4781 exit_status:1 exited_at:{seconds:1747245635 nanos:353324305}" May 14 18:00:35.509257 (udev-worker)[4555]: Network interface NamePolicy= disabled on kernel command line. 
May 14 18:00:35.519707 systemd-networkd[1893]: vxlan.calico: Link UP May 14 18:00:35.519727 systemd-networkd[1893]: vxlan.calico: Gained carrier May 14 18:00:35.552843 (udev-worker)[4559]: Network interface NamePolicy= disabled on kernel command line. May 14 18:00:36.749748 systemd-networkd[1893]: vxlan.calico: Gained IPv6LL May 14 18:00:36.863911 containerd[1983]: time="2025-05-14T18:00:36.863842253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6964b45d66-nxqh5,Uid:32252210-5347-484d-8a6f-b3e8a7512e46,Namespace:calico-apiserver,Attempt:0,}" May 14 18:00:37.110707 (udev-worker)[4826]: Network interface NamePolicy= disabled on kernel command line. May 14 18:00:37.112971 systemd-networkd[1893]: calie5908a5ad41: Link UP May 14 18:00:37.115154 systemd-networkd[1893]: calie5908a5ad41: Gained carrier May 14 18:00:37.141560 kubelet[3264]: I0514 18:00:37.141219 3264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tzstj" podStartSLOduration=5.435457688 podStartE2EDuration="38.14118623s" podCreationTimestamp="2025-05-14 17:59:59 +0000 UTC" firstStartedPulling="2025-05-14 17:59:59.815042333 +0000 UTC m=+16.185909742" lastFinishedPulling="2025-05-14 18:00:32.520770875 +0000 UTC m=+48.891638284" observedRunningTime="2025-05-14 18:00:33.279653039 +0000 UTC m=+49.650520640" watchObservedRunningTime="2025-05-14 18:00:37.14118623 +0000 UTC m=+53.512053711" May 14 18:00:37.144007 containerd[1983]: 2025-05-14 18:00:36.970 [INFO][4863] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--nxqh5-eth0 calico-apiserver-6964b45d66- calico-apiserver 32252210-5347-484d-8a6f-b3e8a7512e46 749 0 2025-05-14 17:59:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6964b45d66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-64 calico-apiserver-6964b45d66-nxqh5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie5908a5ad41 [] []}} ContainerID="5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-nxqh5" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--nxqh5-" May 14 18:00:37.144007 containerd[1983]: 2025-05-14 18:00:36.972 [INFO][4863] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-nxqh5" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--nxqh5-eth0" May 14 18:00:37.144007 containerd[1983]: 2025-05-14 18:00:37.034 [INFO][4874] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" HandleID="k8s-pod-network.5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" Workload="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--nxqh5-eth0" May 14 18:00:37.144340 containerd[1983]: 2025-05-14 18:00:37.050 [INFO][4874] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" HandleID="k8s-pod-network.5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" 
Workload="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--nxqh5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002eb4b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-64", "pod":"calico-apiserver-6964b45d66-nxqh5", "timestamp":"2025-05-14 18:00:37.03460061 +0000 UTC"}, Hostname:"ip-172-31-31-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:00:37.144340 containerd[1983]: 2025-05-14 18:00:37.051 [INFO][4874] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:00:37.144340 containerd[1983]: 2025-05-14 18:00:37.051 [INFO][4874] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:00:37.144340 containerd[1983]: 2025-05-14 18:00:37.051 [INFO][4874] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-64' May 14 18:00:37.144340 containerd[1983]: 2025-05-14 18:00:37.054 [INFO][4874] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" host="ip-172-31-31-64" May 14 18:00:37.144340 containerd[1983]: 2025-05-14 18:00:37.060 [INFO][4874] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-64" May 14 18:00:37.144340 containerd[1983]: 2025-05-14 18:00:37.067 [INFO][4874] ipam/ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:37.144340 containerd[1983]: 2025-05-14 18:00:37.070 [INFO][4874] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:37.144340 containerd[1983]: 2025-05-14 18:00:37.074 [INFO][4874] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:37.145060 containerd[1983]: 2025-05-14 18:00:37.075 [INFO][4874] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" host="ip-172-31-31-64" May 14 18:00:37.145060 containerd[1983]: 2025-05-14 18:00:37.077 [INFO][4874] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457 May 14 18:00:37.145060 containerd[1983]: 2025-05-14 18:00:37.087 [INFO][4874] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" host="ip-172-31-31-64" May 14 18:00:37.145060 containerd[1983]: 2025-05-14 18:00:37.095 [INFO][4874] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.129/26] block=192.168.72.128/26 handle="k8s-pod-network.5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" host="ip-172-31-31-64" May 14 18:00:37.145060 containerd[1983]: 2025-05-14 18:00:37.095 [INFO][4874] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.129/26] handle="k8s-pod-network.5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" host="ip-172-31-31-64" May 14 18:00:37.145060 containerd[1983]: 2025-05-14 18:00:37.096 [INFO][4874] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 18:00:37.145060 containerd[1983]: 2025-05-14 18:00:37.096 [INFO][4874] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.129/26] IPv6=[] ContainerID="5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" HandleID="k8s-pod-network.5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" Workload="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--nxqh5-eth0" May 14 18:00:37.145639 containerd[1983]: 2025-05-14 18:00:37.104 [INFO][4863] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-nxqh5" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--nxqh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--nxqh5-eth0", GenerateName:"calico-apiserver-6964b45d66-", Namespace:"calico-apiserver", SelfLink:"", UID:"32252210-5347-484d-8a6f-b3e8a7512e46", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 17, 59, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6964b45d66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-64", ContainerID:"", Pod:"calico-apiserver-6964b45d66-nxqh5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5908a5ad41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:00:37.145949 containerd[1983]: 2025-05-14 18:00:37.104 [INFO][4863] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.129/32] ContainerID="5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-nxqh5" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--nxqh5-eth0" May 14 18:00:37.145949 containerd[1983]: 2025-05-14 18:00:37.104 [INFO][4863] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5908a5ad41 ContainerID="5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-nxqh5" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--nxqh5-eth0" May 14 18:00:37.145949 containerd[1983]: 2025-05-14 18:00:37.116 [INFO][4863] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-nxqh5" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--nxqh5-eth0" May 14 18:00:37.146196 containerd[1983]: 2025-05-14 18:00:37.117 [INFO][4863] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-nxqh5" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--nxqh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--nxqh5-eth0", GenerateName:"calico-apiserver-6964b45d66-", Namespace:"calico-apiserver", SelfLink:"", UID:"32252210-5347-484d-8a6f-b3e8a7512e46", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 17, 59, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6964b45d66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-64", ContainerID:"5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457", Pod:"calico-apiserver-6964b45d66-nxqh5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5908a5ad41", MAC:"aa:96:3d:0c:20:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:00:37.146760 containerd[1983]: 2025-05-14 18:00:37.140 [INFO][4863] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-nxqh5" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--nxqh5-eth0" May 14 18:00:37.200199 containerd[1983]: time="2025-05-14T18:00:37.200106254Z" level=info msg="connecting to shim 5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457" address="unix:///run/containerd/s/99a3d82dd5235a6b030c263597ea0d88cf33eb04008647ee9ae17cbbb2a632ad" namespace=k8s.io protocol=ttrpc version=3 May 14 18:00:37.253836 systemd[1]: Started cri-containerd-5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457.scope - libcontainer container 5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457. 
May 14 18:00:37.334372 containerd[1983]: time="2025-05-14T18:00:37.334304019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6964b45d66-nxqh5,Uid:32252210-5347-484d-8a6f-b3e8a7512e46,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457\"" May 14 18:00:37.338286 containerd[1983]: time="2025-05-14T18:00:37.338228919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 18:00:37.865062 containerd[1983]: time="2025-05-14T18:00:37.864222114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qz84k,Uid:055a695d-11dc-4f62-b437-308224c2bdff,Namespace:calico-system,Attempt:0,}" May 14 18:00:37.867032 containerd[1983]: time="2025-05-14T18:00:37.866960442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h78r8,Uid:cdfb56ba-6f85-4f01-b0f2-e5b9882e1236,Namespace:kube-system,Attempt:0,}" May 14 18:00:37.867678 containerd[1983]: time="2025-05-14T18:00:37.867448914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6964b45d66-h745v,Uid:9f9b5b99-510b-4ac5-9618-f81c7735abe4,Namespace:calico-apiserver,Attempt:0,}" May 14 18:00:38.226737 systemd-networkd[1893]: cali4c01e1ba1c4: Link UP May 14 18:00:38.230355 systemd-networkd[1893]: cali4c01e1ba1c4: Gained carrier May 14 18:00:38.273475 containerd[1983]: 2025-05-14 18:00:38.007 [INFO][4945] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--64-k8s-coredns--6f6b679f8f--h78r8-eth0 coredns-6f6b679f8f- kube-system cdfb56ba-6f85-4f01-b0f2-e5b9882e1236 742 0 2025-05-14 17:59:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-64 coredns-6f6b679f8f-h78r8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4c01e1ba1c4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" Namespace="kube-system" Pod="coredns-6f6b679f8f-h78r8" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--h78r8-" May 14 18:00:38.273475 containerd[1983]: 2025-05-14 18:00:38.007 [INFO][4945] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" Namespace="kube-system" Pod="coredns-6f6b679f8f-h78r8" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--h78r8-eth0" May 14 18:00:38.273475 containerd[1983]: 2025-05-14 18:00:38.113 [INFO][4982] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" HandleID="k8s-pod-network.cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" Workload="ip--172--31--31--64-k8s-coredns--6f6b679f8f--h78r8-eth0" May 14 18:00:38.273809 containerd[1983]: 2025-05-14 18:00:38.144 [INFO][4982] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" HandleID="k8s-pod-network.cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" Workload="ip--172--31--31--64-k8s-coredns--6f6b679f8f--h78r8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039fd90), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-64", 
"pod":"coredns-6f6b679f8f-h78r8", "timestamp":"2025-05-14 18:00:38.113193735 +0000 UTC"}, Hostname:"ip-172-31-31-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:00:38.273809 containerd[1983]: 2025-05-14 18:00:38.145 [INFO][4982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:00:38.273809 containerd[1983]: 2025-05-14 18:00:38.145 [INFO][4982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:00:38.273809 containerd[1983]: 2025-05-14 18:00:38.145 [INFO][4982] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-64' May 14 18:00:38.273809 containerd[1983]: 2025-05-14 18:00:38.154 [INFO][4982] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" host="ip-172-31-31-64" May 14 18:00:38.273809 containerd[1983]: 2025-05-14 18:00:38.166 [INFO][4982] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-64" May 14 18:00:38.273809 containerd[1983]: 2025-05-14 18:00:38.177 [INFO][4982] ipam/ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:38.273809 containerd[1983]: 2025-05-14 18:00:38.182 [INFO][4982] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:38.273809 containerd[1983]: 2025-05-14 18:00:38.191 [INFO][4982] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:38.273809 containerd[1983]: 2025-05-14 18:00:38.192 [INFO][4982] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" host="ip-172-31-31-64" May 14 18:00:38.274281 containerd[1983]: 2025-05-14 18:00:38.195 [INFO][4982] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3 May 14 18:00:38.274281 containerd[1983]: 2025-05-14 18:00:38.203 [INFO][4982] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" host="ip-172-31-31-64" May 14 18:00:38.274281 containerd[1983]: 2025-05-14 18:00:38.215 [INFO][4982] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.130/26] block=192.168.72.128/26 handle="k8s-pod-network.cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" host="ip-172-31-31-64" May 14 18:00:38.274281 containerd[1983]: 2025-05-14 18:00:38.216 [INFO][4982] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.130/26] handle="k8s-pod-network.cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" host="ip-172-31-31-64" May 14 18:00:38.274281 containerd[1983]: 2025-05-14 18:00:38.216 [INFO][4982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 18:00:38.274281 containerd[1983]: 2025-05-14 18:00:38.216 [INFO][4982] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.130/26] IPv6=[] ContainerID="cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" HandleID="k8s-pod-network.cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" Workload="ip--172--31--31--64-k8s-coredns--6f6b679f8f--h78r8-eth0" May 14 18:00:38.274608 containerd[1983]: 2025-05-14 18:00:38.221 [INFO][4945] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" Namespace="kube-system" Pod="coredns-6f6b679f8f-h78r8" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--h78r8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--64-k8s-coredns--6f6b679f8f--h78r8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cdfb56ba-6f85-4f01-b0f2-e5b9882e1236", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 17, 59, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-64", ContainerID:"", Pod:"coredns-6f6b679f8f-h78r8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c01e1ba1c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:00:38.274739 containerd[1983]: 2025-05-14 18:00:38.221 [INFO][4945] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.130/32] ContainerID="cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" Namespace="kube-system" Pod="coredns-6f6b679f8f-h78r8" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--h78r8-eth0" May 14 18:00:38.274739 containerd[1983]: 2025-05-14 18:00:38.221 [INFO][4945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c01e1ba1c4 ContainerID="cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" Namespace="kube-system" Pod="coredns-6f6b679f8f-h78r8" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--h78r8-eth0" May 14 18:00:38.274739 containerd[1983]: 2025-05-14 18:00:38.226 [INFO][4945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" Namespace="kube-system" Pod="coredns-6f6b679f8f-h78r8" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--h78r8-eth0" May 
14 18:00:38.274885 containerd[1983]: 2025-05-14 18:00:38.228 [INFO][4945] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" Namespace="kube-system" Pod="coredns-6f6b679f8f-h78r8" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--h78r8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--64-k8s-coredns--6f6b679f8f--h78r8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cdfb56ba-6f85-4f01-b0f2-e5b9882e1236", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 17, 59, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-64", ContainerID:"cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3", Pod:"coredns-6f6b679f8f-h78r8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4c01e1ba1c4", MAC:"c6:7d:17:f4:c3:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:00:38.274885 containerd[1983]: 2025-05-14 18:00:38.261 [INFO][4945] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" Namespace="kube-system" Pod="coredns-6f6b679f8f-h78r8" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--h78r8-eth0" May 14 18:00:38.288585 systemd-networkd[1893]: calie5908a5ad41: Gained IPv6LL May 14 18:00:38.372021 systemd-networkd[1893]: caliec9cb2f8ac4: Link UP May 14 18:00:38.373408 systemd-networkd[1893]: caliec9cb2f8ac4: Gained carrier May 14 18:00:38.383951 containerd[1983]: time="2025-05-14T18:00:38.383596132Z" level=info msg="connecting to shim cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3" address="unix:///run/containerd/s/112931d8b74639b4dc7fd1a419730289fc4f7bd09b0bbdcd0833ead65987b4d2" namespace=k8s.io protocol=ttrpc version=3 May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.005 [INFO][4948] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--h745v-eth0 calico-apiserver-6964b45d66- calico-apiserver 9f9b5b99-510b-4ac5-9618-f81c7735abe4 750 0 2025-05-14 17:59:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:6964b45d66 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-64 calico-apiserver-6964b45d66-h745v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliec9cb2f8ac4 [] []}} ContainerID="564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-h745v" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--h745v-" May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.006 [INFO][4948] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-h745v" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--h745v-eth0" May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.132 [INFO][4988] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" HandleID="k8s-pod-network.564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" Workload="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--h745v-eth0" May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.166 [INFO][4988] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" HandleID="k8s-pod-network.564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" Workload="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--h745v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319c40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-64", "pod":"calico-apiserver-6964b45d66-h745v", "timestamp":"2025-05-14 18:00:38.132397071 +0000 UTC"}, Hostname:"ip-172-31-31-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.166 [INFO][4988] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.216 [INFO][4988] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.216 [INFO][4988] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-64' May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.261 [INFO][4988] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" host="ip-172-31-31-64" May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.289 [INFO][4988] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-64" May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.305 [INFO][4988] ipam/ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.311 [INFO][4988] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.317 [INFO][4988] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.317 [INFO][4988] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" host="ip-172-31-31-64" May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.321 [INFO][4988] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.336 [INFO][4988] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" host="ip-172-31-31-64" May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.349 [INFO][4988] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.131/26] block=192.168.72.128/26 handle="k8s-pod-network.564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" host="ip-172-31-31-64" May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.349 [INFO][4988] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.131/26] handle="k8s-pod-network.564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" host="ip-172-31-31-64" May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.349 [INFO][4988] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 18:00:38.433731 containerd[1983]: 2025-05-14 18:00:38.349 [INFO][4988] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.131/26] IPv6=[] ContainerID="564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" HandleID="k8s-pod-network.564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" Workload="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--h745v-eth0" May 14 18:00:38.437015 containerd[1983]: 2025-05-14 18:00:38.360 [INFO][4948] cni-plugin/k8s.go 386: Populated endpoint ContainerID="564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-h745v" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--h745v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--h745v-eth0", GenerateName:"calico-apiserver-6964b45d66-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f9b5b99-510b-4ac5-9618-f81c7735abe4", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 17, 59, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6964b45d66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-64", ContainerID:"", Pod:"calico-apiserver-6964b45d66-h745v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec9cb2f8ac4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:00:38.437015 containerd[1983]: 2025-05-14 18:00:38.360 [INFO][4948] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.131/32] ContainerID="564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-h745v" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--h745v-eth0" May 14 18:00:38.437015 containerd[1983]: 2025-05-14 18:00:38.361 [INFO][4948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec9cb2f8ac4 ContainerID="564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-h745v" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--h745v-eth0" May 14 18:00:38.437015 containerd[1983]: 2025-05-14 18:00:38.372 [INFO][4948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-h745v" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--h745v-eth0" May 14 18:00:38.437015 containerd[1983]: 2025-05-14 18:00:38.373 [INFO][4948] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-h745v" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--h745v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--h745v-eth0", GenerateName:"calico-apiserver-6964b45d66-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f9b5b99-510b-4ac5-9618-f81c7735abe4", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 17, 59, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6964b45d66", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-64", ContainerID:"564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d", Pod:"calico-apiserver-6964b45d66-h745v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec9cb2f8ac4", MAC:"f6:d1:60:c2:f2:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:00:38.437015 containerd[1983]: 2025-05-14 18:00:38.419 [INFO][4948] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" Namespace="calico-apiserver" Pod="calico-apiserver-6964b45d66-h745v" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--apiserver--6964b45d66--h745v-eth0" May 14 18:00:38.505955 systemd[1]: Started cri-containerd-cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3.scope - libcontainer container cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3. 
May 14 18:00:38.536575 systemd-networkd[1893]: calidc62f097f65: Link UP May 14 18:00:38.537014 systemd-networkd[1893]: calidc62f097f65: Gained carrier May 14 18:00:38.559890 containerd[1983]: time="2025-05-14T18:00:38.559820405Z" level=info msg="connecting to shim 564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d" address="unix:///run/containerd/s/0979f398684c89b628bec727787dacbae75ad57c7a1ae9d0a5946c73d4c058ef" namespace=k8s.io protocol=ttrpc version=3 May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.047 [INFO][4944] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--64-k8s-csi--node--driver--qz84k-eth0 csi-node-driver- calico-system 055a695d-11dc-4f62-b437-308224c2bdff 623 0 2025-05-14 17:59:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-31-64 csi-node-driver-qz84k eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidc62f097f65 [] []}} ContainerID="49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" Namespace="calico-system" Pod="csi-node-driver-qz84k" WorkloadEndpoint="ip--172--31--31--64-k8s-csi--node--driver--qz84k-" May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.047 [INFO][4944] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" Namespace="calico-system" Pod="csi-node-driver-qz84k" WorkloadEndpoint="ip--172--31--31--64-k8s-csi--node--driver--qz84k-eth0" May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.171 [INFO][4994] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" HandleID="k8s-pod-network.49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" Workload="ip--172--31--31--64-k8s-csi--node--driver--qz84k-eth0" May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.193 [INFO][4994] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" HandleID="k8s-pod-network.49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" Workload="ip--172--31--31--64-k8s-csi--node--driver--qz84k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000300c20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-64", "pod":"csi-node-driver-qz84k", "timestamp":"2025-05-14 18:00:38.171868431 +0000 UTC"}, Hostname:"ip-172-31-31-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.193 [INFO][4994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.350 [INFO][4994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.350 [INFO][4994] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-64' May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.357 [INFO][4994] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" host="ip-172-31-31-64" May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.389 [INFO][4994] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-64" May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.409 [INFO][4994] ipam/ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.418 [INFO][4994] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.432 [INFO][4994] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.432 [INFO][4994] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" host="ip-172-31-31-64" May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.441 [INFO][4994] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688 May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.455 [INFO][4994] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" host="ip-172-31-31-64" May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.487 [INFO][4994] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.132/26] block=192.168.72.128/26 handle="k8s-pod-network.49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" host="ip-172-31-31-64" May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.494 [INFO][4994] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.132/26] handle="k8s-pod-network.49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" host="ip-172-31-31-64" May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.494 [INFO][4994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
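Reading the [4982], [4988], and [4994] traces together shows the host-wide IPAM lock serializing concurrent CNI ADDs: [4988] logs "About to acquire" at 18:00:38.166 but acquires only at 18:00:38.216, the moment [4982] releases, and [4994] waits from 38.193 until [4988] releases at 38.349. A minimal sketch of that pattern, with an in-process mutex standing in for whatever lock Calico actually uses:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var hostWideIPAMLock sync.Mutex
	var wg sync.WaitGroup

	// Three concurrent "CNI ADD"s, like the [4982], [4988], [4994] traces above.
	for _, id := range []string{"4982", "4988", "4994"} {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			fmt.Printf("[%s] About to acquire host-wide IPAM lock.\n", id)
			hostWideIPAMLock.Lock()
			fmt.Printf("[%s] Acquired host-wide IPAM lock.\n", id)
			time.Sleep(50 * time.Millisecond) // stand-in for the block lookup and claim
			hostWideIPAMLock.Unlock()
			fmt.Printf("[%s] Released host-wide IPAM lock.\n", id)
		}(id)
	}
	wg.Wait()
}
```

Because only one request holds the lock at a time, the addresses come out strictly in sequence (.130, .131, .132) even though the three requests overlap.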
May 14 18:00:38.593499 containerd[1983]: 2025-05-14 18:00:38.495 [INFO][4994] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.132/26] IPv6=[] ContainerID="49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" HandleID="k8s-pod-network.49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" Workload="ip--172--31--31--64-k8s-csi--node--driver--qz84k-eth0" May 14 18:00:38.597359 containerd[1983]: 2025-05-14 18:00:38.512 [INFO][4944] cni-plugin/k8s.go 386: Populated endpoint ContainerID="49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" Namespace="calico-system" Pod="csi-node-driver-qz84k" WorkloadEndpoint="ip--172--31--31--64-k8s-csi--node--driver--qz84k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--64-k8s-csi--node--driver--qz84k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"055a695d-11dc-4f62-b437-308224c2bdff", ResourceVersion:"623", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 17, 59, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-64", ContainerID:"", Pod:"csi-node-driver-qz84k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidc62f097f65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:00:38.597359 containerd[1983]: 2025-05-14 18:00:38.512 [INFO][4944] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.132/32] ContainerID="49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" Namespace="calico-system" Pod="csi-node-driver-qz84k" WorkloadEndpoint="ip--172--31--31--64-k8s-csi--node--driver--qz84k-eth0" May 14 18:00:38.597359 containerd[1983]: 2025-05-14 18:00:38.512 [INFO][4944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc62f097f65 ContainerID="49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" Namespace="calico-system" Pod="csi-node-driver-qz84k" WorkloadEndpoint="ip--172--31--31--64-k8s-csi--node--driver--qz84k-eth0" May 14 18:00:38.597359 containerd[1983]: 2025-05-14 18:00:38.539 [INFO][4944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" Namespace="calico-system" Pod="csi-node-driver-qz84k" WorkloadEndpoint="ip--172--31--31--64-k8s-csi--node--driver--qz84k-eth0" May 14 18:00:38.597359 containerd[1983]: 2025-05-14 18:00:38.543 [INFO][4944] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" Namespace="calico-system" Pod="csi-node-driver-qz84k" 
WorkloadEndpoint="ip--172--31--31--64-k8s-csi--node--driver--qz84k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--64-k8s-csi--node--driver--qz84k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"055a695d-11dc-4f62-b437-308224c2bdff", ResourceVersion:"623", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 17, 59, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-64", ContainerID:"49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688", Pod:"csi-node-driver-qz84k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidc62f097f65", MAC:"52:a0:58:71:5c:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:00:38.597359 containerd[1983]: 2025-05-14 18:00:38.581 [INFO][4944] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" Namespace="calico-system" Pod="csi-node-driver-qz84k" WorkloadEndpoint="ip--172--31--31--64-k8s-csi--node--driver--qz84k-eth0" May 14 18:00:38.623973 systemd[1]: Started cri-containerd-564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d.scope - libcontainer container 564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d. May 14 18:00:38.722758 containerd[1983]: time="2025-05-14T18:00:38.722625486Z" level=info msg="connecting to shim 49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688" address="unix:///run/containerd/s/6fc64e6b72881876dd690c9454e59db6a628c68df542ec4c277924123d8ba3b5" namespace=k8s.io protocol=ttrpc version=3 May 14 18:00:38.734907 containerd[1983]: time="2025-05-14T18:00:38.734261370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-h78r8,Uid:cdfb56ba-6f85-4f01-b0f2-e5b9882e1236,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3\"" May 14 18:00:38.743037 containerd[1983]: time="2025-05-14T18:00:38.742896162Z" level=info msg="CreateContainer within sandbox \"cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:00:38.773842 containerd[1983]: time="2025-05-14T18:00:38.772795098Z" level=info msg="Container fe154c55307f21cd281fc46477678c53beff34cceb6c21be704b1058fe781bd5: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:38.804051 systemd[1]: Started cri-containerd-49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688.scope - libcontainer container 49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688. 
May 14 18:00:38.811707 containerd[1983]: time="2025-05-14T18:00:38.811648398Z" level=info msg="CreateContainer within sandbox \"cd082e396034ddb04f2012c592668b16b11558630e91e9063fdd20e6fc09b4c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe154c55307f21cd281fc46477678c53beff34cceb6c21be704b1058fe781bd5\"" May 14 18:00:38.813338 containerd[1983]: time="2025-05-14T18:00:38.813133578Z" level=info msg="StartContainer for \"fe154c55307f21cd281fc46477678c53beff34cceb6c21be704b1058fe781bd5\"" May 14 18:00:38.818719 containerd[1983]: time="2025-05-14T18:00:38.818583450Z" level=info msg="connecting to shim fe154c55307f21cd281fc46477678c53beff34cceb6c21be704b1058fe781bd5" address="unix:///run/containerd/s/112931d8b74639b4dc7fd1a419730289fc4f7bd09b0bbdcd0833ead65987b4d2" protocol=ttrpc version=3 May 14 18:00:38.916381 systemd[1]: Started cri-containerd-fe154c55307f21cd281fc46477678c53beff34cceb6c21be704b1058fe781bd5.scope - libcontainer container fe154c55307f21cd281fc46477678c53beff34cceb6c21be704b1058fe781bd5. May 14 18:00:38.928607 containerd[1983]: time="2025-05-14T18:00:38.928541647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6964b45d66-h745v,Uid:9f9b5b99-510b-4ac5-9618-f81c7735abe4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d\"" May 14 18:00:39.019073 containerd[1983]: time="2025-05-14T18:00:39.019008111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qz84k,Uid:055a695d-11dc-4f62-b437-308224c2bdff,Namespace:calico-system,Attempt:0,} returns sandbox id \"49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688\"" May 14 18:00:39.035512 containerd[1983]: time="2025-05-14T18:00:39.035101960Z" level=info msg="StartContainer for \"fe154c55307f21cd281fc46477678c53beff34cceb6c21be704b1058fe781bd5\" returns successfully" May 14 18:00:39.284979 kubelet[3264]: I0514 18:00:39.284866 3264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-h78r8" podStartSLOduration=48.284841521 podStartE2EDuration="48.284841521s" podCreationTimestamp="2025-05-14 17:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:00:39.283791029 +0000 UTC m=+55.654658534" watchObservedRunningTime="2025-05-14 18:00:39.284841521 +0000 UTC m=+55.655708954" May 14 18:00:39.348872 systemd[1]: Started sshd@9-172.31.31.64:22-139.178.89.65:51994.service - OpenSSH per-connection server daemon (139.178.89.65:51994). May 14 18:00:39.570050 sshd[5218]: Accepted publickey for core from 139.178.89.65 port 51994 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:00:39.579301 sshd-session[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:39.600614 systemd-logind[1969]: New session 10 of user core. May 14 18:00:39.611742 systemd[1]: Started session-10.scope - Session 10 of User core. 
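The sandbox and container lifecycle above follows the CRI sequence the kubelet drives: RunPodSandbox returns a sandbox ID, CreateContainer runs within that sandbox and returns a container ID (fe154c55... for coredns), and StartContainer launches it. The pod_startup_latency_tracker line then reports the gap between the pod's creation at 17:59:51 and the observed running time at 18:00:39.28, which is the 48.28s podStartSLOduration. Below is a rough sketch of the same three calls against containerd's CRI socket using the published k8s.io/cri-api types; the socket path matches a stock containerd install, but the image reference and metadata here are simplified placeholders, not the exact spec the kubelet sent.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// containerd's CRI endpoint on the node.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// 1. RunPodSandbox -> sandbox ID (the log's "returns sandbox id \"cd082e...\"").
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "coredns-6f6b679f8f-h78r8", Namespace: "kube-system",
			Uid: "cdfb56ba-6f85-4f01-b0f2-e5b9882e1236", Attempt: 0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer within the sandbox -> container ID ("fe154c..." in the log).
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "coredns", Attempt: 0},
			// Placeholder image reference; the log does not record the coredns image.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.11.1"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	// 3. StartContainer ("StartContainer for \"fe154c...\" returns successfully").
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId})
	if err != nil {
		panic(err)
	}
	fmt.Println("started", cc.ContainerId, "in sandbox", sb.PodSandboxId)
}
```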
May 14 18:00:39.631849 systemd-networkd[1893]: calidc62f097f65: Gained IPv6LL May 14 18:00:39.821881 systemd-networkd[1893]: caliec9cb2f8ac4: Gained IPv6LL May 14 18:00:39.868131 containerd[1983]: time="2025-05-14T18:00:39.867088712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b5f67457-mkjwp,Uid:63810593-4b4a-4661-87f0-484cfc23b0e5,Namespace:calico-system,Attempt:0,}" May 14 18:00:39.887100 systemd-networkd[1893]: cali4c01e1ba1c4: Gained IPv6LL May 14 18:00:39.952744 sshd[5223]: Connection closed by 139.178.89.65 port 51994 May 14 18:00:39.954053 sshd-session[5218]: pam_unix(sshd:session): session closed for user core May 14 18:00:39.964405 systemd[1]: sshd@9-172.31.31.64:22-139.178.89.65:51994.service: Deactivated successfully. May 14 18:00:39.971391 systemd[1]: session-10.scope: Deactivated successfully. May 14 18:00:39.979019 systemd-logind[1969]: Session 10 logged out. Waiting for processes to exit. May 14 18:00:40.006395 systemd[1]: Started sshd@10-172.31.31.64:22-139.178.89.65:52000.service - OpenSSH per-connection server daemon (139.178.89.65:52000). May 14 18:00:40.010157 systemd-logind[1969]: Removed session 10. May 14 18:00:40.233402 sshd[5251]: Accepted publickey for core from 139.178.89.65 port 52000 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:00:40.240175 sshd-session[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:40.257193 systemd-logind[1969]: New session 11 of user core. May 14 18:00:40.266071 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 18:00:40.295195 systemd-networkd[1893]: cali5c5451b44f2: Link UP May 14 18:00:40.296638 systemd-networkd[1893]: cali5c5451b44f2: Gained carrier May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.037 [INFO][5236] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--64-k8s-calico--kube--controllers--9b5f67457--mkjwp-eth0 calico-kube-controllers-9b5f67457- calico-system 63810593-4b4a-4661-87f0-484cfc23b0e5 746 0 2025-05-14 17:59:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9b5f67457 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-31-64 calico-kube-controllers-9b5f67457-mkjwp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5c5451b44f2 [] []}} ContainerID="1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" Namespace="calico-system" Pod="calico-kube-controllers-9b5f67457-mkjwp" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--kube--controllers--9b5f67457--mkjwp-" May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.037 [INFO][5236] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" Namespace="calico-system" Pod="calico-kube-controllers-9b5f67457-mkjwp" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--kube--controllers--9b5f67457--mkjwp-eth0" May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.150 [INFO][5255] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" HandleID="k8s-pod-network.1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" 
Workload="ip--172--31--31--64-k8s-calico--kube--controllers--9b5f67457--mkjwp-eth0" May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.184 [INFO][5255] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" HandleID="k8s-pod-network.1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" Workload="ip--172--31--31--64-k8s-calico--kube--controllers--9b5f67457--mkjwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000220ed0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-64", "pod":"calico-kube-controllers-9b5f67457-mkjwp", "timestamp":"2025-05-14 18:00:40.150561785 +0000 UTC"}, Hostname:"ip-172-31-31-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.184 [INFO][5255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.184 [INFO][5255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.184 [INFO][5255] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-64' May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.189 [INFO][5255] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" host="ip-172-31-31-64" May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.199 [INFO][5255] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-64" May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.210 [INFO][5255] ipam/ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.215 [INFO][5255] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.220 [INFO][5255] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.221 [INFO][5255] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" host="ip-172-31-31-64" May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.225 [INFO][5255] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4 May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.238 [INFO][5255] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" host="ip-172-31-31-64" May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.267 [INFO][5255] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.133/26] block=192.168.72.128/26 handle="k8s-pod-network.1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" host="ip-172-31-31-64" May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.270 [INFO][5255] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.133/26] 
handle="k8s-pod-network.1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" host="ip-172-31-31-64" May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.272 [INFO][5255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 18:00:40.342465 containerd[1983]: 2025-05-14 18:00:40.273 [INFO][5255] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.133/26] IPv6=[] ContainerID="1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" HandleID="k8s-pod-network.1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" Workload="ip--172--31--31--64-k8s-calico--kube--controllers--9b5f67457--mkjwp-eth0" May 14 18:00:40.344444 containerd[1983]: 2025-05-14 18:00:40.280 [INFO][5236] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" Namespace="calico-system" Pod="calico-kube-controllers-9b5f67457-mkjwp" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--kube--controllers--9b5f67457--mkjwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--64-k8s-calico--kube--controllers--9b5f67457--mkjwp-eth0", GenerateName:"calico-kube-controllers-9b5f67457-", Namespace:"calico-system", SelfLink:"", UID:"63810593-4b4a-4661-87f0-484cfc23b0e5", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 17, 59, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9b5f67457", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-64", ContainerID:"", Pod:"calico-kube-controllers-9b5f67457-mkjwp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c5451b44f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:00:40.344444 containerd[1983]: 2025-05-14 18:00:40.281 [INFO][5236] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.133/32] ContainerID="1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" Namespace="calico-system" Pod="calico-kube-controllers-9b5f67457-mkjwp" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--kube--controllers--9b5f67457--mkjwp-eth0" May 14 18:00:40.344444 containerd[1983]: 2025-05-14 18:00:40.281 [INFO][5236] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c5451b44f2 ContainerID="1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" Namespace="calico-system" Pod="calico-kube-controllers-9b5f67457-mkjwp" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--kube--controllers--9b5f67457--mkjwp-eth0" May 14 18:00:40.344444 containerd[1983]: 2025-05-14 18:00:40.296 [INFO][5236] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" 
Namespace="calico-system" Pod="calico-kube-controllers-9b5f67457-mkjwp" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--kube--controllers--9b5f67457--mkjwp-eth0" May 14 18:00:40.344444 containerd[1983]: 2025-05-14 18:00:40.302 [INFO][5236] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" Namespace="calico-system" Pod="calico-kube-controllers-9b5f67457-mkjwp" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--kube--controllers--9b5f67457--mkjwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--64-k8s-calico--kube--controllers--9b5f67457--mkjwp-eth0", GenerateName:"calico-kube-controllers-9b5f67457-", Namespace:"calico-system", SelfLink:"", UID:"63810593-4b4a-4661-87f0-484cfc23b0e5", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 17, 59, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9b5f67457", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-64", ContainerID:"1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4", Pod:"calico-kube-controllers-9b5f67457-mkjwp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5c5451b44f2", MAC:"d6:04:fc:74:e7:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:00:40.344444 containerd[1983]: 2025-05-14 18:00:40.331 [INFO][5236] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" Namespace="calico-system" Pod="calico-kube-controllers-9b5f67457-mkjwp" WorkloadEndpoint="ip--172--31--31--64-k8s-calico--kube--controllers--9b5f67457--mkjwp-eth0" May 14 18:00:40.472158 containerd[1983]: time="2025-05-14T18:00:40.472077079Z" level=info msg="connecting to shim 1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4" address="unix:///run/containerd/s/ee746e6c754487be9db261fa7d456acb016ef04008f23010bdaae25e7d5e31ea" namespace=k8s.io protocol=ttrpc version=3 May 14 18:00:40.593058 systemd[1]: Started cri-containerd-1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4.scope - libcontainer container 1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4. May 14 18:00:40.765553 sshd[5263]: Connection closed by 139.178.89.65 port 52000 May 14 18:00:40.767089 sshd-session[5251]: pam_unix(sshd:session): session closed for user core May 14 18:00:40.778601 systemd[1]: session-11.scope: Deactivated successfully. May 14 18:00:40.783199 systemd[1]: sshd@10-172.31.31.64:22-139.178.89.65:52000.service: Deactivated successfully. May 14 18:00:40.830936 systemd-logind[1969]: Session 11 logged out. Waiting for processes to exit. 
May 14 18:00:40.834752 systemd[1]: Started sshd@11-172.31.31.64:22-139.178.89.65:52016.service - OpenSSH per-connection server daemon (139.178.89.65:52016). May 14 18:00:40.843623 systemd-logind[1969]: Removed session 11. May 14 18:00:40.867651 containerd[1983]: time="2025-05-14T18:00:40.867369117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pwg6j,Uid:478c3253-ad52-4254-9fd9-c79e7fba6d23,Namespace:kube-system,Attempt:0,}" May 14 18:00:40.899919 containerd[1983]: time="2025-05-14T18:00:40.899487045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9b5f67457-mkjwp,Uid:63810593-4b4a-4661-87f0-484cfc23b0e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4\"" May 14 18:00:41.120731 sshd[5331]: Accepted publickey for core from 139.178.89.65 port 52016 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:00:41.127185 sshd-session[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:41.150839 systemd-logind[1969]: New session 12 of user core. May 14 18:00:41.156182 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 18:00:41.467398 systemd-networkd[1893]: cali28761252903: Link UP May 14 18:00:41.473015 systemd-networkd[1893]: cali28761252903: Gained carrier May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.151 [INFO][5332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--64-k8s-coredns--6f6b679f8f--pwg6j-eth0 coredns-6f6b679f8f- kube-system 478c3253-ad52-4254-9fd9-c79e7fba6d23 748 0 2025-05-14 17:59:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-64 coredns-6f6b679f8f-pwg6j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali28761252903 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwg6j" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--pwg6j-" May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.152 [INFO][5332] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwg6j" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--pwg6j-eth0" May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.302 [INFO][5351] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" HandleID="k8s-pod-network.d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" Workload="ip--172--31--31--64-k8s-coredns--6f6b679f8f--pwg6j-eth0" May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.337 [INFO][5351] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" HandleID="k8s-pod-network.d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" Workload="ip--172--31--31--64-k8s-coredns--6f6b679f8f--pwg6j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011aa40), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ip-172-31-31-64", "pod":"coredns-6f6b679f8f-pwg6j", "timestamp":"2025-05-14 18:00:41.302281795 +0000 UTC"}, Hostname:"ip-172-31-31-64", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.337 [INFO][5351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.338 [INFO][5351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.338 [INFO][5351] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-64' May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.344 [INFO][5351] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" host="ip-172-31-31-64" May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.355 [INFO][5351] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-64" May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.368 [INFO][5351] ipam/ipam.go 489: Trying affinity for 192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.373 [INFO][5351] ipam/ipam.go 155: Attempting to load block cidr=192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.383 [INFO][5351] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ip-172-31-31-64" May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.390 [INFO][5351] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" host="ip-172-31-31-64" May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.393 [INFO][5351] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.406 [INFO][5351] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" host="ip-172-31-31-64" May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.433 [INFO][5351] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.72.134/26] block=192.168.72.128/26 handle="k8s-pod-network.d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" host="ip-172-31-31-64" May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.434 [INFO][5351] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.134/26] handle="k8s-pod-network.d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" host="ip-172-31-31-64" May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.434 [INFO][5351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 18:00:41.525688 containerd[1983]: 2025-05-14 18:00:41.434 [INFO][5351] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.72.134/26] IPv6=[] ContainerID="d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" HandleID="k8s-pod-network.d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" Workload="ip--172--31--31--64-k8s-coredns--6f6b679f8f--pwg6j-eth0" May 14 18:00:41.528930 containerd[1983]: 2025-05-14 18:00:41.444 [INFO][5332] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwg6j" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--pwg6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--64-k8s-coredns--6f6b679f8f--pwg6j-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"478c3253-ad52-4254-9fd9-c79e7fba6d23", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 17, 59, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-64", ContainerID:"", Pod:"coredns-6f6b679f8f-pwg6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28761252903", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:00:41.528930 containerd[1983]: 2025-05-14 18:00:41.444 [INFO][5332] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.72.134/32] ContainerID="d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwg6j" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--pwg6j-eth0" May 14 18:00:41.528930 containerd[1983]: 2025-05-14 18:00:41.444 [INFO][5332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28761252903 ContainerID="d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwg6j" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--pwg6j-eth0" May 14 18:00:41.528930 containerd[1983]: 2025-05-14 18:00:41.472 [INFO][5332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwg6j" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--pwg6j-eth0" May 
14 18:00:41.528930 containerd[1983]: 2025-05-14 18:00:41.489 [INFO][5332] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwg6j" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--pwg6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--64-k8s-coredns--6f6b679f8f--pwg6j-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"478c3253-ad52-4254-9fd9-c79e7fba6d23", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 17, 59, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-64", ContainerID:"d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da", Pod:"coredns-6f6b679f8f-pwg6j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28761252903", MAC:"ba:67:62:08:3b:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 18:00:41.528930 containerd[1983]: 2025-05-14 18:00:41.516 [INFO][5332] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" Namespace="kube-system" Pod="coredns-6f6b679f8f-pwg6j" WorkloadEndpoint="ip--172--31--31--64-k8s-coredns--6f6b679f8f--pwg6j-eth0" May 14 18:00:41.546159 sshd[5349]: Connection closed by 139.178.89.65 port 52016 May 14 18:00:41.547980 sshd-session[5331]: pam_unix(sshd:session): session closed for user core May 14 18:00:41.559142 systemd[1]: sshd@11-172.31.31.64:22-139.178.89.65:52016.service: Deactivated successfully. May 14 18:00:41.570226 systemd[1]: session-12.scope: Deactivated successfully. May 14 18:00:41.582198 systemd-logind[1969]: Session 12 logged out. Waiting for processes to exit. May 14 18:00:41.585934 systemd-logind[1969]: Removed session 12. 
May 14 18:00:41.634093 containerd[1983]: time="2025-05-14T18:00:41.633938372Z" level=info msg="connecting to shim d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da" address="unix:///run/containerd/s/de9ac601dbd9958a7f4db8c5eb69e7763a5f8c9d91f3ce9b929412d1c643dc34" namespace=k8s.io protocol=ttrpc version=3 May 14 18:00:41.701829 systemd[1]: Started cri-containerd-d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da.scope - libcontainer container d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da. May 14 18:00:41.826471 containerd[1983]: time="2025-05-14T18:00:41.826411269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pwg6j,Uid:478c3253-ad52-4254-9fd9-c79e7fba6d23,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da\"" May 14 18:00:41.839259 containerd[1983]: time="2025-05-14T18:00:41.839029701Z" level=info msg="CreateContainer within sandbox \"d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 18:00:41.882285 containerd[1983]: time="2025-05-14T18:00:41.879388414Z" level=info msg="Container 3442899a7bc888aad9fd4eca9e0cb4b1a4dfefbf2d2afb71c5225545e2940c25: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:41.884644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2829989061.mount: Deactivated successfully. May 14 18:00:41.904667 containerd[1983]: time="2025-05-14T18:00:41.904611646Z" level=info msg="CreateContainer within sandbox \"d6a0e5db6a135345ca842f8d1a17ade02f155c3412a20dee4b971f5a707508da\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3442899a7bc888aad9fd4eca9e0cb4b1a4dfefbf2d2afb71c5225545e2940c25\"" May 14 18:00:41.908745 containerd[1983]: time="2025-05-14T18:00:41.908125582Z" level=info msg="StartContainer for \"3442899a7bc888aad9fd4eca9e0cb4b1a4dfefbf2d2afb71c5225545e2940c25\"" May 14 18:00:41.914093 containerd[1983]: time="2025-05-14T18:00:41.914033014Z" level=info msg="connecting to shim 3442899a7bc888aad9fd4eca9e0cb4b1a4dfefbf2d2afb71c5225545e2940c25" address="unix:///run/containerd/s/de9ac601dbd9958a7f4db8c5eb69e7763a5f8c9d91f3ce9b929412d1c643dc34" protocol=ttrpc version=3 May 14 18:00:41.970940 systemd[1]: Started cri-containerd-3442899a7bc888aad9fd4eca9e0cb4b1a4dfefbf2d2afb71c5225545e2940c25.scope - libcontainer container 3442899a7bc888aad9fd4eca9e0cb4b1a4dfefbf2d2afb71c5225545e2940c25. 
May 14 18:00:42.062812 systemd-networkd[1893]: cali5c5451b44f2: Gained IPv6LL May 14 18:00:42.071623 containerd[1983]: time="2025-05-14T18:00:42.071551159Z" level=info msg="StartContainer for \"3442899a7bc888aad9fd4eca9e0cb4b1a4dfefbf2d2afb71c5225545e2940c25\" returns successfully" May 14 18:00:42.295220 containerd[1983]: time="2025-05-14T18:00:42.294895124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:42.297760 containerd[1983]: time="2025-05-14T18:00:42.297688712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 14 18:00:42.301243 containerd[1983]: time="2025-05-14T18:00:42.301186544Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:42.309553 containerd[1983]: time="2025-05-14T18:00:42.309300152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:42.311578 containerd[1983]: time="2025-05-14T18:00:42.311485292Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 4.973170525s" May 14 18:00:42.311907 containerd[1983]: time="2025-05-14T18:00:42.311768840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 14 18:00:42.314555 containerd[1983]: time="2025-05-14T18:00:42.313721900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 18:00:42.316870 containerd[1983]: time="2025-05-14T18:00:42.316816040Z" level=info msg="CreateContainer within sandbox \"5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 18:00:42.332394 kubelet[3264]: I0514 18:00:42.332313 3264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pwg6j" podStartSLOduration=51.332279492 podStartE2EDuration="51.332279492s" podCreationTimestamp="2025-05-14 17:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 18:00:42.321697832 +0000 UTC m=+58.692565241" watchObservedRunningTime="2025-05-14 18:00:42.332279492 +0000 UTC m=+58.703146913" May 14 18:00:42.348654 containerd[1983]: time="2025-05-14T18:00:42.348116504Z" level=info msg="Container bbeed98ff5b84125dced8b615a7ae81bd7fc2f9d5bda19a921f68de495f53b58: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:42.376987 containerd[1983]: time="2025-05-14T18:00:42.376886300Z" level=info msg="CreateContainer within sandbox \"5d105b9240880c5e3cb86b8558c11665b827f8863504373e3ecfb09b8d760457\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bbeed98ff5b84125dced8b615a7ae81bd7fc2f9d5bda19a921f68de495f53b58\"" May 14 18:00:42.379640 containerd[1983]: 
time="2025-05-14T18:00:42.379156688Z" level=info msg="StartContainer for \"bbeed98ff5b84125dced8b615a7ae81bd7fc2f9d5bda19a921f68de495f53b58\"" May 14 18:00:42.386469 containerd[1983]: time="2025-05-14T18:00:42.386349896Z" level=info msg="connecting to shim bbeed98ff5b84125dced8b615a7ae81bd7fc2f9d5bda19a921f68de495f53b58" address="unix:///run/containerd/s/99a3d82dd5235a6b030c263597ea0d88cf33eb04008647ee9ae17cbbb2a632ad" protocol=ttrpc version=3 May 14 18:00:42.440889 systemd[1]: Started cri-containerd-bbeed98ff5b84125dced8b615a7ae81bd7fc2f9d5bda19a921f68de495f53b58.scope - libcontainer container bbeed98ff5b84125dced8b615a7ae81bd7fc2f9d5bda19a921f68de495f53b58. May 14 18:00:42.548405 containerd[1983]: time="2025-05-14T18:00:42.548008005Z" level=info msg="StartContainer for \"bbeed98ff5b84125dced8b615a7ae81bd7fc2f9d5bda19a921f68de495f53b58\" returns successfully" May 14 18:00:42.573759 systemd-networkd[1893]: cali28761252903: Gained IPv6LL May 14 18:00:42.770695 containerd[1983]: time="2025-05-14T18:00:42.770615038Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:42.772568 containerd[1983]: time="2025-05-14T18:00:42.772456282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 14 18:00:42.776887 containerd[1983]: time="2025-05-14T18:00:42.776819170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 463.037882ms" May 14 18:00:42.776887 containerd[1983]: time="2025-05-14T18:00:42.776876062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 14 18:00:42.779253 containerd[1983]: time="2025-05-14T18:00:42.778298338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 18:00:42.784705 containerd[1983]: time="2025-05-14T18:00:42.784042498Z" level=info msg="CreateContainer within sandbox \"564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 18:00:42.814837 containerd[1983]: time="2025-05-14T18:00:42.814366330Z" level=info msg="Container 4bb477c7a901e0741e6bbda7366e5dc0f5cfc2f9a35adc8bc8e90dd64401f403: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:42.831420 containerd[1983]: time="2025-05-14T18:00:42.831354742Z" level=info msg="CreateContainer within sandbox \"564c759bf4ef87b57a1e09c3472332dbf941e46e2badbeccf56c0515cfe4195d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4bb477c7a901e0741e6bbda7366e5dc0f5cfc2f9a35adc8bc8e90dd64401f403\"" May 14 18:00:42.832349 containerd[1983]: time="2025-05-14T18:00:42.832297078Z" level=info msg="StartContainer for \"4bb477c7a901e0741e6bbda7366e5dc0f5cfc2f9a35adc8bc8e90dd64401f403\"" May 14 18:00:42.837612 containerd[1983]: time="2025-05-14T18:00:42.837547702Z" level=info msg="connecting to shim 4bb477c7a901e0741e6bbda7366e5dc0f5cfc2f9a35adc8bc8e90dd64401f403" address="unix:///run/containerd/s/0979f398684c89b628bec727787dacbae75ad57c7a1ae9d0a5946c73d4c058ef" protocol=ttrpc version=3 May 14 18:00:42.885884 
systemd[1]: Started cri-containerd-4bb477c7a901e0741e6bbda7366e5dc0f5cfc2f9a35adc8bc8e90dd64401f403.scope - libcontainer container 4bb477c7a901e0741e6bbda7366e5dc0f5cfc2f9a35adc8bc8e90dd64401f403. May 14 18:00:42.988308 containerd[1983]: time="2025-05-14T18:00:42.988158791Z" level=info msg="StartContainer for \"4bb477c7a901e0741e6bbda7366e5dc0f5cfc2f9a35adc8bc8e90dd64401f403\" returns successfully" May 14 18:00:43.332183 kubelet[3264]: I0514 18:00:43.331735 3264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6964b45d66-h745v" podStartSLOduration=42.487613946 podStartE2EDuration="46.331690137s" podCreationTimestamp="2025-05-14 17:59:57 +0000 UTC" firstStartedPulling="2025-05-14 18:00:38.934050955 +0000 UTC m=+55.304918364" lastFinishedPulling="2025-05-14 18:00:42.778127158 +0000 UTC m=+59.148994555" observedRunningTime="2025-05-14 18:00:43.331157841 +0000 UTC m=+59.702025274" watchObservedRunningTime="2025-05-14 18:00:43.331690137 +0000 UTC m=+59.702557570" May 14 18:00:44.323264 kubelet[3264]: I0514 18:00:44.322013 3264 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:00:45.135479 containerd[1983]: time="2025-05-14T18:00:45.135274498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:45.146055 containerd[1983]: time="2025-05-14T18:00:45.145583710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 14 18:00:45.150159 containerd[1983]: time="2025-05-14T18:00:45.150052750Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:45.159819 containerd[1983]: time="2025-05-14T18:00:45.159602110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:45.164737 containerd[1983]: time="2025-05-14T18:00:45.164509378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 2.385055596s" May 14 18:00:45.164737 containerd[1983]: time="2025-05-14T18:00:45.164611894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 14 18:00:45.170573 containerd[1983]: time="2025-05-14T18:00:45.169327582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 18:00:45.172438 containerd[1983]: time="2025-05-14T18:00:45.172083178Z" level=info msg="CreateContainer within sandbox \"49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 18:00:45.207547 containerd[1983]: time="2025-05-14T18:00:45.205714618Z" level=info msg="Container c3a9f8fc96cbbd22c313213e65843a30020bfe5dc930112285ec70c2acaec727: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:45.239493 containerd[1983]: time="2025-05-14T18:00:45.239435986Z" level=info msg="CreateContainer within sandbox 
\"49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c3a9f8fc96cbbd22c313213e65843a30020bfe5dc930112285ec70c2acaec727\"" May 14 18:00:45.242505 containerd[1983]: time="2025-05-14T18:00:45.242381362Z" level=info msg="StartContainer for \"c3a9f8fc96cbbd22c313213e65843a30020bfe5dc930112285ec70c2acaec727\"" May 14 18:00:45.263180 containerd[1983]: time="2025-05-14T18:00:45.262885234Z" level=info msg="connecting to shim c3a9f8fc96cbbd22c313213e65843a30020bfe5dc930112285ec70c2acaec727" address="unix:///run/containerd/s/6fc64e6b72881876dd690c9454e59db6a628c68df542ec4c277924123d8ba3b5" protocol=ttrpc version=3 May 14 18:00:45.336903 systemd[1]: Started cri-containerd-c3a9f8fc96cbbd22c313213e65843a30020bfe5dc930112285ec70c2acaec727.scope - libcontainer container c3a9f8fc96cbbd22c313213e65843a30020bfe5dc930112285ec70c2acaec727. May 14 18:00:45.443061 ntpd[1964]: Listen normally on 8 vxlan.calico 192.168.72.128:123 May 14 18:00:45.444316 ntpd[1964]: 14 May 18:00:45 ntpd[1964]: Listen normally on 8 vxlan.calico 192.168.72.128:123 May 14 18:00:45.444316 ntpd[1964]: 14 May 18:00:45 ntpd[1964]: Listen normally on 9 vxlan.calico [fe80::648b:19ff:fe04:1e57%4]:123 May 14 18:00:45.444316 ntpd[1964]: 14 May 18:00:45 ntpd[1964]: Listen normally on 10 calie5908a5ad41 [fe80::ecee:eeff:feee:eeee%7]:123 May 14 18:00:45.443179 ntpd[1964]: Listen normally on 9 vxlan.calico [fe80::648b:19ff:fe04:1e57%4]:123 May 14 18:00:45.443258 ntpd[1964]: Listen normally on 10 calie5908a5ad41 [fe80::ecee:eeff:feee:eeee%7]:123 May 14 18:00:45.447123 ntpd[1964]: 14 May 18:00:45 ntpd[1964]: Listen normally on 11 cali4c01e1ba1c4 [fe80::ecee:eeff:feee:eeee%8]:123 May 14 18:00:45.447123 ntpd[1964]: 14 May 18:00:45 ntpd[1964]: Listen normally on 12 caliec9cb2f8ac4 [fe80::ecee:eeff:feee:eeee%9]:123 May 14 18:00:45.447123 ntpd[1964]: 14 May 18:00:45 ntpd[1964]: Listen normally on 13 calidc62f097f65 [fe80::ecee:eeff:feee:eeee%10]:123 May 14 18:00:45.447123 ntpd[1964]: 14 May 18:00:45 ntpd[1964]: Listen normally on 14 cali5c5451b44f2 [fe80::ecee:eeff:feee:eeee%11]:123 May 14 18:00:45.447123 ntpd[1964]: 14 May 18:00:45 ntpd[1964]: Listen normally on 15 cali28761252903 [fe80::ecee:eeff:feee:eeee%12]:123 May 14 18:00:45.445633 ntpd[1964]: Listen normally on 11 cali4c01e1ba1c4 [fe80::ecee:eeff:feee:eeee%8]:123 May 14 18:00:45.445752 ntpd[1964]: Listen normally on 12 caliec9cb2f8ac4 [fe80::ecee:eeff:feee:eeee%9]:123 May 14 18:00:45.446054 ntpd[1964]: Listen normally on 13 calidc62f097f65 [fe80::ecee:eeff:feee:eeee%10]:123 May 14 18:00:45.446128 ntpd[1964]: Listen normally on 14 cali5c5451b44f2 [fe80::ecee:eeff:feee:eeee%11]:123 May 14 18:00:45.446193 ntpd[1964]: Listen normally on 15 cali28761252903 [fe80::ecee:eeff:feee:eeee%12]:123 May 14 18:00:45.550500 containerd[1983]: time="2025-05-14T18:00:45.550425708Z" level=info msg="StartContainer for \"c3a9f8fc96cbbd22c313213e65843a30020bfe5dc930112285ec70c2acaec727\" returns successfully" May 14 18:00:46.046480 kubelet[3264]: I0514 18:00:46.046374 3264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6964b45d66-nxqh5" podStartSLOduration=44.070031309 podStartE2EDuration="49.046354486s" podCreationTimestamp="2025-05-14 17:59:57 +0000 UTC" firstStartedPulling="2025-05-14 18:00:37.336852315 +0000 UTC m=+53.707719712" lastFinishedPulling="2025-05-14 18:00:42.313175408 +0000 UTC m=+58.684042889" observedRunningTime="2025-05-14 18:00:43.357809649 +0000 UTC 
m=+59.728677082" watchObservedRunningTime="2025-05-14 18:00:46.046354486 +0000 UTC m=+62.417221895" May 14 18:00:46.598185 systemd[1]: Started sshd@12-172.31.31.64:22-139.178.89.65:57006.service - OpenSSH per-connection server daemon (139.178.89.65:57006). May 14 18:00:46.829442 sshd[5587]: Accepted publickey for core from 139.178.89.65 port 57006 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:00:46.833139 sshd-session[5587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:46.843816 systemd-logind[1969]: New session 13 of user core. May 14 18:00:46.853887 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 18:00:47.201768 sshd[5591]: Connection closed by 139.178.89.65 port 57006 May 14 18:00:47.202693 sshd-session[5587]: pam_unix(sshd:session): session closed for user core May 14 18:00:47.211717 systemd-logind[1969]: Session 13 logged out. Waiting for processes to exit. May 14 18:00:47.213386 systemd[1]: sshd@12-172.31.31.64:22-139.178.89.65:57006.service: Deactivated successfully. May 14 18:00:47.221107 systemd[1]: session-13.scope: Deactivated successfully. May 14 18:00:47.228499 systemd-logind[1969]: Removed session 13. May 14 18:00:48.702393 containerd[1983]: time="2025-05-14T18:00:48.702327208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:48.703758 containerd[1983]: time="2025-05-14T18:00:48.703643524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 14 18:00:48.705331 containerd[1983]: time="2025-05-14T18:00:48.705227716Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:48.708591 containerd[1983]: time="2025-05-14T18:00:48.708449524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:48.710225 containerd[1983]: time="2025-05-14T18:00:48.710171680Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 3.54078537s" May 14 18:00:48.710380 containerd[1983]: time="2025-05-14T18:00:48.710352856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 14 18:00:48.712343 containerd[1983]: time="2025-05-14T18:00:48.712278628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 18:00:48.745214 containerd[1983]: time="2025-05-14T18:00:48.745140772Z" level=info msg="CreateContainer within sandbox \"1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 18:00:48.758130 containerd[1983]: time="2025-05-14T18:00:48.758077420Z" level=info msg="Container b2d5645cbaf5fb358a5dbc7696d0cbd08389a068afc143abfd259d1858564d43: CDI 
devices from CRI Config.CDIDevices: []" May 14 18:00:48.772367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount968645886.mount: Deactivated successfully. May 14 18:00:48.777086 containerd[1983]: time="2025-05-14T18:00:48.776999608Z" level=info msg="CreateContainer within sandbox \"1751d5e5acb257e1f00b6dd4ae1ab2d63461a34933e2a2dacb81c5e9f15167d4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b2d5645cbaf5fb358a5dbc7696d0cbd08389a068afc143abfd259d1858564d43\"" May 14 18:00:48.778166 containerd[1983]: time="2025-05-14T18:00:48.778073908Z" level=info msg="StartContainer for \"b2d5645cbaf5fb358a5dbc7696d0cbd08389a068afc143abfd259d1858564d43\"" May 14 18:00:48.781531 containerd[1983]: time="2025-05-14T18:00:48.781395436Z" level=info msg="connecting to shim b2d5645cbaf5fb358a5dbc7696d0cbd08389a068afc143abfd259d1858564d43" address="unix:///run/containerd/s/ee746e6c754487be9db261fa7d456acb016ef04008f23010bdaae25e7d5e31ea" protocol=ttrpc version=3 May 14 18:00:48.825843 systemd[1]: Started cri-containerd-b2d5645cbaf5fb358a5dbc7696d0cbd08389a068afc143abfd259d1858564d43.scope - libcontainer container b2d5645cbaf5fb358a5dbc7696d0cbd08389a068afc143abfd259d1858564d43. May 14 18:00:48.930265 containerd[1983]: time="2025-05-14T18:00:48.930218405Z" level=info msg="StartContainer for \"b2d5645cbaf5fb358a5dbc7696d0cbd08389a068afc143abfd259d1858564d43\" returns successfully" May 14 18:00:49.460029 containerd[1983]: time="2025-05-14T18:00:49.459889551Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2d5645cbaf5fb358a5dbc7696d0cbd08389a068afc143abfd259d1858564d43\" id:\"5dcf377c943da24f9ed6b7aa30277ddb2e24bf1131980a15a1b408e27bf05a01\" pid:5655 exited_at:{seconds:1747245649 nanos:457892607}" May 14 18:00:49.488825 kubelet[3264]: I0514 18:00:49.488734 3264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9b5f67457-mkjwp" podStartSLOduration=42.686377604 podStartE2EDuration="50.488685795s" podCreationTimestamp="2025-05-14 17:59:59 +0000 UTC" firstStartedPulling="2025-05-14 18:00:40.909373893 +0000 UTC m=+57.280241302" lastFinishedPulling="2025-05-14 18:00:48.711682084 +0000 UTC m=+65.082549493" observedRunningTime="2025-05-14 18:00:49.407955543 +0000 UTC m=+65.778823060" watchObservedRunningTime="2025-05-14 18:00:49.488685795 +0000 UTC m=+65.859553204" May 14 18:00:50.480992 containerd[1983]: time="2025-05-14T18:00:50.480905248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:50.485813 containerd[1983]: time="2025-05-14T18:00:50.485737408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 14 18:00:50.493205 containerd[1983]: time="2025-05-14T18:00:50.493127236Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:50.500997 containerd[1983]: time="2025-05-14T18:00:50.500788744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 18:00:50.504919 containerd[1983]: time="2025-05-14T18:00:50.504312665Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.791278241s" May 14 18:00:50.504919 containerd[1983]: time="2025-05-14T18:00:50.504370889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 14 18:00:50.512575 containerd[1983]: time="2025-05-14T18:00:50.512489789Z" level=info msg="CreateContainer within sandbox \"49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 18:00:50.541575 containerd[1983]: time="2025-05-14T18:00:50.539963105Z" level=info msg="Container e47faa27b4bb6daf27fd627e23f9f0ceb82faa49ae7c6f0d6cff22e5a6b8ce5a: CDI devices from CRI Config.CDIDevices: []" May 14 18:00:50.557106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount771015686.mount: Deactivated successfully. May 14 18:00:50.569953 containerd[1983]: time="2025-05-14T18:00:50.569880365Z" level=info msg="CreateContainer within sandbox \"49372b57931fc675446e0a1ac30405a2139c11685b98bcd4cfcc014a741e2688\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e47faa27b4bb6daf27fd627e23f9f0ceb82faa49ae7c6f0d6cff22e5a6b8ce5a\"" May 14 18:00:50.571665 containerd[1983]: time="2025-05-14T18:00:50.570858401Z" level=info msg="StartContainer for \"e47faa27b4bb6daf27fd627e23f9f0ceb82faa49ae7c6f0d6cff22e5a6b8ce5a\"" May 14 18:00:50.578782 containerd[1983]: time="2025-05-14T18:00:50.578494325Z" level=info msg="connecting to shim e47faa27b4bb6daf27fd627e23f9f0ceb82faa49ae7c6f0d6cff22e5a6b8ce5a" address="unix:///run/containerd/s/6fc64e6b72881876dd690c9454e59db6a628c68df542ec4c277924123d8ba3b5" protocol=ttrpc version=3 May 14 18:00:50.642853 systemd[1]: Started cri-containerd-e47faa27b4bb6daf27fd627e23f9f0ceb82faa49ae7c6f0d6cff22e5a6b8ce5a.scope - libcontainer container e47faa27b4bb6daf27fd627e23f9f0ceb82faa49ae7c6f0d6cff22e5a6b8ce5a. May 14 18:00:50.730652 containerd[1983]: time="2025-05-14T18:00:50.730578354Z" level=info msg="StartContainer for \"e47faa27b4bb6daf27fd627e23f9f0ceb82faa49ae7c6f0d6cff22e5a6b8ce5a\" returns successfully" May 14 18:00:51.079219 kubelet[3264]: I0514 18:00:51.078508 3264 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 18:00:51.079219 kubelet[3264]: I0514 18:00:51.078582 3264 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 18:00:52.239122 systemd[1]: Started sshd@13-172.31.31.64:22-139.178.89.65:57020.service - OpenSSH per-connection server daemon (139.178.89.65:57020). May 14 18:00:52.446494 sshd[5705]: Accepted publickey for core from 139.178.89.65 port 57020 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:00:52.450313 sshd-session[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:52.459715 systemd-logind[1969]: New session 14 of user core. 
May 14 18:00:52.465788 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 18:00:52.735689 sshd[5707]: Connection closed by 139.178.89.65 port 57020 May 14 18:00:52.736646 sshd-session[5705]: pam_unix(sshd:session): session closed for user core May 14 18:00:52.744647 systemd[1]: sshd@13-172.31.31.64:22-139.178.89.65:57020.service: Deactivated successfully. May 14 18:00:52.748250 systemd[1]: session-14.scope: Deactivated successfully. May 14 18:00:52.751635 systemd-logind[1969]: Session 14 logged out. Waiting for processes to exit. May 14 18:00:52.755112 systemd-logind[1969]: Removed session 14. May 14 18:00:53.193604 containerd[1983]: time="2025-05-14T18:00:53.193406418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8e34a6e9f5071a19b7bd12c7e259266bd710893792d452ab3287f78c64f1e03\" id:\"28173df813739017c34ddfbb95002d118e59ef70c4c01fd68e712e103e43a824\" pid:5730 exited_at:{seconds:1747245653 nanos:192482910}" May 14 18:00:53.225381 kubelet[3264]: I0514 18:00:53.225254 3264 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qz84k" podStartSLOduration=42.740541884 podStartE2EDuration="54.225226878s" podCreationTimestamp="2025-05-14 17:59:59 +0000 UTC" firstStartedPulling="2025-05-14 18:00:39.022278579 +0000 UTC m=+55.393145988" lastFinishedPulling="2025-05-14 18:00:50.506963573 +0000 UTC m=+66.877830982" observedRunningTime="2025-05-14 18:00:51.418614941 +0000 UTC m=+67.789482434" watchObservedRunningTime="2025-05-14 18:00:53.225226878 +0000 UTC m=+69.596094335" May 14 18:00:54.640711 containerd[1983]: time="2025-05-14T18:00:54.640649877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2d5645cbaf5fb358a5dbc7696d0cbd08389a068afc143abfd259d1858564d43\" id:\"ebdd1c33a05e856771174f98ae69094d0a7b2b780b10750e62f992c5bd23331d\" pid:5754 exited_at:{seconds:1747245654 nanos:640083165}" May 14 18:00:56.672189 kubelet[3264]: I0514 18:00:56.671603 3264 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 18:00:57.775790 systemd[1]: Started sshd@14-172.31.31.64:22-139.178.89.65:46194.service - OpenSSH per-connection server daemon (139.178.89.65:46194). May 14 18:00:57.986922 sshd[5772]: Accepted publickey for core from 139.178.89.65 port 46194 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:00:57.989424 sshd-session[5772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:00:57.998112 systemd-logind[1969]: New session 15 of user core. May 14 18:00:58.004818 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 18:00:58.250395 sshd[5774]: Connection closed by 139.178.89.65 port 46194 May 14 18:00:58.251266 sshd-session[5772]: pam_unix(sshd:session): session closed for user core May 14 18:00:58.258232 systemd[1]: sshd@14-172.31.31.64:22-139.178.89.65:46194.service: Deactivated successfully. May 14 18:00:58.261947 systemd[1]: session-15.scope: Deactivated successfully. May 14 18:00:58.264053 systemd-logind[1969]: Session 15 logged out. Waiting for processes to exit. May 14 18:00:58.267648 systemd-logind[1969]: Removed session 15. 
May 14 18:00:59.423597 containerd[1983]: time="2025-05-14T18:00:59.423491989Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2d5645cbaf5fb358a5dbc7696d0cbd08389a068afc143abfd259d1858564d43\" id:\"b585837a4e3bd445d4b077d7c8db958d4d0d81fbff56749e4e751f6922d10e23\" pid:5797 exited_at:{seconds:1747245659 nanos:423114217}" May 14 18:01:03.290507 systemd[1]: Started sshd@15-172.31.31.64:22-139.178.89.65:46206.service - OpenSSH per-connection server daemon (139.178.89.65:46206). May 14 18:01:03.489158 sshd[5810]: Accepted publickey for core from 139.178.89.65 port 46206 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:01:03.493571 sshd-session[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:03.503157 systemd-logind[1969]: New session 16 of user core. May 14 18:01:03.508833 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 18:01:03.781963 sshd[5812]: Connection closed by 139.178.89.65 port 46206 May 14 18:01:03.782853 sshd-session[5810]: pam_unix(sshd:session): session closed for user core May 14 18:01:03.789646 systemd[1]: sshd@15-172.31.31.64:22-139.178.89.65:46206.service: Deactivated successfully. May 14 18:01:03.793469 systemd[1]: session-16.scope: Deactivated successfully. May 14 18:01:03.795597 systemd-logind[1969]: Session 16 logged out. Waiting for processes to exit. May 14 18:01:03.799103 systemd-logind[1969]: Removed session 16. May 14 18:01:03.815967 systemd[1]: Started sshd@16-172.31.31.64:22-139.178.89.65:46214.service - OpenSSH per-connection server daemon (139.178.89.65:46214). May 14 18:01:04.029588 sshd[5824]: Accepted publickey for core from 139.178.89.65 port 46214 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:01:04.031603 sshd-session[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:04.040605 systemd-logind[1969]: New session 17 of user core. May 14 18:01:04.046238 systemd[1]: Started session-17.scope - Session 17 of User core. May 14 18:01:04.556806 sshd[5826]: Connection closed by 139.178.89.65 port 46214 May 14 18:01:04.558120 sshd-session[5824]: pam_unix(sshd:session): session closed for user core May 14 18:01:04.564408 systemd[1]: sshd@16-172.31.31.64:22-139.178.89.65:46214.service: Deactivated successfully. May 14 18:01:04.569958 systemd[1]: session-17.scope: Deactivated successfully. May 14 18:01:04.579324 systemd-logind[1969]: Session 17 logged out. Waiting for processes to exit. May 14 18:01:04.605967 systemd[1]: Started sshd@17-172.31.31.64:22-139.178.89.65:46224.service - OpenSSH per-connection server daemon (139.178.89.65:46224). May 14 18:01:04.611420 systemd-logind[1969]: Removed session 17. May 14 18:01:04.848582 sshd[5836]: Accepted publickey for core from 139.178.89.65 port 46224 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:01:04.852359 sshd-session[5836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:04.862924 systemd-logind[1969]: New session 18 of user core. May 14 18:01:04.872803 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 18:01:08.434352 sshd[5838]: Connection closed by 139.178.89.65 port 46224 May 14 18:01:08.435363 sshd-session[5836]: pam_unix(sshd:session): session closed for user core May 14 18:01:08.448430 systemd[1]: sshd@17-172.31.31.64:22-139.178.89.65:46224.service: Deactivated successfully. May 14 18:01:08.448912 systemd-logind[1969]: Session 18 logged out. 
Waiting for processes to exit. May 14 18:01:08.458899 systemd[1]: session-18.scope: Deactivated successfully. May 14 18:01:08.460998 systemd[1]: session-18.scope: Consumed 1.015s CPU time, 65.7M memory peak. May 14 18:01:08.487628 systemd-logind[1969]: Removed session 18. May 14 18:01:08.491577 systemd[1]: Started sshd@18-172.31.31.64:22-139.178.89.65:38600.service - OpenSSH per-connection server daemon (139.178.89.65:38600). May 14 18:01:08.708727 sshd[5857]: Accepted publickey for core from 139.178.89.65 port 38600 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:01:08.710962 sshd-session[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:08.719625 systemd-logind[1969]: New session 19 of user core. May 14 18:01:08.730817 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 18:01:09.286496 sshd[5860]: Connection closed by 139.178.89.65 port 38600 May 14 18:01:09.287056 sshd-session[5857]: pam_unix(sshd:session): session closed for user core May 14 18:01:09.295469 systemd-logind[1969]: Session 19 logged out. Waiting for processes to exit. May 14 18:01:09.296323 systemd[1]: sshd@18-172.31.31.64:22-139.178.89.65:38600.service: Deactivated successfully. May 14 18:01:09.299873 systemd[1]: session-19.scope: Deactivated successfully. May 14 18:01:09.306363 systemd-logind[1969]: Removed session 19. May 14 18:01:09.326643 systemd[1]: Started sshd@19-172.31.31.64:22-139.178.89.65:38614.service - OpenSSH per-connection server daemon (139.178.89.65:38614). May 14 18:01:09.528360 sshd[5872]: Accepted publickey for core from 139.178.89.65 port 38614 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:01:09.530934 sshd-session[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:09.540624 systemd-logind[1969]: New session 20 of user core. May 14 18:01:09.547839 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 18:01:09.792563 sshd[5874]: Connection closed by 139.178.89.65 port 38614 May 14 18:01:09.793553 sshd-session[5872]: pam_unix(sshd:session): session closed for user core May 14 18:01:09.802668 systemd[1]: sshd@19-172.31.31.64:22-139.178.89.65:38614.service: Deactivated successfully. May 14 18:01:09.807131 systemd[1]: session-20.scope: Deactivated successfully. May 14 18:01:09.809318 systemd-logind[1969]: Session 20 logged out. Waiting for processes to exit. May 14 18:01:09.812713 systemd-logind[1969]: Removed session 20. May 14 18:01:14.835991 systemd[1]: Started sshd@20-172.31.31.64:22-139.178.89.65:38618.service - OpenSSH per-connection server daemon (139.178.89.65:38618). May 14 18:01:15.036124 sshd[5886]: Accepted publickey for core from 139.178.89.65 port 38618 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:01:15.040140 sshd-session[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:15.049699 systemd-logind[1969]: New session 21 of user core. May 14 18:01:15.060832 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 18:01:15.310534 sshd[5888]: Connection closed by 139.178.89.65 port 38618 May 14 18:01:15.311379 sshd-session[5886]: pam_unix(sshd:session): session closed for user core May 14 18:01:15.316991 systemd[1]: sshd@20-172.31.31.64:22-139.178.89.65:38618.service: Deactivated successfully. May 14 18:01:15.321109 systemd[1]: session-21.scope: Deactivated successfully. May 14 18:01:15.325465 systemd-logind[1969]: Session 21 logged out. 
Waiting for processes to exit. May 14 18:01:15.328979 systemd-logind[1969]: Removed session 21. May 14 18:01:20.355973 systemd[1]: Started sshd@21-172.31.31.64:22-139.178.89.65:52868.service - OpenSSH per-connection server daemon (139.178.89.65:52868). May 14 18:01:20.570499 sshd[5909]: Accepted publickey for core from 139.178.89.65 port 52868 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:01:20.574513 sshd-session[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:20.586443 systemd-logind[1969]: New session 22 of user core. May 14 18:01:20.593811 systemd[1]: Started session-22.scope - Session 22 of User core. May 14 18:01:20.908186 sshd[5911]: Connection closed by 139.178.89.65 port 52868 May 14 18:01:20.908684 sshd-session[5909]: pam_unix(sshd:session): session closed for user core May 14 18:01:20.919189 systemd[1]: sshd@21-172.31.31.64:22-139.178.89.65:52868.service: Deactivated successfully. May 14 18:01:20.927120 systemd[1]: session-22.scope: Deactivated successfully. May 14 18:01:20.930637 systemd-logind[1969]: Session 22 logged out. Waiting for processes to exit. May 14 18:01:20.935694 systemd-logind[1969]: Removed session 22. May 14 18:01:23.301433 containerd[1983]: time="2025-05-14T18:01:23.301373363Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8e34a6e9f5071a19b7bd12c7e259266bd710893792d452ab3287f78c64f1e03\" id:\"5dd7b0e1e2c17aac067c8a20ad27dfb88cf91020f3f151cdc38d0caf112f48fd\" pid:5942 exited_at:{seconds:1747245683 nanos:300443567}" May 14 18:01:24.668489 containerd[1983]: time="2025-05-14T18:01:24.668329886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2d5645cbaf5fb358a5dbc7696d0cbd08389a068afc143abfd259d1858564d43\" id:\"b03b32fa9b40a026bf27b00b012d69c371317fbeb48cc56b6d3e258519efc65b\" pid:5966 exited_at:{seconds:1747245684 nanos:667592102}" May 14 18:01:25.948225 systemd[1]: Started sshd@22-172.31.31.64:22-139.178.89.65:52870.service - OpenSSH per-connection server daemon (139.178.89.65:52870). May 14 18:01:26.170488 sshd[5977]: Accepted publickey for core from 139.178.89.65 port 52870 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:01:26.173767 sshd-session[5977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:26.185705 systemd-logind[1969]: New session 23 of user core. May 14 18:01:26.194902 systemd[1]: Started session-23.scope - Session 23 of User core. May 14 18:01:26.462894 sshd[5979]: Connection closed by 139.178.89.65 port 52870 May 14 18:01:26.465153 sshd-session[5977]: pam_unix(sshd:session): session closed for user core May 14 18:01:26.475337 systemd[1]: sshd@22-172.31.31.64:22-139.178.89.65:52870.service: Deactivated successfully. May 14 18:01:26.482045 systemd[1]: session-23.scope: Deactivated successfully. May 14 18:01:26.484140 systemd-logind[1969]: Session 23 logged out. Waiting for processes to exit. May 14 18:01:26.487744 systemd-logind[1969]: Removed session 23. May 14 18:01:31.508371 systemd[1]: Started sshd@23-172.31.31.64:22-139.178.89.65:49590.service - OpenSSH per-connection server daemon (139.178.89.65:49590). May 14 18:01:31.718570 sshd[5991]: Accepted publickey for core from 139.178.89.65 port 49590 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:01:31.721639 sshd-session[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:31.730823 systemd-logind[1969]: New session 24 of user core. 
May 14 18:01:31.738796 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 18:01:31.984543 sshd[5993]: Connection closed by 139.178.89.65 port 49590 May 14 18:01:31.985022 sshd-session[5991]: pam_unix(sshd:session): session closed for user core May 14 18:01:31.992389 systemd[1]: sshd@23-172.31.31.64:22-139.178.89.65:49590.service: Deactivated successfully. May 14 18:01:31.997344 systemd[1]: session-24.scope: Deactivated successfully. May 14 18:01:32.000922 systemd-logind[1969]: Session 24 logged out. Waiting for processes to exit. May 14 18:01:32.004417 systemd-logind[1969]: Removed session 24. May 14 18:01:37.023912 systemd[1]: Started sshd@24-172.31.31.64:22-139.178.89.65:45120.service - OpenSSH per-connection server daemon (139.178.89.65:45120). May 14 18:01:37.223706 sshd[6005]: Accepted publickey for core from 139.178.89.65 port 45120 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:01:37.227123 sshd-session[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:37.236411 systemd-logind[1969]: New session 25 of user core. May 14 18:01:37.242801 systemd[1]: Started session-25.scope - Session 25 of User core. May 14 18:01:37.485764 sshd[6007]: Connection closed by 139.178.89.65 port 45120 May 14 18:01:37.486922 sshd-session[6005]: pam_unix(sshd:session): session closed for user core May 14 18:01:37.494284 systemd[1]: sshd@24-172.31.31.64:22-139.178.89.65:45120.service: Deactivated successfully. May 14 18:01:37.497826 systemd[1]: session-25.scope: Deactivated successfully. May 14 18:01:37.501396 systemd-logind[1969]: Session 25 logged out. Waiting for processes to exit. May 14 18:01:37.503853 systemd-logind[1969]: Removed session 25. May 14 18:01:42.523786 systemd[1]: Started sshd@25-172.31.31.64:22-139.178.89.65:45126.service - OpenSSH per-connection server daemon (139.178.89.65:45126). May 14 18:01:42.721778 sshd[6021]: Accepted publickey for core from 139.178.89.65 port 45126 ssh2: RSA SHA256:URn7ZoHBoU077VSjynW3kvkQ/AEB/rllRhTyVV1XWG4 May 14 18:01:42.724839 sshd-session[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 18:01:42.733039 systemd-logind[1969]: New session 26 of user core. May 14 18:01:42.749770 systemd[1]: Started session-26.scope - Session 26 of User core. May 14 18:01:42.983628 sshd[6023]: Connection closed by 139.178.89.65 port 45126 May 14 18:01:42.984399 sshd-session[6021]: pam_unix(sshd:session): session closed for user core May 14 18:01:42.991631 systemd-logind[1969]: Session 26 logged out. Waiting for processes to exit. May 14 18:01:42.993025 systemd[1]: sshd@25-172.31.31.64:22-139.178.89.65:45126.service: Deactivated successfully. May 14 18:01:42.997946 systemd[1]: session-26.scope: Deactivated successfully. May 14 18:01:43.002966 systemd-logind[1969]: Removed session 26. 
May 14 18:01:53.201164 containerd[1983]: time="2025-05-14T18:01:53.201062308Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8e34a6e9f5071a19b7bd12c7e259266bd710893792d452ab3287f78c64f1e03\" id:\"27c05d754fdd0520320021a1d79f6f6fedf2bd0e0aa9739fcb06222e05631e5f\" pid:6049 exited_at:{seconds:1747245713 nanos:200617852}"
May 14 18:01:54.638690 containerd[1983]: time="2025-05-14T18:01:54.638584699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2d5645cbaf5fb358a5dbc7696d0cbd08389a068afc143abfd259d1858564d43\" id:\"5033212639ec76571c3e3771f525552c48d1c746e47dfe5c8134576da304d118\" pid:6073 exit_status:1 exited_at:{seconds:1747245714 nanos:637987783}"
May 14 18:01:56.983556 kubelet[3264]: E0514 18:01:56.982695 3264 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-64?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 14 18:01:57.269253 systemd[1]: cri-containerd-016f2a60a392fe0d3575eb9b7b4725ac3ed843201619e9b0b0ddd84f64cb50a2.scope: Deactivated successfully.
May 14 18:01:57.273170 containerd[1983]: time="2025-05-14T18:01:57.273054908Z" level=info msg="TaskExit event in podsandbox handler container_id:\"016f2a60a392fe0d3575eb9b7b4725ac3ed843201619e9b0b0ddd84f64cb50a2\" id:\"016f2a60a392fe0d3575eb9b7b4725ac3ed843201619e9b0b0ddd84f64cb50a2\" pid:3886 exit_status:1 exited_at:{seconds:1747245717 nanos:272506868}"
May 14 18:01:57.273257 systemd[1]: cri-containerd-016f2a60a392fe0d3575eb9b7b4725ac3ed843201619e9b0b0ddd84f64cb50a2.scope: Consumed 6.228s CPU time, 45.9M memory peak.
May 14 18:01:57.274510 containerd[1983]: time="2025-05-14T18:01:57.274208804Z" level=info msg="received exit event container_id:\"016f2a60a392fe0d3575eb9b7b4725ac3ed843201619e9b0b0ddd84f64cb50a2\" id:\"016f2a60a392fe0d3575eb9b7b4725ac3ed843201619e9b0b0ddd84f64cb50a2\" pid:3886 exit_status:1 exited_at:{seconds:1747245717 nanos:272506868}"
May 14 18:01:57.316580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-016f2a60a392fe0d3575eb9b7b4725ac3ed843201619e9b0b0ddd84f64cb50a2-rootfs.mount: Deactivated successfully.
May 14 18:01:57.627296 kubelet[3264]: I0514 18:01:57.626887 3264 scope.go:117] "RemoveContainer" containerID="016f2a60a392fe0d3575eb9b7b4725ac3ed843201619e9b0b0ddd84f64cb50a2"
May 14 18:01:57.631037 containerd[1983]: time="2025-05-14T18:01:57.630950122Z" level=info msg="CreateContainer within sandbox \"ad99a7bb223b4a45b517763f0157638bd47c9633ca8d31248aeba7604bac841b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
May 14 18:01:57.647661 containerd[1983]: time="2025-05-14T18:01:57.647594362Z" level=info msg="Container 4a33725b96d9d65e3e0f865e4a642633197399b05a6a062eb87225bdb06631c4: CDI devices from CRI Config.CDIDevices: []"
May 14 18:01:57.662433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1384989559.mount: Deactivated successfully.
May 14 18:01:57.663767 containerd[1983]: time="2025-05-14T18:01:57.663673402Z" level=info msg="CreateContainer within sandbox \"ad99a7bb223b4a45b517763f0157638bd47c9633ca8d31248aeba7604bac841b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"4a33725b96d9d65e3e0f865e4a642633197399b05a6a062eb87225bdb06631c4\""
May 14 18:01:57.666432 containerd[1983]: time="2025-05-14T18:01:57.665210074Z" level=info msg="StartContainer for \"4a33725b96d9d65e3e0f865e4a642633197399b05a6a062eb87225bdb06631c4\""
May 14 18:01:57.667372 containerd[1983]: time="2025-05-14T18:01:57.667292866Z" level=info msg="connecting to shim 4a33725b96d9d65e3e0f865e4a642633197399b05a6a062eb87225bdb06631c4" address="unix:///run/containerd/s/968e0b086356b1bdef9c5a5c2c87a449fa92f24e6bcf2e2336dd87a46687e202" protocol=ttrpc version=3
May 14 18:01:57.705836 systemd[1]: Started cri-containerd-4a33725b96d9d65e3e0f865e4a642633197399b05a6a062eb87225bdb06631c4.scope - libcontainer container 4a33725b96d9d65e3e0f865e4a642633197399b05a6a062eb87225bdb06631c4.
May 14 18:01:57.769644 containerd[1983]: time="2025-05-14T18:01:57.769572587Z" level=info msg="StartContainer for \"4a33725b96d9d65e3e0f865e4a642633197399b05a6a062eb87225bdb06631c4\" returns successfully"
May 14 18:01:58.180055 systemd[1]: cri-containerd-b7fbae9c48ca18075c2094823d96989ab3cbb9b8d352aa0f3c440015c22ee39b.scope: Deactivated successfully.
May 14 18:01:58.181904 systemd[1]: cri-containerd-b7fbae9c48ca18075c2094823d96989ab3cbb9b8d352aa0f3c440015c22ee39b.scope: Consumed 5.765s CPU time, 58M memory peak, 236K read from disk.
May 14 18:01:58.187890 containerd[1983]: time="2025-05-14T18:01:58.187746393Z" level=info msg="received exit event container_id:\"b7fbae9c48ca18075c2094823d96989ab3cbb9b8d352aa0f3c440015c22ee39b\" id:\"b7fbae9c48ca18075c2094823d96989ab3cbb9b8d352aa0f3c440015c22ee39b\" pid:3104 exit_status:1 exited_at:{seconds:1747245718 nanos:185986833}"
May 14 18:01:58.188757 containerd[1983]: time="2025-05-14T18:01:58.188145201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7fbae9c48ca18075c2094823d96989ab3cbb9b8d352aa0f3c440015c22ee39b\" id:\"b7fbae9c48ca18075c2094823d96989ab3cbb9b8d352aa0f3c440015c22ee39b\" pid:3104 exit_status:1 exited_at:{seconds:1747245718 nanos:185986833}"
May 14 18:01:58.237331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7fbae9c48ca18075c2094823d96989ab3cbb9b8d352aa0f3c440015c22ee39b-rootfs.mount: Deactivated successfully.
May 14 18:01:58.634282 kubelet[3264]: I0514 18:01:58.633169 3264 scope.go:117] "RemoveContainer" containerID="b7fbae9c48ca18075c2094823d96989ab3cbb9b8d352aa0f3c440015c22ee39b"
May 14 18:01:58.638969 containerd[1983]: time="2025-05-14T18:01:58.638906339Z" level=info msg="CreateContainer within sandbox \"3f20f3f80fe6e943579aac782ec14955b101d4242143d67b91e99015c277eec3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 14 18:01:58.659254 containerd[1983]: time="2025-05-14T18:01:58.659186159Z" level=info msg="Container 8e9e76f92dd0aec88926dc2bb3913f19812f1e89ebca351e6eb58c7f5e057050: CDI devices from CRI Config.CDIDevices: []"
May 14 18:01:58.680750 containerd[1983]: time="2025-05-14T18:01:58.680621015Z" level=info msg="CreateContainer within sandbox \"3f20f3f80fe6e943579aac782ec14955b101d4242143d67b91e99015c277eec3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8e9e76f92dd0aec88926dc2bb3913f19812f1e89ebca351e6eb58c7f5e057050\""
May 14 18:01:58.683205 containerd[1983]: time="2025-05-14T18:01:58.682257491Z" level=info msg="StartContainer for \"8e9e76f92dd0aec88926dc2bb3913f19812f1e89ebca351e6eb58c7f5e057050\""
May 14 18:01:58.686743 containerd[1983]: time="2025-05-14T18:01:58.686664863Z" level=info msg="connecting to shim 8e9e76f92dd0aec88926dc2bb3913f19812f1e89ebca351e6eb58c7f5e057050" address="unix:///run/containerd/s/dd1c59231ebeeb4bc0fae1e4cb93b27d51e0eadacfa523de3ea3f3272a429e5a" protocol=ttrpc version=3
May 14 18:01:58.729820 systemd[1]: Started cri-containerd-8e9e76f92dd0aec88926dc2bb3913f19812f1e89ebca351e6eb58c7f5e057050.scope - libcontainer container 8e9e76f92dd0aec88926dc2bb3913f19812f1e89ebca351e6eb58c7f5e057050.
May 14 18:01:58.824714 containerd[1983]: time="2025-05-14T18:01:58.824664744Z" level=info msg="StartContainer for \"8e9e76f92dd0aec88926dc2bb3913f19812f1e89ebca351e6eb58c7f5e057050\" returns successfully"
May 14 18:01:59.436376 containerd[1983]: time="2025-05-14T18:01:59.436310267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2d5645cbaf5fb358a5dbc7696d0cbd08389a068afc143abfd259d1858564d43\" id:\"c92d26b2e4726508e79cd67084d2f96686c966f189dd267d5362a50db9b9c3ec\" pid:6188 exit_status:1 exited_at:{seconds:1747245719 nanos:435281171}"
May 14 18:02:02.075921 systemd[1]: cri-containerd-167208dd0ab050b3d11986aa934ed059ac0de5496e67dad817e9c3c16a9ffab9.scope: Deactivated successfully.
May 14 18:02:02.077837 systemd[1]: cri-containerd-167208dd0ab050b3d11986aa934ed059ac0de5496e67dad817e9c3c16a9ffab9.scope: Consumed 2.132s CPU time, 20.9M memory peak.
May 14 18:02:02.083727 containerd[1983]: time="2025-05-14T18:02:02.083655084Z" level=info msg="received exit event container_id:\"167208dd0ab050b3d11986aa934ed059ac0de5496e67dad817e9c3c16a9ffab9\" id:\"167208dd0ab050b3d11986aa934ed059ac0de5496e67dad817e9c3c16a9ffab9\" pid:3114 exit_status:1 exited_at:{seconds:1747245722 nanos:80843592}"
May 14 18:02:02.084804 containerd[1983]: time="2025-05-14T18:02:02.084615048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"167208dd0ab050b3d11986aa934ed059ac0de5496e67dad817e9c3c16a9ffab9\" id:\"167208dd0ab050b3d11986aa934ed059ac0de5496e67dad817e9c3c16a9ffab9\" pid:3114 exit_status:1 exited_at:{seconds:1747245722 nanos:80843592}"
May 14 18:02:02.126401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-167208dd0ab050b3d11986aa934ed059ac0de5496e67dad817e9c3c16a9ffab9-rootfs.mount: Deactivated successfully.
May 14 18:02:02.664209 kubelet[3264]: I0514 18:02:02.664093 3264 scope.go:117] "RemoveContainer" containerID="167208dd0ab050b3d11986aa934ed059ac0de5496e67dad817e9c3c16a9ffab9"
May 14 18:02:02.668594 containerd[1983]: time="2025-05-14T18:02:02.668237427Z" level=info msg="CreateContainer within sandbox \"6dd421eb719997fa4a5daa6ec7593fb7cbfd8758ed25b59b8d99f04a20a1c7b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 14 18:02:02.688083 containerd[1983]: time="2025-05-14T18:02:02.685820823Z" level=info msg="Container aee47eef5a376a82f76f58c8fa96e521e3439063b3b74a5ca4ea7f353a4e1c73: CDI devices from CRI Config.CDIDevices: []"
May 14 18:02:02.705906 containerd[1983]: time="2025-05-14T18:02:02.705855771Z" level=info msg="CreateContainer within sandbox \"6dd421eb719997fa4a5daa6ec7593fb7cbfd8758ed25b59b8d99f04a20a1c7b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"aee47eef5a376a82f76f58c8fa96e521e3439063b3b74a5ca4ea7f353a4e1c73\""
May 14 18:02:02.706810 containerd[1983]: time="2025-05-14T18:02:02.706757259Z" level=info msg="StartContainer for \"aee47eef5a376a82f76f58c8fa96e521e3439063b3b74a5ca4ea7f353a4e1c73\""
May 14 18:02:02.708955 containerd[1983]: time="2025-05-14T18:02:02.708886131Z" level=info msg="connecting to shim aee47eef5a376a82f76f58c8fa96e521e3439063b3b74a5ca4ea7f353a4e1c73" address="unix:///run/containerd/s/e9ba4bbd8c7528e8f88f96c6e232e0879b8b0421956eeea788a384a1644967f7" protocol=ttrpc version=3
May 14 18:02:02.754830 systemd[1]: Started cri-containerd-aee47eef5a376a82f76f58c8fa96e521e3439063b3b74a5ca4ea7f353a4e1c73.scope - libcontainer container aee47eef5a376a82f76f58c8fa96e521e3439063b3b74a5ca4ea7f353a4e1c73.
May 14 18:02:02.840720 containerd[1983]: time="2025-05-14T18:02:02.840617608Z" level=info msg="StartContainer for \"aee47eef5a376a82f76f58c8fa96e521e3439063b3b74a5ca4ea7f353a4e1c73\" returns successfully"
May 14 18:02:06.983335 kubelet[3264]: E0514 18:02:06.983257 3264 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-64?timeout=10s\": context deadline exceeded"