May 27 17:01:10.081840 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
May 27 17:01:10.081885 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 27 15:31:23 -00 2025
May 27 17:01:10.081909 kernel: KASLR disabled due to lack of seed
May 27 17:01:10.081925 kernel: efi: EFI v2.7 by EDK II
May 27 17:01:10.081941 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a733a98 MEMRESERVE=0x78551598
May 27 17:01:10.081981 kernel: secureboot: Secure boot disabled
May 27 17:01:10.082003 kernel: ACPI: Early table checksum verification disabled
May 27 17:01:10.082019 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
May 27 17:01:10.082034 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
May 27 17:01:10.082050 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 27 17:01:10.082071 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
May 27 17:01:10.082086 kernel: ACPI: FACS 0x0000000078630000 000040
May 27 17:01:10.082101 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 27 17:01:10.082116 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
May 27 17:01:10.082134 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
May 27 17:01:10.082150 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
May 27 17:01:10.082170 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 27 17:01:10.082186 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
May 27 17:01:10.082202 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
May 27 17:01:10.082218 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
May 27 17:01:10.082269 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
May 27 17:01:10.082286 kernel: printk: legacy bootconsole [uart0] enabled
May 27 17:01:10.082320 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 27 17:01:10.082339 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
May 27 17:01:10.082356 kernel: NODE_DATA(0) allocated [mem 0x4b584cdc0-0x4b5853fff]
May 27 17:01:10.082372 kernel: Zone ranges:
May 27 17:01:10.082396 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 27 17:01:10.082412 kernel: DMA32 empty
May 27 17:01:10.082427 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
May 27 17:01:10.082443 kernel: Device empty
May 27 17:01:10.082458 kernel: Movable zone start for each node
May 27 17:01:10.082474 kernel: Early memory node ranges
May 27 17:01:10.082489 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
May 27 17:01:10.082505 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
May 27 17:01:10.082521 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
May 27 17:01:10.082536 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
May 27 17:01:10.082552 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
May 27 17:01:10.082568 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
May 27 17:01:10.082587 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
May 27 17:01:10.082603 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
May 27 17:01:10.082626 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
May 27 17:01:10.082643 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
May 27 17:01:10.082660 kernel: psci: probing for conduit method from ACPI.
May 27 17:01:10.082680 kernel: psci: PSCIv1.0 detected in firmware.
May 27 17:01:10.082697 kernel: psci: Using standard PSCI v0.2 function IDs
May 27 17:01:10.082713 kernel: psci: Trusted OS migration not required
May 27 17:01:10.082730 kernel: psci: SMC Calling Convention v1.1
May 27 17:01:10.082746 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 27 17:01:10.082763 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 27 17:01:10.082780 kernel: pcpu-alloc: [0] 0 [0] 1
May 27 17:01:10.082797 kernel: Detected PIPT I-cache on CPU0
May 27 17:01:10.082813 kernel: CPU features: detected: GIC system register CPU interface
May 27 17:01:10.082830 kernel: CPU features: detected: Spectre-v2
May 27 17:01:10.082847 kernel: CPU features: detected: Spectre-v3a
May 27 17:01:10.082863 kernel: CPU features: detected: Spectre-BHB
May 27 17:01:10.082884 kernel: CPU features: detected: ARM erratum 1742098
May 27 17:01:10.082901 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
May 27 17:01:10.082917 kernel: alternatives: applying boot alternatives
May 27 17:01:10.082936 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4e706b869299e1c88703222069cdfa08c45ebce568f762053eea5b3f5f0939c3
May 27 17:01:10.083021 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 17:01:10.083048 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 17:01:10.083065 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 17:01:10.083082 kernel: Fallback order for Node 0: 0
May 27 17:01:10.083099 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
May 27 17:01:10.083115 kernel: Policy zone: Normal
May 27 17:01:10.083139 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 17:01:10.083155 kernel: software IO TLB: area num 2.
May 27 17:01:10.083172 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
May 27 17:01:10.083188 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 17:01:10.083205 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 17:01:10.083223 kernel: rcu: RCU event tracing is enabled.
May 27 17:01:10.083240 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 17:01:10.083257 kernel: Trampoline variant of Tasks RCU enabled.
May 27 17:01:10.083274 kernel: Tracing variant of Tasks RCU enabled.
May 27 17:01:10.083291 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 17:01:10.083307 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 17:01:10.083324 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:01:10.083345 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 17:01:10.083362 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 27 17:01:10.083379 kernel: GICv3: 96 SPIs implemented
May 27 17:01:10.083396 kernel: GICv3: 0 Extended SPIs implemented
May 27 17:01:10.083412 kernel: Root IRQ handler: gic_handle_irq
May 27 17:01:10.083429 kernel: GICv3: GICv3 features: 16 PPIs
May 27 17:01:10.083445 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 27 17:01:10.083462 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
May 27 17:01:10.083478 kernel: ITS [mem 0x10080000-0x1009ffff]
May 27 17:01:10.083495 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
May 27 17:01:10.083512 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
May 27 17:01:10.083533 kernel: GICv3: using LPI property table @0x00000004000e0000
May 27 17:01:10.083550 kernel: ITS: Using hypervisor restricted LPI range [128]
May 27 17:01:10.083567 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
May 27 17:01:10.083584 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 17:01:10.083600 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
May 27 17:01:10.083617 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
May 27 17:01:10.083634 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
May 27 17:01:10.083651 kernel: Console: colour dummy device 80x25
May 27 17:01:10.083668 kernel: printk: legacy console [tty1] enabled
May 27 17:01:10.083686 kernel: ACPI: Core revision 20240827
May 27 17:01:10.083703 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
May 27 17:01:10.083725 kernel: pid_max: default: 32768 minimum: 301
May 27 17:01:10.083742 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 17:01:10.083759 kernel: landlock: Up and running.
May 27 17:01:10.083775 kernel: SELinux: Initializing.
May 27 17:01:10.083792 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:01:10.083810 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:01:10.083827 kernel: rcu: Hierarchical SRCU implementation.
May 27 17:01:10.083844 kernel: rcu: Max phase no-delay instances is 400.
May 27 17:01:10.083861 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 17:01:10.083882 kernel: Remapping and enabling EFI services.
May 27 17:01:10.083899 kernel: smp: Bringing up secondary CPUs ...
May 27 17:01:10.083916 kernel: Detected PIPT I-cache on CPU1
May 27 17:01:10.083933 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
May 27 17:01:10.083951 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
May 27 17:01:10.084012 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
May 27 17:01:10.084030 kernel: smp: Brought up 1 node, 2 CPUs
May 27 17:01:10.084047 kernel: SMP: Total of 2 processors activated.
May 27 17:01:10.084064 kernel: CPU: All CPU(s) started at EL1
May 27 17:01:10.084087 kernel: CPU features: detected: 32-bit EL0 Support
May 27 17:01:10.084115 kernel: CPU features: detected: 32-bit EL1 Support
May 27 17:01:10.084133 kernel: CPU features: detected: CRC32 instructions
May 27 17:01:10.084156 kernel: alternatives: applying system-wide alternatives
May 27 17:01:10.084175 kernel: Memory: 3813536K/4030464K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 212156K reserved, 0K cma-reserved)
May 27 17:01:10.084193 kernel: devtmpfs: initialized
May 27 17:01:10.084211 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 17:01:10.084229 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 17:01:10.084251 kernel: 17024 pages in range for non-PLT usage
May 27 17:01:10.084269 kernel: 508544 pages in range for PLT usage
May 27 17:01:10.084286 kernel: pinctrl core: initialized pinctrl subsystem
May 27 17:01:10.084304 kernel: SMBIOS 3.0.0 present.
May 27 17:01:10.084321 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
May 27 17:01:10.084339 kernel: DMI: Memory slots populated: 0/0
May 27 17:01:10.084356 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 17:01:10.084374 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 27 17:01:10.084392 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 27 17:01:10.084414 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 27 17:01:10.084432 kernel: audit: initializing netlink subsys (disabled)
May 27 17:01:10.084450 kernel: audit: type=2000 audit(0.271:1): state=initialized audit_enabled=0 res=1
May 27 17:01:10.084467 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 17:01:10.084485 kernel: cpuidle: using governor menu
May 27 17:01:10.084503 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 27 17:01:10.084521 kernel: ASID allocator initialised with 65536 entries
May 27 17:01:10.084538 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 17:01:10.084560 kernel: Serial: AMBA PL011 UART driver
May 27 17:01:10.084578 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 17:01:10.084596 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 27 17:01:10.084614 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 27 17:01:10.084631 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 27 17:01:10.084649 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 17:01:10.084667 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 27 17:01:10.084685 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 27 17:01:10.084703 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 27 17:01:10.084724 kernel: ACPI: Added _OSI(Module Device)
May 27 17:01:10.084742 kernel: ACPI: Added _OSI(Processor Device)
May 27 17:01:10.084760 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 17:01:10.084777 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 17:01:10.084795 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 17:01:10.084813 kernel: ACPI: Interpreter enabled
May 27 17:01:10.084830 kernel: ACPI: Using GIC for interrupt routing
May 27 17:01:10.084848 kernel: ACPI: MCFG table detected, 1 entries
May 27 17:01:10.084865 kernel: ACPI: CPU0 has been hot-added
May 27 17:01:10.084883 kernel: ACPI: CPU1 has been hot-added
May 27 17:01:10.084904 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
May 27 17:01:10.085782 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 17:01:10.086053 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 27 17:01:10.086278 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 27 17:01:10.086499 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
May 27 17:01:10.086697 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
May 27 17:01:10.086722 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
May 27 17:01:10.086750 kernel: acpiphp: Slot [1] registered
May 27 17:01:10.086769 kernel: acpiphp: Slot [2] registered
May 27 17:01:10.086787 kernel: acpiphp: Slot [3] registered
May 27 17:01:10.086804 kernel: acpiphp: Slot [4] registered
May 27 17:01:10.086822 kernel: acpiphp: Slot [5] registered
May 27 17:01:10.086839 kernel: acpiphp: Slot [6] registered
May 27 17:01:10.086857 kernel: acpiphp: Slot [7] registered
May 27 17:01:10.086892 kernel: acpiphp: Slot [8] registered
May 27 17:01:10.086917 kernel: acpiphp: Slot [9] registered
May 27 17:01:10.089020 kernel: acpiphp: Slot [10] registered
May 27 17:01:10.089052 kernel: acpiphp: Slot [11] registered
May 27 17:01:10.089071 kernel: acpiphp: Slot [12] registered
May 27 17:01:10.089089 kernel: acpiphp: Slot [13] registered
May 27 17:01:10.089107 kernel: acpiphp: Slot [14] registered
May 27 17:01:10.089124 kernel: acpiphp: Slot [15] registered
May 27 17:01:10.089142 kernel: acpiphp: Slot [16] registered
May 27 17:01:10.089159 kernel: acpiphp: Slot [17] registered
May 27 17:01:10.089177 kernel: acpiphp: Slot [18] registered
May 27 17:01:10.089195 kernel: acpiphp: Slot [19] registered
May 27 17:01:10.089221 kernel: acpiphp: Slot [20] registered
May 27 17:01:10.089239 kernel: acpiphp: Slot [21] registered
May 27 17:01:10.089257 kernel: acpiphp: Slot [22] registered
May 27 17:01:10.089274 kernel: acpiphp: Slot [23] registered
May 27 17:01:10.089291 kernel: acpiphp: Slot [24] registered
May 27 17:01:10.089309 kernel: acpiphp: Slot [25] registered
May 27 17:01:10.089360 kernel: acpiphp: Slot [26] registered
May 27 17:01:10.089380 kernel: acpiphp: Slot [27] registered
May 27 17:01:10.089400 kernel: acpiphp: Slot [28] registered
May 27 17:01:10.089426 kernel: acpiphp: Slot [29] registered
May 27 17:01:10.089444 kernel: acpiphp: Slot [30] registered
May 27 17:01:10.089463 kernel: acpiphp: Slot [31] registered
May 27 17:01:10.089481 kernel: PCI host bridge to bus 0000:00
May 27 17:01:10.089736 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
May 27 17:01:10.089919 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 27 17:01:10.090146 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
May 27 17:01:10.090361 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
May 27 17:01:10.090616 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
May 27 17:01:10.090855 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
May 27 17:01:10.093224 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
May 27 17:01:10.093482 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
May 27 17:01:10.093686 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
May 27 17:01:10.093884 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
May 27 17:01:10.094141 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
May 27 17:01:10.094371 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
May 27 17:01:10.094587 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
May 27 17:01:10.094784 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
May 27 17:01:10.096133 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
May 27 17:01:10.096379 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
May 27 17:01:10.096576 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
May 27 17:01:10.096786 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
May 27 17:01:10.097009 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
May 27 17:01:10.098184 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
May 27 17:01:10.098424 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
May 27 17:01:10.098608 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 27 17:01:10.098789 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
May 27 17:01:10.098814 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 27 17:01:10.098844 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 27 17:01:10.098863 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 27 17:01:10.098881 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 27 17:01:10.098900 kernel: iommu: Default domain type: Translated
May 27 17:01:10.098917 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 27 17:01:10.098936 kernel: efivars: Registered efivars operations
May 27 17:01:10.100985 kernel: vgaarb: loaded
May 27 17:01:10.101039 kernel: clocksource: Switched to clocksource arch_sys_counter
May 27 17:01:10.101058 kernel: VFS: Disk quotas dquot_6.6.0
May 27 17:01:10.101086 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 17:01:10.101105 kernel: pnp: PnP ACPI init
May 27 17:01:10.101374 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
May 27 17:01:10.101402 kernel: pnp: PnP ACPI: found 1 devices
May 27 17:01:10.101420 kernel: NET: Registered PF_INET protocol family
May 27 17:01:10.101439 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 17:01:10.101458 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 17:01:10.101476 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 17:01:10.101499 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 17:01:10.101518 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 17:01:10.101536 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 17:01:10.101553 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:01:10.101571 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:01:10.101589 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 17:01:10.101607 kernel: PCI: CLS 0 bytes, default 64
May 27 17:01:10.101625 kernel: kvm [1]: HYP mode not available
May 27 17:01:10.101642 kernel: Initialise system trusted keyrings
May 27 17:01:10.101664 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 17:01:10.101682 kernel: Key type asymmetric registered
May 27 17:01:10.101700 kernel: Asymmetric key parser 'x509' registered
May 27 17:01:10.101717 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 27 17:01:10.101735 kernel: io scheduler mq-deadline registered
May 27 17:01:10.101753 kernel: io scheduler kyber registered
May 27 17:01:10.101771 kernel: io scheduler bfq registered
May 27 17:01:10.102013 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
May 27 17:01:10.102043 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 27 17:01:10.102068 kernel: ACPI: button: Power Button [PWRB]
May 27 17:01:10.102086 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
May 27 17:01:10.102104 kernel: ACPI: button: Sleep Button [SLPB]
May 27 17:01:10.102122 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 17:01:10.102141 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 27 17:01:10.102366 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
May 27 17:01:10.102393 kernel: printk: legacy console [ttyS0] disabled
May 27 17:01:10.102413 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
May 27 17:01:10.102436 kernel: printk: legacy console [ttyS0] enabled
May 27 17:01:10.102454 kernel: printk: legacy bootconsole [uart0] disabled
May 27 17:01:10.102472 kernel: thunder_xcv, ver 1.0
May 27 17:01:10.102490 kernel: thunder_bgx, ver 1.0
May 27 17:01:10.102507 kernel: nicpf, ver 1.0
May 27 17:01:10.102525 kernel: nicvf, ver 1.0
May 27 17:01:10.102741 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 27 17:01:10.102928 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-27T17:01:09 UTC (1748365269)
May 27 17:01:10.102953 kernel: hid: raw HID events driver (C) Jiri Kosina
May 27 17:01:10.103750 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
May 27 17:01:10.103771 kernel: NET: Registered PF_INET6 protocol family
May 27 17:01:10.103789 kernel: watchdog: NMI not fully supported
May 27 17:01:10.103807 kernel: watchdog: Hard watchdog permanently disabled
May 27 17:01:10.103824 kernel: Segment Routing with IPv6
May 27 17:01:10.103843 kernel: In-situ OAM (IOAM) with IPv6
May 27 17:01:10.103860 kernel: NET: Registered PF_PACKET protocol family
May 27 17:01:10.103878 kernel: Key type dns_resolver registered
May 27 17:01:10.103896 kernel: registered taskstats version 1
May 27 17:01:10.103918 kernel: Loading compiled-in X.509 certificates
May 27 17:01:10.103937 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 8e5e45c34fa91568ef1fa3bdfd5a71a43b4c4580'
May 27 17:01:10.103985 kernel: Demotion targets for Node 0: null
May 27 17:01:10.104010 kernel: Key type .fscrypt registered
May 27 17:01:10.104028 kernel: Key type fscrypt-provisioning registered
May 27 17:01:10.104046 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 17:01:10.104064 kernel: ima: Allocated hash algorithm: sha1
May 27 17:01:10.104082 kernel: ima: No architecture policies found
May 27 17:01:10.104100 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 27 17:01:10.104124 kernel: clk: Disabling unused clocks
May 27 17:01:10.104143 kernel: PM: genpd: Disabling unused power domains
May 27 17:01:10.104161 kernel: Warning: unable to open an initial console.
May 27 17:01:10.104179 kernel: Freeing unused kernel memory: 39424K
May 27 17:01:10.104198 kernel: Run /init as init process
May 27 17:01:10.104216 kernel: with arguments:
May 27 17:01:10.104234 kernel: /init
May 27 17:01:10.104252 kernel: with environment:
May 27 17:01:10.104269 kernel: HOME=/
May 27 17:01:10.104291 kernel: TERM=linux
May 27 17:01:10.104309 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 17:01:10.104329 systemd[1]: Successfully made /usr/ read-only.
May 27 17:01:10.104355 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:01:10.104376 systemd[1]: Detected virtualization amazon.
May 27 17:01:10.104395 systemd[1]: Detected architecture arm64.
May 27 17:01:10.104414 systemd[1]: Running in initrd.
May 27 17:01:10.104438 systemd[1]: No hostname configured, using default hostname.
May 27 17:01:10.104460 systemd[1]: Hostname set to .
May 27 17:01:10.104479 systemd[1]: Initializing machine ID from VM UUID.
May 27 17:01:10.104498 systemd[1]: Queued start job for default target initrd.target.
May 27 17:01:10.104518 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:01:10.104538 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:01:10.104559 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 17:01:10.104579 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:01:10.104603 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 17:01:10.104625 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 17:01:10.104647 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 17:01:10.104668 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 17:01:10.104688 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:01:10.104710 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:01:10.104731 systemd[1]: Reached target paths.target - Path Units.
May 27 17:01:10.104756 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:01:10.104778 systemd[1]: Reached target swap.target - Swaps.
May 27 17:01:10.104798 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:01:10.104819 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:01:10.104839 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:01:10.104859 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 17:01:10.104881 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 17:01:10.104901 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:01:10.104922 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:01:10.104949 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:01:10.105007 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:01:10.105030 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 17:01:10.105051 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:01:10.105071 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 17:01:10.105092 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 17:01:10.105113 systemd[1]: Starting systemd-fsck-usr.service...
May 27 17:01:10.105134 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:01:10.105161 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:01:10.105181 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:01:10.105204 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 17:01:10.105225 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 17:01:10.105250 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:01:10.105269 kernel: Bridge firewalling registered
May 27 17:01:10.105349 systemd-journald[257]: Collecting audit messages is disabled.
May 27 17:01:10.105395 systemd[1]: Finished systemd-fsck-usr.service.
May 27 17:01:10.105423 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:01:10.105445 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:01:10.105481 systemd-journald[257]: Journal started
May 27 17:01:10.105523 systemd-journald[257]: Runtime Journal (/run/log/journal/ec23e6c9cf8c696da6b037a9169aaa82) is 8M, max 75.3M, 67.3M free.
May 27 17:01:10.045131 systemd-modules-load[259]: Inserted module 'overlay'
May 27 17:01:10.069052 systemd-modules-load[259]: Inserted module 'br_netfilter'
May 27 17:01:10.120193 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:01:10.120259 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:01:10.131276 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:01:10.148062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:01:10.161361 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 17:01:10.168067 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:01:10.183066 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:01:10.194197 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:01:10.199074 systemd-tmpfiles[274]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 17:01:10.208210 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:01:10.216339 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:01:10.253534 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:01:10.261070 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 17:01:10.273518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:01:10.317176 dracut-cmdline[300]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4e706b869299e1c88703222069cdfa08c45ebce568f762053eea5b3f5f0939c3
May 27 17:01:10.369842 systemd-resolved[288]: Positive Trust Anchors:
May 27 17:01:10.369884 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:01:10.369949 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:01:10.507997 kernel: SCSI subsystem initialized
May 27 17:01:10.515997 kernel: Loading iSCSI transport class v2.0-870.
May 27 17:01:10.529020 kernel: iscsi: registered transport (tcp)
May 27 17:01:10.550520 kernel: iscsi: registered transport (qla4xxx)
May 27 17:01:10.550593 kernel: QLogic iSCSI HBA Driver
May 27 17:01:10.584601 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:01:10.627218 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:01:10.641446 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:01:10.684001 kernel: random: crng init done
May 27 17:01:10.684621 systemd-resolved[288]: Defaulting to hostname 'linux'.
May 27 17:01:10.688643 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:01:10.694897 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:01:10.768644 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 17:01:10.780338 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 17:01:10.872038 kernel: raid6: neonx8 gen() 6342 MB/s
May 27 17:01:10.889019 kernel: raid6: neonx4 gen() 5441 MB/s
May 27 17:01:10.906014 kernel: raid6: neonx2 gen() 4510 MB/s
May 27 17:01:10.923016 kernel: raid6: neonx1 gen() 3432 MB/s
May 27 17:01:10.940006 kernel: raid6: int64x8 gen() 3290 MB/s
May 27 17:01:10.957026 kernel: raid6: int64x4 gen() 3634 MB/s
May 27 17:01:10.974026 kernel: raid6: int64x2 gen() 3518 MB/s
May 27 17:01:10.991901 kernel: raid6: int64x1 gen() 2143 MB/s
May 27 17:01:10.992012 kernel: raid6: using algorithm neonx8 gen() 6342 MB/s
May 27 17:01:11.009895 kernel: raid6: .... xor() 4692 MB/s, rmw enabled
May 27 17:01:11.009996 kernel: raid6: using neon recovery algorithm
May 27 17:01:11.018004 kernel: xor: measuring software checksum speed
May 27 17:01:11.020139 kernel: 8regs : 11657 MB/sec
May 27 17:01:11.020224 kernel: 32regs : 12989 MB/sec
May 27 17:01:11.021379 kernel: arm64_neon : 9171 MB/sec
May 27 17:01:11.021444 kernel: xor: using function: 32regs (12989 MB/sec)
May 27 17:01:11.127005 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 17:01:11.141434 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:01:11.153216 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:01:11.204242 systemd-udevd[508]: Using default interface naming scheme 'v255'.
May 27 17:01:11.215790 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:01:11.235802 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 17:01:11.280461 dracut-pre-trigger[520]: rd.md=0: removing MD RAID activation
May 27 17:01:11.337756 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:01:11.345131 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:01:11.486339 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:01:11.492139 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 17:01:11.656759 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 27 17:01:11.656827 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
May 27 17:01:11.670468 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 27 17:01:11.670876 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 27 17:01:11.676006 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
May 27 17:01:11.676082 kernel: nvme nvme0: pci function 0000:00:04.0
May 27 17:01:11.687218 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:cf:50:d2:2a:0b
May 27 17:01:11.689821 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 27 17:01:11.706652 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 17:01:11.706732 kernel: GPT:9289727 != 16777215
May 27 17:01:11.706760 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 17:01:11.707157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:01:11.708855 kernel: GPT:9289727 != 16777215
May 27 17:01:11.707399 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:01:11.718328 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 17:01:11.732436 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 17:01:11.715819 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:01:11.722179 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:01:11.725077 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:01:11.755465 (udev-worker)[562]: Network interface NamePolicy= disabled on kernel command line.
May 27 17:01:11.773009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:01:11.798006 kernel: nvme nvme0: using unchecked data buffer
May 27 17:01:11.982708 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
May 27 17:01:12.026431 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 27 17:01:12.035006 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 17:01:12.062166 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
May 27 17:01:12.099892 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
May 27 17:01:12.105912 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
May 27 17:01:12.113272 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:01:12.119006 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:01:12.122645 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:01:12.130841 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 17:01:12.136952 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 17:01:12.178159 disk-uuid[689]: Primary Header is updated.
May 27 17:01:12.178159 disk-uuid[689]: Secondary Entries is updated.
May 27 17:01:12.178159 disk-uuid[689]: Secondary Header is updated.
May 27 17:01:12.190405 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:01:12.201009 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 17:01:13.217998 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 17:01:13.219559 disk-uuid[698]: The operation has completed successfully.
May 27 17:01:13.426516 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 17:01:13.427131 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 17:01:13.499181 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 17:01:13.519838 sh[958]: Success
May 27 17:01:13.541635 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 17:01:13.541715 kernel: device-mapper: uevent: version 1.0.3
May 27 17:01:13.545008 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 17:01:13.558004 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 27 17:01:13.663712 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 17:01:13.674331 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 17:01:13.698857 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 17:01:13.722841 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 17:01:13.722913 kernel: BTRFS: device fsid 3c8c76ef-f1da-40fe-979d-11bdf765e403 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (994)
May 27 17:01:13.727559 kernel: BTRFS info (device dm-0): first mount of filesystem 3c8c76ef-f1da-40fe-979d-11bdf765e403
May 27 17:01:13.727613 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 27 17:01:13.728711 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 17:01:13.846857 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 17:01:13.853516 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:01:13.859017 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 17:01:13.865086 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 17:01:13.873244 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 17:01:13.936011 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1027)
May 27 17:01:13.942112 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:01:13.942187 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 27 17:01:13.943815 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
May 27 17:01:13.964655 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:01:13.967592 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 17:01:13.973518 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 17:01:14.076084 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:01:14.095697 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:01:14.178356 systemd-networkd[1163]: lo: Link UP
May 27 17:01:14.178385 systemd-networkd[1163]: lo: Gained carrier
May 27 17:01:14.182154 systemd-networkd[1163]: Enumeration completed
May 27 17:01:14.182835 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:01:14.182842 systemd-networkd[1163]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:01:14.183135 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:01:14.186556 systemd[1]: Reached target network.target - Network.
May 27 17:01:14.189705 systemd-networkd[1163]: eth0: Link UP
May 27 17:01:14.189713 systemd-networkd[1163]: eth0: Gained carrier
May 27 17:01:14.189736 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:01:14.240104 systemd-networkd[1163]: eth0: DHCPv4 address 172.31.22.21/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 27 17:01:14.461938 ignition[1084]: Ignition 2.21.0
May 27 17:01:14.462622 ignition[1084]: Stage: fetch-offline
May 27 17:01:14.463139 ignition[1084]: no configs at "/usr/lib/ignition/base.d"
May 27 17:01:14.463164 ignition[1084]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 17:01:14.464317 ignition[1084]: Ignition finished successfully
May 27 17:01:14.475507 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:01:14.483262 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 17:01:14.527012 ignition[1173]: Ignition 2.21.0
May 27 17:01:14.527582 ignition[1173]: Stage: fetch
May 27 17:01:14.528208 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
May 27 17:01:14.528233 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 17:01:14.528442 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 17:01:14.554672 ignition[1173]: PUT result: OK
May 27 17:01:14.559320 ignition[1173]: parsed url from cmdline: ""
May 27 17:01:14.559518 ignition[1173]: no config URL provided
May 27 17:01:14.559773 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign"
May 27 17:01:14.559823 ignition[1173]: no config at "/usr/lib/ignition/user.ign"
May 27 17:01:14.559986 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 17:01:14.565545 ignition[1173]: PUT result: OK
May 27 17:01:14.566472 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 27 17:01:14.574695 ignition[1173]: GET result: OK
May 27 17:01:14.575206 ignition[1173]: parsing config with SHA512: 9e1fedb599324a4e0a9a4f43308d2b83c3e6b008665ffbdfad4e7e00721525c98261c5dea78abf419267b0777cb43abaf76b0529e64fed639bd5c8f32ecc01ca
May 27 17:01:14.591810 unknown[1173]: fetched base config from "system"
May 27 17:01:14.592122 unknown[1173]: fetched base config from "system"
May 27 17:01:14.592868 ignition[1173]: fetch: fetch complete
May 27 17:01:14.592136 unknown[1173]: fetched user config from "aws"
May 27 17:01:14.592904 ignition[1173]: fetch: fetch passed
May 27 17:01:14.603250 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 17:01:14.593102 ignition[1173]: Ignition finished successfully
May 27 17:01:14.610233 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 17:01:14.653311 ignition[1180]: Ignition 2.21.0
May 27 17:01:14.653354 ignition[1180]: Stage: kargs
May 27 17:01:14.654172 ignition[1180]: no configs at "/usr/lib/ignition/base.d"
May 27 17:01:14.654207 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 17:01:14.654750 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 17:01:14.658605 ignition[1180]: PUT result: OK
May 27 17:01:14.671416 ignition[1180]: kargs: kargs passed
May 27 17:01:14.671534 ignition[1180]: Ignition finished successfully
May 27 17:01:14.675534 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 17:01:14.682833 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 17:01:14.726824 ignition[1186]: Ignition 2.21.0
May 27 17:01:14.727489 ignition[1186]: Stage: disks
May 27 17:01:14.728088 ignition[1186]: no configs at "/usr/lib/ignition/base.d"
May 27 17:01:14.728114 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 17:01:14.728298 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 17:01:14.731492 ignition[1186]: PUT result: OK
May 27 17:01:14.749095 ignition[1186]: disks: disks passed
May 27 17:01:14.749246 ignition[1186]: Ignition finished successfully
May 27 17:01:14.755882 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 17:01:14.761268 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 17:01:14.776876 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 17:01:14.782610 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:01:14.785210 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:01:14.792426 systemd[1]: Reached target basic.target - Basic System.
May 27 17:01:14.798695 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 17:01:14.849902 systemd-fsck[1195]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 27 17:01:14.857383 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 17:01:14.864668 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 17:01:15.018999 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a5483afc-8426-4c3e-85ef-8146f9077e7d r/w with ordered data mode. Quota mode: none.
May 27 17:01:15.020981 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 17:01:15.026195 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 17:01:15.031217 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:01:15.051755 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 17:01:15.056699 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 27 17:01:15.057207 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 17:01:15.057266 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:01:15.081514 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 17:01:15.087653 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 17:01:15.124009 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1214)
May 27 17:01:15.129267 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:01:15.129380 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 27 17:01:15.130779 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
May 27 17:01:15.141117 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:01:15.544797 initrd-setup-root[1238]: cut: /sysroot/etc/passwd: No such file or directory
May 27 17:01:15.568298 initrd-setup-root[1245]: cut: /sysroot/etc/group: No such file or directory
May 27 17:01:15.578508 initrd-setup-root[1252]: cut: /sysroot/etc/shadow: No such file or directory
May 27 17:01:15.586766 initrd-setup-root[1259]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 17:01:15.649290 systemd-networkd[1163]: eth0: Gained IPv6LL
May 27 17:01:15.927581 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 17:01:15.933106 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 17:01:15.938316 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 17:01:15.967808 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 17:01:15.971317 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:01:16.009393 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 17:01:16.022886 ignition[1327]: INFO : Ignition 2.21.0
May 27 17:01:16.022886 ignition[1327]: INFO : Stage: mount
May 27 17:01:16.028534 ignition[1327]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:01:16.028534 ignition[1327]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 17:01:16.028534 ignition[1327]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 17:01:16.028534 ignition[1327]: INFO : PUT result: OK
May 27 17:01:16.042550 ignition[1327]: INFO : mount: mount passed
May 27 17:01:16.045234 ignition[1327]: INFO : Ignition finished successfully
May 27 17:01:16.048042 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 17:01:16.057122 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 17:01:16.084207 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:01:16.133022 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1339)
May 27 17:01:16.133089 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:01:16.136941 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 27 17:01:16.137024 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
May 27 17:01:16.146202 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:01:16.189045 ignition[1356]: INFO : Ignition 2.21.0
May 27 17:01:16.189045 ignition[1356]: INFO : Stage: files
May 27 17:01:16.194747 ignition[1356]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:01:16.194747 ignition[1356]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 17:01:16.194747 ignition[1356]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 17:01:16.206871 ignition[1356]: INFO : PUT result: OK
May 27 17:01:16.212249 ignition[1356]: DEBUG : files: compiled without relabeling support, skipping
May 27 17:01:16.228700 ignition[1356]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 17:01:16.232409 ignition[1356]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 17:01:16.240715 ignition[1356]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 17:01:16.244445 ignition[1356]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 17:01:16.248038 unknown[1356]: wrote ssh authorized keys file for user: core
May 27 17:01:16.250204 ignition[1356]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 17:01:16.275429 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 27 17:01:16.280135 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 27 17:01:16.387027 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 17:01:16.562159 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 27 17:01:16.562159 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 17:01:16.570885 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 27 17:01:24.132031 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 27 17:01:24.254020 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 17:01:24.254020 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 27 17:01:24.264017 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 27 17:01:24.264017 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:01:24.264017 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:01:24.264017 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:01:24.264017 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:01:24.264017 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:01:24.264017 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:01:24.293056 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:01:24.293056 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:01:24.293056 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 27 17:01:24.308155 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 27 17:01:24.308155 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 27 17:01:24.319085 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
May 27 17:01:25.010670 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 27 17:01:25.380055 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 27 17:01:25.380055 ignition[1356]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 27 17:01:25.388638 ignition[1356]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:01:25.398008 ignition[1356]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:01:25.398008 ignition[1356]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 27 17:01:25.398008 ignition[1356]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 27 17:01:25.412201 ignition[1356]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 27 17:01:25.412201 ignition[1356]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:01:25.412201 ignition[1356]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:01:25.412201 ignition[1356]: INFO : files: files passed
May 27 17:01:25.412201 ignition[1356]: INFO : Ignition finished successfully
May 27 17:01:25.420470 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 17:01:25.430341 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 17:01:25.444794 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 17:01:25.468329 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 17:01:25.471070 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 17:01:25.492254 initrd-setup-root-after-ignition[1386]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:01:25.492254 initrd-setup-root-after-ignition[1386]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:01:25.502320 initrd-setup-root-after-ignition[1390]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:01:25.509937 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:01:25.517785 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 17:01:25.522069 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 17:01:25.620646 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 17:01:25.622502 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 17:01:25.626777 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 17:01:25.629537 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 17:01:25.638720 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 17:01:25.643448 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 17:01:25.691460 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:01:25.701363 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 17:01:25.750798 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 17:01:25.755236 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:01:25.763632 systemd[1]: Stopped target timers.target - Timer Units.
May 27 17:01:25.766992 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 17:01:25.767385 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:01:25.775652 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 17:01:25.783293 systemd[1]: Stopped target basic.target - Basic System.
May 27 17:01:25.788081 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 17:01:25.794111 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:01:25.797125 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 17:01:25.804635 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:01:25.808004 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 17:01:25.815283 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:01:25.818808 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 17:01:25.827292 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 17:01:25.830483 systemd[1]: Stopped target swap.target - Swaps.
May 27 17:01:25.836056 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 17:01:25.836608 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:01:25.843953 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 17:01:25.847448 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:01:25.856455 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 17:01:25.859041 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:01:25.861836 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 17:01:25.862158 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 17:01:25.870000 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 17:01:25.871105 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:01:25.879789 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 17:01:25.880158 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 17:01:25.891923 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 17:01:25.897701 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 17:01:25.898081 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:01:25.932808 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 17:01:25.950415 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 17:01:25.953558 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:01:25.970593 ignition[1410]: INFO : Ignition 2.21.0 May 27 17:01:25.970593 ignition[1410]: INFO : Stage: umount May 27 17:01:25.975071 ignition[1410]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 17:01:25.975071 ignition[1410]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 27 17:01:25.975071 ignition[1410]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 27 17:01:25.986539 ignition[1410]: INFO : PUT result: OK May 27 17:01:25.976409 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 17:01:25.977173 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 17:01:25.999250 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 17:01:26.004627 ignition[1410]: INFO : umount: umount passed May 27 17:01:26.007683 ignition[1410]: INFO : Ignition finished successfully May 27 17:01:26.010580 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 27 17:01:26.019565 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 17:01:26.022419 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 17:01:26.026537 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 17:01:26.026739 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 17:01:26.032914 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 17:01:26.033078 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 17:01:26.037116 systemd[1]: ignition-fetch.service: Deactivated successfully. May 27 17:01:26.037226 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 27 17:01:26.041515 systemd[1]: Stopped target network.target - Network. May 27 17:01:26.046394 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 17:01:26.046536 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
May 27 17:01:26.067912 systemd[1]: Stopped target paths.target - Path Units. May 27 17:01:26.080164 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 17:01:26.084138 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:01:26.089803 systemd[1]: Stopped target slices.target - Slice Units. May 27 17:01:26.095585 systemd[1]: Stopped target sockets.target - Socket Units. May 27 17:01:26.102446 systemd[1]: iscsid.socket: Deactivated successfully. May 27 17:01:26.102743 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 17:01:26.109798 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 17:01:26.109895 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 17:01:26.112447 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 17:01:26.112587 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 17:01:26.119047 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 17:01:26.119183 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 17:01:26.122777 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 17:01:26.129759 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 17:01:26.134998 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 17:01:26.136318 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 17:01:26.136582 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 17:01:26.137636 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 17:01:26.137848 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 17:01:26.183157 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 17:01:26.183415 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
May 27 17:01:26.195443 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 17:01:26.195918 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 17:01:26.196566 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 17:01:26.211466 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 27 17:01:26.213298 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 17:01:26.219949 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 17:01:26.220696 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 17:01:26.233205 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 17:01:26.241130 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 17:01:26.241505 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 17:01:26.250821 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 17:01:26.251212 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 17:01:26.257526 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 17:01:26.257648 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 17:01:26.264992 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 17:01:26.265123 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 17:01:26.271158 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 17:01:26.288786 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 17:01:26.289328 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 17:01:26.301582 systemd[1]: systemd-udevd.service: Deactivated successfully. 
May 27 17:01:26.303886 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:01:26.307397 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 17:01:26.307496 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 17:01:26.314667 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 17:01:26.314754 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:01:26.322071 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 17:01:26.322188 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 17:01:26.325547 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 17:01:26.325662 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 17:01:26.332940 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 17:01:26.333116 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 17:01:26.358101 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 17:01:26.367760 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 17:01:26.367925 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:01:26.376833 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 27 17:01:26.376947 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 17:01:26.386493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 17:01:26.386608 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:01:26.406175 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. 
May 27 17:01:26.406342 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 27 17:01:26.406439 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 17:01:26.407360 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 17:01:26.410092 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 17:01:26.415195 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 17:01:26.415888 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 17:01:26.422358 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 17:01:26.430493 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 17:01:26.487601 systemd[1]: Switching root. May 27 17:01:26.536592 systemd-journald[257]: Journal stopped May 27 17:01:28.830676 systemd-journald[257]: Received SIGTERM from PID 1 (systemd). May 27 17:01:28.830806 kernel: SELinux: policy capability network_peer_controls=1 May 27 17:01:28.830850 kernel: SELinux: policy capability open_perms=1 May 27 17:01:28.830885 kernel: SELinux: policy capability extended_socket_class=1 May 27 17:01:28.830916 kernel: SELinux: policy capability always_check_network=0 May 27 17:01:28.830946 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 17:01:28.831007 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 17:01:28.831039 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 17:01:28.831069 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 17:01:28.831098 kernel: SELinux: policy capability userspace_initial_context=0 May 27 17:01:28.831140 kernel: audit: type=1403 audit(1748365286.909:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 17:01:28.831182 systemd[1]: Successfully loaded SELinux policy in 53.753ms. 
May 27 17:01:28.831232 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 30.416ms. May 27 17:01:28.831265 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 17:01:28.831296 systemd[1]: Detected virtualization amazon. May 27 17:01:28.831327 systemd[1]: Detected architecture arm64. May 27 17:01:28.831357 systemd[1]: Detected first boot. May 27 17:01:28.831386 systemd[1]: Initializing machine ID from VM UUID. May 27 17:01:28.831417 zram_generator::config[1454]: No configuration found. May 27 17:01:28.831452 kernel: NET: Registered PF_VSOCK protocol family May 27 17:01:28.831487 systemd[1]: Populated /etc with preset unit settings. May 27 17:01:28.831519 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 17:01:28.831552 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 17:01:28.831582 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 17:01:28.831611 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 17:01:28.831642 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 17:01:28.831676 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 17:01:28.831707 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 17:01:28.831748 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 17:01:28.831781 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 17:01:28.831811 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
May 27 17:01:28.831854 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 17:01:28.831885 systemd[1]: Created slice user.slice - User and Session Slice. May 27 17:01:28.831920 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 17:01:28.831952 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 17:01:28.832022 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 27 17:01:28.832054 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 17:01:28.832093 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 17:01:28.832126 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 17:01:28.832157 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 27 17:01:28.832189 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 17:01:28.832219 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 17:01:28.832247 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 17:01:28.832278 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 17:01:28.832307 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 17:01:28.832339 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 17:01:28.832369 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 17:01:28.832402 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 17:01:28.832433 systemd[1]: Reached target slices.target - Slice Units. May 27 17:01:28.832463 systemd[1]: Reached target swap.target - Swaps. 
May 27 17:01:28.832491 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 17:01:28.832519 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 17:01:28.832549 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 17:01:28.832578 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 17:01:28.832614 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 17:01:28.832644 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 17:01:28.832674 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 17:01:28.832704 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 17:01:28.832733 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 17:01:28.832760 systemd[1]: Mounting media.mount - External Media Directory... May 27 17:01:28.832804 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 17:01:28.832836 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 17:01:28.832867 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 17:01:28.832902 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 17:01:28.832930 systemd[1]: Reached target machines.target - Containers. May 27 17:01:28.839169 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 17:01:28.839242 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 17:01:28.839272 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
May 27 17:01:28.839301 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 17:01:28.839329 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 17:01:28.839356 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 17:01:28.839394 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 17:01:28.839427 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 17:01:28.839459 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 17:01:28.839490 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 17:01:28.839517 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 17:01:28.839545 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 17:01:28.839575 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 17:01:28.839602 systemd[1]: Stopped systemd-fsck-usr.service. May 27 17:01:28.839631 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 17:01:28.839663 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 17:01:28.839690 kernel: loop: module loaded May 27 17:01:28.839718 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 17:01:28.839746 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 17:01:28.839775 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
May 27 17:01:28.839802 kernel: fuse: init (API version 7.41) May 27 17:01:28.839829 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 17:01:28.839862 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 17:01:28.839927 systemd[1]: verity-setup.service: Deactivated successfully. May 27 17:01:28.839987 kernel: ACPI: bus type drm_connector registered May 27 17:01:28.840021 systemd[1]: Stopped verity-setup.service. May 27 17:01:28.840050 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 17:01:28.840082 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 17:01:28.840116 systemd[1]: Mounted media.mount - External Media Directory. May 27 17:01:28.840147 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 17:01:28.840177 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 17:01:28.840207 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 17:01:28.840235 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 17:01:28.840262 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 17:01:28.840294 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 17:01:28.840324 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 17:01:28.840355 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 17:01:28.840384 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 17:01:28.840413 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 17:01:28.840441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 17:01:28.840473 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 17:01:28.840505 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
May 27 17:01:28.840535 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 17:01:28.840570 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 17:01:28.840600 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 17:01:28.840667 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 17:01:28.840704 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 17:01:28.840736 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 17:01:28.840766 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 17:01:28.840798 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 17:01:28.840828 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 17:01:28.840856 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 17:01:28.840893 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 17:01:28.840924 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 17:01:28.840952 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 17:01:28.841021 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 27 17:01:28.841059 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 17:01:28.841088 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 17:01:28.841119 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 27 17:01:28.841148 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 17:01:28.841178 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 17:01:28.841274 systemd-journald[1538]: Collecting audit messages is disabled. May 27 17:01:28.841337 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 17:01:28.841498 systemd-journald[1538]: Journal started May 27 17:01:28.841560 systemd-journald[1538]: Runtime Journal (/run/log/journal/ec23e6c9cf8c696da6b037a9169aaa82) is 8M, max 75.3M, 67.3M free. May 27 17:01:28.051497 systemd[1]: Queued start job for default target multi-user.target. May 27 17:01:28.065568 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 27 17:01:28.066484 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 17:01:28.852890 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 27 17:01:28.862982 systemd[1]: Started systemd-journald.service - Journal Service. May 27 17:01:28.869347 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 17:01:28.874710 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 17:01:28.880400 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 17:01:28.891422 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 17:01:28.959473 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 27 17:01:28.972491 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 17:01:28.984179 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 17:01:28.995476 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
May 27 17:01:29.009071 kernel: loop0: detected capacity change from 0 to 107312 May 27 17:01:29.041101 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 17:01:29.081677 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 17:01:29.091858 systemd-journald[1538]: Time spent on flushing to /var/log/journal/ec23e6c9cf8c696da6b037a9169aaa82 is 79.410ms for 936 entries. May 27 17:01:29.091858 systemd-journald[1538]: System Journal (/var/log/journal/ec23e6c9cf8c696da6b037a9169aaa82) is 8M, max 195.6M, 187.6M free. May 27 17:01:29.205117 systemd-journald[1538]: Received client request to flush runtime journal. May 27 17:01:29.205206 kernel: loop1: detected capacity change from 0 to 203944 May 27 17:01:29.100910 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 27 17:01:29.108705 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 27 17:01:29.184650 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 17:01:29.197274 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 17:01:29.207477 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 17:01:29.216082 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 17:01:29.259135 kernel: loop2: detected capacity change from 0 to 61240 May 27 17:01:29.269680 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. May 27 17:01:29.269721 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. May 27 17:01:29.280579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 27 17:01:29.342013 kernel: loop3: detected capacity change from 0 to 138376 May 27 17:01:29.395991 kernel: loop4: detected capacity change from 0 to 107312 May 27 17:01:29.435000 kernel: loop5: detected capacity change from 0 to 203944 May 27 17:01:29.481006 kernel: loop6: detected capacity change from 0 to 61240 May 27 17:01:29.518000 kernel: loop7: detected capacity change from 0 to 138376 May 27 17:01:29.561232 (sd-merge)[1613]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. May 27 17:01:29.563300 (sd-merge)[1613]: Merged extensions into '/usr'. May 27 17:01:29.579682 systemd[1]: Reload requested from client PID 1570 ('systemd-sysext') (unit systemd-sysext.service)... May 27 17:01:29.580268 systemd[1]: Reloading... May 27 17:01:29.826021 zram_generator::config[1635]: No configuration found. May 27 17:01:29.957013 ldconfig[1566]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 27 17:01:30.119697 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:01:30.314398 systemd[1]: Reloading finished in 733 ms. May 27 17:01:30.353441 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 27 17:01:30.356948 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 17:01:30.360292 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 17:01:30.378501 systemd[1]: Starting ensure-sysext.service... May 27 17:01:30.383283 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 17:01:30.391240 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 27 17:01:30.423290 systemd[1]: Reload requested from client PID 1692 ('systemctl') (unit ensure-sysext.service)... May 27 17:01:30.423331 systemd[1]: Reloading... May 27 17:01:30.483840 systemd-udevd[1694]: Using default interface naming scheme 'v255'. May 27 17:01:30.485588 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 17:01:30.485691 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 17:01:30.486516 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 17:01:30.487187 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 17:01:30.489281 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 17:01:30.490011 systemd-tmpfiles[1693]: ACLs are not supported, ignoring. May 27 17:01:30.490163 systemd-tmpfiles[1693]: ACLs are not supported, ignoring. May 27 17:01:30.499740 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot. May 27 17:01:30.499770 systemd-tmpfiles[1693]: Skipping /boot May 27 17:01:30.558634 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot. May 27 17:01:30.560154 systemd-tmpfiles[1693]: Skipping /boot May 27 17:01:30.653994 zram_generator::config[1724]: No configuration found. May 27 17:01:31.001107 (udev-worker)[1730]: Network interface NamePolicy= disabled on kernel command line. May 27 17:01:31.009999 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:01:31.257023 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
May 27 17:01:31.257362 systemd[1]: Reloading finished in 833 ms. May 27 17:01:31.280828 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 17:01:31.286036 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 17:01:31.385238 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 17:01:31.395374 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 17:01:31.402660 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 17:01:31.410800 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 17:01:31.422085 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 17:01:31.430910 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 17:01:31.445552 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 17:01:31.450527 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 17:01:31.463575 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 17:01:31.475507 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 17:01:31.478309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 17:01:31.478561 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 17:01:31.488391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 27 17:01:31.489385 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 17:01:31.489601 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 17:01:31.500191 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 17:01:31.509534 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 17:01:31.516491 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 17:01:31.519279 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 17:01:31.519543 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 17:01:31.519904 systemd[1]: Reached target time-set.target - System Time Set. May 27 17:01:31.536102 systemd[1]: Finished ensure-sysext.service. May 27 17:01:31.557580 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 17:01:31.565406 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 17:01:31.595926 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 17:01:31.612160 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 17:01:31.673603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 17:01:31.675238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 27 17:01:31.680918 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 17:01:31.690825 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 17:01:31.692855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 17:01:31.696650 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 17:01:31.696704 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 17:01:31.702481 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 17:01:31.705434 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 17:01:31.711863 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 17:01:31.756884 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 17:01:31.759557 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 17:01:31.791443 augenrules[1938]: No rules May 27 17:01:31.794667 systemd[1]: audit-rules.service: Deactivated successfully. May 27 17:01:31.796422 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 17:01:31.850511 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 17:01:31.971270 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 27 17:01:31.978196 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 17:01:32.028700 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
May 27 17:01:32.040753 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 17:01:32.091137 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 17:01:32.186186 systemd-networkd[1859]: lo: Link UP May 27 17:01:32.186217 systemd-networkd[1859]: lo: Gained carrier May 27 17:01:32.189807 systemd-networkd[1859]: Enumeration completed May 27 17:01:32.190186 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 17:01:32.195808 systemd-networkd[1859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:01:32.195817 systemd-networkd[1859]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 17:01:32.196810 systemd-resolved[1862]: Positive Trust Anchors: May 27 17:01:32.196849 systemd-resolved[1862]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 17:01:32.196912 systemd-resolved[1862]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 17:01:32.197671 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 17:01:32.203391 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
May 27 17:01:32.208539 systemd-networkd[1859]: eth0: Link UP May 27 17:01:32.208842 systemd-networkd[1859]: eth0: Gained carrier May 27 17:01:32.208877 systemd-networkd[1859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 17:01:32.213429 systemd-resolved[1862]: Defaulting to hostname 'linux'. May 27 17:01:32.219045 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 17:01:32.221849 systemd[1]: Reached target network.target - Network. May 27 17:01:32.224195 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 17:01:32.225490 systemd-networkd[1859]: eth0: DHCPv4 address 172.31.22.21/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 27 17:01:32.231248 systemd[1]: Reached target sysinit.target - System Initialization. May 27 17:01:32.234088 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 17:01:32.237220 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 17:01:32.245495 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 17:01:32.248209 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 17:01:32.251175 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 17:01:32.253731 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 17:01:32.253800 systemd[1]: Reached target paths.target - Path Units. May 27 17:01:32.256050 systemd[1]: Reached target timers.target - Timer Units. May 27 17:01:32.259635 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 17:01:32.265074 systemd[1]: Starting docker.socket - Docker Socket for the API... 
May 27 17:01:32.272652 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 17:01:32.276285 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 17:01:32.279408 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 17:01:32.292348 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 17:01:32.295642 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 17:01:32.301069 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 17:01:32.304553 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 17:01:32.308643 systemd[1]: Reached target sockets.target - Socket Units. May 27 17:01:32.311523 systemd[1]: Reached target basic.target - Basic System. May 27 17:01:32.314101 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 17:01:32.314166 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 17:01:32.316863 systemd[1]: Starting containerd.service - containerd container runtime... May 27 17:01:32.329231 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 27 17:01:32.339036 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 17:01:32.347342 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 17:01:32.352282 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 17:01:32.358464 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 17:01:32.361200 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
May 27 17:01:32.374459 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 17:01:32.381473 systemd[1]: Started ntpd.service - Network Time Service. May 27 17:01:32.393286 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 17:01:32.404168 systemd[1]: Starting setup-oem.service - Setup OEM... May 27 17:01:32.413424 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 17:01:32.422441 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 17:01:32.443142 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 17:01:32.447932 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 17:01:32.448899 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 17:01:32.454764 jq[1980]: false May 27 17:01:32.456260 systemd[1]: Starting update-engine.service - Update Engine... May 27 17:01:32.468310 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
May 27 17:01:32.479474 extend-filesystems[1981]: Found loop4 May 27 17:01:32.481615 extend-filesystems[1981]: Found loop5 May 27 17:01:32.481615 extend-filesystems[1981]: Found loop6 May 27 17:01:32.485104 extend-filesystems[1981]: Found loop7 May 27 17:01:32.485104 extend-filesystems[1981]: Found nvme0n1 May 27 17:01:32.485104 extend-filesystems[1981]: Found nvme0n1p1 May 27 17:01:32.485104 extend-filesystems[1981]: Found nvme0n1p2 May 27 17:01:32.485104 extend-filesystems[1981]: Found nvme0n1p3 May 27 17:01:32.485104 extend-filesystems[1981]: Found usr May 27 17:01:32.485104 extend-filesystems[1981]: Found nvme0n1p4 May 27 17:01:32.485104 extend-filesystems[1981]: Found nvme0n1p6 May 27 17:01:32.485104 extend-filesystems[1981]: Found nvme0n1p7 May 27 17:01:32.520402 extend-filesystems[1981]: Found nvme0n1p9 May 27 17:01:32.520402 extend-filesystems[1981]: Checking size of /dev/nvme0n1p9 May 27 17:01:32.493706 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 17:01:32.508098 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 17:01:32.508619 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 17:01:32.531860 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 17:01:32.549141 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 27 17:01:32.591023 jq[1996]: true May 27 17:01:32.597342 ntpd[1985]: ntpd 4.2.8p17@1.4004-o Tue May 27 14:54:38 UTC 2025 (1): Starting May 27 17:01:32.602168 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: ntpd 4.2.8p17@1.4004-o Tue May 27 14:54:38 UTC 2025 (1): Starting May 27 17:01:32.602168 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 27 17:01:32.602168 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: ---------------------------------------------------- May 27 17:01:32.602168 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, May 27 17:01:32.602168 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 27 17:01:32.602168 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: corporation. Support and training for ntp-4 are May 27 17:01:32.602168 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: available at https://www.nwtime.org/support May 27 17:01:32.602168 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: ---------------------------------------------------- May 27 17:01:32.598713 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 27 17:01:32.598734 ntpd[1985]: ---------------------------------------------------- May 27 17:01:32.608494 systemd[1]: motdgen.service: Deactivated successfully. May 27 17:01:32.598751 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, May 27 17:01:32.611139 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 17:01:32.598769 ntpd[1985]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 27 17:01:32.598785 ntpd[1985]: corporation. Support and training for ntp-4 are
May 27 17:01:32.598801 ntpd[1985]: available at https://www.nwtime.org/support May 27 17:01:32.598818 ntpd[1985]: ---------------------------------------------------- May 27 17:01:32.620603 extend-filesystems[1981]: Resized partition /dev/nvme0n1p9 May 27 17:01:32.622839 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: proto: precision = 0.096 usec (-23) May 27 17:01:32.617687 ntpd[1985]: proto: precision = 0.096 usec (-23) May 27 17:01:32.626334 ntpd[1985]: basedate set to 2025-05-15 May 27 17:01:32.627191 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: basedate set to 2025-05-15 May 27 17:01:32.627191 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: gps base set to 2025-05-18 (week 2367) May 27 17:01:32.626365 ntpd[1985]: gps base set to 2025-05-18 (week 2367) May 27 17:01:32.629989 extend-filesystems[2018]: resize2fs 1.47.2 (1-Jan-2025) May 27 17:01:32.642212 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 27 17:01:32.646789 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123 May 27 17:01:32.650081 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123 May 27 17:01:32.650081 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 27 17:01:32.647035 ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 27 17:01:32.654021 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 May 27 17:01:32.655235 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 May 27 17:01:32.657679 ntpd[1985]: Listen normally on 3 eth0 172.31.22.21:123 May 27 17:01:32.658115 tar[2001]: linux-arm64/helm May 27 17:01:32.658513 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: Listen normally on 3 eth0 172.31.22.21:123 May 27 17:01:32.658513 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: Listen normally on 4 lo [::1]:123 May 27 17:01:32.658513 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: bind(21) AF_INET6 fe80::4cf:50ff:fed2:2a0b%2#123 flags 0x11 failed: Cannot assign requested address
May 27 17:01:32.658513 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: unable to create socket on eth0 (5) for fe80::4cf:50ff:fed2:2a0b%2#123 May 27 17:01:32.658513 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: failed to init interface for address fe80::4cf:50ff:fed2:2a0b%2 May 27 17:01:32.657769 ntpd[1985]: Listen normally on 4 lo [::1]:123 May 27 17:01:32.657847 ntpd[1985]: bind(21) AF_INET6 fe80::4cf:50ff:fed2:2a0b%2#123 flags 0x11 failed: Cannot assign requested address May 27 17:01:32.657885 ntpd[1985]: unable to create socket on eth0 (5) for fe80::4cf:50ff:fed2:2a0b%2#123 May 27 17:01:32.657911 ntpd[1985]: failed to init interface for address fe80::4cf:50ff:fed2:2a0b%2 May 27 17:01:32.661579 ntpd[1985]: Listening on routing socket on fd #21 for interface updates May 27 17:01:32.663114 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: Listening on routing socket on fd #21 for interface updates May 27 17:01:32.666882 (ntainerd)[2017]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 17:01:32.671249 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
May 27 17:01:32.712382 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 27 17:01:32.712879 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 27 17:01:32.712879 ntpd[1985]: 27 May 17:01:32 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 27 17:01:32.712444 ntpd[1985]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 27 17:01:32.729055 coreos-metadata[1977]: May 27 17:01:32.725 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.745 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.747 INFO Fetch successful May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.747 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.751 INFO Fetch successful May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.751 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.761 INFO Fetch successful May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.761 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.761 INFO Fetch successful May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.761 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.766 INFO Fetch failed with 404: resource not found May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.767 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.772 INFO Fetch successful
May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.772 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.772 INFO Fetch successful May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.772 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.772 INFO Fetch successful May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.772 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.780 INFO Fetch successful May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.780 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 27 17:01:32.786803 coreos-metadata[1977]: May 27 17:01:32.781 INFO Fetch successful May 27 17:01:32.787943 jq[2016]: true May 27 17:01:32.788175 update_engine[1993]: I20250527 17:01:32.740661 1993 main.cc:92] Flatcar Update Engine starting May 27 17:01:32.802074 systemd[1]: Finished setup-oem.service - Setup OEM. May 27 17:01:32.810035 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 27 17:01:32.825904 dbus-daemon[1978]: [system] SELinux support is enabled May 27 17:01:32.835306 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 17:01:32.846013 extend-filesystems[2018]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 27 17:01:32.846013 extend-filesystems[2018]: old_desc_blocks = 1, new_desc_blocks = 1 May 27 17:01:32.846013 extend-filesystems[2018]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 27 17:01:32.845714 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 17:01:32.881648 extend-filesystems[1981]: Resized filesystem in /dev/nvme0n1p9 May 27 17:01:32.865279 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 17:01:32.875328 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 17:01:32.875395 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 17:01:32.878328 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 17:01:32.878372 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 17:01:32.899461 dbus-daemon[1978]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1859 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 27 17:01:32.908446 update_engine[1993]: I20250527 17:01:32.908344 1993 update_check_scheduler.cc:74] Next update check in 9m37s May 27 17:01:32.909147 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 27 17:01:32.912289 systemd[1]: Started update-engine.service - Update Engine. May 27 17:01:32.985934 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 17:01:33.021683 bash[2062]: Updated "/home/core/.ssh/authorized_keys" May 27 17:01:33.028055 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 17:01:33.041435 systemd[1]: Starting sshkeys.service... 
May 27 17:01:33.104750 systemd-logind[1991]: Watching system buttons on /dev/input/event0 (Power Button) May 27 17:01:33.105081 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 27 17:01:33.106311 systemd-logind[1991]: Watching system buttons on /dev/input/event1 (Sleep Button) May 27 17:01:33.109381 systemd-logind[1991]: New seat seat0. May 27 17:01:33.111827 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 17:01:33.113161 systemd[1]: Started systemd-logind.service - User Login Management. May 27 17:01:33.175304 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 27 17:01:33.182476 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 27 17:01:33.378145 systemd-networkd[1859]: eth0: Gained IPv6LL May 27 17:01:33.391287 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 17:01:33.395291 systemd[1]: Reached target network-online.target - Network is Online. May 27 17:01:33.422938 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 27 17:01:33.439766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:01:33.445831 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
May 27 17:01:33.479443 coreos-metadata[2108]: May 27 17:01:33.479 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 27 17:01:33.491873 coreos-metadata[2108]: May 27 17:01:33.490 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 27 17:01:33.496848 coreos-metadata[2108]: May 27 17:01:33.496 INFO Fetch successful May 27 17:01:33.496848 coreos-metadata[2108]: May 27 17:01:33.496 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 27 17:01:33.500761 coreos-metadata[2108]: May 27 17:01:33.500 INFO Fetch successful May 27 17:01:33.507149 unknown[2108]: wrote ssh authorized keys file for user: core May 27 17:01:33.636635 amazon-ssm-agent[2139]: Initializing new seelog logger May 27 17:01:33.637811 amazon-ssm-agent[2139]: New Seelog Logger Creation Complete May 27 17:01:33.637811 amazon-ssm-agent[2139]: 2025/05/27 17:01:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:01:33.637811 amazon-ssm-agent[2139]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:01:33.638425 amazon-ssm-agent[2139]: 2025/05/27 17:01:33 processing appconfig overrides May 27 17:01:33.639264 amazon-ssm-agent[2139]: 2025/05/27 17:01:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:01:33.639431 amazon-ssm-agent[2139]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:01:33.639687 amazon-ssm-agent[2139]: 2025/05/27 17:01:33 processing appconfig overrides May 27 17:01:33.640075 amazon-ssm-agent[2139]: 2025/05/27 17:01:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:01:33.640180 amazon-ssm-agent[2139]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
May 27 17:01:33.640377 amazon-ssm-agent[2139]: 2025/05/27 17:01:33 processing appconfig overrides May 27 17:01:33.641475 amazon-ssm-agent[2139]: 2025-05-27 17:01:33.6391 INFO Proxy environment variables: May 27 17:01:33.646998 amazon-ssm-agent[2139]: 2025/05/27 17:01:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:01:33.646998 amazon-ssm-agent[2139]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:01:33.646998 amazon-ssm-agent[2139]: 2025/05/27 17:01:33 processing appconfig overrides May 27 17:01:33.655140 locksmithd[2053]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 17:01:33.739985 containerd[2017]: time="2025-05-27T17:01:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 17:01:33.747006 amazon-ssm-agent[2139]: 2025-05-27 17:01:33.6391 INFO https_proxy: May 27 17:01:33.762413 update-ssh-keys[2150]: Updated "/home/core/.ssh/authorized_keys" May 27 17:01:33.766107 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 27 17:01:33.788336 containerd[2017]: time="2025-05-27T17:01:33.788274512Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 17:01:33.791373 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 17:01:33.795210 systemd[1]: Finished sshkeys.service. May 27 17:01:33.804898 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
May 27 17:01:33.809628 dbus-daemon[1978]: [system] Successfully activated service 'org.freedesktop.hostname1' May 27 17:01:33.810500 dbus-daemon[1978]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2049 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 27 17:01:33.825194 systemd[1]: Starting polkit.service - Authorization Manager... May 27 17:01:33.851989 amazon-ssm-agent[2139]: 2025-05-27 17:01:33.6391 INFO http_proxy: May 27 17:01:33.915116 containerd[2017]: time="2025-05-27T17:01:33.914416077Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.828µs" May 27 17:01:33.915116 containerd[2017]: time="2025-05-27T17:01:33.914482617Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 17:01:33.915116 containerd[2017]: time="2025-05-27T17:01:33.914521569Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 17:01:33.915116 containerd[2017]: time="2025-05-27T17:01:33.914872785Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 17:01:33.915116 containerd[2017]: time="2025-05-27T17:01:33.914921421Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 17:01:33.915116 containerd[2017]: time="2025-05-27T17:01:33.915024069Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 17:01:33.915451 containerd[2017]: time="2025-05-27T17:01:33.915178845Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 17:01:33.915451 containerd[2017]: time="2025-05-27T17:01:33.915209649Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:01:33.915729 containerd[2017]: time="2025-05-27T17:01:33.915657957Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 17:01:33.915729 containerd[2017]: time="2025-05-27T17:01:33.915718869Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 17:01:33.915836 containerd[2017]: time="2025-05-27T17:01:33.915752937Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 17:01:33.915836 containerd[2017]: time="2025-05-27T17:01:33.915777717Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 17:01:33.933711 containerd[2017]: time="2025-05-27T17:01:33.933636105Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 17:01:33.934761 containerd[2017]: time="2025-05-27T17:01:33.934213401Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 17:01:33.934761 containerd[2017]: time="2025-05-27T17:01:33.934319769Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 17:01:33.934761 containerd[2017]: time="2025-05-27T17:01:33.934353909Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 17:01:33.934761 containerd[2017]: time="2025-05-27T17:01:33.934428141Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 17:01:33.935069 containerd[2017]: time="2025-05-27T17:01:33.934823517Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 17:01:33.940340 containerd[2017]: time="2025-05-27T17:01:33.940037205Z" level=info msg="metadata content store policy set" policy=shared May 27 17:01:33.957227 amazon-ssm-agent[2139]: 2025-05-27 17:01:33.6391 INFO no_proxy: May 27 17:01:33.959657 containerd[2017]: time="2025-05-27T17:01:33.959560689Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 17:01:33.959900 containerd[2017]: time="2025-05-27T17:01:33.959693925Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 17:01:33.959900 containerd[2017]: time="2025-05-27T17:01:33.959737341Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 17:01:33.959900 containerd[2017]: time="2025-05-27T17:01:33.959772669Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 17:01:33.959900 containerd[2017]: time="2025-05-27T17:01:33.959804361Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 17:01:33.959900 containerd[2017]: time="2025-05-27T17:01:33.959832573Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 17:01:33.959900 containerd[2017]: time="2025-05-27T17:01:33.959861985Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 17:01:33.960896 containerd[2017]: time="2025-05-27T17:01:33.959910357Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 17:01:33.960896 containerd[2017]: time="2025-05-27T17:01:33.959941761Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 17:01:33.960896 containerd[2017]: time="2025-05-27T17:01:33.960017181Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 17:01:33.960896 containerd[2017]: time="2025-05-27T17:01:33.960053145Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 17:01:33.960896 containerd[2017]: time="2025-05-27T17:01:33.960087789Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 17:01:33.960896 containerd[2017]: time="2025-05-27T17:01:33.960365661Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 17:01:33.960896 containerd[2017]: time="2025-05-27T17:01:33.960432441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 17:01:33.960896 containerd[2017]: time="2025-05-27T17:01:33.960467973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 17:01:33.960896 containerd[2017]: time="2025-05-27T17:01:33.960499929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 17:01:33.960896 containerd[2017]: time="2025-05-27T17:01:33.960529941Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 17:01:33.960896 containerd[2017]: time="2025-05-27T17:01:33.960557877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 17:01:33.960896 containerd[2017]: time="2025-05-27T17:01:33.960585909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 17:01:33.960896 containerd[2017]: time="2025-05-27T17:01:33.960622257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 17:01:33.960896 containerd[2017]: time="2025-05-27T17:01:33.960676341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 17:01:33.960896 containerd[2017]: time="2025-05-27T17:01:33.960714333Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 17:01:33.961613 containerd[2017]: time="2025-05-27T17:01:33.960747285Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 17:01:33.961613 containerd[2017]: time="2025-05-27T17:01:33.960896397Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 17:01:33.961613 containerd[2017]: time="2025-05-27T17:01:33.960929853Z" level=info msg="Start snapshots syncer" May 27 17:01:33.972024 containerd[2017]: time="2025-05-27T17:01:33.971056965Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 17:01:33.972024 containerd[2017]: time="2025-05-27T17:01:33.971570277Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 17:01:33.972336 containerd[2017]: time="2025-05-27T17:01:33.971664489Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 17:01:33.972336 containerd[2017]: time="2025-05-27T17:01:33.971828913Z" level=info
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 17:01:33.978338 containerd[2017]: time="2025-05-27T17:01:33.976266657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 17:01:33.978338 containerd[2017]: time="2025-05-27T17:01:33.976345665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 17:01:33.978338 containerd[2017]: time="2025-05-27T17:01:33.976376649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 17:01:33.978338 containerd[2017]: time="2025-05-27T17:01:33.976405497Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 17:01:33.978338 containerd[2017]: time="2025-05-27T17:01:33.976439781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 17:01:33.978338 containerd[2017]: time="2025-05-27T17:01:33.976468989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 17:01:33.978338 containerd[2017]: time="2025-05-27T17:01:33.976501725Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 17:01:33.978338 containerd[2017]: time="2025-05-27T17:01:33.976561041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 17:01:33.989016 containerd[2017]: time="2025-05-27T17:01:33.987330837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 17:01:33.989016 containerd[2017]: time="2025-05-27T17:01:33.987433113Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 17:01:33.989016 containerd[2017]: time="2025-05-27T17:01:33.987536709Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:01:33.989016 containerd[2017]: time="2025-05-27T17:01:33.987581325Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 17:01:33.989016 containerd[2017]: time="2025-05-27T17:01:33.987616101Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:01:33.989016 containerd[2017]: time="2025-05-27T17:01:33.987653337Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 17:01:33.989016 containerd[2017]: time="2025-05-27T17:01:33.987727977Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 17:01:33.989016 containerd[2017]: time="2025-05-27T17:01:33.987767649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 17:01:33.989016 containerd[2017]: time="2025-05-27T17:01:33.987807153Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 17:01:33.989016 containerd[2017]: time="2025-05-27T17:01:33.987870645Z" level=info msg="runtime interface created" May 27 17:01:33.989016 containerd[2017]: time="2025-05-27T17:01:33.987896661Z" level=info msg="created NRI interface" May 27 17:01:33.989016 containerd[2017]: time="2025-05-27T17:01:33.987923313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 17:01:33.989814 containerd[2017]: time="2025-05-27T17:01:33.989677305Z" level=info msg="Connect containerd service" May 27 17:01:33.989900 containerd[2017]: time="2025-05-27T17:01:33.989823537Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 17:01:34.003286 
containerd[2017]: time="2025-05-27T17:01:34.000601961Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:01:34.070151 amazon-ssm-agent[2139]: 2025-05-27 17:01:33.6397 INFO Checking if agent identity type OnPrem can be assumed May 27 17:01:34.170319 amazon-ssm-agent[2139]: 2025-05-27 17:01:33.6398 INFO Checking if agent identity type EC2 can be assumed May 27 17:01:34.268059 amazon-ssm-agent[2139]: 2025-05-27 17:01:33.9400 INFO Agent will take identity from EC2 May 27 17:01:34.370892 amazon-ssm-agent[2139]: 2025-05-27 17:01:33.9745 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 May 27 17:01:34.376554 polkitd[2182]: Started polkitd version 126 May 27 17:01:34.391022 polkitd[2182]: Loading rules from directory /etc/polkit-1/rules.d May 27 17:01:34.391637 polkitd[2182]: Loading rules from directory /run/polkit-1/rules.d May 27 17:01:34.391714 polkitd[2182]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 27 17:01:34.393304 polkitd[2182]: Loading rules from directory /usr/local/share/polkit-1/rules.d May 27 17:01:34.393381 polkitd[2182]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 27 17:01:34.393461 polkitd[2182]: Loading rules from directory /usr/share/polkit-1/rules.d May 27 17:01:34.397440 polkitd[2182]: Finished loading, compiling and executing 2 rules May 27 17:01:34.398137 systemd[1]: Started polkit.service - Authorization Manager. 
May 27 17:01:34.405355 dbus-daemon[1978]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 27 17:01:34.407343 polkitd[2182]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 27 17:01:34.454841 containerd[2017]: time="2025-05-27T17:01:34.454220887Z" level=info msg="Start subscribing containerd event" May 27 17:01:34.454841 containerd[2017]: time="2025-05-27T17:01:34.454337095Z" level=info msg="Start recovering state" May 27 17:01:34.454841 containerd[2017]: time="2025-05-27T17:01:34.454516651Z" level=info msg="Start event monitor" May 27 17:01:34.454841 containerd[2017]: time="2025-05-27T17:01:34.454546507Z" level=info msg="Start cni network conf syncer for default" May 27 17:01:34.454841 containerd[2017]: time="2025-05-27T17:01:34.454567927Z" level=info msg="Start streaming server" May 27 17:01:34.454841 containerd[2017]: time="2025-05-27T17:01:34.454588411Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 17:01:34.454841 containerd[2017]: time="2025-05-27T17:01:34.454605175Z" level=info msg="runtime interface starting up..." May 27 17:01:34.454841 containerd[2017]: time="2025-05-27T17:01:34.454620319Z" level=info msg="starting plugins..." May 27 17:01:34.454841 containerd[2017]: time="2025-05-27T17:01:34.454647319Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 17:01:34.457508 containerd[2017]: time="2025-05-27T17:01:34.457317691Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 17:01:34.462053 containerd[2017]: time="2025-05-27T17:01:34.457701103Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 17:01:34.458291 systemd[1]: Started containerd.service - containerd container runtime. May 27 17:01:34.464246 containerd[2017]: time="2025-05-27T17:01:34.463792135Z" level=info msg="containerd successfully booted in 0.724541s" May 27 17:01:34.466115 systemd-resolved[1862]: System hostname changed to 'ip-172-31-22-21'. 
May 27 17:01:34.466117 systemd-hostnamed[2049]: Hostname set to (transient) May 27 17:01:34.476316 amazon-ssm-agent[2139]: 2025-05-27 17:01:33.9746 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 May 27 17:01:34.572862 amazon-ssm-agent[2139]: 2025-05-27 17:01:33.9746 INFO [amazon-ssm-agent] Starting Core Agent May 27 17:01:34.673173 amazon-ssm-agent[2139]: 2025-05-27 17:01:33.9746 INFO [amazon-ssm-agent] Registrar detected. Attempting registration May 27 17:01:34.773487 amazon-ssm-agent[2139]: 2025-05-27 17:01:33.9746 INFO [Registrar] Starting registrar module May 27 17:01:34.880437 amazon-ssm-agent[2139]: 2025-05-27 17:01:33.9838 INFO [EC2Identity] Checking disk for registration info May 27 17:01:34.976939 amazon-ssm-agent[2139]: 2025-05-27 17:01:33.9840 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration May 27 17:01:35.077585 amazon-ssm-agent[2139]: 2025-05-27 17:01:33.9840 INFO [EC2Identity] Generating registration keypair May 27 17:01:35.128583 tar[2001]: linux-arm64/LICENSE May 27 17:01:35.128583 tar[2001]: linux-arm64/README.md May 27 17:01:35.172721 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 17:01:35.586576 sshd_keygen[2030]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 17:01:35.606590 ntpd[1985]: Listen normally on 6 eth0 [fe80::4cf:50ff:fed2:2a0b%2]:123 May 27 17:01:35.609918 ntpd[1985]: 27 May 17:01:35 ntpd[1985]: Listen normally on 6 eth0 [fe80::4cf:50ff:fed2:2a0b%2]:123 May 27 17:01:35.647103 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 17:01:35.657173 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 17:01:35.663717 systemd[1]: Started sshd@0-172.31.22.21:22-139.178.68.195:39350.service - OpenSSH per-connection server daemon (139.178.68.195:39350). May 27 17:01:35.709368 systemd[1]: issuegen.service: Deactivated successfully. May 27 17:01:35.709799 systemd[1]: Finished issuegen.service - Generate /run/issue. 
May 27 17:01:35.724132 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 17:01:35.741193 amazon-ssm-agent[2139]: 2025-05-27 17:01:35.7407 INFO [EC2Identity] Checking write access before registering May 27 17:01:35.773576 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 17:01:35.785271 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 17:01:35.791686 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 17:01:35.798421 amazon-ssm-agent[2139]: 2025/05/27 17:01:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:01:35.798421 amazon-ssm-agent[2139]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 27 17:01:35.798421 amazon-ssm-agent[2139]: 2025/05/27 17:01:35 processing appconfig overrides May 27 17:01:35.798743 systemd[1]: Reached target getty.target - Login Prompts. May 27 17:01:35.834899 amazon-ssm-agent[2139]: 2025-05-27 17:01:35.7430 INFO [EC2Identity] Registering EC2 instance with Systems Manager May 27 17:01:35.835193 amazon-ssm-agent[2139]: 2025-05-27 17:01:35.7975 INFO [EC2Identity] EC2 registration was successful. May 27 17:01:35.835295 amazon-ssm-agent[2139]: 2025-05-27 17:01:35.7975 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
May 27 17:01:35.835552 amazon-ssm-agent[2139]: 2025-05-27 17:01:35.7976 INFO [CredentialRefresher] credentialRefresher has started May 27 17:01:35.835552 amazon-ssm-agent[2139]: 2025-05-27 17:01:35.7977 INFO [CredentialRefresher] Starting credentials refresher loop May 27 17:01:35.835552 amazon-ssm-agent[2139]: 2025-05-27 17:01:35.8345 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 27 17:01:35.835552 amazon-ssm-agent[2139]: 2025-05-27 17:01:35.8348 INFO [CredentialRefresher] Credentials ready May 27 17:01:35.842817 amazon-ssm-agent[2139]: 2025-05-27 17:01:35.8354 INFO [CredentialRefresher] Next credential rotation will be in 29.9999847259 minutes May 27 17:01:35.939396 sshd[2230]: Accepted publickey for core from 139.178.68.195 port 39350 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws May 27 17:01:35.942626 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:35.959192 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 17:01:35.965830 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 17:01:36.003453 systemd-logind[1991]: New session 1 of user core. May 27 17:01:36.015810 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 17:01:36.025948 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 17:01:36.057302 (systemd)[2241]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 17:01:36.063703 systemd-logind[1991]: New session c1 of user core. May 27 17:01:36.097610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:01:36.103724 systemd[1]: Reached target multi-user.target - Multi-User System. 
May 27 17:01:36.114712 (kubelet)[2248]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 17:01:36.396259 systemd[2241]: Queued start job for default target default.target. May 27 17:01:36.415346 systemd[2241]: Created slice app.slice - User Application Slice. May 27 17:01:36.415430 systemd[2241]: Reached target paths.target - Paths. May 27 17:01:36.415537 systemd[2241]: Reached target timers.target - Timers. May 27 17:01:36.420195 systemd[2241]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 17:01:36.520675 systemd[2241]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 17:01:36.520803 systemd[2241]: Reached target sockets.target - Sockets. May 27 17:01:36.520900 systemd[2241]: Reached target basic.target - Basic System. May 27 17:01:36.521027 systemd[2241]: Reached target default.target - Main User Target. May 27 17:01:36.521094 systemd[2241]: Startup finished in 438ms. May 27 17:01:36.521182 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 17:01:36.536845 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 17:01:36.542810 systemd[1]: Startup finished in 3.863s (kernel) + 17.231s (initrd) + 9.685s (userspace) = 30.781s. May 27 17:01:36.708156 systemd[1]: Started sshd@1-172.31.22.21:22-139.178.68.195:57682.service - OpenSSH per-connection server daemon (139.178.68.195:57682). 
May 27 17:01:36.875319 amazon-ssm-agent[2139]: 2025-05-27 17:01:36.8751 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 27 17:01:36.903709 sshd[2267]: Accepted publickey for core from 139.178.68.195 port 57682 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws May 27 17:01:36.908653 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:36.927743 systemd-logind[1991]: New session 2 of user core. May 27 17:01:36.930313 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 17:01:36.976158 amazon-ssm-agent[2139]: 2025-05-27 17:01:36.8780 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2271) started May 27 17:01:37.060997 sshd[2275]: Connection closed by 139.178.68.195 port 57682 May 27 17:01:37.059802 sshd-session[2267]: pam_unix(sshd:session): session closed for user core May 27 17:01:37.067943 systemd[1]: sshd@1-172.31.22.21:22-139.178.68.195:57682.service: Deactivated successfully. May 27 17:01:37.074759 systemd[1]: session-2.scope: Deactivated successfully. May 27 17:01:37.077146 amazon-ssm-agent[2139]: 2025-05-27 17:01:36.8780 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 27 17:01:37.079979 systemd-logind[1991]: Session 2 logged out. Waiting for processes to exit. May 27 17:01:37.100945 systemd[1]: Started sshd@2-172.31.22.21:22-139.178.68.195:57696.service - OpenSSH per-connection server daemon (139.178.68.195:57696). May 27 17:01:37.109359 systemd-logind[1991]: Removed session 2. 
May 27 17:01:37.200177 kubelet[2248]: E0527 17:01:37.200118 2248 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 17:01:37.205447 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 17:01:37.205777 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 17:01:37.208104 systemd[1]: kubelet.service: Consumed 1.453s CPU time, 258.9M memory peak. May 27 17:01:37.315485 sshd[2285]: Accepted publickey for core from 139.178.68.195 port 57696 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws May 27 17:01:37.318411 sshd-session[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:37.329774 systemd-logind[1991]: New session 3 of user core. May 27 17:01:37.339263 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 17:01:37.459698 sshd[2292]: Connection closed by 139.178.68.195 port 57696 May 27 17:01:37.459554 sshd-session[2285]: pam_unix(sshd:session): session closed for user core May 27 17:01:37.467286 systemd[1]: sshd@2-172.31.22.21:22-139.178.68.195:57696.service: Deactivated successfully. May 27 17:01:37.471979 systemd[1]: session-3.scope: Deactivated successfully. May 27 17:01:37.477061 systemd-logind[1991]: Session 3 logged out. Waiting for processes to exit. May 27 17:01:37.479666 systemd-logind[1991]: Removed session 3. May 27 17:01:37.501716 systemd[1]: Started sshd@3-172.31.22.21:22-139.178.68.195:57708.service - OpenSSH per-connection server daemon (139.178.68.195:57708). 
May 27 17:01:37.719613 sshd[2298]: Accepted publickey for core from 139.178.68.195 port 57708 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws May 27 17:01:37.722340 sshd-session[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:37.731870 systemd-logind[1991]: New session 4 of user core. May 27 17:01:37.741297 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 17:01:37.869818 sshd[2301]: Connection closed by 139.178.68.195 port 57708 May 27 17:01:37.870761 sshd-session[2298]: pam_unix(sshd:session): session closed for user core May 27 17:01:37.878408 systemd[1]: sshd@3-172.31.22.21:22-139.178.68.195:57708.service: Deactivated successfully. May 27 17:01:37.882142 systemd[1]: session-4.scope: Deactivated successfully. May 27 17:01:37.885503 systemd-logind[1991]: Session 4 logged out. Waiting for processes to exit. May 27 17:01:37.888896 systemd-logind[1991]: Removed session 4. May 27 17:01:37.906403 systemd[1]: Started sshd@4-172.31.22.21:22-139.178.68.195:57720.service - OpenSSH per-connection server daemon (139.178.68.195:57720). May 27 17:01:38.103706 sshd[2307]: Accepted publickey for core from 139.178.68.195 port 57720 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws May 27 17:01:38.106255 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:38.116058 systemd-logind[1991]: New session 5 of user core. May 27 17:01:38.122273 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 27 17:01:38.239089 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 17:01:38.239687 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:01:38.254732 sudo[2310]: pam_unix(sudo:session): session closed for user root May 27 17:01:38.279656 sshd[2309]: Connection closed by 139.178.68.195 port 57720 May 27 17:01:38.278353 sshd-session[2307]: pam_unix(sshd:session): session closed for user core May 27 17:01:38.286789 systemd[1]: sshd@4-172.31.22.21:22-139.178.68.195:57720.service: Deactivated successfully. May 27 17:01:38.290263 systemd[1]: session-5.scope: Deactivated successfully. May 27 17:01:38.292155 systemd-logind[1991]: Session 5 logged out. Waiting for processes to exit. May 27 17:01:38.295510 systemd-logind[1991]: Removed session 5. May 27 17:01:38.314353 systemd[1]: Started sshd@5-172.31.22.21:22-139.178.68.195:57732.service - OpenSSH per-connection server daemon (139.178.68.195:57732). May 27 17:01:38.510044 sshd[2316]: Accepted publickey for core from 139.178.68.195 port 57732 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws May 27 17:01:38.512504 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:38.522060 systemd-logind[1991]: New session 6 of user core. May 27 17:01:38.528217 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 27 17:01:38.630765 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 17:01:38.631706 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:01:38.641052 sudo[2320]: pam_unix(sudo:session): session closed for user root May 27 17:01:38.650537 sudo[2319]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 17:01:38.651599 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:01:38.668488 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 17:01:38.728046 augenrules[2342]: No rules May 27 17:01:38.730730 systemd[1]: audit-rules.service: Deactivated successfully. May 27 17:01:38.731255 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 17:01:38.733130 sudo[2319]: pam_unix(sudo:session): session closed for user root May 27 17:01:38.755894 sshd[2318]: Connection closed by 139.178.68.195 port 57732 May 27 17:01:38.757104 sshd-session[2316]: pam_unix(sshd:session): session closed for user core May 27 17:01:38.764119 systemd[1]: sshd@5-172.31.22.21:22-139.178.68.195:57732.service: Deactivated successfully. May 27 17:01:38.768060 systemd[1]: session-6.scope: Deactivated successfully. May 27 17:01:38.770693 systemd-logind[1991]: Session 6 logged out. Waiting for processes to exit. May 27 17:01:38.773444 systemd-logind[1991]: Removed session 6. May 27 17:01:38.797132 systemd[1]: Started sshd@6-172.31.22.21:22-139.178.68.195:57746.service - OpenSSH per-connection server daemon (139.178.68.195:57746). 
May 27 17:01:39.006382 sshd[2351]: Accepted publickey for core from 139.178.68.195 port 57746 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws May 27 17:01:39.008725 sshd-session[2351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:01:39.019070 systemd-logind[1991]: New session 7 of user core. May 27 17:01:39.022303 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 17:01:39.128186 sudo[2354]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 17:01:39.128800 sudo[2354]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 17:01:39.112287 systemd-resolved[1862]: Clock change detected. Flushing caches. May 27 17:01:39.135356 systemd-journald[1538]: Time jumped backwards, rotating. May 27 17:01:39.153587 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 17:01:39.178611 (dockerd)[2372]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 17:01:39.556231 dockerd[2372]: time="2025-05-27T17:01:39.555151778Z" level=info msg="Starting up" May 27 17:01:39.558207 dockerd[2372]: time="2025-05-27T17:01:39.558026342Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 17:01:39.957807 systemd[1]: var-lib-docker-metacopy\x2dcheck1403733532-merged.mount: Deactivated successfully. May 27 17:01:39.978491 dockerd[2372]: time="2025-05-27T17:01:39.978407680Z" level=info msg="Loading containers: start." May 27 17:01:39.994093 kernel: Initializing XFRM netlink socket May 27 17:01:40.340142 (udev-worker)[2395]: Network interface NamePolicy= disabled on kernel command line. May 27 17:01:40.419744 systemd-networkd[1859]: docker0: Link UP May 27 17:01:40.426549 dockerd[2372]: time="2025-05-27T17:01:40.426494390Z" level=info msg="Loading containers: done." 
May 27 17:01:40.455326 dockerd[2372]: time="2025-05-27T17:01:40.455105234Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 17:01:40.455599 dockerd[2372]: time="2025-05-27T17:01:40.455537738Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 17:01:40.456109 dockerd[2372]: time="2025-05-27T17:01:40.456010562Z" level=info msg="Initializing buildkit" May 27 17:01:40.502281 dockerd[2372]: time="2025-05-27T17:01:40.502134902Z" level=info msg="Completed buildkit initialization" May 27 17:01:40.518277 dockerd[2372]: time="2025-05-27T17:01:40.517963706Z" level=info msg="Daemon has completed initialization" May 27 17:01:40.518277 dockerd[2372]: time="2025-05-27T17:01:40.518085542Z" level=info msg="API listen on /run/docker.sock" May 27 17:01:40.518731 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 17:01:40.627085 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3359620106-merged.mount: Deactivated successfully. May 27 17:01:41.652872 containerd[2017]: time="2025-05-27T17:01:41.652713100Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 27 17:01:42.257431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191163040.mount: Deactivated successfully. 
May 27 17:01:43.591306 containerd[2017]: time="2025-05-27T17:01:43.591213102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:43.596117 containerd[2017]: time="2025-05-27T17:01:43.595842294Z" level=info msg="ImageCreate event name:\"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:43.596117 containerd[2017]: time="2025-05-27T17:01:43.595994178Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=25651974"
May 27 17:01:43.604669 containerd[2017]: time="2025-05-27T17:01:43.604549362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:43.607149 containerd[2017]: time="2025-05-27T17:01:43.606843294Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"25648774\" in 1.953856054s"
May 27 17:01:43.607149 containerd[2017]: time="2025-05-27T17:01:43.606922182Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\""
May 27 17:01:43.613078 containerd[2017]: time="2025-05-27T17:01:43.612946062Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 27 17:01:45.107644 containerd[2017]: time="2025-05-27T17:01:45.107569409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:45.109465 containerd[2017]: time="2025-05-27T17:01:45.109408529Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=22459528"
May 27 17:01:45.110382 containerd[2017]: time="2025-05-27T17:01:45.109833221Z" level=info msg="ImageCreate event name:\"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:45.114854 containerd[2017]: time="2025-05-27T17:01:45.114728909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:45.117169 containerd[2017]: time="2025-05-27T17:01:45.116842385Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"23995294\" in 1.503804259s"
May 27 17:01:45.117169 containerd[2017]: time="2025-05-27T17:01:45.116905025Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\""
May 27 17:01:45.117983 containerd[2017]: time="2025-05-27T17:01:45.117928025Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 27 17:01:46.225068 containerd[2017]: time="2025-05-27T17:01:46.224771503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:46.227209 containerd[2017]: time="2025-05-27T17:01:46.227125027Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=17125279"
May 27 17:01:46.229063 containerd[2017]: time="2025-05-27T17:01:46.228313279Z" level=info msg="ImageCreate event name:\"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:46.234814 containerd[2017]: time="2025-05-27T17:01:46.234766375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:46.237922 containerd[2017]: time="2025-05-27T17:01:46.237862939Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"18661063\" in 1.119872454s"
May 27 17:01:46.238058 containerd[2017]: time="2025-05-27T17:01:46.237937891Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\""
May 27 17:01:46.238699 containerd[2017]: time="2025-05-27T17:01:46.238659379Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 27 17:01:46.800958 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 17:01:46.804641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:01:47.287615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:01:47.302695 (kubelet)[2649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:01:47.398592 kubelet[2649]: E0527 17:01:47.398508 2649 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:01:47.407666 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:01:47.407975 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:01:47.409238 systemd[1]: kubelet.service: Consumed 360ms CPU time, 107.1M memory peak.
May 27 17:01:47.741422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4023524774.mount: Deactivated successfully.
May 27 17:01:48.243095 containerd[2017]: time="2025-05-27T17:01:48.242679429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:48.245724 containerd[2017]: time="2025-05-27T17:01:48.245634489Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=26871375"
May 27 17:01:48.249081 containerd[2017]: time="2025-05-27T17:01:48.248955537Z" level=info msg="ImageCreate event name:\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:48.250372 containerd[2017]: time="2025-05-27T17:01:48.250303137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:48.252704 containerd[2017]: time="2025-05-27T17:01:48.252574629Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"26870394\" in 2.01373675s"
May 27 17:01:48.252924 containerd[2017]: time="2025-05-27T17:01:48.252662853Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\""
May 27 17:01:48.253731 containerd[2017]: time="2025-05-27T17:01:48.253673445Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 27 17:01:48.790343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3771045337.mount: Deactivated successfully.
May 27 17:01:50.029226 containerd[2017]: time="2025-05-27T17:01:50.028422442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:50.039961 containerd[2017]: time="2025-05-27T17:01:50.039883378Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
May 27 17:01:50.045069 containerd[2017]: time="2025-05-27T17:01:50.044542390Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:50.052869 containerd[2017]: time="2025-05-27T17:01:50.052795474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:50.058361 containerd[2017]: time="2025-05-27T17:01:50.057168346Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.803426405s"
May 27 17:01:50.058361 containerd[2017]: time="2025-05-27T17:01:50.057239290Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 27 17:01:50.058361 containerd[2017]: time="2025-05-27T17:01:50.057863938Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 17:01:50.543862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3525473029.mount: Deactivated successfully.
May 27 17:01:50.552085 containerd[2017]: time="2025-05-27T17:01:50.551618400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:01:50.553597 containerd[2017]: time="2025-05-27T17:01:50.553543416Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
May 27 17:01:50.555361 containerd[2017]: time="2025-05-27T17:01:50.555296400Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:01:50.559845 containerd[2017]: time="2025-05-27T17:01:50.559727220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:01:50.561522 containerd[2017]: time="2025-05-27T17:01:50.561138468Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 503.167478ms"
May 27 17:01:50.561522 containerd[2017]: time="2025-05-27T17:01:50.561199260Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 27 17:01:50.562143 containerd[2017]: time="2025-05-27T17:01:50.562088304Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 27 17:01:51.086107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3947150293.mount: Deactivated successfully.
May 27 17:01:53.089957 containerd[2017]: time="2025-05-27T17:01:53.089876701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:53.091161 containerd[2017]: time="2025-05-27T17:01:53.090959509Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465"
May 27 17:01:53.093165 containerd[2017]: time="2025-05-27T17:01:53.093024865Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:53.098446 containerd[2017]: time="2025-05-27T17:01:53.098394301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:01:53.101201 containerd[2017]: time="2025-05-27T17:01:53.100705357Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.538557665s"
May 27 17:01:53.101201 containerd[2017]: time="2025-05-27T17:01:53.100763725Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 27 17:01:57.552145 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 27 17:01:57.557353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:01:57.884295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:01:57.896705 (kubelet)[2798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:01:57.984537 kubelet[2798]: E0527 17:01:57.984432 2798 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:01:57.989378 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:01:57.989754 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:01:57.991997 systemd[1]: kubelet.service: Consumed 291ms CPU time, 105.2M memory peak.
May 27 17:02:00.396410 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:02:00.396739 systemd[1]: kubelet.service: Consumed 291ms CPU time, 105.2M memory peak.
May 27 17:02:00.402522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:02:00.455729 systemd[1]: Reload requested from client PID 2813 ('systemctl') (unit session-7.scope)...
May 27 17:02:00.455767 systemd[1]: Reloading...
May 27 17:02:00.715096 zram_generator::config[2861]: No configuration found.
May 27 17:02:00.903492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:02:01.165396 systemd[1]: Reloading finished in 708 ms.
May 27 17:02:01.270519 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 27 17:02:01.270699 systemd[1]: kubelet.service: Failed with result 'signal'.
May 27 17:02:01.271295 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:02:01.271420 systemd[1]: kubelet.service: Consumed 218ms CPU time, 95M memory peak.
May 27 17:02:01.274546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:02:01.597796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:02:01.612926 (kubelet)[2921]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 17:02:01.684990 kubelet[2921]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:02:01.684990 kubelet[2921]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 27 17:02:01.684990 kubelet[2921]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:02:01.685528 kubelet[2921]: I0527 17:02:01.685058 2921 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 17:02:03.294834 kubelet[2921]: I0527 17:02:03.294739 2921 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 27 17:02:03.294834 kubelet[2921]: I0527 17:02:03.294803 2921 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 17:02:03.295622 kubelet[2921]: I0527 17:02:03.295334 2921 server.go:934] "Client rotation is on, will bootstrap in background"
May 27 17:02:03.341155 kubelet[2921]: E0527 17:02:03.341078 2921 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.21:6443: connect: connection refused" logger="UnhandledError"
May 27 17:02:03.347083 kubelet[2921]: I0527 17:02:03.345457 2921 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 17:02:03.362622 kubelet[2921]: I0527 17:02:03.362576 2921 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 17:02:03.370467 kubelet[2921]: I0527 17:02:03.370408 2921 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 17:02:03.370767 kubelet[2921]: I0527 17:02:03.370727 2921 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 27 17:02:03.371195 kubelet[2921]: I0527 17:02:03.371137 2921 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 17:02:03.371525 kubelet[2921]: I0527 17:02:03.371195 2921 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-21","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 17:02:03.371765 kubelet[2921]: I0527 17:02:03.371547 2921 topology_manager.go:138] "Creating topology manager with none policy"
May 27 17:02:03.371765 kubelet[2921]: I0527 17:02:03.371572 2921 container_manager_linux.go:300] "Creating device plugin manager"
May 27 17:02:03.371890 kubelet[2921]: I0527 17:02:03.371820 2921 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:02:03.378097 kubelet[2921]: I0527 17:02:03.377586 2921 kubelet.go:408] "Attempting to sync node with API server"
May 27 17:02:03.378097 kubelet[2921]: I0527 17:02:03.377662 2921 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 17:02:03.378097 kubelet[2921]: I0527 17:02:03.377703 2921 kubelet.go:314] "Adding apiserver pod source"
May 27 17:02:03.378097 kubelet[2921]: I0527 17:02:03.377756 2921 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 17:02:03.384530 kubelet[2921]: W0527 17:02:03.384426 2921 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-21&limit=500&resourceVersion=0": dial tcp 172.31.22.21:6443: connect: connection refused
May 27 17:02:03.384652 kubelet[2921]: E0527 17:02:03.384542 2921 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-21&limit=500&resourceVersion=0\": dial tcp 172.31.22.21:6443: connect: connection refused" logger="UnhandledError"
May 27 17:02:03.387094 kubelet[2921]: W0527 17:02:03.385339 2921 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.21:6443: connect: connection refused
May 27 17:02:03.387094 kubelet[2921]: E0527 17:02:03.385452 2921 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.21:6443: connect: connection refused" logger="UnhandledError"
May 27 17:02:03.387094 kubelet[2921]: I0527 17:02:03.385630 2921 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 17:02:03.387094 kubelet[2921]: I0527 17:02:03.386521 2921 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 27 17:02:03.387094 kubelet[2921]: W0527 17:02:03.386634 2921 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 17:02:03.388645 kubelet[2921]: I0527 17:02:03.388578 2921 server.go:1274] "Started kubelet"
May 27 17:02:03.393383 kubelet[2921]: I0527 17:02:03.393315 2921 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 17:02:03.403017 kubelet[2921]: E0527 17:02:03.402954 2921 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 27 17:02:03.405167 kubelet[2921]: I0527 17:02:03.404954 2921 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 27 17:02:03.406700 kubelet[2921]: I0527 17:02:03.406618 2921 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 27 17:02:03.407309 kubelet[2921]: E0527 17:02:03.407247 2921 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-21\" not found"
May 27 17:02:03.407854 kubelet[2921]: I0527 17:02:03.407783 2921 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 27 17:02:03.407983 kubelet[2921]: I0527 17:02:03.407916 2921 reconciler.go:26] "Reconciler: start to sync state"
May 27 17:02:03.409357 kubelet[2921]: I0527 17:02:03.409317 2921 server.go:449] "Adding debug handlers to kubelet server"
May 27 17:02:03.412563 kubelet[2921]: I0527 17:02:03.412460 2921 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 17:02:03.414015 kubelet[2921]: I0527 17:02:03.413863 2921 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 27 17:02:03.416757 kubelet[2921]: I0527 17:02:03.416695 2921 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 27 17:02:03.417374 kubelet[2921]: W0527 17:02:03.417292 2921 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.21:6443: connect: connection refused
May 27 17:02:03.417571 kubelet[2921]: E0527 17:02:03.417530 2921 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.21:6443: connect: connection refused" logger="UnhandledError"
May 27 17:02:03.417887 kubelet[2921]: E0527 17:02:03.417837 2921 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-21?timeout=10s\": dial tcp 172.31.22.21:6443: connect: connection refused" interval="200ms"
May 27 17:02:03.424557 kubelet[2921]: I0527 17:02:03.424473 2921 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 27 17:02:03.427862 kubelet[2921]: E0527 17:02:03.425598 2921 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.21:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-21.184370ff962f42c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-21,UID:ip-172-31-22-21,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-21,},FirstTimestamp:2025-05-27 17:02:03.388535496 +0000 UTC m=+1.769158246,LastTimestamp:2025-05-27 17:02:03.388535496 +0000 UTC m=+1.769158246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-21,}"
May 27 17:02:03.433097 kubelet[2921]: I0527 17:02:03.431714 2921 factory.go:221] Registration of the containerd container factory successfully
May 27 17:02:03.433097 kubelet[2921]: I0527 17:02:03.431754 2921 factory.go:221] Registration of the systemd container factory successfully
May 27 17:02:03.448500 kubelet[2921]: I0527 17:02:03.448424 2921 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 27 17:02:03.451299 kubelet[2921]: I0527 17:02:03.451251 2921 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 27 17:02:03.451516 kubelet[2921]: I0527 17:02:03.451489 2921 status_manager.go:217] "Starting to sync pod status with apiserver"
May 27 17:02:03.451677 kubelet[2921]: I0527 17:02:03.451652 2921 kubelet.go:2321] "Starting kubelet main sync loop"
May 27 17:02:03.451878 kubelet[2921]: E0527 17:02:03.451839 2921 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 27 17:02:03.463884 kubelet[2921]: W0527 17:02:03.463461 2921 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.21:6443: connect: connection refused
May 27 17:02:03.464854 kubelet[2921]: E0527 17:02:03.464783 2921 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.21:6443: connect: connection refused" logger="UnhandledError"
May 27 17:02:03.478649 kubelet[2921]: I0527 17:02:03.478597 2921 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 27 17:02:03.478649 kubelet[2921]: I0527 17:02:03.478635 2921 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 27 17:02:03.478811 kubelet[2921]: I0527 17:02:03.478671 2921 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:02:03.481637 kubelet[2921]: I0527 17:02:03.481563 2921 policy_none.go:49] "None policy: Start"
May 27 17:02:03.483571 kubelet[2921]: I0527 17:02:03.483537 2921 memory_manager.go:170] "Starting memorymanager" policy="None"
May 27 17:02:03.483835 kubelet[2921]: I0527 17:02:03.483813 2921 state_mem.go:35] "Initializing new in-memory state store"
May 27 17:02:03.495929 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 27 17:02:03.507792 kubelet[2921]: E0527 17:02:03.507694 2921 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-21\" not found"
May 27 17:02:03.516684 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 27 17:02:03.526677 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 27 17:02:03.542002 kubelet[2921]: I0527 17:02:03.541960 2921 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 27 17:02:03.543079 kubelet[2921]: I0527 17:02:03.542516 2921 eviction_manager.go:189] "Eviction manager: starting control loop"
May 27 17:02:03.543079 kubelet[2921]: I0527 17:02:03.542547 2921 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 27 17:02:03.543079 kubelet[2921]: I0527 17:02:03.542926 2921 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 27 17:02:03.550776 kubelet[2921]: E0527 17:02:03.549360 2921 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-21\" not found"
May 27 17:02:03.576434 systemd[1]: Created slice kubepods-burstable-pod49534ef854888bbb501a0c8b70022b99.slice - libcontainer container kubepods-burstable-pod49534ef854888bbb501a0c8b70022b99.slice.
May 27 17:02:03.609981 kubelet[2921]: I0527 17:02:03.608353 2921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49534ef854888bbb501a0c8b70022b99-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-21\" (UID: \"49534ef854888bbb501a0c8b70022b99\") " pod="kube-system/kube-controller-manager-ip-172-31-22-21"
May 27 17:02:03.609981 kubelet[2921]: I0527 17:02:03.608425 2921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49534ef854888bbb501a0c8b70022b99-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-21\" (UID: \"49534ef854888bbb501a0c8b70022b99\") " pod="kube-system/kube-controller-manager-ip-172-31-22-21"
May 27 17:02:03.609981 kubelet[2921]: I0527 17:02:03.608471 2921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49534ef854888bbb501a0c8b70022b99-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-21\" (UID: \"49534ef854888bbb501a0c8b70022b99\") " pod="kube-system/kube-controller-manager-ip-172-31-22-21"
May 27 17:02:03.609981 kubelet[2921]: I0527 17:02:03.608510 2921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df6d36775496f66c4b98df7c7b7d7dca-ca-certs\") pod \"kube-apiserver-ip-172-31-22-21\" (UID: \"df6d36775496f66c4b98df7c7b7d7dca\") " pod="kube-system/kube-apiserver-ip-172-31-22-21"
May 27 17:02:03.609981 kubelet[2921]: I0527 17:02:03.608550 2921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49534ef854888bbb501a0c8b70022b99-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-21\" (UID: \"49534ef854888bbb501a0c8b70022b99\") " pod="kube-system/kube-controller-manager-ip-172-31-22-21"
May 27 17:02:03.609693 systemd[1]: Created slice kubepods-burstable-poda0ff6dca7f1fa07103fa95f2936d7e34.slice - libcontainer container kubepods-burstable-poda0ff6dca7f1fa07103fa95f2936d7e34.slice.
May 27 17:02:03.610450 kubelet[2921]: I0527 17:02:03.608585 2921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49534ef854888bbb501a0c8b70022b99-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-21\" (UID: \"49534ef854888bbb501a0c8b70022b99\") " pod="kube-system/kube-controller-manager-ip-172-31-22-21"
May 27 17:02:03.610450 kubelet[2921]: I0527 17:02:03.608623 2921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df6d36775496f66c4b98df7c7b7d7dca-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-21\" (UID: \"df6d36775496f66c4b98df7c7b7d7dca\") " pod="kube-system/kube-apiserver-ip-172-31-22-21"
May 27 17:02:03.610450 kubelet[2921]: I0527 17:02:03.608659 2921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a0ff6dca7f1fa07103fa95f2936d7e34-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-21\" (UID: \"a0ff6dca7f1fa07103fa95f2936d7e34\") " pod="kube-system/kube-scheduler-ip-172-31-22-21"
May 27 17:02:03.610450 kubelet[2921]: I0527 17:02:03.608697 2921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df6d36775496f66c4b98df7c7b7d7dca-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-21\" (UID: \"df6d36775496f66c4b98df7c7b7d7dca\") " pod="kube-system/kube-apiserver-ip-172-31-22-21"
May 27 17:02:03.618637 kubelet[2921]: E0527 17:02:03.618562 2921 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-21?timeout=10s\": dial tcp 172.31.22.21:6443: connect: connection refused" interval="400ms"
May 27 17:02:03.630597 systemd[1]: Created slice kubepods-burstable-poddf6d36775496f66c4b98df7c7b7d7dca.slice - libcontainer container kubepods-burstable-poddf6d36775496f66c4b98df7c7b7d7dca.slice.
May 27 17:02:03.646326 kubelet[2921]: I0527 17:02:03.646275 2921 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-21"
May 27 17:02:03.647005 kubelet[2921]: E0527 17:02:03.646890 2921 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.21:6443/api/v1/nodes\": dial tcp 172.31.22.21:6443: connect: connection refused" node="ip-172-31-22-21"
May 27 17:02:03.850149 kubelet[2921]: I0527 17:02:03.850004 2921 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-21"
May 27 17:02:03.851296 kubelet[2921]: E0527 17:02:03.851243 2921 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.21:6443/api/v1/nodes\": dial tcp 172.31.22.21:6443: connect: connection refused" node="ip-172-31-22-21"
May 27 17:02:03.899312 containerd[2017]: time="2025-05-27T17:02:03.898950711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-21,Uid:49534ef854888bbb501a0c8b70022b99,Namespace:kube-system,Attempt:0,}"
May 27 17:02:03.927059 containerd[2017]: time="2025-05-27T17:02:03.926964243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-21,Uid:a0ff6dca7f1fa07103fa95f2936d7e34,Namespace:kube-system,Attempt:0,}"
May 27 17:02:03.939083 containerd[2017]: time="2025-05-27T17:02:03.938688243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-21,Uid:df6d36775496f66c4b98df7c7b7d7dca,Namespace:kube-system,Attempt:0,}"
May 27 17:02:03.940327 containerd[2017]: time="2025-05-27T17:02:03.940116939Z" level=info msg="connecting to shim 652d42bfb2324f54f72d1ef6bb842a829086dff391dd4e5f0d6814d9b69c2ea0" address="unix:///run/containerd/s/ff8f930e73ecb67b533a72ae893c16db8e7da71f814f15c030c20ebe9f3cc147" namespace=k8s.io protocol=ttrpc version=3
May 27 17:02:04.003814 containerd[2017]: time="2025-05-27T17:02:04.003747347Z" level=info msg="connecting to shim d5cb6d01f92e29a1ca0aa29b6508a41731c4a033848579e098f52063c692b636" address="unix:///run/containerd/s/86eb21ab8ca40da02304cc3bea742cce38c42b170af86bec858f0d4b1ec37cb7" namespace=k8s.io protocol=ttrpc version=3
May 27 17:02:04.011332 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 27 17:02:04.019481 kubelet[2921]: E0527 17:02:04.019422 2921 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-21?timeout=10s\": dial tcp 172.31.22.21:6443: connect: connection refused" interval="800ms"
May 27 17:02:04.025229 containerd[2017]: time="2025-05-27T17:02:04.025166015Z" level=info msg="connecting to shim e2e3b6ab0c6bdab0b1310b69b5fe4566ba6a04daec36aa75699437a9c6b2550e" address="unix:///run/containerd/s/b9da6331193ec14e41193eb007c50e0f1338179ad35de9b8396a90a5399ca2fb" namespace=k8s.io protocol=ttrpc version=3
May 27 17:02:04.037398 systemd[1]: Started cri-containerd-652d42bfb2324f54f72d1ef6bb842a829086dff391dd4e5f0d6814d9b69c2ea0.scope - libcontainer container 652d42bfb2324f54f72d1ef6bb842a829086dff391dd4e5f0d6814d9b69c2ea0.
May 27 17:02:04.103481 systemd[1]: Started cri-containerd-d5cb6d01f92e29a1ca0aa29b6508a41731c4a033848579e098f52063c692b636.scope - libcontainer container d5cb6d01f92e29a1ca0aa29b6508a41731c4a033848579e098f52063c692b636.
May 27 17:02:04.119470 systemd[1]: Started cri-containerd-e2e3b6ab0c6bdab0b1310b69b5fe4566ba6a04daec36aa75699437a9c6b2550e.scope - libcontainer container e2e3b6ab0c6bdab0b1310b69b5fe4566ba6a04daec36aa75699437a9c6b2550e. May 27 17:02:04.197711 containerd[2017]: time="2025-05-27T17:02:04.197641500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-21,Uid:49534ef854888bbb501a0c8b70022b99,Namespace:kube-system,Attempt:0,} returns sandbox id \"652d42bfb2324f54f72d1ef6bb842a829086dff391dd4e5f0d6814d9b69c2ea0\"" May 27 17:02:04.208884 containerd[2017]: time="2025-05-27T17:02:04.208574580Z" level=info msg="CreateContainer within sandbox \"652d42bfb2324f54f72d1ef6bb842a829086dff391dd4e5f0d6814d9b69c2ea0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 17:02:04.237856 containerd[2017]: time="2025-05-27T17:02:04.237776088Z" level=info msg="Container f6a5db09dc2a8c0543531b836ba428ea351fd70db9e286adf9256ad2778abbe0: CDI devices from CRI Config.CDIDevices: []" May 27 17:02:04.255803 kubelet[2921]: I0527 17:02:04.255686 2921 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-21" May 27 17:02:04.256696 kubelet[2921]: E0527 17:02:04.256623 2921 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.21:6443/api/v1/nodes\": dial tcp 172.31.22.21:6443: connect: connection refused" node="ip-172-31-22-21" May 27 17:02:04.261662 containerd[2017]: time="2025-05-27T17:02:04.261612612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-21,Uid:df6d36775496f66c4b98df7c7b7d7dca,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2e3b6ab0c6bdab0b1310b69b5fe4566ba6a04daec36aa75699437a9c6b2550e\"" May 27 17:02:04.262581 containerd[2017]: time="2025-05-27T17:02:04.262253880Z" level=info msg="CreateContainer within sandbox \"652d42bfb2324f54f72d1ef6bb842a829086dff391dd4e5f0d6814d9b69c2ea0\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f6a5db09dc2a8c0543531b836ba428ea351fd70db9e286adf9256ad2778abbe0\"" May 27 17:02:04.263925 kubelet[2921]: W0527 17:02:04.263713 2921 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.21:6443: connect: connection refused May 27 17:02:04.263925 kubelet[2921]: E0527 17:02:04.263832 2921 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.21:6443: connect: connection refused" logger="UnhandledError" May 27 17:02:04.266094 containerd[2017]: time="2025-05-27T17:02:04.265518552Z" level=info msg="StartContainer for \"f6a5db09dc2a8c0543531b836ba428ea351fd70db9e286adf9256ad2778abbe0\"" May 27 17:02:04.270369 containerd[2017]: time="2025-05-27T17:02:04.270313692Z" level=info msg="CreateContainer within sandbox \"e2e3b6ab0c6bdab0b1310b69b5fe4566ba6a04daec36aa75699437a9c6b2550e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 17:02:04.276003 kubelet[2921]: W0527 17:02:04.275906 2921 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-21&limit=500&resourceVersion=0": dial tcp 172.31.22.21:6443: connect: connection refused May 27 17:02:04.276003 kubelet[2921]: E0527 17:02:04.276016 2921 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-21&limit=500&resourceVersion=0\": dial tcp 
172.31.22.21:6443: connect: connection refused" logger="UnhandledError" May 27 17:02:04.276877 containerd[2017]: time="2025-05-27T17:02:04.276811200Z" level=info msg="connecting to shim f6a5db09dc2a8c0543531b836ba428ea351fd70db9e286adf9256ad2778abbe0" address="unix:///run/containerd/s/ff8f930e73ecb67b533a72ae893c16db8e7da71f814f15c030c20ebe9f3cc147" protocol=ttrpc version=3 May 27 17:02:04.284152 containerd[2017]: time="2025-05-27T17:02:04.284076481Z" level=info msg="Container 20759dab029893812c04d59b1de06553c2b031d45686fed509429519d0dd678a: CDI devices from CRI Config.CDIDevices: []" May 27 17:02:04.303510 containerd[2017]: time="2025-05-27T17:02:04.301023109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-21,Uid:a0ff6dca7f1fa07103fa95f2936d7e34,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5cb6d01f92e29a1ca0aa29b6508a41731c4a033848579e098f52063c692b636\"" May 27 17:02:04.303704 containerd[2017]: time="2025-05-27T17:02:04.303370333Z" level=info msg="CreateContainer within sandbox \"e2e3b6ab0c6bdab0b1310b69b5fe4566ba6a04daec36aa75699437a9c6b2550e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"20759dab029893812c04d59b1de06553c2b031d45686fed509429519d0dd678a\"" May 27 17:02:04.304849 containerd[2017]: time="2025-05-27T17:02:04.304474297Z" level=info msg="StartContainer for \"20759dab029893812c04d59b1de06553c2b031d45686fed509429519d0dd678a\"" May 27 17:02:04.307201 containerd[2017]: time="2025-05-27T17:02:04.307114705Z" level=info msg="connecting to shim 20759dab029893812c04d59b1de06553c2b031d45686fed509429519d0dd678a" address="unix:///run/containerd/s/b9da6331193ec14e41193eb007c50e0f1338179ad35de9b8396a90a5399ca2fb" protocol=ttrpc version=3 May 27 17:02:04.315088 containerd[2017]: time="2025-05-27T17:02:04.314821969Z" level=info msg="CreateContainer within sandbox \"d5cb6d01f92e29a1ca0aa29b6508a41731c4a033848579e098f52063c692b636\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 17:02:04.318495 systemd[1]: Started cri-containerd-f6a5db09dc2a8c0543531b836ba428ea351fd70db9e286adf9256ad2778abbe0.scope - libcontainer container f6a5db09dc2a8c0543531b836ba428ea351fd70db9e286adf9256ad2778abbe0. May 27 17:02:04.338343 containerd[2017]: time="2025-05-27T17:02:04.338290237Z" level=info msg="Container d3b6165e4cf21801003784b39546f631b2728f0aaaae2d02da62158ab5502a30: CDI devices from CRI Config.CDIDevices: []" May 27 17:02:04.356775 containerd[2017]: time="2025-05-27T17:02:04.356603305Z" level=info msg="CreateContainer within sandbox \"d5cb6d01f92e29a1ca0aa29b6508a41731c4a033848579e098f52063c692b636\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d3b6165e4cf21801003784b39546f631b2728f0aaaae2d02da62158ab5502a30\"" May 27 17:02:04.358394 containerd[2017]: time="2025-05-27T17:02:04.358323205Z" level=info msg="StartContainer for \"d3b6165e4cf21801003784b39546f631b2728f0aaaae2d02da62158ab5502a30\"" May 27 17:02:04.360843 containerd[2017]: time="2025-05-27T17:02:04.360787141Z" level=info msg="connecting to shim d3b6165e4cf21801003784b39546f631b2728f0aaaae2d02da62158ab5502a30" address="unix:///run/containerd/s/86eb21ab8ca40da02304cc3bea742cce38c42b170af86bec858f0d4b1ec37cb7" protocol=ttrpc version=3 May 27 17:02:04.361364 systemd[1]: Started cri-containerd-20759dab029893812c04d59b1de06553c2b031d45686fed509429519d0dd678a.scope - libcontainer container 20759dab029893812c04d59b1de06553c2b031d45686fed509429519d0dd678a. May 27 17:02:04.426386 systemd[1]: Started cri-containerd-d3b6165e4cf21801003784b39546f631b2728f0aaaae2d02da62158ab5502a30.scope - libcontainer container d3b6165e4cf21801003784b39546f631b2728f0aaaae2d02da62158ab5502a30. 
May 27 17:02:04.493126 containerd[2017]: time="2025-05-27T17:02:04.493074302Z" level=info msg="StartContainer for \"f6a5db09dc2a8c0543531b836ba428ea351fd70db9e286adf9256ad2778abbe0\" returns successfully" May 27 17:02:04.539196 containerd[2017]: time="2025-05-27T17:02:04.539124206Z" level=info msg="StartContainer for \"20759dab029893812c04d59b1de06553c2b031d45686fed509429519d0dd678a\" returns successfully" May 27 17:02:04.621182 containerd[2017]: time="2025-05-27T17:02:04.620436950Z" level=info msg="StartContainer for \"d3b6165e4cf21801003784b39546f631b2728f0aaaae2d02da62158ab5502a30\" returns successfully" May 27 17:02:05.061416 kubelet[2921]: I0527 17:02:05.061368 2921 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-21" May 27 17:02:07.997333 kubelet[2921]: E0527 17:02:07.997259 2921 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-21\" not found" node="ip-172-31-22-21" May 27 17:02:08.051114 kubelet[2921]: I0527 17:02:08.049700 2921 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-22-21" May 27 17:02:08.388072 kubelet[2921]: I0527 17:02:08.387958 2921 apiserver.go:52] "Watching apiserver" May 27 17:02:08.408410 kubelet[2921]: I0527 17:02:08.408348 2921 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 27 17:02:10.092150 systemd[1]: Reload requested from client PID 3196 ('systemctl') (unit session-7.scope)... May 27 17:02:10.092183 systemd[1]: Reloading... May 27 17:02:10.346116 zram_generator::config[3252]: No configuration found. May 27 17:02:10.546441 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:02:10.909655 systemd[1]: Reloading finished in 816 ms. 
May 27 17:02:10.963675 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:02:10.981880 systemd[1]: kubelet.service: Deactivated successfully. May 27 17:02:10.984220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:02:10.984519 systemd[1]: kubelet.service: Consumed 2.530s CPU time, 127.6M memory peak. May 27 17:02:10.988960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:02:11.348599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:02:11.363670 (kubelet)[3300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:02:11.456387 kubelet[3300]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:02:11.456387 kubelet[3300]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 27 17:02:11.456387 kubelet[3300]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 17:02:11.458157 kubelet[3300]: I0527 17:02:11.456528 3300 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:02:11.469018 kubelet[3300]: I0527 17:02:11.468942 3300 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 27 17:02:11.469018 kubelet[3300]: I0527 17:02:11.469003 3300 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:02:11.469681 kubelet[3300]: I0527 17:02:11.469615 3300 server.go:934] "Client rotation is on, will bootstrap in background" May 27 17:02:11.472852 kubelet[3300]: I0527 17:02:11.472789 3300 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 27 17:02:11.477672 kubelet[3300]: I0527 17:02:11.477097 3300 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:02:11.494796 kubelet[3300]: I0527 17:02:11.494729 3300 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 17:02:11.505903 kubelet[3300]: I0527 17:02:11.505821 3300 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 17:02:11.506216 kubelet[3300]: I0527 17:02:11.506166 3300 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 27 17:02:11.506517 kubelet[3300]: I0527 17:02:11.506451 3300 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:02:11.507074 kubelet[3300]: I0527 17:02:11.506513 3300 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-21","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":2} May 27 17:02:11.510084 kubelet[3300]: I0527 17:02:11.509350 3300 topology_manager.go:138] "Creating topology manager with none policy" May 27 17:02:11.510084 kubelet[3300]: I0527 17:02:11.509416 3300 container_manager_linux.go:300] "Creating device plugin manager" May 27 17:02:11.510084 kubelet[3300]: I0527 17:02:11.509516 3300 state_mem.go:36] "Initialized new in-memory state store" May 27 17:02:11.510084 kubelet[3300]: I0527 17:02:11.509739 3300 kubelet.go:408] "Attempting to sync node with API server" May 27 17:02:11.510084 kubelet[3300]: I0527 17:02:11.509767 3300 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:02:11.510084 kubelet[3300]: I0527 17:02:11.509847 3300 kubelet.go:314] "Adding apiserver pod source" May 27 17:02:11.510084 kubelet[3300]: I0527 17:02:11.509879 3300 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:02:11.523878 kubelet[3300]: I0527 17:02:11.523797 3300 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:02:11.526791 kubelet[3300]: I0527 17:02:11.526726 3300 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 17:02:11.527726 kubelet[3300]: I0527 17:02:11.527675 3300 server.go:1274] "Started kubelet" May 27 17:02:11.542111 kubelet[3300]: I0527 17:02:11.540955 3300 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:02:11.551074 sudo[3314]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 27 17:02:11.552573 sudo[3314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 27 17:02:11.554102 kubelet[3300]: I0527 17:02:11.553798 3300 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:02:11.564490 kubelet[3300]: I0527 17:02:11.564427 3300 server.go:449] "Adding debug handlers to kubelet 
server" May 27 17:02:11.567287 kubelet[3300]: I0527 17:02:11.567180 3300 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:02:11.567869 kubelet[3300]: I0527 17:02:11.567798 3300 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:02:11.571395 kubelet[3300]: I0527 17:02:11.569364 3300 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:02:11.577646 kubelet[3300]: I0527 17:02:11.577565 3300 volume_manager.go:289] "Starting Kubelet Volume Manager" May 27 17:02:11.578023 kubelet[3300]: E0527 17:02:11.577966 3300 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-21\" not found" May 27 17:02:11.583024 kubelet[3300]: I0527 17:02:11.582954 3300 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 27 17:02:11.597265 kubelet[3300]: I0527 17:02:11.596962 3300 reconciler.go:26] "Reconciler: start to sync state" May 27 17:02:11.639484 kubelet[3300]: I0527 17:02:11.639332 3300 factory.go:221] Registration of the systemd container factory successfully May 27 17:02:11.640610 kubelet[3300]: I0527 17:02:11.639567 3300 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:02:11.645297 kubelet[3300]: I0527 17:02:11.645219 3300 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 17:02:11.655280 kubelet[3300]: I0527 17:02:11.655213 3300 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 17:02:11.655280 kubelet[3300]: I0527 17:02:11.655269 3300 status_manager.go:217] "Starting to sync pod status with apiserver" May 27 17:02:11.655485 kubelet[3300]: I0527 17:02:11.655304 3300 kubelet.go:2321] "Starting kubelet main sync loop" May 27 17:02:11.655485 kubelet[3300]: E0527 17:02:11.655391 3300 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:02:11.685010 kubelet[3300]: I0527 17:02:11.682715 3300 factory.go:221] Registration of the containerd container factory successfully May 27 17:02:11.710757 kubelet[3300]: E0527 17:02:11.709958 3300 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:02:11.761117 kubelet[3300]: E0527 17:02:11.760431 3300 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 17:02:11.950966 kubelet[3300]: I0527 17:02:11.950421 3300 cpu_manager.go:214] "Starting CPU manager" policy="none" May 27 17:02:11.952518 kubelet[3300]: I0527 17:02:11.951855 3300 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 27 17:02:11.952518 kubelet[3300]: I0527 17:02:11.951924 3300 state_mem.go:36] "Initialized new in-memory state store" May 27 17:02:11.952518 kubelet[3300]: I0527 17:02:11.952432 3300 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:02:11.952518 kubelet[3300]: I0527 17:02:11.952464 3300 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 17:02:11.952518 kubelet[3300]: I0527 17:02:11.952503 3300 policy_none.go:49] "None policy: Start" May 27 17:02:11.957526 kubelet[3300]: I0527 17:02:11.957177 3300 memory_manager.go:170] "Starting memorymanager" policy="None" May 27 17:02:11.957526 kubelet[3300]: I0527 17:02:11.957236 3300 state_mem.go:35] "Initializing new in-memory state store" May 
27 17:02:11.957715 kubelet[3300]: I0527 17:02:11.957538 3300 state_mem.go:75] "Updated machine memory state" May 27 17:02:11.963583 kubelet[3300]: E0527 17:02:11.963235 3300 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 17:02:11.982412 kubelet[3300]: I0527 17:02:11.982353 3300 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 17:02:11.982759 kubelet[3300]: I0527 17:02:11.982694 3300 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:02:11.982893 kubelet[3300]: I0527 17:02:11.982733 3300 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:02:11.997193 kubelet[3300]: I0527 17:02:11.993072 3300 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:02:12.125227 kubelet[3300]: I0527 17:02:12.125146 3300 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-21" May 27 17:02:12.148982 kubelet[3300]: I0527 17:02:12.148522 3300 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-22-21" May 27 17:02:12.149925 kubelet[3300]: I0527 17:02:12.149491 3300 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-22-21" May 27 17:02:12.403474 kubelet[3300]: I0527 17:02:12.403005 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49534ef854888bbb501a0c8b70022b99-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-21\" (UID: \"49534ef854888bbb501a0c8b70022b99\") " pod="kube-system/kube-controller-manager-ip-172-31-22-21" May 27 17:02:12.403652 kubelet[3300]: I0527 17:02:12.403547 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/a0ff6dca7f1fa07103fa95f2936d7e34-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-21\" (UID: \"a0ff6dca7f1fa07103fa95f2936d7e34\") " pod="kube-system/kube-scheduler-ip-172-31-22-21" May 27 17:02:12.404063 kubelet[3300]: I0527 17:02:12.403802 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df6d36775496f66c4b98df7c7b7d7dca-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-21\" (UID: \"df6d36775496f66c4b98df7c7b7d7dca\") " pod="kube-system/kube-apiserver-ip-172-31-22-21" May 27 17:02:12.404063 kubelet[3300]: I0527 17:02:12.403885 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df6d36775496f66c4b98df7c7b7d7dca-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-21\" (UID: \"df6d36775496f66c4b98df7c7b7d7dca\") " pod="kube-system/kube-apiserver-ip-172-31-22-21" May 27 17:02:12.404268 kubelet[3300]: I0527 17:02:12.404167 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49534ef854888bbb501a0c8b70022b99-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-21\" (UID: \"49534ef854888bbb501a0c8b70022b99\") " pod="kube-system/kube-controller-manager-ip-172-31-22-21" May 27 17:02:12.405189 kubelet[3300]: I0527 17:02:12.404277 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49534ef854888bbb501a0c8b70022b99-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-21\" (UID: \"49534ef854888bbb501a0c8b70022b99\") " pod="kube-system/kube-controller-manager-ip-172-31-22-21" May 27 17:02:12.405189 kubelet[3300]: I0527 17:02:12.404420 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df6d36775496f66c4b98df7c7b7d7dca-ca-certs\") pod \"kube-apiserver-ip-172-31-22-21\" (UID: \"df6d36775496f66c4b98df7c7b7d7dca\") " pod="kube-system/kube-apiserver-ip-172-31-22-21" May 27 17:02:12.405189 kubelet[3300]: I0527 17:02:12.404636 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49534ef854888bbb501a0c8b70022b99-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-21\" (UID: \"49534ef854888bbb501a0c8b70022b99\") " pod="kube-system/kube-controller-manager-ip-172-31-22-21" May 27 17:02:12.405189 kubelet[3300]: I0527 17:02:12.404831 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49534ef854888bbb501a0c8b70022b99-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-21\" (UID: \"49534ef854888bbb501a0c8b70022b99\") " pod="kube-system/kube-controller-manager-ip-172-31-22-21" May 27 17:02:12.519832 kubelet[3300]: I0527 17:02:12.519684 3300 apiserver.go:52] "Watching apiserver" May 27 17:02:12.584542 kubelet[3300]: I0527 17:02:12.584394 3300 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 27 17:02:12.654385 sudo[3314]: pam_unix(sudo:session): session closed for user root May 27 17:02:12.766619 kubelet[3300]: I0527 17:02:12.766505 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-21" podStartSLOduration=0.766453163 podStartE2EDuration="766.453163ms" podCreationTimestamp="2025-05-27 17:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:02:12.765975803 +0000 UTC m=+1.395485192" watchObservedRunningTime="2025-05-27 17:02:12.766453163 +0000 UTC m=+1.395962552" May 27 
17:02:12.813070 kubelet[3300]: I0527 17:02:12.812567 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-21" podStartSLOduration=0.812542415 podStartE2EDuration="812.542415ms" podCreationTimestamp="2025-05-27 17:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:02:12.809706011 +0000 UTC m=+1.439215388" watchObservedRunningTime="2025-05-27 17:02:12.812542415 +0000 UTC m=+1.442051780" May 27 17:02:12.813534 kubelet[3300]: I0527 17:02:12.813412 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-21" podStartSLOduration=0.813392987 podStartE2EDuration="813.392987ms" podCreationTimestamp="2025-05-27 17:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:02:12.786642371 +0000 UTC m=+1.416151820" watchObservedRunningTime="2025-05-27 17:02:12.813392987 +0000 UTC m=+1.442902364" May 27 17:02:15.623644 sudo[2354]: pam_unix(sudo:session): session closed for user root May 27 17:02:15.646968 sshd[2353]: Connection closed by 139.178.68.195 port 57746 May 27 17:02:15.647781 sshd-session[2351]: pam_unix(sshd:session): session closed for user core May 27 17:02:15.655287 systemd[1]: sshd@6-172.31.22.21:22-139.178.68.195:57746.service: Deactivated successfully. May 27 17:02:15.661004 systemd[1]: session-7.scope: Deactivated successfully. May 27 17:02:15.661701 systemd[1]: session-7.scope: Consumed 11.244s CPU time, 268.5M memory peak. May 27 17:02:15.668355 systemd-logind[1991]: Session 7 logged out. Waiting for processes to exit. May 27 17:02:15.671995 systemd-logind[1991]: Removed session 7. 
May 27 17:02:16.226302 kubelet[3300]: I0527 17:02:16.226253 3300 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 27 17:02:16.227827 kubelet[3300]: I0527 17:02:16.227645 3300 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 27 17:02:16.227919 containerd[2017]: time="2025-05-27T17:02:16.227248716Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 27 17:02:17.128179 systemd[1]: Created slice kubepods-besteffort-pod2c10e185_4125_40d1_93ce_e2946571e393.slice - libcontainer container kubepods-besteffort-pod2c10e185_4125_40d1_93ce_e2946571e393.slice.
May 27 17:02:17.150421 kubelet[3300]: I0527 17:02:17.150364 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2c10e185-4125-40d1-93ce-e2946571e393-kube-proxy\") pod \"kube-proxy-6k5pw\" (UID: \"2c10e185-4125-40d1-93ce-e2946571e393\") " pod="kube-system/kube-proxy-6k5pw"
May 27 17:02:17.150558 kubelet[3300]: I0527 17:02:17.150442 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c10e185-4125-40d1-93ce-e2946571e393-xtables-lock\") pod \"kube-proxy-6k5pw\" (UID: \"2c10e185-4125-40d1-93ce-e2946571e393\") " pod="kube-system/kube-proxy-6k5pw"
May 27 17:02:17.150558 kubelet[3300]: I0527 17:02:17.150500 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c10e185-4125-40d1-93ce-e2946571e393-lib-modules\") pod \"kube-proxy-6k5pw\" (UID: \"2c10e185-4125-40d1-93ce-e2946571e393\") " pod="kube-system/kube-proxy-6k5pw"
May 27 17:02:17.150558 kubelet[3300]: I0527 17:02:17.150548 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9gsc\" (UniqueName: \"kubernetes.io/projected/2c10e185-4125-40d1-93ce-e2946571e393-kube-api-access-b9gsc\") pod \"kube-proxy-6k5pw\" (UID: \"2c10e185-4125-40d1-93ce-e2946571e393\") " pod="kube-system/kube-proxy-6k5pw"
May 27 17:02:17.189999 systemd[1]: Created slice kubepods-burstable-pod68261b05_f4d8_439a_b6a7_0fb0c5b4299f.slice - libcontainer container kubepods-burstable-pod68261b05_f4d8_439a_b6a7_0fb0c5b4299f.slice.
May 27 17:02:17.250977 kubelet[3300]: I0527 17:02:17.250903 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cw6t\" (UniqueName: \"kubernetes.io/projected/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-kube-api-access-4cw6t\") pod \"cilium-dvd78\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") " pod="kube-system/cilium-dvd78"
May 27 17:02:17.250977 kubelet[3300]: I0527 17:02:17.250981 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-hostproc\") pod \"cilium-dvd78\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") " pod="kube-system/cilium-dvd78"
May 27 17:02:17.251662 kubelet[3300]: I0527 17:02:17.251065 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cni-path\") pod \"cilium-dvd78\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") " pod="kube-system/cilium-dvd78"
May 27 17:02:17.251662 kubelet[3300]: I0527 17:02:17.251115 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-clustermesh-secrets\") pod \"cilium-dvd78\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") " pod="kube-system/cilium-dvd78"
May 27 17:02:17.251662 kubelet[3300]: I0527 17:02:17.251153 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-host-proc-sys-kernel\") pod \"cilium-dvd78\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") " pod="kube-system/cilium-dvd78"
May 27 17:02:17.251662 kubelet[3300]: I0527 17:02:17.251223 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-etc-cni-netd\") pod \"cilium-dvd78\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") " pod="kube-system/cilium-dvd78"
May 27 17:02:17.251662 kubelet[3300]: I0527 17:02:17.251263 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cilium-cgroup\") pod \"cilium-dvd78\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") " pod="kube-system/cilium-dvd78"
May 27 17:02:17.251662 kubelet[3300]: I0527 17:02:17.251299 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-xtables-lock\") pod \"cilium-dvd78\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") " pod="kube-system/cilium-dvd78"
May 27 17:02:17.252091 kubelet[3300]: I0527 17:02:17.251341 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-host-proc-sys-net\") pod \"cilium-dvd78\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") " pod="kube-system/cilium-dvd78"
May 27 17:02:17.252091 kubelet[3300]: I0527 17:02:17.251377 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cilium-run\") pod \"cilium-dvd78\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") " pod="kube-system/cilium-dvd78"
May 27 17:02:17.252091 kubelet[3300]: I0527 17:02:17.251414 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cilium-config-path\") pod \"cilium-dvd78\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") " pod="kube-system/cilium-dvd78"
May 27 17:02:17.252091 kubelet[3300]: I0527 17:02:17.251493 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-bpf-maps\") pod \"cilium-dvd78\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") " pod="kube-system/cilium-dvd78"
May 27 17:02:17.252091 kubelet[3300]: I0527 17:02:17.251529 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-lib-modules\") pod \"cilium-dvd78\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") " pod="kube-system/cilium-dvd78"
May 27 17:02:17.252091 kubelet[3300]: I0527 17:02:17.251571 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-hubble-tls\") pod \"cilium-dvd78\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") " pod="kube-system/cilium-dvd78"
May 27 17:02:17.268252 update_engine[1993]: I20250527 17:02:17.268154 1993 update_attempter.cc:509] Updating boot flags...
May 27 17:02:17.320810 kubelet[3300]: E0527 17:02:17.320302 3300 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
May 27 17:02:17.320810 kubelet[3300]: E0527 17:02:17.320357 3300 projected.go:194] Error preparing data for projected volume kube-api-access-b9gsc for pod kube-system/kube-proxy-6k5pw: configmap "kube-root-ca.crt" not found
May 27 17:02:17.320810 kubelet[3300]: E0527 17:02:17.320465 3300 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2c10e185-4125-40d1-93ce-e2946571e393-kube-api-access-b9gsc podName:2c10e185-4125-40d1-93ce-e2946571e393 nodeName:}" failed. No retries permitted until 2025-05-27 17:02:17.820428993 +0000 UTC m=+6.449938358 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b9gsc" (UniqueName: "kubernetes.io/projected/2c10e185-4125-40d1-93ce-e2946571e393-kube-api-access-b9gsc") pod "kube-proxy-6k5pw" (UID: "2c10e185-4125-40d1-93ce-e2946571e393") : configmap "kube-root-ca.crt" not found
May 27 17:02:17.706235 systemd[1]: Created slice kubepods-besteffort-pod42aec18f_1188_4e9c_a224_02d3af835120.slice - libcontainer container kubepods-besteffort-pod42aec18f_1188_4e9c_a224_02d3af835120.slice.
May 27 17:02:17.756445 kubelet[3300]: I0527 17:02:17.755897 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42aec18f-1188-4e9c-a224-02d3af835120-cilium-config-path\") pod \"cilium-operator-5d85765b45-nm7md\" (UID: \"42aec18f-1188-4e9c-a224-02d3af835120\") " pod="kube-system/cilium-operator-5d85765b45-nm7md"
May 27 17:02:17.757291 kubelet[3300]: I0527 17:02:17.756988 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77v4x\" (UniqueName: \"kubernetes.io/projected/42aec18f-1188-4e9c-a224-02d3af835120-kube-api-access-77v4x\") pod \"cilium-operator-5d85765b45-nm7md\" (UID: \"42aec18f-1188-4e9c-a224-02d3af835120\") " pod="kube-system/cilium-operator-5d85765b45-nm7md"
May 27 17:02:17.807778 containerd[2017]: time="2025-05-27T17:02:17.806134480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dvd78,Uid:68261b05-f4d8-439a-b6a7-0fb0c5b4299f,Namespace:kube-system,Attempt:0,}"
May 27 17:02:17.945230 containerd[2017]: time="2025-05-27T17:02:17.942951256Z" level=info msg="connecting to shim 690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea" address="unix:///run/containerd/s/6c83230c109150034eaa62a2625eed098043054928d5f2cd1f8b408613619b95" namespace=k8s.io protocol=ttrpc version=3
May 27 17:02:18.030460 containerd[2017]: time="2025-05-27T17:02:18.030009625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-nm7md,Uid:42aec18f-1188-4e9c-a224-02d3af835120,Namespace:kube-system,Attempt:0,}"
May 27 17:02:18.066629 containerd[2017]: time="2025-05-27T17:02:18.066201685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6k5pw,Uid:2c10e185-4125-40d1-93ce-e2946571e393,Namespace:kube-system,Attempt:0,}"
May 27 17:02:18.116437 systemd[1]: Started cri-containerd-690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea.scope - libcontainer container 690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea.
May 27 17:02:18.145874 containerd[2017]: time="2025-05-27T17:02:18.145012573Z" level=info msg="connecting to shim 1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03" address="unix:///run/containerd/s/e1637d15f892eb382ecc4e27597a6eae0e9b6ff0996118953fbb0d55c094ef0b" namespace=k8s.io protocol=ttrpc version=3
May 27 17:02:18.258750 containerd[2017]: time="2025-05-27T17:02:18.258670874Z" level=info msg="connecting to shim f9e8b0bc61e7ef5d9f7a026e967bc880fce96f7ef5457aede6bb095ab67195f1" address="unix:///run/containerd/s/2eefb7965d151469ac75906fb1387fd5d30422d29ea9a604df6c4149626e9256" namespace=k8s.io protocol=ttrpc version=3
May 27 17:02:18.332078 containerd[2017]: time="2025-05-27T17:02:18.328683722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dvd78,Uid:68261b05-f4d8-439a-b6a7-0fb0c5b4299f,Namespace:kube-system,Attempt:0,} returns sandbox id \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\""
May 27 17:02:18.342743 systemd[1]: Started cri-containerd-1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03.scope - libcontainer container 1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03.
May 27 17:02:18.356915 containerd[2017]: time="2025-05-27T17:02:18.356294078Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 27 17:02:18.632214 containerd[2017]: time="2025-05-27T17:02:18.632024944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-nm7md,Uid:42aec18f-1188-4e9c-a224-02d3af835120,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\""
May 27 17:02:18.712443 systemd[1]: Started cri-containerd-f9e8b0bc61e7ef5d9f7a026e967bc880fce96f7ef5457aede6bb095ab67195f1.scope - libcontainer container f9e8b0bc61e7ef5d9f7a026e967bc880fce96f7ef5457aede6bb095ab67195f1.
May 27 17:02:18.778133 containerd[2017]: time="2025-05-27T17:02:18.777754817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6k5pw,Uid:2c10e185-4125-40d1-93ce-e2946571e393,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9e8b0bc61e7ef5d9f7a026e967bc880fce96f7ef5457aede6bb095ab67195f1\""
May 27 17:02:18.787202 containerd[2017]: time="2025-05-27T17:02:18.787111865Z" level=info msg="CreateContainer within sandbox \"f9e8b0bc61e7ef5d9f7a026e967bc880fce96f7ef5457aede6bb095ab67195f1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 27 17:02:18.807358 containerd[2017]: time="2025-05-27T17:02:18.807276749Z" level=info msg="Container 93ac4dd486c9a0f4508331feed42f13eb6077677c79702200f35cab5d6c510ea: CDI devices from CRI Config.CDIDevices: []"
May 27 17:02:18.828521 containerd[2017]: time="2025-05-27T17:02:18.828371981Z" level=info msg="CreateContainer within sandbox \"f9e8b0bc61e7ef5d9f7a026e967bc880fce96f7ef5457aede6bb095ab67195f1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"93ac4dd486c9a0f4508331feed42f13eb6077677c79702200f35cab5d6c510ea\""
May 27 17:02:18.830766 containerd[2017]: time="2025-05-27T17:02:18.830698037Z" level=info msg="StartContainer for \"93ac4dd486c9a0f4508331feed42f13eb6077677c79702200f35cab5d6c510ea\""
May 27 17:02:18.835029 containerd[2017]: time="2025-05-27T17:02:18.834962945Z" level=info msg="connecting to shim 93ac4dd486c9a0f4508331feed42f13eb6077677c79702200f35cab5d6c510ea" address="unix:///run/containerd/s/2eefb7965d151469ac75906fb1387fd5d30422d29ea9a604df6c4149626e9256" protocol=ttrpc version=3
May 27 17:02:18.885389 systemd[1]: Started cri-containerd-93ac4dd486c9a0f4508331feed42f13eb6077677c79702200f35cab5d6c510ea.scope - libcontainer container 93ac4dd486c9a0f4508331feed42f13eb6077677c79702200f35cab5d6c510ea.
May 27 17:02:18.971326 containerd[2017]: time="2025-05-27T17:02:18.971204381Z" level=info msg="StartContainer for \"93ac4dd486c9a0f4508331feed42f13eb6077677c79702200f35cab5d6c510ea\" returns successfully"
May 27 17:02:19.851913 kubelet[3300]: I0527 17:02:19.851242 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6k5pw" podStartSLOduration=2.851221134 podStartE2EDuration="2.851221134s" podCreationTimestamp="2025-05-27 17:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:02:19.851105814 +0000 UTC m=+8.480615203" watchObservedRunningTime="2025-05-27 17:02:19.851221134 +0000 UTC m=+8.480730499"
May 27 17:02:25.667961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617674567.mount: Deactivated successfully.
May 27 17:02:28.379088 containerd[2017]: time="2025-05-27T17:02:28.378526716Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:02:28.381524 containerd[2017]: time="2025-05-27T17:02:28.381449964Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 27 17:02:28.382596 containerd[2017]: time="2025-05-27T17:02:28.382541076Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:02:28.387592 containerd[2017]: time="2025-05-27T17:02:28.387509376Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.02628619s"
May 27 17:02:28.387891 containerd[2017]: time="2025-05-27T17:02:28.387748944Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 27 17:02:28.390073 containerd[2017]: time="2025-05-27T17:02:28.389983836Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 27 17:02:28.395778 containerd[2017]: time="2025-05-27T17:02:28.395643384Z" level=info msg="CreateContainer within sandbox \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 27 17:02:28.417244 containerd[2017]: time="2025-05-27T17:02:28.414585168Z" level=info msg="Container 5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7: CDI devices from CRI Config.CDIDevices: []"
May 27 17:02:28.421867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488267539.mount: Deactivated successfully.
May 27 17:02:28.430688 containerd[2017]: time="2025-05-27T17:02:28.430618980Z" level=info msg="CreateContainer within sandbox \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7\""
May 27 17:02:28.431619 containerd[2017]: time="2025-05-27T17:02:28.431309004Z" level=info msg="StartContainer for \"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7\""
May 27 17:02:28.434727 containerd[2017]: time="2025-05-27T17:02:28.434495496Z" level=info msg="connecting to shim 5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7" address="unix:///run/containerd/s/6c83230c109150034eaa62a2625eed098043054928d5f2cd1f8b408613619b95" protocol=ttrpc version=3
May 27 17:02:28.483429 systemd[1]: Started cri-containerd-5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7.scope - libcontainer container 5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7.
May 27 17:02:28.553418 containerd[2017]: time="2025-05-27T17:02:28.553349557Z" level=info msg="StartContainer for \"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7\" returns successfully"
May 27 17:02:28.576402 systemd[1]: cri-containerd-5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7.scope: Deactivated successfully.
May 27 17:02:28.585356 containerd[2017]: time="2025-05-27T17:02:28.585177589Z" level=info msg="received exit event container_id:\"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7\" id:\"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7\" pid:3902 exited_at:{seconds:1748365348 nanos:583439833}"
May 27 17:02:28.585547 containerd[2017]: time="2025-05-27T17:02:28.585292933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7\" id:\"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7\" pid:3902 exited_at:{seconds:1748365348 nanos:583439833}"
May 27 17:02:28.626553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7-rootfs.mount: Deactivated successfully.
May 27 17:02:29.900834 containerd[2017]: time="2025-05-27T17:02:29.900693268Z" level=info msg="CreateContainer within sandbox \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 27 17:02:29.929096 containerd[2017]: time="2025-05-27T17:02:29.925337092Z" level=info msg="Container 794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661: CDI devices from CRI Config.CDIDevices: []"
May 27 17:02:29.945090 containerd[2017]: time="2025-05-27T17:02:29.944980768Z" level=info msg="CreateContainer within sandbox \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661\""
May 27 17:02:29.947320 containerd[2017]: time="2025-05-27T17:02:29.947264404Z" level=info msg="StartContainer for \"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661\""
May 27 17:02:29.951703 containerd[2017]: time="2025-05-27T17:02:29.951626716Z" level=info msg="connecting to shim 794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661" address="unix:///run/containerd/s/6c83230c109150034eaa62a2625eed098043054928d5f2cd1f8b408613619b95" protocol=ttrpc version=3
May 27 17:02:29.995374 systemd[1]: Started cri-containerd-794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661.scope - libcontainer container 794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661.
May 27 17:02:30.063305 containerd[2017]: time="2025-05-27T17:02:30.063246685Z" level=info msg="StartContainer for \"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661\" returns successfully"
May 27 17:02:30.085351 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 17:02:30.085840 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 17:02:30.089232 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 27 17:02:30.096561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:02:30.104278 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 17:02:30.105362 systemd[1]: cri-containerd-794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661.scope: Deactivated successfully.
May 27 17:02:30.108832 containerd[2017]: time="2025-05-27T17:02:30.108701233Z" level=info msg="received exit event container_id:\"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661\" id:\"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661\" pid:3945 exited_at:{seconds:1748365350 nanos:95824465}"
May 27 17:02:30.112800 containerd[2017]: time="2025-05-27T17:02:30.112496413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661\" id:\"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661\" pid:3945 exited_at:{seconds:1748365350 nanos:95824465}"
May 27 17:02:30.151514 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:02:30.901384 containerd[2017]: time="2025-05-27T17:02:30.901278113Z" level=info msg="CreateContainer within sandbox \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 17:02:30.924524 containerd[2017]: time="2025-05-27T17:02:30.924451205Z" level=info msg="Container 78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a: CDI devices from CRI Config.CDIDevices: []"
May 27 17:02:30.925656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661-rootfs.mount: Deactivated successfully.
May 27 17:02:30.955678 containerd[2017]: time="2025-05-27T17:02:30.955328345Z" level=info msg="CreateContainer within sandbox \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a\""
May 27 17:02:30.960776 containerd[2017]: time="2025-05-27T17:02:30.960722081Z" level=info msg="StartContainer for \"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a\""
May 27 17:02:30.965504 containerd[2017]: time="2025-05-27T17:02:30.965447489Z" level=info msg="connecting to shim 78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a" address="unix:///run/containerd/s/6c83230c109150034eaa62a2625eed098043054928d5f2cd1f8b408613619b95" protocol=ttrpc version=3
May 27 17:02:31.031764 systemd[1]: Started cri-containerd-78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a.scope - libcontainer container 78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a.
May 27 17:02:31.142246 containerd[2017]: time="2025-05-27T17:02:31.142161062Z" level=info msg="StartContainer for \"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a\" returns successfully"
May 27 17:02:31.143765 systemd[1]: cri-containerd-78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a.scope: Deactivated successfully.
May 27 17:02:31.149885 containerd[2017]: time="2025-05-27T17:02:31.149706662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a\" id:\"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a\" pid:4000 exited_at:{seconds:1748365351 nanos:149134070}"
May 27 17:02:31.150180 containerd[2017]: time="2025-05-27T17:02:31.149877986Z" level=info msg="received exit event container_id:\"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a\" id:\"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a\" pid:4000 exited_at:{seconds:1748365351 nanos:149134070}"
May 27 17:02:31.203966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a-rootfs.mount: Deactivated successfully.
May 27 17:02:31.915664 containerd[2017]: time="2025-05-27T17:02:31.915542214Z" level=info msg="CreateContainer within sandbox \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 17:02:31.938074 containerd[2017]: time="2025-05-27T17:02:31.935385918Z" level=info msg="Container d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba: CDI devices from CRI Config.CDIDevices: []"
May 27 17:02:31.950729 containerd[2017]: time="2025-05-27T17:02:31.950628366Z" level=info msg="CreateContainer within sandbox \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba\""
May 27 17:02:31.953119 containerd[2017]: time="2025-05-27T17:02:31.953023074Z" level=info msg="StartContainer for \"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba\""
May 27 17:02:31.960237 containerd[2017]: time="2025-05-27T17:02:31.960135210Z" level=info msg="connecting to shim d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba" address="unix:///run/containerd/s/6c83230c109150034eaa62a2625eed098043054928d5f2cd1f8b408613619b95" protocol=ttrpc version=3
May 27 17:02:32.014578 systemd[1]: Started cri-containerd-d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba.scope - libcontainer container d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba.
May 27 17:02:32.078767 systemd[1]: cri-containerd-d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba.scope: Deactivated successfully.
May 27 17:02:32.089078 containerd[2017]: time="2025-05-27T17:02:32.088933455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba\" id:\"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba\" pid:4039 exited_at:{seconds:1748365352 nanos:84562143}"
May 27 17:02:32.089234 containerd[2017]: time="2025-05-27T17:02:32.089057475Z" level=info msg="received exit event container_id:\"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba\" id:\"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba\" pid:4039 exited_at:{seconds:1748365352 nanos:84562143}"
May 27 17:02:32.105074 containerd[2017]: time="2025-05-27T17:02:32.104930559Z" level=info msg="StartContainer for \"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba\" returns successfully"
May 27 17:02:32.159801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba-rootfs.mount: Deactivated successfully.
May 27 17:02:32.882381 containerd[2017]: time="2025-05-27T17:02:32.881983687Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:32.883411 containerd[2017]: time="2025-05-27T17:02:32.883346419Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 27 17:02:32.884401 containerd[2017]: time="2025-05-27T17:02:32.884281339Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 17:02:32.887606 containerd[2017]: time="2025-05-27T17:02:32.887516719Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.497331547s" May 27 17:02:32.887606 containerd[2017]: time="2025-05-27T17:02:32.887590279Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 27 17:02:32.896114 containerd[2017]: time="2025-05-27T17:02:32.895917811Z" level=info msg="CreateContainer within sandbox \"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 27 17:02:32.909260 containerd[2017]: time="2025-05-27T17:02:32.909137131Z" level=info msg="Container 
2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259: CDI devices from CRI Config.CDIDevices: []" May 27 17:02:32.922861 containerd[2017]: time="2025-05-27T17:02:32.922777363Z" level=info msg="CreateContainer within sandbox \"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\"" May 27 17:02:32.924374 containerd[2017]: time="2025-05-27T17:02:32.924218887Z" level=info msg="StartContainer for \"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\"" May 27 17:02:32.936691 containerd[2017]: time="2025-05-27T17:02:32.934474699Z" level=info msg="connecting to shim 2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259" address="unix:///run/containerd/s/e1637d15f892eb382ecc4e27597a6eae0e9b6ff0996118953fbb0d55c094ef0b" protocol=ttrpc version=3 May 27 17:02:32.936691 containerd[2017]: time="2025-05-27T17:02:32.936352603Z" level=info msg="CreateContainer within sandbox \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 17:02:33.004278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount749684263.mount: Deactivated successfully. May 27 17:02:33.010651 containerd[2017]: time="2025-05-27T17:02:33.009944979Z" level=info msg="Container 150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e: CDI devices from CRI Config.CDIDevices: []" May 27 17:02:33.025377 systemd[1]: Started cri-containerd-2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259.scope - libcontainer container 2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259. 
May 27 17:02:33.032755 containerd[2017]: time="2025-05-27T17:02:33.032642415Z" level=info msg="CreateContainer within sandbox \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\"" May 27 17:02:33.035373 containerd[2017]: time="2025-05-27T17:02:33.035325207Z" level=info msg="StartContainer for \"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\"" May 27 17:02:33.038762 containerd[2017]: time="2025-05-27T17:02:33.038617815Z" level=info msg="connecting to shim 150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e" address="unix:///run/containerd/s/6c83230c109150034eaa62a2625eed098043054928d5f2cd1f8b408613619b95" protocol=ttrpc version=3 May 27 17:02:33.086408 systemd[1]: Started cri-containerd-150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e.scope - libcontainer container 150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e. 
May 27 17:02:33.123815 containerd[2017]: time="2025-05-27T17:02:33.123612964Z" level=info msg="StartContainer for \"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\" returns successfully"
May 27 17:02:33.188833 containerd[2017]: time="2025-05-27T17:02:33.188449468Z" level=info msg="StartContainer for \"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\" returns successfully"
May 27 17:02:33.375720 containerd[2017]: time="2025-05-27T17:02:33.375665465Z" level=info msg="TaskExit event in podsandbox handler container_id:\"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\" id:\"9ae12aa921e73e9bad863794f3a75ce4c296cb68721095d56e634476095a8455\" pid:4147 exited_at:{seconds:1748365353 nanos:373649777}"
May 27 17:02:33.440296 kubelet[3300]: I0527 17:02:33.440134 3300 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 27 17:02:33.518016 systemd[1]: Created slice kubepods-burstable-podda77be3c_5634_46af_bff2_f4f8c8736300.slice - libcontainer container kubepods-burstable-podda77be3c_5634_46af_bff2_f4f8c8736300.slice.
May 27 17:02:33.540479 systemd[1]: Created slice kubepods-burstable-podad9fed75_5b1b_46e3_a9c3_f17a9142347e.slice - libcontainer container kubepods-burstable-podad9fed75_5b1b_46e3_a9c3_f17a9142347e.slice.
May 27 17:02:33.573394 kubelet[3300]: I0527 17:02:33.573321 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad9fed75-5b1b-46e3-a9c3-f17a9142347e-config-volume\") pod \"coredns-7c65d6cfc9-lptw6\" (UID: \"ad9fed75-5b1b-46e3-a9c3-f17a9142347e\") " pod="kube-system/coredns-7c65d6cfc9-lptw6"
May 27 17:02:33.573394 kubelet[3300]: I0527 17:02:33.573405 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da77be3c-5634-46af-bff2-f4f8c8736300-config-volume\") pod \"coredns-7c65d6cfc9-tjnqd\" (UID: \"da77be3c-5634-46af-bff2-f4f8c8736300\") " pod="kube-system/coredns-7c65d6cfc9-tjnqd"
May 27 17:02:33.574287 kubelet[3300]: I0527 17:02:33.573447 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24h4t\" (UniqueName: \"kubernetes.io/projected/da77be3c-5634-46af-bff2-f4f8c8736300-kube-api-access-24h4t\") pod \"coredns-7c65d6cfc9-tjnqd\" (UID: \"da77be3c-5634-46af-bff2-f4f8c8736300\") " pod="kube-system/coredns-7c65d6cfc9-tjnqd"
May 27 17:02:33.574287 kubelet[3300]: I0527 17:02:33.573496 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25mnb\" (UniqueName: \"kubernetes.io/projected/ad9fed75-5b1b-46e3-a9c3-f17a9142347e-kube-api-access-25mnb\") pod \"coredns-7c65d6cfc9-lptw6\" (UID: \"ad9fed75-5b1b-46e3-a9c3-f17a9142347e\") " pod="kube-system/coredns-7c65d6cfc9-lptw6"
May 27 17:02:33.830148 containerd[2017]: time="2025-05-27T17:02:33.829762855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tjnqd,Uid:da77be3c-5634-46af-bff2-f4f8c8736300,Namespace:kube-system,Attempt:0,}"
May 27 17:02:33.849664 containerd[2017]: time="2025-05-27T17:02:33.849247627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lptw6,Uid:ad9fed75-5b1b-46e3-a9c3-f17a9142347e,Namespace:kube-system,Attempt:0,}"
May 27 17:02:34.217210 kubelet[3300]: I0527 17:02:34.216979 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dvd78" podStartSLOduration=7.159760399 podStartE2EDuration="17.216551969s" podCreationTimestamp="2025-05-27 17:02:17 +0000 UTC" firstStartedPulling="2025-05-27 17:02:18.33227483 +0000 UTC m=+6.961784195" lastFinishedPulling="2025-05-27 17:02:28.389066376 +0000 UTC m=+17.018575765" observedRunningTime="2025-05-27 17:02:34.144557909 +0000 UTC m=+22.774067310" watchObservedRunningTime="2025-05-27 17:02:34.216551969 +0000 UTC m=+22.846061334"
May 27 17:02:37.800228 systemd-networkd[1859]: cilium_host: Link UP
May 27 17:02:37.801872 systemd-networkd[1859]: cilium_net: Link UP
May 27 17:02:37.802402 (udev-worker)[4249]: Network interface NamePolicy= disabled on kernel command line.
May 27 17:02:37.810211 systemd-networkd[1859]: cilium_net: Gained carrier
May 27 17:02:37.814069 systemd-networkd[1859]: cilium_host: Gained carrier
May 27 17:02:37.820112 (udev-worker)[4250]: Network interface NamePolicy= disabled on kernel command line.
May 27 17:02:37.985566 (udev-worker)[4267]: Network interface NamePolicy= disabled on kernel command line.
May 27 17:02:37.998884 systemd-networkd[1859]: cilium_vxlan: Link UP
May 27 17:02:37.998900 systemd-networkd[1859]: cilium_vxlan: Gained carrier
May 27 17:02:38.355352 systemd-networkd[1859]: cilium_host: Gained IPv6LL
May 27 17:02:38.547424 systemd-networkd[1859]: cilium_net: Gained IPv6LL
May 27 17:02:38.559114 kernel: NET: Registered PF_ALG protocol family
May 27 17:02:39.572719 systemd-networkd[1859]: cilium_vxlan: Gained IPv6LL
May 27 17:02:40.011667 (udev-worker)[4265]: Network interface NamePolicy= disabled on kernel command line.
May 27 17:02:40.029908 systemd-networkd[1859]: lxc_health: Link UP
May 27 17:02:40.034904 systemd-networkd[1859]: lxc_health: Gained carrier
May 27 17:02:40.427095 kernel: eth0: renamed from tmp2e245
May 27 17:02:40.428201 systemd-networkd[1859]: lxcf894f21502a1: Link UP
May 27 17:02:40.436200 systemd-networkd[1859]: lxcf894f21502a1: Gained carrier
May 27 17:02:40.471434 systemd-networkd[1859]: lxc0674a608f4e6: Link UP
May 27 17:02:40.478266 kernel: eth0: renamed from tmp6bb19
May 27 17:02:40.489959 (udev-worker)[4263]: Network interface NamePolicy= disabled on kernel command line.
May 27 17:02:40.491282 systemd-networkd[1859]: lxc0674a608f4e6: Gained carrier
May 27 17:02:41.427457 systemd-networkd[1859]: lxc_health: Gained IPv6LL
May 27 17:02:41.873799 kubelet[3300]: I0527 17:02:41.872876 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-nm7md" podStartSLOduration=10.619186572 podStartE2EDuration="24.872850243s" podCreationTimestamp="2025-05-27 17:02:17 +0000 UTC" firstStartedPulling="2025-05-27 17:02:18.636683224 +0000 UTC m=+7.266192589" lastFinishedPulling="2025-05-27 17:02:32.890346883 +0000 UTC m=+21.519856260" observedRunningTime="2025-05-27 17:02:34.220181801 +0000 UTC m=+22.849691202" watchObservedRunningTime="2025-05-27 17:02:41.872850243 +0000 UTC m=+30.502359608"
May 27 17:02:42.195416 systemd-networkd[1859]: lxcf894f21502a1: Gained IPv6LL
May 27 17:02:42.451323 systemd-networkd[1859]: lxc0674a608f4e6: Gained IPv6LL
May 27 17:02:44.634174 kubelet[3300]: I0527 17:02:44.633634 3300 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 27 17:02:45.112370 ntpd[1985]: Listen normally on 7 cilium_host 192.168.0.74:123
May 27 17:02:45.112926 ntpd[1985]: 27 May 17:02:45 ntpd[1985]: Listen normally on 7 cilium_host 192.168.0.74:123
May 27 17:02:45.112926 ntpd[1985]: 27 May 17:02:45 ntpd[1985]: Listen normally on 8 cilium_net [fe80::c50:33ff:fe19:5f38%4]:123
May 27 17:02:45.112926 ntpd[1985]: 27 May 17:02:45 ntpd[1985]: Listen normally on 9 cilium_host [fe80::c47c:c2ff:fe2a:81d%5]:123
May 27 17:02:45.112926 ntpd[1985]: 27 May 17:02:45 ntpd[1985]: Listen normally on 10 cilium_vxlan [fe80::e44d:dff:fe8d:fdf0%6]:123
May 27 17:02:45.112926 ntpd[1985]: 27 May 17:02:45 ntpd[1985]: Listen normally on 11 lxc_health [fe80::d4bf:a7ff:fecb:5c3f%8]:123
May 27 17:02:45.112926 ntpd[1985]: 27 May 17:02:45 ntpd[1985]: Listen normally on 12 lxcf894f21502a1 [fe80::2820:d5ff:fef3:be74%10]:123
May 27 17:02:45.112926 ntpd[1985]: 27 May 17:02:45 ntpd[1985]: Listen normally on 13 lxc0674a608f4e6 [fe80::fcb0:d4ff:fef7:9b82%12]:123
May 27 17:02:45.112539 ntpd[1985]: Listen normally on 8 cilium_net [fe80::c50:33ff:fe19:5f38%4]:123
May 27 17:02:45.112629 ntpd[1985]: Listen normally on 9 cilium_host [fe80::c47c:c2ff:fe2a:81d%5]:123
May 27 17:02:45.112703 ntpd[1985]: Listen normally on 10 cilium_vxlan [fe80::e44d:dff:fe8d:fdf0%6]:123
May 27 17:02:45.112775 ntpd[1985]: Listen normally on 11 lxc_health [fe80::d4bf:a7ff:fecb:5c3f%8]:123
May 27 17:02:45.112848 ntpd[1985]: Listen normally on 12 lxcf894f21502a1 [fe80::2820:d5ff:fef3:be74%10]:123
May 27 17:02:45.112922 ntpd[1985]: Listen normally on 13 lxc0674a608f4e6 [fe80::fcb0:d4ff:fef7:9b82%12]:123
May 27 17:02:50.246220 containerd[2017]: time="2025-05-27T17:02:50.246136365Z" level=info msg="connecting to shim 6bb19d66700085167e661856e857ba0f41e9ffc42a6a865e58833b79a9778307" address="unix:///run/containerd/s/224467b090ada74bcaaef96c473ce0ef5196f2222288cffdc0991a7c39e5be9c" namespace=k8s.io protocol=ttrpc version=3
May 27 17:02:50.347581 systemd[1]: Started cri-containerd-6bb19d66700085167e661856e857ba0f41e9ffc42a6a865e58833b79a9778307.scope - libcontainer container 6bb19d66700085167e661856e857ba0f41e9ffc42a6a865e58833b79a9778307.
May 27 17:02:50.359335 containerd[2017]: time="2025-05-27T17:02:50.359259009Z" level=info msg="connecting to shim 2e2450543dd14f62fb1ebce8825d27481e8468f336efabfdf4cb3f701733fac7" address="unix:///run/containerd/s/fe0b690d17284dfddb05088f25cc6eda6491c9659a76e7647adaa20e4fd57649" namespace=k8s.io protocol=ttrpc version=3
May 27 17:02:50.431461 systemd[1]: Started cri-containerd-2e2450543dd14f62fb1ebce8825d27481e8468f336efabfdf4cb3f701733fac7.scope - libcontainer container 2e2450543dd14f62fb1ebce8825d27481e8468f336efabfdf4cb3f701733fac7.
May 27 17:02:50.521510 containerd[2017]: time="2025-05-27T17:02:50.521116582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lptw6,Uid:ad9fed75-5b1b-46e3-a9c3-f17a9142347e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bb19d66700085167e661856e857ba0f41e9ffc42a6a865e58833b79a9778307\""
May 27 17:02:50.537534 containerd[2017]: time="2025-05-27T17:02:50.537324238Z" level=info msg="CreateContainer within sandbox \"6bb19d66700085167e661856e857ba0f41e9ffc42a6a865e58833b79a9778307\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 27 17:02:50.564339 containerd[2017]: time="2025-05-27T17:02:50.564259738Z" level=info msg="Container dfade62fb2cf1105c88a56eaacb3bd51bbe5804b17da5f669c90124387b05d07: CDI devices from CRI Config.CDIDevices: []"
May 27 17:02:50.580496 containerd[2017]: time="2025-05-27T17:02:50.580421722Z" level=info msg="CreateContainer within sandbox \"6bb19d66700085167e661856e857ba0f41e9ffc42a6a865e58833b79a9778307\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dfade62fb2cf1105c88a56eaacb3bd51bbe5804b17da5f669c90124387b05d07\""
May 27 17:02:50.581452 containerd[2017]: time="2025-05-27T17:02:50.581394658Z" level=info msg="StartContainer for \"dfade62fb2cf1105c88a56eaacb3bd51bbe5804b17da5f669c90124387b05d07\""
May 27 17:02:50.587614 containerd[2017]: time="2025-05-27T17:02:50.587418695Z" level=info msg="connecting to shim dfade62fb2cf1105c88a56eaacb3bd51bbe5804b17da5f669c90124387b05d07" address="unix:///run/containerd/s/224467b090ada74bcaaef96c473ce0ef5196f2222288cffdc0991a7c39e5be9c" protocol=ttrpc version=3
May 27 17:02:50.659093 systemd[1]: Started cri-containerd-dfade62fb2cf1105c88a56eaacb3bd51bbe5804b17da5f669c90124387b05d07.scope - libcontainer container dfade62fb2cf1105c88a56eaacb3bd51bbe5804b17da5f669c90124387b05d07.
May 27 17:02:50.673782 containerd[2017]: time="2025-05-27T17:02:50.673689959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tjnqd,Uid:da77be3c-5634-46af-bff2-f4f8c8736300,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e2450543dd14f62fb1ebce8825d27481e8468f336efabfdf4cb3f701733fac7\""
May 27 17:02:50.684019 containerd[2017]: time="2025-05-27T17:02:50.683020019Z" level=info msg="CreateContainer within sandbox \"2e2450543dd14f62fb1ebce8825d27481e8468f336efabfdf4cb3f701733fac7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 27 17:02:50.697932 containerd[2017]: time="2025-05-27T17:02:50.697876691Z" level=info msg="Container f8ff06b771f16a0597f4d3ecb6c240f085e469e57f9d6e68d8bb75b42a265000: CDI devices from CRI Config.CDIDevices: []"
May 27 17:02:50.711456 containerd[2017]: time="2025-05-27T17:02:50.711399095Z" level=info msg="CreateContainer within sandbox \"2e2450543dd14f62fb1ebce8825d27481e8468f336efabfdf4cb3f701733fac7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f8ff06b771f16a0597f4d3ecb6c240f085e469e57f9d6e68d8bb75b42a265000\""
May 27 17:02:50.714494 containerd[2017]: time="2025-05-27T17:02:50.714436667Z" level=info msg="StartContainer for \"f8ff06b771f16a0597f4d3ecb6c240f085e469e57f9d6e68d8bb75b42a265000\""
May 27 17:02:50.716576 containerd[2017]: time="2025-05-27T17:02:50.716515823Z" level=info msg="connecting to shim f8ff06b771f16a0597f4d3ecb6c240f085e469e57f9d6e68d8bb75b42a265000" address="unix:///run/containerd/s/fe0b690d17284dfddb05088f25cc6eda6491c9659a76e7647adaa20e4fd57649" protocol=ttrpc version=3
May 27 17:02:50.768419 systemd[1]: Started cri-containerd-f8ff06b771f16a0597f4d3ecb6c240f085e469e57f9d6e68d8bb75b42a265000.scope - libcontainer container f8ff06b771f16a0597f4d3ecb6c240f085e469e57f9d6e68d8bb75b42a265000.
May 27 17:02:50.782455 containerd[2017]: time="2025-05-27T17:02:50.782235995Z" level=info msg="StartContainer for \"dfade62fb2cf1105c88a56eaacb3bd51bbe5804b17da5f669c90124387b05d07\" returns successfully"
May 27 17:02:50.862987 containerd[2017]: time="2025-05-27T17:02:50.862910028Z" level=info msg="StartContainer for \"f8ff06b771f16a0597f4d3ecb6c240f085e469e57f9d6e68d8bb75b42a265000\" returns successfully"
May 27 17:02:51.081697 kubelet[3300]: I0527 17:02:51.081592 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lptw6" podStartSLOduration=34.081567369 podStartE2EDuration="34.081567369s" podCreationTimestamp="2025-05-27 17:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:02:51.078753273 +0000 UTC m=+39.708262662" watchObservedRunningTime="2025-05-27 17:02:51.081567369 +0000 UTC m=+39.711076758"
May 27 17:02:51.106721 kubelet[3300]: I0527 17:02:51.106503 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tjnqd" podStartSLOduration=34.106451721 podStartE2EDuration="34.106451721s" podCreationTimestamp="2025-05-27 17:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:02:51.102285393 +0000 UTC m=+39.731794794" watchObservedRunningTime="2025-05-27 17:02:51.106451721 +0000 UTC m=+39.735961146"
May 27 17:02:51.227450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount953177319.mount: Deactivated successfully.
May 27 17:02:54.093502 systemd[1]: Started sshd@7-172.31.22.21:22-139.178.68.195:33648.service - OpenSSH per-connection server daemon (139.178.68.195:33648).
May 27 17:02:54.301561 sshd[4788]: Accepted publickey for core from 139.178.68.195 port 33648 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:02:54.305556 sshd-session[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:02:54.316970 systemd-logind[1991]: New session 8 of user core.
May 27 17:02:54.325442 systemd[1]: Started session-8.scope - Session 8 of User core.
May 27 17:02:54.627722 sshd[4790]: Connection closed by 139.178.68.195 port 33648
May 27 17:02:54.628768 sshd-session[4788]: pam_unix(sshd:session): session closed for user core
May 27 17:02:54.637598 systemd[1]: sshd@7-172.31.22.21:22-139.178.68.195:33648.service: Deactivated successfully.
May 27 17:02:54.642069 systemd[1]: session-8.scope: Deactivated successfully.
May 27 17:02:54.645208 systemd-logind[1991]: Session 8 logged out. Waiting for processes to exit.
May 27 17:02:54.649581 systemd-logind[1991]: Removed session 8.
May 27 17:02:59.671825 systemd[1]: Started sshd@8-172.31.22.21:22-139.178.68.195:33658.service - OpenSSH per-connection server daemon (139.178.68.195:33658).
May 27 17:02:59.886719 sshd[4803]: Accepted publickey for core from 139.178.68.195 port 33658 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:02:59.889642 sshd-session[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:02:59.898817 systemd-logind[1991]: New session 9 of user core.
May 27 17:02:59.907408 systemd[1]: Started session-9.scope - Session 9 of User core.
May 27 17:03:00.172091 sshd[4805]: Connection closed by 139.178.68.195 port 33658
May 27 17:03:00.172395 sshd-session[4803]: pam_unix(sshd:session): session closed for user core
May 27 17:03:00.178496 systemd[1]: sshd@8-172.31.22.21:22-139.178.68.195:33658.service: Deactivated successfully.
May 27 17:03:00.183583 systemd[1]: session-9.scope: Deactivated successfully.
May 27 17:03:00.190195 systemd-logind[1991]: Session 9 logged out. Waiting for processes to exit.
May 27 17:03:00.194360 systemd-logind[1991]: Removed session 9.
May 27 17:03:05.208613 systemd[1]: Started sshd@9-172.31.22.21:22-139.178.68.195:41810.service - OpenSSH per-connection server daemon (139.178.68.195:41810).
May 27 17:03:05.411797 sshd[4818]: Accepted publickey for core from 139.178.68.195 port 41810 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:05.414187 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:05.423155 systemd-logind[1991]: New session 10 of user core.
May 27 17:03:05.435347 systemd[1]: Started session-10.scope - Session 10 of User core.
May 27 17:03:05.696335 sshd[4820]: Connection closed by 139.178.68.195 port 41810
May 27 17:03:05.696925 sshd-session[4818]: pam_unix(sshd:session): session closed for user core
May 27 17:03:05.709981 systemd[1]: sshd@9-172.31.22.21:22-139.178.68.195:41810.service: Deactivated successfully.
May 27 17:03:05.715971 systemd[1]: session-10.scope: Deactivated successfully.
May 27 17:03:05.719532 systemd-logind[1991]: Session 10 logged out. Waiting for processes to exit.
May 27 17:03:05.723917 systemd-logind[1991]: Removed session 10.
May 27 17:03:10.748169 systemd[1]: Started sshd@10-172.31.22.21:22-139.178.68.195:41818.service - OpenSSH per-connection server daemon (139.178.68.195:41818).
May 27 17:03:10.959165 sshd[4832]: Accepted publickey for core from 139.178.68.195 port 41818 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:10.963148 sshd-session[4832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:10.973658 systemd-logind[1991]: New session 11 of user core.
May 27 17:03:10.984428 systemd[1]: Started session-11.scope - Session 11 of User core.
May 27 17:03:11.250188 sshd[4834]: Connection closed by 139.178.68.195 port 41818
May 27 17:03:11.251184 sshd-session[4832]: pam_unix(sshd:session): session closed for user core
May 27 17:03:11.258787 systemd[1]: sshd@10-172.31.22.21:22-139.178.68.195:41818.service: Deactivated successfully.
May 27 17:03:11.265131 systemd[1]: session-11.scope: Deactivated successfully.
May 27 17:03:11.268756 systemd-logind[1991]: Session 11 logged out. Waiting for processes to exit.
May 27 17:03:11.294645 systemd[1]: Started sshd@11-172.31.22.21:22-139.178.68.195:41822.service - OpenSSH per-connection server daemon (139.178.68.195:41822).
May 27 17:03:11.303417 systemd-logind[1991]: Removed session 11.
May 27 17:03:11.504026 sshd[4847]: Accepted publickey for core from 139.178.68.195 port 41822 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:11.506860 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:11.520498 systemd-logind[1991]: New session 12 of user core.
May 27 17:03:11.528342 systemd[1]: Started session-12.scope - Session 12 of User core.
May 27 17:03:11.870112 sshd[4849]: Connection closed by 139.178.68.195 port 41822
May 27 17:03:11.871238 sshd-session[4847]: pam_unix(sshd:session): session closed for user core
May 27 17:03:11.883241 systemd-logind[1991]: Session 12 logged out. Waiting for processes to exit.
May 27 17:03:11.886700 systemd[1]: sshd@11-172.31.22.21:22-139.178.68.195:41822.service: Deactivated successfully.
May 27 17:03:11.894633 systemd[1]: session-12.scope: Deactivated successfully.
May 27 17:03:11.925753 systemd-logind[1991]: Removed session 12.
May 27 17:03:11.926471 systemd[1]: Started sshd@12-172.31.22.21:22-139.178.68.195:41836.service - OpenSSH per-connection server daemon (139.178.68.195:41836).
May 27 17:03:12.136470 sshd[4861]: Accepted publickey for core from 139.178.68.195 port 41836 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:12.138666 sshd-session[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:12.150352 systemd-logind[1991]: New session 13 of user core.
May 27 17:03:12.159419 systemd[1]: Started session-13.scope - Session 13 of User core.
May 27 17:03:12.462836 sshd[4863]: Connection closed by 139.178.68.195 port 41836
May 27 17:03:12.464159 sshd-session[4861]: pam_unix(sshd:session): session closed for user core
May 27 17:03:12.473251 systemd[1]: sshd@12-172.31.22.21:22-139.178.68.195:41836.service: Deactivated successfully.
May 27 17:03:12.477459 systemd[1]: session-13.scope: Deactivated successfully.
May 27 17:03:12.480400 systemd-logind[1991]: Session 13 logged out. Waiting for processes to exit.
May 27 17:03:12.485998 systemd-logind[1991]: Removed session 13.
May 27 17:03:17.505443 systemd[1]: Started sshd@13-172.31.22.21:22-139.178.68.195:52932.service - OpenSSH per-connection server daemon (139.178.68.195:52932).
May 27 17:03:17.715143 sshd[4875]: Accepted publickey for core from 139.178.68.195 port 52932 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:17.718608 sshd-session[4875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:17.730120 systemd-logind[1991]: New session 14 of user core.
May 27 17:03:17.737416 systemd[1]: Started session-14.scope - Session 14 of User core.
May 27 17:03:18.002611 sshd[4878]: Connection closed by 139.178.68.195 port 52932
May 27 17:03:18.003881 sshd-session[4875]: pam_unix(sshd:session): session closed for user core
May 27 17:03:18.012353 systemd[1]: sshd@13-172.31.22.21:22-139.178.68.195:52932.service: Deactivated successfully.
May 27 17:03:18.017223 systemd[1]: session-14.scope: Deactivated successfully.
May 27 17:03:18.019591 systemd-logind[1991]: Session 14 logged out. Waiting for processes to exit.
May 27 17:03:18.023862 systemd-logind[1991]: Removed session 14.
May 27 17:03:23.047882 systemd[1]: Started sshd@14-172.31.22.21:22-139.178.68.195:52938.service - OpenSSH per-connection server daemon (139.178.68.195:52938).
May 27 17:03:23.259256 sshd[4893]: Accepted publickey for core from 139.178.68.195 port 52938 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:23.262598 sshd-session[4893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:23.270944 systemd-logind[1991]: New session 15 of user core.
May 27 17:03:23.280474 systemd[1]: Started session-15.scope - Session 15 of User core.
May 27 17:03:23.552139 sshd[4895]: Connection closed by 139.178.68.195 port 52938
May 27 17:03:23.553092 sshd-session[4893]: pam_unix(sshd:session): session closed for user core
May 27 17:03:23.562609 systemd-logind[1991]: Session 15 logged out. Waiting for processes to exit.
May 27 17:03:23.562842 systemd[1]: sshd@14-172.31.22.21:22-139.178.68.195:52938.service: Deactivated successfully.
May 27 17:03:23.569204 systemd[1]: session-15.scope: Deactivated successfully.
May 27 17:03:23.573987 systemd-logind[1991]: Removed session 15.
May 27 17:03:28.598745 systemd[1]: Started sshd@15-172.31.22.21:22-139.178.68.195:45982.service - OpenSSH per-connection server daemon (139.178.68.195:45982).
May 27 17:03:28.798972 sshd[4906]: Accepted publickey for core from 139.178.68.195 port 45982 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:28.801812 sshd-session[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:28.811371 systemd-logind[1991]: New session 16 of user core.
May 27 17:03:28.829435 systemd[1]: Started session-16.scope - Session 16 of User core.
May 27 17:03:29.107331 sshd[4908]: Connection closed by 139.178.68.195 port 45982
May 27 17:03:29.108394 sshd-session[4906]: pam_unix(sshd:session): session closed for user core
May 27 17:03:29.116361 systemd[1]: sshd@15-172.31.22.21:22-139.178.68.195:45982.service: Deactivated successfully.
May 27 17:03:29.121562 systemd[1]: session-16.scope: Deactivated successfully.
May 27 17:03:29.126648 systemd-logind[1991]: Session 16 logged out. Waiting for processes to exit.
May 27 17:03:29.146607 systemd-logind[1991]: Removed session 16.
May 27 17:03:29.147700 systemd[1]: Started sshd@16-172.31.22.21:22-139.178.68.195:45990.service - OpenSSH per-connection server daemon (139.178.68.195:45990).
May 27 17:03:29.352897 sshd[4920]: Accepted publickey for core from 139.178.68.195 port 45990 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:29.355745 sshd-session[4920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:29.367541 systemd-logind[1991]: New session 17 of user core.
May 27 17:03:29.373549 systemd[1]: Started session-17.scope - Session 17 of User core.
May 27 17:03:29.687741 sshd[4922]: Connection closed by 139.178.68.195 port 45990
May 27 17:03:29.688743 sshd-session[4920]: pam_unix(sshd:session): session closed for user core
May 27 17:03:29.698468 systemd[1]: sshd@16-172.31.22.21:22-139.178.68.195:45990.service: Deactivated successfully.
May 27 17:03:29.706498 systemd[1]: session-17.scope: Deactivated successfully.
May 27 17:03:29.711398 systemd-logind[1991]: Session 17 logged out. Waiting for processes to exit.
May 27 17:03:29.735509 systemd[1]: Started sshd@17-172.31.22.21:22-139.178.68.195:45994.service - OpenSSH per-connection server daemon (139.178.68.195:45994).
May 27 17:03:29.739425 systemd-logind[1991]: Removed session 17.
May 27 17:03:29.940005 sshd[4932]: Accepted publickey for core from 139.178.68.195 port 45994 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:29.942773 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:29.955159 systemd-logind[1991]: New session 18 of user core.
May 27 17:03:29.961368 systemd[1]: Started session-18.scope - Session 18 of User core.
May 27 17:03:32.916298 sshd[4934]: Connection closed by 139.178.68.195 port 45994
May 27 17:03:32.917513 sshd-session[4932]: pam_unix(sshd:session): session closed for user core
May 27 17:03:32.933653 systemd-logind[1991]: Session 18 logged out. Waiting for processes to exit.
May 27 17:03:32.936333 systemd[1]: sshd@17-172.31.22.21:22-139.178.68.195:45994.service: Deactivated successfully.
May 27 17:03:32.944627 systemd[1]: session-18.scope: Deactivated successfully.
May 27 17:03:32.971539 systemd[1]: Started sshd@18-172.31.22.21:22-139.178.68.195:46006.service - OpenSSH per-connection server daemon (139.178.68.195:46006).
May 27 17:03:32.972535 systemd-logind[1991]: Removed session 18.
May 27 17:03:33.177911 sshd[4952]: Accepted publickey for core from 139.178.68.195 port 46006 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:33.181378 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:33.193362 systemd-logind[1991]: New session 19 of user core.
May 27 17:03:33.199462 systemd[1]: Started session-19.scope - Session 19 of User core.
May 27 17:03:33.729353 sshd[4954]: Connection closed by 139.178.68.195 port 46006
May 27 17:03:33.729901 sshd-session[4952]: pam_unix(sshd:session): session closed for user core
May 27 17:03:33.742898 systemd[1]: sshd@18-172.31.22.21:22-139.178.68.195:46006.service: Deactivated successfully.
May 27 17:03:33.749564 systemd[1]: session-19.scope: Deactivated successfully.
May 27 17:03:33.754213 systemd-logind[1991]: Session 19 logged out. Waiting for processes to exit.
May 27 17:03:33.774581 systemd[1]: Started sshd@19-172.31.22.21:22-139.178.68.195:39538.service - OpenSSH per-connection server daemon (139.178.68.195:39538).
May 27 17:03:33.778206 systemd-logind[1991]: Removed session 19.
May 27 17:03:33.966083 sshd[4964]: Accepted publickey for core from 139.178.68.195 port 39538 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:33.969188 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:33.978703 systemd-logind[1991]: New session 20 of user core.
May 27 17:03:33.986363 systemd[1]: Started session-20.scope - Session 20 of User core.
May 27 17:03:34.240779 sshd[4966]: Connection closed by 139.178.68.195 port 39538
May 27 17:03:34.241889 sshd-session[4964]: pam_unix(sshd:session): session closed for user core
May 27 17:03:34.250735 systemd-logind[1991]: Session 20 logged out. Waiting for processes to exit.
May 27 17:03:34.252313 systemd[1]: sshd@19-172.31.22.21:22-139.178.68.195:39538.service: Deactivated successfully.
May 27 17:03:34.256721 systemd[1]: session-20.scope: Deactivated successfully.
May 27 17:03:34.261104 systemd-logind[1991]: Removed session 20.
May 27 17:03:39.278357 systemd[1]: Started sshd@20-172.31.22.21:22-139.178.68.195:39552.service - OpenSSH per-connection server daemon (139.178.68.195:39552).
May 27 17:03:39.483980 sshd[4978]: Accepted publickey for core from 139.178.68.195 port 39552 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:39.486480 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:39.497215 systemd-logind[1991]: New session 21 of user core.
May 27 17:03:39.505426 systemd[1]: Started session-21.scope - Session 21 of User core.
May 27 17:03:39.775822 sshd[4980]: Connection closed by 139.178.68.195 port 39552
May 27 17:03:39.776367 sshd-session[4978]: pam_unix(sshd:session): session closed for user core
May 27 17:03:39.783873 systemd[1]: sshd@20-172.31.22.21:22-139.178.68.195:39552.service: Deactivated successfully.
May 27 17:03:39.789382 systemd[1]: session-21.scope: Deactivated successfully.
May 27 17:03:39.791552 systemd-logind[1991]: Session 21 logged out. Waiting for processes to exit.
May 27 17:03:39.796275 systemd-logind[1991]: Removed session 21.
May 27 17:03:44.821699 systemd[1]: Started sshd@21-172.31.22.21:22-139.178.68.195:54770.service - OpenSSH per-connection server daemon (139.178.68.195:54770).
May 27 17:03:45.035870 sshd[4994]: Accepted publickey for core from 139.178.68.195 port 54770 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:45.039332 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:45.051153 systemd-logind[1991]: New session 22 of user core.
May 27 17:03:45.056376 systemd[1]: Started session-22.scope - Session 22 of User core.
May 27 17:03:45.317347 sshd[4996]: Connection closed by 139.178.68.195 port 54770
May 27 17:03:45.318307 sshd-session[4994]: pam_unix(sshd:session): session closed for user core
May 27 17:03:45.326747 systemd-logind[1991]: Session 22 logged out. Waiting for processes to exit.
May 27 17:03:45.328674 systemd[1]: sshd@21-172.31.22.21:22-139.178.68.195:54770.service: Deactivated successfully.
May 27 17:03:45.333944 systemd[1]: session-22.scope: Deactivated successfully.
May 27 17:03:45.337867 systemd-logind[1991]: Removed session 22.
May 27 17:03:50.359242 systemd[1]: Started sshd@22-172.31.22.21:22-139.178.68.195:54772.service - OpenSSH per-connection server daemon (139.178.68.195:54772).
May 27 17:03:50.569523 sshd[5010]: Accepted publickey for core from 139.178.68.195 port 54772 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:50.572250 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:50.582853 systemd-logind[1991]: New session 23 of user core.
May 27 17:03:50.588385 systemd[1]: Started session-23.scope - Session 23 of User core.
May 27 17:03:50.855840 sshd[5013]: Connection closed by 139.178.68.195 port 54772
May 27 17:03:50.856961 sshd-session[5010]: pam_unix(sshd:session): session closed for user core
May 27 17:03:50.866204 systemd-logind[1991]: Session 23 logged out. Waiting for processes to exit.
May 27 17:03:50.866331 systemd[1]: sshd@22-172.31.22.21:22-139.178.68.195:54772.service: Deactivated successfully.
May 27 17:03:50.873109 systemd[1]: session-23.scope: Deactivated successfully.
May 27 17:03:50.880910 systemd-logind[1991]: Removed session 23.
May 27 17:03:55.902795 systemd[1]: Started sshd@23-172.31.22.21:22-139.178.68.195:50036.service - OpenSSH per-connection server daemon (139.178.68.195:50036).
May 27 17:03:56.114562 sshd[5027]: Accepted publickey for core from 139.178.68.195 port 50036 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:56.117455 sshd-session[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:56.130624 systemd-logind[1991]: New session 24 of user core.
May 27 17:03:56.139384 systemd[1]: Started session-24.scope - Session 24 of User core.
May 27 17:03:56.392996 sshd[5029]: Connection closed by 139.178.68.195 port 50036
May 27 17:03:56.394288 sshd-session[5027]: pam_unix(sshd:session): session closed for user core
May 27 17:03:56.403768 systemd-logind[1991]: Session 24 logged out. Waiting for processes to exit.
May 27 17:03:56.404202 systemd[1]: sshd@23-172.31.22.21:22-139.178.68.195:50036.service: Deactivated successfully.
May 27 17:03:56.408629 systemd[1]: session-24.scope: Deactivated successfully.
May 27 17:03:56.413131 systemd-logind[1991]: Removed session 24.
May 27 17:03:56.431647 systemd[1]: Started sshd@24-172.31.22.21:22-139.178.68.195:50046.service - OpenSSH per-connection server daemon (139.178.68.195:50046).
May 27 17:03:56.645256 sshd[5041]: Accepted publickey for core from 139.178.68.195 port 50046 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:03:56.647677 sshd-session[5041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:03:56.659163 systemd-logind[1991]: New session 25 of user core.
May 27 17:03:56.664479 systemd[1]: Started session-25.scope - Session 25 of User core.
May 27 17:04:00.753099 containerd[2017]: time="2025-05-27T17:04:00.752609455Z" level=info msg="StopContainer for \"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\" with timeout 30 (s)"
May 27 17:04:00.754617 containerd[2017]: time="2025-05-27T17:04:00.754557727Z" level=info msg="Stop container \"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\" with signal terminated"
May 27 17:04:00.791750 containerd[2017]: time="2025-05-27T17:04:00.791676463Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 17:04:00.797364 systemd[1]: cri-containerd-2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259.scope: Deactivated successfully.
May 27 17:04:00.800510 containerd[2017]: time="2025-05-27T17:04:00.800443075Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\" id:\"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\" pid:4086 exited_at:{seconds:1748365440 nanos:799273519}"
May 27 17:04:00.800873 containerd[2017]: time="2025-05-27T17:04:00.800793115Z" level=info msg="received exit event container_id:\"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\" id:\"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\" pid:4086 exited_at:{seconds:1748365440 nanos:799273519}"
May 27 17:04:00.811697 containerd[2017]: time="2025-05-27T17:04:00.811587847Z" level=info msg="TaskExit event in podsandbox handler container_id:\"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\" id:\"11364ba0641b2ea3a1155de32045a2c1b36005ffefaae9e0c546cf2744bcc122\" pid:5063 exited_at:{seconds:1748365440 nanos:810941047}"
May 27 17:04:00.816643 containerd[2017]: time="2025-05-27T17:04:00.816576931Z" level=info msg="StopContainer for \"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\" with timeout 2 (s)"
May 27 17:04:00.818073 containerd[2017]: time="2025-05-27T17:04:00.817617955Z" level=info msg="Stop container \"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\" with signal terminated"
May 27 17:04:00.836083 systemd-networkd[1859]: lxc_health: Link DOWN
May 27 17:04:00.836454 systemd-networkd[1859]: lxc_health: Lost carrier
May 27 17:04:00.886766 systemd[1]: cri-containerd-150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e.scope: Deactivated successfully.
May 27 17:04:00.887418 systemd[1]: cri-containerd-150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e.scope: Consumed 16.149s CPU time, 125.1M memory peak, 136K read from disk, 12.9M written to disk.
May 27 17:04:00.904877 containerd[2017]: time="2025-05-27T17:04:00.904797560Z" level=info msg="received exit event container_id:\"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\" id:\"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\" pid:4107 exited_at:{seconds:1748365440 nanos:904397312}"
May 27 17:04:00.909537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259-rootfs.mount: Deactivated successfully.
May 27 17:04:00.912355 containerd[2017]: time="2025-05-27T17:04:00.911640392Z" level=info msg="TaskExit event in podsandbox handler container_id:\"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\" id:\"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\" pid:4107 exited_at:{seconds:1748365440 nanos:904397312}"
May 27 17:04:00.944110 containerd[2017]: time="2025-05-27T17:04:00.943011056Z" level=info msg="StopContainer for \"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\" returns successfully"
May 27 17:04:00.946888 containerd[2017]: time="2025-05-27T17:04:00.946814744Z" level=info msg="StopPodSandbox for \"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\""
May 27 17:04:00.947275 containerd[2017]: time="2025-05-27T17:04:00.946946972Z" level=info msg="Container to stop \"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 17:04:00.987628 systemd[1]: cri-containerd-1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03.scope: Deactivated successfully.
May 27 17:04:01.000244 containerd[2017]: time="2025-05-27T17:04:01.000144952Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\" id:\"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\" pid:3664 exit_status:137 exited_at:{seconds:1748365440 nanos:998930612}"
May 27 17:04:01.033280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e-rootfs.mount: Deactivated successfully.
May 27 17:04:01.061471 containerd[2017]: time="2025-05-27T17:04:01.059768105Z" level=info msg="StopContainer for \"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\" returns successfully"
May 27 17:04:01.062463 containerd[2017]: time="2025-05-27T17:04:01.062392553Z" level=info msg="StopPodSandbox for \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\""
May 27 17:04:01.062632 containerd[2017]: time="2025-05-27T17:04:01.062512145Z" level=info msg="Container to stop \"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 17:04:01.062632 containerd[2017]: time="2025-05-27T17:04:01.062544161Z" level=info msg="Container to stop \"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 17:04:01.062632 containerd[2017]: time="2025-05-27T17:04:01.062573213Z" level=info msg="Container to stop \"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 17:04:01.062632 containerd[2017]: time="2025-05-27T17:04:01.062596049Z" level=info msg="Container to stop \"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 17:04:01.062632 containerd[2017]: time="2025-05-27T17:04:01.062617193Z" level=info msg="Container to stop \"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 27 17:04:01.096438 systemd[1]: cri-containerd-690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea.scope: Deactivated successfully.
May 27 17:04:01.120140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03-rootfs.mount: Deactivated successfully.
May 27 17:04:01.129099 containerd[2017]: time="2025-05-27T17:04:01.128886533Z" level=info msg="shim disconnected" id=1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03 namespace=k8s.io
May 27 17:04:01.129099 containerd[2017]: time="2025-05-27T17:04:01.128943869Z" level=warning msg="cleaning up after shim disconnected" id=1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03 namespace=k8s.io
May 27 17:04:01.129099 containerd[2017]: time="2025-05-27T17:04:01.128995181Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 27 17:04:01.133080 containerd[2017]: time="2025-05-27T17:04:01.130352717Z" level=info msg="received exit event sandbox_id:\"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\" exit_status:137 exited_at:{seconds:1748365440 nanos:998930612}"
May 27 17:04:01.137405 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03-shm.mount: Deactivated successfully.
May 27 17:04:01.140856 containerd[2017]: time="2025-05-27T17:04:01.140788613Z" level=info msg="TearDown network for sandbox \"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\" successfully"
May 27 17:04:01.141907 containerd[2017]: time="2025-05-27T17:04:01.141757637Z" level=info msg="StopPodSandbox for \"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\" returns successfully"
May 27 17:04:01.178878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea-rootfs.mount: Deactivated successfully.
May 27 17:04:01.186125 containerd[2017]: time="2025-05-27T17:04:01.186023885Z" level=info msg="shim disconnected" id=690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea namespace=k8s.io
May 27 17:04:01.186788 containerd[2017]: time="2025-05-27T17:04:01.186111929Z" level=warning msg="cleaning up after shim disconnected" id=690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea namespace=k8s.io
May 27 17:04:01.186788 containerd[2017]: time="2025-05-27T17:04:01.186164213Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 27 17:04:01.200413 containerd[2017]: time="2025-05-27T17:04:01.200327033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" id:\"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" pid:3538 exit_status:137 exited_at:{seconds:1748365441 nanos:101849525}"
May 27 17:04:01.201457 containerd[2017]: time="2025-05-27T17:04:01.201372401Z" level=info msg="received exit event sandbox_id:\"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" exit_status:137 exited_at:{seconds:1748365441 nanos:101849525}"
May 27 17:04:01.201879 containerd[2017]: time="2025-05-27T17:04:01.201491681Z" level=info msg="TearDown network for sandbox \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" successfully"
May 27 17:04:01.201879 containerd[2017]: time="2025-05-27T17:04:01.201857345Z" level=info msg="StopPodSandbox for \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" returns successfully"
May 27 17:04:01.272519 kubelet[3300]: I0527 17:04:01.271814 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77v4x\" (UniqueName: \"kubernetes.io/projected/42aec18f-1188-4e9c-a224-02d3af835120-kube-api-access-77v4x\") pod \"42aec18f-1188-4e9c-a224-02d3af835120\" (UID: \"42aec18f-1188-4e9c-a224-02d3af835120\") "
May 27 17:04:01.272519 kubelet[3300]: I0527 17:04:01.271893 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42aec18f-1188-4e9c-a224-02d3af835120-cilium-config-path\") pod \"42aec18f-1188-4e9c-a224-02d3af835120\" (UID: \"42aec18f-1188-4e9c-a224-02d3af835120\") "
May 27 17:04:01.278009 kubelet[3300]: I0527 17:04:01.277955 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42aec18f-1188-4e9c-a224-02d3af835120-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "42aec18f-1188-4e9c-a224-02d3af835120" (UID: "42aec18f-1188-4e9c-a224-02d3af835120"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 27 17:04:01.279083 kubelet[3300]: I0527 17:04:01.278986 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42aec18f-1188-4e9c-a224-02d3af835120-kube-api-access-77v4x" (OuterVolumeSpecName: "kube-api-access-77v4x") pod "42aec18f-1188-4e9c-a224-02d3af835120" (UID: "42aec18f-1188-4e9c-a224-02d3af835120"). InnerVolumeSpecName "kube-api-access-77v4x". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 27 17:04:01.291915 kubelet[3300]: I0527 17:04:01.290796 3300 scope.go:117] "RemoveContainer" containerID="2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259"
May 27 17:04:01.305136 containerd[2017]: time="2025-05-27T17:04:01.301895634Z" level=info msg="RemoveContainer for \"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\""
May 27 17:04:01.305133 systemd[1]: Removed slice kubepods-besteffort-pod42aec18f_1188_4e9c_a224_02d3af835120.slice - libcontainer container kubepods-besteffort-pod42aec18f_1188_4e9c_a224_02d3af835120.slice.
May 27 17:04:01.324415 containerd[2017]: time="2025-05-27T17:04:01.324226938Z" level=info msg="RemoveContainer for \"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\" returns successfully"
May 27 17:04:01.325490 kubelet[3300]: I0527 17:04:01.325288 3300 scope.go:117] "RemoveContainer" containerID="2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259"
May 27 17:04:01.327292 containerd[2017]: time="2025-05-27T17:04:01.326031858Z" level=error msg="ContainerStatus for \"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\": not found"
May 27 17:04:01.329081 kubelet[3300]: E0527 17:04:01.328082 3300 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\": not found" containerID="2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259"
May 27 17:04:01.329081 kubelet[3300]: I0527 17:04:01.328159 3300 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259"} err="failed to get container status \"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f696e11b413bd20f052acaa18a8ec4548ee496bb0d99a5b04988e4ef90fa259\": not found"
May 27 17:04:01.338745 kubelet[3300]: I0527 17:04:01.338705 3300 scope.go:117] "RemoveContainer" containerID="150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e"
May 27 17:04:01.348814 containerd[2017]: time="2025-05-27T17:04:01.348205494Z" level=info msg="RemoveContainer for \"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\""
May 27 17:04:01.359149 containerd[2017]: time="2025-05-27T17:04:01.359087634Z" level=info msg="RemoveContainer for \"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\" returns successfully"
May 27 17:04:01.359746 kubelet[3300]: I0527 17:04:01.359707 3300 scope.go:117] "RemoveContainer" containerID="d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba"
May 27 17:04:01.363357 containerd[2017]: time="2025-05-27T17:04:01.363018210Z" level=info msg="RemoveContainer for \"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba\""
May 27 17:04:01.372446 kubelet[3300]: I0527 17:04:01.372378 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-hubble-tls\") pod \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") "
May 27 17:04:01.372737 kubelet[3300]: I0527 17:04:01.372677 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-lib-modules\") pod \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") "
May 27 17:04:01.373270 kubelet[3300]: I0527 17:04:01.373219 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-host-proc-sys-kernel\") pod \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") "
May 27 17:04:01.373439 kubelet[3300]: I0527 17:04:01.373280 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-host-proc-sys-net\") pod \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") "
May 27 17:04:01.373439 kubelet[3300]: I0527 17:04:01.373316 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cilium-run\") pod \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") "
May 27 17:04:01.373439 kubelet[3300]: I0527 17:04:01.373352 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-hostproc\") pod \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") "
May 27 17:04:01.373439 kubelet[3300]: I0527 17:04:01.373399 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cw6t\" (UniqueName: \"kubernetes.io/projected/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-kube-api-access-4cw6t\") pod \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") "
May 27 17:04:01.373439 kubelet[3300]: I0527 17:04:01.373433 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cilium-cgroup\") pod \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") "
May 27 17:04:01.373698 kubelet[3300]: I0527 17:04:01.373477 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-bpf-maps\") pod \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") "
May 27 17:04:01.373698 kubelet[3300]: I0527 17:04:01.373517 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-clustermesh-secrets\") pod \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") "
May 27 17:04:01.373698 kubelet[3300]: I0527 17:04:01.373552 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-etc-cni-netd\") pod \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") "
May 27 17:04:01.373698 kubelet[3300]: I0527 17:04:01.373584 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-xtables-lock\") pod \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") "
May 27 17:04:01.373698 kubelet[3300]: I0527 17:04:01.373620 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cilium-config-path\") pod \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") "
May 27 17:04:01.373698 kubelet[3300]: I0527 17:04:01.373689 3300 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cni-path\") pod \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\" (UID: \"68261b05-f4d8-439a-b6a7-0fb0c5b4299f\") "
May 27 17:04:01.374030 kubelet[3300]: I0527 17:04:01.373762 3300 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42aec18f-1188-4e9c-a224-02d3af835120-cilium-config-path\") on node \"ip-172-31-22-21\" DevicePath \"\""
May 27 17:04:01.374030 kubelet[3300]: I0527 17:04:01.373789 3300 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77v4x\" (UniqueName: \"kubernetes.io/projected/42aec18f-1188-4e9c-a224-02d3af835120-kube-api-access-77v4x\") on node \"ip-172-31-22-21\" DevicePath \"\""
May 27 17:04:01.374030 kubelet[3300]: I0527 17:04:01.373143 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "68261b05-f4d8-439a-b6a7-0fb0c5b4299f" (UID: "68261b05-f4d8-439a-b6a7-0fb0c5b4299f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 27 17:04:01.374030 kubelet[3300]: I0527 17:04:01.373843 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cni-path" (OuterVolumeSpecName: "cni-path") pod "68261b05-f4d8-439a-b6a7-0fb0c5b4299f" (UID: "68261b05-f4d8-439a-b6a7-0fb0c5b4299f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 27 17:04:01.374030 kubelet[3300]: I0527 17:04:01.373917 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "68261b05-f4d8-439a-b6a7-0fb0c5b4299f" (UID: "68261b05-f4d8-439a-b6a7-0fb0c5b4299f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 27 17:04:01.376002 kubelet[3300]: I0527 17:04:01.374004 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "68261b05-f4d8-439a-b6a7-0fb0c5b4299f" (UID: "68261b05-f4d8-439a-b6a7-0fb0c5b4299f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 27 17:04:01.376002 kubelet[3300]: I0527 17:04:01.375268 3300 scope.go:117] "RemoveContainer" containerID="78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a"
May 27 17:04:01.376002 kubelet[3300]: I0527 17:04:01.375518 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "68261b05-f4d8-439a-b6a7-0fb0c5b4299f" (UID: "68261b05-f4d8-439a-b6a7-0fb0c5b4299f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 27 17:04:01.376223 containerd[2017]: time="2025-05-27T17:04:01.374725134Z" level=info msg="RemoveContainer for \"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba\" returns successfully"
May 27 17:04:01.377127 kubelet[3300]: I0527 17:04:01.376340 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-hostproc" (OuterVolumeSpecName: "hostproc") pod "68261b05-f4d8-439a-b6a7-0fb0c5b4299f" (UID: "68261b05-f4d8-439a-b6a7-0fb0c5b4299f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 27 17:04:01.379202 kubelet[3300]: I0527 17:04:01.379005 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "68261b05-f4d8-439a-b6a7-0fb0c5b4299f" (UID: "68261b05-f4d8-439a-b6a7-0fb0c5b4299f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 27 17:04:01.379356 kubelet[3300]: I0527 17:04:01.379267 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "68261b05-f4d8-439a-b6a7-0fb0c5b4299f" (UID: "68261b05-f4d8-439a-b6a7-0fb0c5b4299f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 27 17:04:01.379696 kubelet[3300]: I0527 17:04:01.379646 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "68261b05-f4d8-439a-b6a7-0fb0c5b4299f" (UID: "68261b05-f4d8-439a-b6a7-0fb0c5b4299f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 27 17:04:01.380475 kubelet[3300]: I0527 17:04:01.380423 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "68261b05-f4d8-439a-b6a7-0fb0c5b4299f" (UID: "68261b05-f4d8-439a-b6a7-0fb0c5b4299f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 27 17:04:01.384795 containerd[2017]: time="2025-05-27T17:04:01.384731886Z" level=info msg="RemoveContainer for \"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a\""
May 27 17:04:01.389386 kubelet[3300]: I0527 17:04:01.389311 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "68261b05-f4d8-439a-b6a7-0fb0c5b4299f" (UID: "68261b05-f4d8-439a-b6a7-0fb0c5b4299f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 27 17:04:01.392657 kubelet[3300]: I0527 17:04:01.392576 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "68261b05-f4d8-439a-b6a7-0fb0c5b4299f" (UID: "68261b05-f4d8-439a-b6a7-0fb0c5b4299f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 27 17:04:01.395446 kubelet[3300]: I0527 17:04:01.395340 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "68261b05-f4d8-439a-b6a7-0fb0c5b4299f" (UID: "68261b05-f4d8-439a-b6a7-0fb0c5b4299f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 27 17:04:01.395446 kubelet[3300]: I0527 17:04:01.395347 3300 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-kube-api-access-4cw6t" (OuterVolumeSpecName: "kube-api-access-4cw6t") pod "68261b05-f4d8-439a-b6a7-0fb0c5b4299f" (UID: "68261b05-f4d8-439a-b6a7-0fb0c5b4299f"). InnerVolumeSpecName "kube-api-access-4cw6t". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 27 17:04:01.403815 containerd[2017]: time="2025-05-27T17:04:01.403613778Z" level=info msg="RemoveContainer for \"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a\" returns successfully"
May 27 17:04:01.404321 kubelet[3300]: I0527 17:04:01.404287 3300 scope.go:117] "RemoveContainer" containerID="794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661"
May 27 17:04:01.408519 containerd[2017]: time="2025-05-27T17:04:01.408412518Z" level=info msg="RemoveContainer for \"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661\""
May 27 17:04:01.417730 containerd[2017]: time="2025-05-27T17:04:01.417644418Z" level=info msg="RemoveContainer for \"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661\" returns successfully"
May 27 17:04:01.418358 kubelet[3300]: I0527 17:04:01.418224 3300 scope.go:117] "RemoveContainer" containerID="5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7"
May 27 17:04:01.422776 containerd[2017]: time="2025-05-27T17:04:01.422686494Z" level=info msg="RemoveContainer for \"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7\""
May 27 17:04:01.431490 containerd[2017]: time="2025-05-27T17:04:01.431085234Z" level=info msg="RemoveContainer for \"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7\" returns successfully"
May 27 17:04:01.432104 kubelet[3300]: I0527 17:04:01.431994 3300 scope.go:117] "RemoveContainer" containerID="150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e"
May 27 17:04:01.434517 containerd[2017]: time="2025-05-27T17:04:01.434106666Z" level=error msg="ContainerStatus for \"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\": not found"
May 27 17:04:01.434962 kubelet[3300]: E0527 17:04:01.434888 3300 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\": not found" containerID="150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e"
May 27 17:04:01.435118 kubelet[3300]: I0527 17:04:01.434957 3300 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e"} err="failed to get container status \"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\": rpc error: code = NotFound desc = an error occurred when try to find container \"150203d69d2e0876fffc0e3b67efeab5a2b27076695a9d1f7a1454011953f30e\": not found"
May 27 17:04:01.435118 kubelet[3300]: I0527 17:04:01.435000 3300 scope.go:117] "RemoveContainer" containerID="d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba"
May 27 17:04:01.435742 containerd[2017]: time="2025-05-27T17:04:01.435657450Z" level=error msg="ContainerStatus for \"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba\": not found"
May 27 17:04:01.436006 kubelet[3300]: E0527 17:04:01.435954 3300 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba\": not found" containerID="d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba"
May 27 17:04:01.436136 kubelet[3300]: I0527 17:04:01.436014 3300 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba"} err="failed to get container status \"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7f377abab62062322726b53893b3065da7772c489a8ba52199e2fbcf14d19ba\": not found"
May 27 17:04:01.436136 kubelet[3300]: I0527 17:04:01.436106 3300 scope.go:117] "RemoveContainer" containerID="78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a"
May 27 17:04:01.436714 containerd[2017]: time="2025-05-27T17:04:01.436616730Z" level=error msg="ContainerStatus for \"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a\": not found"
May 27 17:04:01.437319 kubelet[3300]: E0527 17:04:01.437263 3300 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a\": not found" containerID="78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a"
May 27 17:04:01.437477 kubelet[3300]: I0527 17:04:01.437332 3300 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a"} err="failed to get container status \"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a\": rpc error: code = NotFound desc = an error occurred when try to find container \"78b31c560e53ef41bcf4cc017a600ec5f8195a42baa3b64b68a30d16d571952a\": not found"
May 27 17:04:01.437477 kubelet[3300]: I0527 17:04:01.437373 3300 scope.go:117] "RemoveContainer" containerID="794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661"
May 27 17:04:01.438317 containerd[2017]: time="2025-05-27T17:04:01.438247770Z" level=error msg="ContainerStatus for \"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661\" failed"
error="rpc error: code = NotFound desc = an error occurred when try to find container \"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661\": not found" May 27 17:04:01.438664 kubelet[3300]: E0527 17:04:01.438622 3300 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661\": not found" containerID="794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661" May 27 17:04:01.438788 kubelet[3300]: I0527 17:04:01.438677 3300 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661"} err="failed to get container status \"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661\": rpc error: code = NotFound desc = an error occurred when try to find container \"794d57b3b5394061583c48f52f071ec94ca151c81b492939443152c106b0b661\": not found" May 27 17:04:01.438788 kubelet[3300]: I0527 17:04:01.438720 3300 scope.go:117] "RemoveContainer" containerID="5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7" May 27 17:04:01.439291 containerd[2017]: time="2025-05-27T17:04:01.439141806Z" level=error msg="ContainerStatus for \"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7\": not found" May 27 17:04:01.439557 kubelet[3300]: E0527 17:04:01.439510 3300 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7\": not found" containerID="5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7" May 27 17:04:01.439933 kubelet[3300]: I0527 17:04:01.439845 3300 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7"} err="failed to get container status \"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7\": rpc error: code = NotFound desc = an error occurred when try to find container \"5bf375d9f466277f71113038529354ec7963e89dbddf743134a73bb558f84aa7\": not found" May 27 17:04:01.474529 kubelet[3300]: I0527 17:04:01.474417 3300 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-hubble-tls\") on node \"ip-172-31-22-21\" DevicePath \"\"" May 27 17:04:01.474529 kubelet[3300]: I0527 17:04:01.474464 3300 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-lib-modules\") on node \"ip-172-31-22-21\" DevicePath \"\"" May 27 17:04:01.474529 kubelet[3300]: I0527 17:04:01.474489 3300 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-host-proc-sys-kernel\") on node \"ip-172-31-22-21\" DevicePath \"\"" May 27 17:04:01.474814 kubelet[3300]: I0527 17:04:01.474549 3300 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-host-proc-sys-net\") on node \"ip-172-31-22-21\" DevicePath \"\"" May 27 17:04:01.474814 kubelet[3300]: I0527 17:04:01.474573 3300 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cilium-run\") on node \"ip-172-31-22-21\" DevicePath \"\"" May 27 17:04:01.474814 kubelet[3300]: I0527 17:04:01.474626 3300 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-hostproc\") on node \"ip-172-31-22-21\" DevicePath \"\"" May 27 17:04:01.474814 kubelet[3300]: I0527 17:04:01.474657 3300 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cw6t\" (UniqueName: \"kubernetes.io/projected/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-kube-api-access-4cw6t\") on node \"ip-172-31-22-21\" DevicePath \"\"" May 27 17:04:01.474814 kubelet[3300]: I0527 17:04:01.474679 3300 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cilium-cgroup\") on node \"ip-172-31-22-21\" DevicePath \"\"" May 27 17:04:01.474814 kubelet[3300]: I0527 17:04:01.474734 3300 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-bpf-maps\") on node \"ip-172-31-22-21\" DevicePath \"\"" May 27 17:04:01.474814 kubelet[3300]: I0527 17:04:01.474757 3300 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-clustermesh-secrets\") on node \"ip-172-31-22-21\" DevicePath \"\"" May 27 17:04:01.474814 kubelet[3300]: I0527 17:04:01.474809 3300 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-etc-cni-netd\") on node \"ip-172-31-22-21\" DevicePath \"\"" May 27 17:04:01.475356 kubelet[3300]: I0527 17:04:01.474834 3300 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-xtables-lock\") on node \"ip-172-31-22-21\" DevicePath \"\"" May 27 17:04:01.475356 kubelet[3300]: I0527 17:04:01.474855 3300 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cilium-config-path\") on node \"ip-172-31-22-21\" DevicePath \"\"" May 27 17:04:01.475356 kubelet[3300]: I0527 17:04:01.474901 3300 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/68261b05-f4d8-439a-b6a7-0fb0c5b4299f-cni-path\") on node \"ip-172-31-22-21\" DevicePath \"\"" May 27 17:04:01.652832 systemd[1]: Removed slice kubepods-burstable-pod68261b05_f4d8_439a_b6a7_0fb0c5b4299f.slice - libcontainer container kubepods-burstable-pod68261b05_f4d8_439a_b6a7_0fb0c5b4299f.slice. May 27 17:04:01.654122 systemd[1]: kubepods-burstable-pod68261b05_f4d8_439a_b6a7_0fb0c5b4299f.slice: Consumed 16.340s CPU time, 125.5M memory peak, 136K read from disk, 12.9M written to disk. May 27 17:04:01.665153 kubelet[3300]: I0527 17:04:01.664589 3300 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42aec18f-1188-4e9c-a224-02d3af835120" path="/var/lib/kubelet/pods/42aec18f-1188-4e9c-a224-02d3af835120/volumes" May 27 17:04:01.904939 systemd[1]: var-lib-kubelet-pods-42aec18f\x2d1188\x2d4e9c\x2da224\x2d02d3af835120-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d77v4x.mount: Deactivated successfully. May 27 17:04:01.905847 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea-shm.mount: Deactivated successfully. May 27 17:04:01.906253 systemd[1]: var-lib-kubelet-pods-68261b05\x2df4d8\x2d439a\x2db6a7\x2d0fb0c5b4299f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4cw6t.mount: Deactivated successfully. May 27 17:04:01.906595 systemd[1]: var-lib-kubelet-pods-68261b05\x2df4d8\x2d439a\x2db6a7\x2d0fb0c5b4299f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 27 17:04:01.906916 systemd[1]: var-lib-kubelet-pods-68261b05\x2df4d8\x2d439a\x2db6a7\x2d0fb0c5b4299f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 27 17:04:02.037485 kubelet[3300]: E0527 17:04:02.037429 3300 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 27 17:04:02.665525 sshd[5043]: Connection closed by 139.178.68.195 port 50046
May 27 17:04:02.666463 sshd-session[5041]: pam_unix(sshd:session): session closed for user core
May 27 17:04:02.674792 systemd[1]: sshd@24-172.31.22.21:22-139.178.68.195:50046.service: Deactivated successfully.
May 27 17:04:02.675787 systemd-logind[1991]: Session 25 logged out. Waiting for processes to exit.
May 27 17:04:02.678898 systemd[1]: session-25.scope: Deactivated successfully.
May 27 17:04:02.679673 systemd[1]: session-25.scope: Consumed 3.284s CPU time, 23.7M memory peak.
May 27 17:04:02.685526 systemd-logind[1991]: Removed session 25.
May 27 17:04:02.702505 systemd[1]: Started sshd@25-172.31.22.21:22-139.178.68.195:50054.service - OpenSSH per-connection server daemon (139.178.68.195:50054).
May 27 17:04:02.901533 sshd[5191]: Accepted publickey for core from 139.178.68.195 port 50054 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:04:02.904390 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:04:02.915518 systemd-logind[1991]: New session 26 of user core.
May 27 17:04:02.922378 systemd[1]: Started session-26.scope - Session 26 of User core.
May 27 17:04:03.112477 ntpd[1985]: Deleting interface #11 lxc_health, fe80::d4bf:a7ff:fecb:5c3f%8#123, interface stats: received=0, sent=0, dropped=0, active_time=78 secs
May 27 17:04:03.112964 ntpd[1985]: 27 May 17:04:03 ntpd[1985]: Deleting interface #11 lxc_health, fe80::d4bf:a7ff:fecb:5c3f%8#123, interface stats: received=0, sent=0, dropped=0, active_time=78 secs
May 27 17:04:03.662463 kubelet[3300]: I0527 17:04:03.662372 3300 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68261b05-f4d8-439a-b6a7-0fb0c5b4299f" path="/var/lib/kubelet/pods/68261b05-f4d8-439a-b6a7-0fb0c5b4299f/volumes"
May 27 17:04:04.146265 kubelet[3300]: I0527 17:04:04.146191 3300 setters.go:600] "Node became not ready" node="ip-172-31-22-21" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-27T17:04:04Z","lastTransitionTime":"2025-05-27T17:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 27 17:04:05.541835 sshd[5193]: Connection closed by 139.178.68.195 port 50054
May 27 17:04:05.542500 sshd-session[5191]: pam_unix(sshd:session): session closed for user core
May 27 17:04:05.558936 systemd[1]: sshd@25-172.31.22.21:22-139.178.68.195:50054.service: Deactivated successfully.
May 27 17:04:05.561590 kubelet[3300]: E0527 17:04:05.561437 3300 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68261b05-f4d8-439a-b6a7-0fb0c5b4299f" containerName="mount-cgroup"
May 27 17:04:05.561590 kubelet[3300]: E0527 17:04:05.561537 3300 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="42aec18f-1188-4e9c-a224-02d3af835120" containerName="cilium-operator"
May 27 17:04:05.561590 kubelet[3300]: E0527 17:04:05.561556 3300 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68261b05-f4d8-439a-b6a7-0fb0c5b4299f" containerName="apply-sysctl-overwrites"
May 27 17:04:05.562810 kubelet[3300]: E0527 17:04:05.562130 3300 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68261b05-f4d8-439a-b6a7-0fb0c5b4299f" containerName="mount-bpf-fs"
May 27 17:04:05.562810 kubelet[3300]: E0527 17:04:05.562162 3300 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68261b05-f4d8-439a-b6a7-0fb0c5b4299f" containerName="clean-cilium-state"
May 27 17:04:05.562810 kubelet[3300]: E0527 17:04:05.562223 3300 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="68261b05-f4d8-439a-b6a7-0fb0c5b4299f" containerName="cilium-agent"
May 27 17:04:05.562810 kubelet[3300]: I0527 17:04:05.562328 3300 memory_manager.go:354] "RemoveStaleState removing state" podUID="42aec18f-1188-4e9c-a224-02d3af835120" containerName="cilium-operator"
May 27 17:04:05.562810 kubelet[3300]: I0527 17:04:05.562349 3300 memory_manager.go:354] "RemoveStaleState removing state" podUID="68261b05-f4d8-439a-b6a7-0fb0c5b4299f" containerName="cilium-agent"
May 27 17:04:05.568388 systemd[1]: session-26.scope: Deactivated successfully.
May 27 17:04:05.570175 systemd[1]: session-26.scope: Consumed 2.384s CPU time, 23.6M memory peak.
May 27 17:04:05.577173 systemd-logind[1991]: Session 26 logged out. Waiting for processes to exit.
May 27 17:04:05.617670 systemd[1]: Started sshd@26-172.31.22.21:22-139.178.68.195:59288.service - OpenSSH per-connection server daemon (139.178.68.195:59288).
May 27 17:04:05.621580 systemd-logind[1991]: Removed session 26.
May 27 17:04:05.642147 systemd[1]: Created slice kubepods-burstable-pod5e760f61_7dd9_4faa_bc6f_8ffba0fd56fb.slice - libcontainer container kubepods-burstable-pod5e760f61_7dd9_4faa_bc6f_8ffba0fd56fb.slice.
May 27 17:04:05.711079 kubelet[3300]: I0527 17:04:05.710727 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb-etc-cni-netd\") pod \"cilium-gsxn2\" (UID: \"5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb\") " pod="kube-system/cilium-gsxn2"
May 27 17:04:05.711079 kubelet[3300]: I0527 17:04:05.710806 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb-cilium-cgroup\") pod \"cilium-gsxn2\" (UID: \"5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb\") " pod="kube-system/cilium-gsxn2"
May 27 17:04:05.711079 kubelet[3300]: I0527 17:04:05.710846 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb-cilium-ipsec-secrets\") pod \"cilium-gsxn2\" (UID: \"5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb\") " pod="kube-system/cilium-gsxn2"
May 27 17:04:05.711079 kubelet[3300]: I0527 17:04:05.710885 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb-bpf-maps\") pod \"cilium-gsxn2\" (UID: \"5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb\") " pod="kube-system/cilium-gsxn2"
May 27 17:04:05.711079 kubelet[3300]: I0527 17:04:05.710930 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb-host-proc-sys-kernel\") pod \"cilium-gsxn2\" (UID: \"5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb\") " pod="kube-system/cilium-gsxn2"
May 27 17:04:05.711433 kubelet[3300]: I0527 17:04:05.710967 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbcsb\" (UniqueName: \"kubernetes.io/projected/5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb-kube-api-access-pbcsb\") pod \"cilium-gsxn2\" (UID: \"5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb\") " pod="kube-system/cilium-gsxn2"
May 27 17:04:05.711433 kubelet[3300]: I0527 17:04:05.711007 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb-hostproc\") pod \"cilium-gsxn2\" (UID: \"5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb\") " pod="kube-system/cilium-gsxn2"
May 27 17:04:05.712125 kubelet[3300]: I0527 17:04:05.711635 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb-xtables-lock\") pod \"cilium-gsxn2\" (UID: \"5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb\") " pod="kube-system/cilium-gsxn2"
May 27 17:04:05.712270 kubelet[3300]: I0527 17:04:05.712215 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb-cilium-run\") pod \"cilium-gsxn2\" (UID: \"5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb\") " pod="kube-system/cilium-gsxn2"
May 27 17:04:05.712369 kubelet[3300]: I0527 17:04:05.712302 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb-clustermesh-secrets\") pod \"cilium-gsxn2\" (UID: \"5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb\") " pod="kube-system/cilium-gsxn2"
May 27 17:04:05.712369 kubelet[3300]: I0527 17:04:05.712343 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb-host-proc-sys-net\") pod \"cilium-gsxn2\" (UID: \"5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb\") " pod="kube-system/cilium-gsxn2"
May 27 17:04:05.712472 kubelet[3300]: I0527 17:04:05.712385 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb-hubble-tls\") pod \"cilium-gsxn2\" (UID: \"5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb\") " pod="kube-system/cilium-gsxn2"
May 27 17:04:05.712472 kubelet[3300]: I0527 17:04:05.712452 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb-lib-modules\") pod \"cilium-gsxn2\" (UID: \"5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb\") " pod="kube-system/cilium-gsxn2"
May 27 17:04:05.712564 kubelet[3300]: I0527 17:04:05.712498 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb-cilium-config-path\") pod \"cilium-gsxn2\" (UID: \"5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb\") " pod="kube-system/cilium-gsxn2"
May 27 17:04:05.712564 kubelet[3300]: I0527 17:04:05.712537 3300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb-cni-path\") pod \"cilium-gsxn2\" (UID: \"5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb\") " pod="kube-system/cilium-gsxn2"
May 27 17:04:05.862076 sshd[5204]: Accepted publickey for core from 139.178.68.195 port 59288 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:04:05.866816 sshd-session[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:04:05.882614 systemd-logind[1991]: New session 27 of user core.
May 27 17:04:05.895396 systemd[1]: Started session-27.scope - Session 27 of User core.
May 27 17:04:05.962417 containerd[2017]: time="2025-05-27T17:04:05.962359729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gsxn2,Uid:5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb,Namespace:kube-system,Attempt:0,}"
May 27 17:04:05.999376 containerd[2017]: time="2025-05-27T17:04:05.999293377Z" level=info msg="connecting to shim d7fa4fece8eafaeced0563b39ac7d5914f1fdd6ff70f38c4a276ead0b2be9eed" address="unix:///run/containerd/s/12e20d179d9f1f203a3c8ac7d699155a3763a99e80cd046dbffc87b264418e02" namespace=k8s.io protocol=ttrpc version=3
May 27 17:04:06.032237 sshd[5210]: Connection closed by 139.178.68.195 port 59288
May 27 17:04:06.033347 sshd-session[5204]: pam_unix(sshd:session): session closed for user core
May 27 17:04:06.044344 systemd[1]: sshd@26-172.31.22.21:22-139.178.68.195:59288.service: Deactivated successfully.
May 27 17:04:06.051015 systemd[1]: session-27.scope: Deactivated successfully.
May 27 17:04:06.053497 systemd-logind[1991]: Session 27 logged out. Waiting for processes to exit.
May 27 17:04:06.079398 systemd[1]: Started cri-containerd-d7fa4fece8eafaeced0563b39ac7d5914f1fdd6ff70f38c4a276ead0b2be9eed.scope - libcontainer container d7fa4fece8eafaeced0563b39ac7d5914f1fdd6ff70f38c4a276ead0b2be9eed.
May 27 17:04:06.084513 systemd[1]: Started sshd@27-172.31.22.21:22-139.178.68.195:59304.service - OpenSSH per-connection server daemon (139.178.68.195:59304).
May 27 17:04:06.091722 systemd-logind[1991]: Removed session 27.
May 27 17:04:06.150820 containerd[2017]: time="2025-05-27T17:04:06.149876626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gsxn2,Uid:5e760f61-7dd9-4faa-bc6f-8ffba0fd56fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7fa4fece8eafaeced0563b39ac7d5914f1fdd6ff70f38c4a276ead0b2be9eed\""
May 27 17:04:06.163069 containerd[2017]: time="2025-05-27T17:04:06.162693790Z" level=info msg="CreateContainer within sandbox \"d7fa4fece8eafaeced0563b39ac7d5914f1fdd6ff70f38c4a276ead0b2be9eed\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 27 17:04:06.185732 containerd[2017]: time="2025-05-27T17:04:06.184673446Z" level=info msg="Container 422899a143202809f090f673d73e0cf4ad1facac11b26dafcb61af088827d837: CDI devices from CRI Config.CDIDevices: []"
May 27 17:04:06.203701 containerd[2017]: time="2025-05-27T17:04:06.203483254Z" level=info msg="CreateContainer within sandbox \"d7fa4fece8eafaeced0563b39ac7d5914f1fdd6ff70f38c4a276ead0b2be9eed\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"422899a143202809f090f673d73e0cf4ad1facac11b26dafcb61af088827d837\""
May 27 17:04:06.207349 containerd[2017]: time="2025-05-27T17:04:06.207258382Z" level=info msg="StartContainer for \"422899a143202809f090f673d73e0cf4ad1facac11b26dafcb61af088827d837\""
May 27 17:04:06.211495 containerd[2017]: time="2025-05-27T17:04:06.211412218Z" level=info msg="connecting to shim 422899a143202809f090f673d73e0cf4ad1facac11b26dafcb61af088827d837" address="unix:///run/containerd/s/12e20d179d9f1f203a3c8ac7d699155a3763a99e80cd046dbffc87b264418e02" protocol=ttrpc version=3
May 27 17:04:06.248403 systemd[1]: Started cri-containerd-422899a143202809f090f673d73e0cf4ad1facac11b26dafcb61af088827d837.scope - libcontainer container 422899a143202809f090f673d73e0cf4ad1facac11b26dafcb61af088827d837.
May 27 17:04:06.314999 sshd[5249]: Accepted publickey for core from 139.178.68.195 port 59304 ssh2: RSA SHA256:zVfNVfC8v+Xfrinjy7jCJIf2+ESFz7qymQmXWOreRws
May 27 17:04:06.322592 containerd[2017]: time="2025-05-27T17:04:06.322449671Z" level=info msg="StartContainer for \"422899a143202809f090f673d73e0cf4ad1facac11b26dafcb61af088827d837\" returns successfully"
May 27 17:04:06.323339 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:04:06.340898 systemd-logind[1991]: New session 28 of user core.
May 27 17:04:06.346357 systemd[1]: Started session-28.scope - Session 28 of User core.
May 27 17:04:06.347352 systemd[1]: cri-containerd-422899a143202809f090f673d73e0cf4ad1facac11b26dafcb61af088827d837.scope: Deactivated successfully.
May 27 17:04:06.359712 containerd[2017]: time="2025-05-27T17:04:06.359008223Z" level=info msg="TaskExit event in podsandbox handler container_id:\"422899a143202809f090f673d73e0cf4ad1facac11b26dafcb61af088827d837\" id:\"422899a143202809f090f673d73e0cf4ad1facac11b26dafcb61af088827d837\" pid:5276 exited_at:{seconds:1748365446 nanos:357658859}"
May 27 17:04:06.361026 containerd[2017]: time="2025-05-27T17:04:06.359103731Z" level=info msg="received exit event container_id:\"422899a143202809f090f673d73e0cf4ad1facac11b26dafcb61af088827d837\" id:\"422899a143202809f090f673d73e0cf4ad1facac11b26dafcb61af088827d837\" pid:5276 exited_at:{seconds:1748365446 nanos:357658859}"
May 27 17:04:07.038932 kubelet[3300]: E0527 17:04:07.038862 3300 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 27 17:04:07.384891 containerd[2017]: time="2025-05-27T17:04:07.384763920Z" level=info msg="CreateContainer within sandbox \"d7fa4fece8eafaeced0563b39ac7d5914f1fdd6ff70f38c4a276ead0b2be9eed\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 27 17:04:07.408153 containerd[2017]: time="2025-05-27T17:04:07.406436712Z" level=info msg="Container 06a3d032596e920cd73a5b81fbc7a5b70618117171232df055426763d5b65120: CDI devices from CRI Config.CDIDevices: []"
May 27 17:04:07.427840 containerd[2017]: time="2025-05-27T17:04:07.427783704Z" level=info msg="CreateContainer within sandbox \"d7fa4fece8eafaeced0563b39ac7d5914f1fdd6ff70f38c4a276ead0b2be9eed\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"06a3d032596e920cd73a5b81fbc7a5b70618117171232df055426763d5b65120\""
May 27 17:04:07.429232 containerd[2017]: time="2025-05-27T17:04:07.429021324Z" level=info msg="StartContainer for \"06a3d032596e920cd73a5b81fbc7a5b70618117171232df055426763d5b65120\""
May 27 17:04:07.431707 containerd[2017]: time="2025-05-27T17:04:07.431596416Z" level=info msg="connecting to shim 06a3d032596e920cd73a5b81fbc7a5b70618117171232df055426763d5b65120" address="unix:///run/containerd/s/12e20d179d9f1f203a3c8ac7d699155a3763a99e80cd046dbffc87b264418e02" protocol=ttrpc version=3
May 27 17:04:07.474401 systemd[1]: Started cri-containerd-06a3d032596e920cd73a5b81fbc7a5b70618117171232df055426763d5b65120.scope - libcontainer container 06a3d032596e920cd73a5b81fbc7a5b70618117171232df055426763d5b65120.
May 27 17:04:07.544428 containerd[2017]: time="2025-05-27T17:04:07.544358629Z" level=info msg="StartContainer for \"06a3d032596e920cd73a5b81fbc7a5b70618117171232df055426763d5b65120\" returns successfully"
May 27 17:04:07.553997 systemd[1]: cri-containerd-06a3d032596e920cd73a5b81fbc7a5b70618117171232df055426763d5b65120.scope: Deactivated successfully.
May 27 17:04:07.557425 containerd[2017]: time="2025-05-27T17:04:07.557354905Z" level=info msg="received exit event container_id:\"06a3d032596e920cd73a5b81fbc7a5b70618117171232df055426763d5b65120\" id:\"06a3d032596e920cd73a5b81fbc7a5b70618117171232df055426763d5b65120\" pid:5326 exited_at:{seconds:1748365447 nanos:556682017}"
May 27 17:04:07.557988 containerd[2017]: time="2025-05-27T17:04:07.557909317Z" level=info msg="TaskExit event in podsandbox handler container_id:\"06a3d032596e920cd73a5b81fbc7a5b70618117171232df055426763d5b65120\" id:\"06a3d032596e920cd73a5b81fbc7a5b70618117171232df055426763d5b65120\" pid:5326 exited_at:{seconds:1748365447 nanos:556682017}"
May 27 17:04:07.832178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06a3d032596e920cd73a5b81fbc7a5b70618117171232df055426763d5b65120-rootfs.mount: Deactivated successfully.
May 27 17:04:08.389566 containerd[2017]: time="2025-05-27T17:04:08.389492149Z" level=info msg="CreateContainer within sandbox \"d7fa4fece8eafaeced0563b39ac7d5914f1fdd6ff70f38c4a276ead0b2be9eed\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 17:04:08.417073 containerd[2017]: time="2025-05-27T17:04:08.416970469Z" level=info msg="Container c0afff665b26bc73c7c2ce11fd310687ba93c462eca749d288773120947547c1: CDI devices from CRI Config.CDIDevices: []"
May 27 17:04:08.441630 containerd[2017]: time="2025-05-27T17:04:08.441528241Z" level=info msg="CreateContainer within sandbox \"d7fa4fece8eafaeced0563b39ac7d5914f1fdd6ff70f38c4a276ead0b2be9eed\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c0afff665b26bc73c7c2ce11fd310687ba93c462eca749d288773120947547c1\""
May 27 17:04:08.444281 containerd[2017]: time="2025-05-27T17:04:08.444179137Z" level=info msg="StartContainer for \"c0afff665b26bc73c7c2ce11fd310687ba93c462eca749d288773120947547c1\""
May 27 17:04:08.447356 containerd[2017]: time="2025-05-27T17:04:08.447224653Z" level=info msg="connecting to shim c0afff665b26bc73c7c2ce11fd310687ba93c462eca749d288773120947547c1" address="unix:///run/containerd/s/12e20d179d9f1f203a3c8ac7d699155a3763a99e80cd046dbffc87b264418e02" protocol=ttrpc version=3
May 27 17:04:08.498359 systemd[1]: Started cri-containerd-c0afff665b26bc73c7c2ce11fd310687ba93c462eca749d288773120947547c1.scope - libcontainer container c0afff665b26bc73c7c2ce11fd310687ba93c462eca749d288773120947547c1.
May 27 17:04:08.629423 containerd[2017]: time="2025-05-27T17:04:08.629343086Z" level=info msg="StartContainer for \"c0afff665b26bc73c7c2ce11fd310687ba93c462eca749d288773120947547c1\" returns successfully"
May 27 17:04:08.633525 systemd[1]: cri-containerd-c0afff665b26bc73c7c2ce11fd310687ba93c462eca749d288773120947547c1.scope: Deactivated successfully.
May 27 17:04:08.641929 containerd[2017]: time="2025-05-27T17:04:08.641753294Z" level=info msg="received exit event container_id:\"c0afff665b26bc73c7c2ce11fd310687ba93c462eca749d288773120947547c1\" id:\"c0afff665b26bc73c7c2ce11fd310687ba93c462eca749d288773120947547c1\" pid:5372 exited_at:{seconds:1748365448 nanos:641399258}"
May 27 17:04:08.644261 containerd[2017]: time="2025-05-27T17:04:08.643709426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0afff665b26bc73c7c2ce11fd310687ba93c462eca749d288773120947547c1\" id:\"c0afff665b26bc73c7c2ce11fd310687ba93c462eca749d288773120947547c1\" pid:5372 exited_at:{seconds:1748365448 nanos:641399258}"
May 27 17:04:08.656810 kubelet[3300]: E0527 17:04:08.656515 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-tjnqd" podUID="da77be3c-5634-46af-bff2-f4f8c8736300"
May 27 17:04:08.718749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0afff665b26bc73c7c2ce11fd310687ba93c462eca749d288773120947547c1-rootfs.mount: Deactivated successfully.
May 27 17:04:09.403522 containerd[2017]: time="2025-05-27T17:04:09.403339898Z" level=info msg="CreateContainer within sandbox \"d7fa4fece8eafaeced0563b39ac7d5914f1fdd6ff70f38c4a276ead0b2be9eed\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 17:04:09.434829 containerd[2017]: time="2025-05-27T17:04:09.434427746Z" level=info msg="Container 58bd879e0603519e3a253b9ad3b36cfbec819e15ac91a32fc43c7dd6de34b80f: CDI devices from CRI Config.CDIDevices: []"
May 27 17:04:09.438942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount720737652.mount: Deactivated successfully.
May 27 17:04:09.457530 containerd[2017]: time="2025-05-27T17:04:09.457327622Z" level=info msg="CreateContainer within sandbox \"d7fa4fece8eafaeced0563b39ac7d5914f1fdd6ff70f38c4a276ead0b2be9eed\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"58bd879e0603519e3a253b9ad3b36cfbec819e15ac91a32fc43c7dd6de34b80f\""
May 27 17:04:09.458757 containerd[2017]: time="2025-05-27T17:04:09.458653226Z" level=info msg="StartContainer for \"58bd879e0603519e3a253b9ad3b36cfbec819e15ac91a32fc43c7dd6de34b80f\""
May 27 17:04:09.461748 containerd[2017]: time="2025-05-27T17:04:09.461647766Z" level=info msg="connecting to shim 58bd879e0603519e3a253b9ad3b36cfbec819e15ac91a32fc43c7dd6de34b80f" address="unix:///run/containerd/s/12e20d179d9f1f203a3c8ac7d699155a3763a99e80cd046dbffc87b264418e02" protocol=ttrpc version=3
May 27 17:04:09.506397 systemd[1]: Started cri-containerd-58bd879e0603519e3a253b9ad3b36cfbec819e15ac91a32fc43c7dd6de34b80f.scope - libcontainer container 58bd879e0603519e3a253b9ad3b36cfbec819e15ac91a32fc43c7dd6de34b80f.
May 27 17:04:09.561421 systemd[1]: cri-containerd-58bd879e0603519e3a253b9ad3b36cfbec819e15ac91a32fc43c7dd6de34b80f.scope: Deactivated successfully.
May 27 17:04:09.567692 containerd[2017]: time="2025-05-27T17:04:09.567594675Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58bd879e0603519e3a253b9ad3b36cfbec819e15ac91a32fc43c7dd6de34b80f\" id:\"58bd879e0603519e3a253b9ad3b36cfbec819e15ac91a32fc43c7dd6de34b80f\" pid:5411 exited_at:{seconds:1748365449 nanos:566904123}"
May 27 17:04:09.568171 containerd[2017]: time="2025-05-27T17:04:09.567956343Z" level=info msg="received exit event container_id:\"58bd879e0603519e3a253b9ad3b36cfbec819e15ac91a32fc43c7dd6de34b80f\" id:\"58bd879e0603519e3a253b9ad3b36cfbec819e15ac91a32fc43c7dd6de34b80f\" pid:5411 exited_at:{seconds:1748365449 nanos:566904123}"
May 27 17:04:09.584528 containerd[2017]: time="2025-05-27T17:04:09.584468799Z" level=info msg="StartContainer for \"58bd879e0603519e3a253b9ad3b36cfbec819e15ac91a32fc43c7dd6de34b80f\" returns successfully"
May 27 17:04:09.609521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58bd879e0603519e3a253b9ad3b36cfbec819e15ac91a32fc43c7dd6de34b80f-rootfs.mount: Deactivated successfully.
May 27 17:04:10.414157 containerd[2017]: time="2025-05-27T17:04:10.413573355Z" level=info msg="CreateContainer within sandbox \"d7fa4fece8eafaeced0563b39ac7d5914f1fdd6ff70f38c4a276ead0b2be9eed\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 17:04:10.441081 containerd[2017]: time="2025-05-27T17:04:10.440829843Z" level=info msg="Container 83de9d814b897b66bacbc097cccc6311387104dec34a851b158799e5ef46092b: CDI devices from CRI Config.CDIDevices: []"
May 27 17:04:10.463011 containerd[2017]: time="2025-05-27T17:04:10.462931407Z" level=info msg="CreateContainer within sandbox \"d7fa4fece8eafaeced0563b39ac7d5914f1fdd6ff70f38c4a276ead0b2be9eed\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"83de9d814b897b66bacbc097cccc6311387104dec34a851b158799e5ef46092b\""
May 27 17:04:10.464852 containerd[2017]: time="2025-05-27T17:04:10.464798847Z" level=info msg="StartContainer for \"83de9d814b897b66bacbc097cccc6311387104dec34a851b158799e5ef46092b\""
May 27 17:04:10.468588 containerd[2017]: time="2025-05-27T17:04:10.468451239Z" level=info msg="connecting to shim 83de9d814b897b66bacbc097cccc6311387104dec34a851b158799e5ef46092b" address="unix:///run/containerd/s/12e20d179d9f1f203a3c8ac7d699155a3763a99e80cd046dbffc87b264418e02" protocol=ttrpc version=3
May 27 17:04:10.518401 systemd[1]: Started cri-containerd-83de9d814b897b66bacbc097cccc6311387104dec34a851b158799e5ef46092b.scope - libcontainer container 83de9d814b897b66bacbc097cccc6311387104dec34a851b158799e5ef46092b.
May 27 17:04:10.596647 containerd[2017]: time="2025-05-27T17:04:10.596583556Z" level=info msg="StartContainer for \"83de9d814b897b66bacbc097cccc6311387104dec34a851b158799e5ef46092b\" returns successfully"
May 27 17:04:10.657161 kubelet[3300]: E0527 17:04:10.656510 3300 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7c65d6cfc9-tjnqd" podUID="da77be3c-5634-46af-bff2-f4f8c8736300"
May 27 17:04:10.739238 containerd[2017]: time="2025-05-27T17:04:10.737664677Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83de9d814b897b66bacbc097cccc6311387104dec34a851b158799e5ef46092b\" id:\"f0fced94ad28bb852ab9d8a9fc50856ae3aa441015aa02cd935c69791e2474cb\" pid:5478 exited_at:{seconds:1748365450 nanos:735490901}"
May 27 17:04:11.698435 containerd[2017]: time="2025-05-27T17:04:11.696727433Z" level=info msg="StopPodSandbox for \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\""
May 27 17:04:11.698435 containerd[2017]: time="2025-05-27T17:04:11.697470917Z" level=info msg="TearDown network for sandbox \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" successfully"
May 27 17:04:11.698435 containerd[2017]: time="2025-05-27T17:04:11.697514393Z" level=info msg="StopPodSandbox for \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" returns successfully"
May 27 17:04:11.701095 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 27 17:04:11.702698 containerd[2017]: time="2025-05-27T17:04:11.702648677Z" level=info msg="RemovePodSandbox for \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\""
May 27 17:04:11.703500 containerd[2017]: time="2025-05-27T17:04:11.703014377Z" level=info msg="Forcibly stopping sandbox \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\""
May 27 17:04:11.704764 containerd[2017]: time="2025-05-27T17:04:11.704679221Z" level=info msg="TearDown network for sandbox \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" successfully"
May 27 17:04:11.710762 containerd[2017]: time="2025-05-27T17:04:11.710314265Z" level=info msg="Ensure that sandbox 690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea in task-service has been cleanup successfully"
May 27 17:04:11.718656 containerd[2017]: time="2025-05-27T17:04:11.718527714Z" level=info msg="RemovePodSandbox \"690420d5c0888ebb5e08fce778678b610198450cab1d1902bd3a1784b75396ea\" returns successfully"
May 27 17:04:11.723109 containerd[2017]: time="2025-05-27T17:04:11.722325630Z" level=info msg="StopPodSandbox for \"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\""
May 27 17:04:11.723109 containerd[2017]: time="2025-05-27T17:04:11.722625570Z" level=info msg="TearDown network for sandbox \"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\" successfully"
May 27 17:04:11.723109 containerd[2017]: time="2025-05-27T17:04:11.722659698Z" level=info msg="StopPodSandbox for \"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\" returns successfully"
May 27 17:04:11.725244 containerd[2017]: time="2025-05-27T17:04:11.725167290Z" level=info msg="RemovePodSandbox for \"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\""
May 27 17:04:11.725722 containerd[2017]: time="2025-05-27T17:04:11.725547126Z" level=info msg="Forcibly stopping sandbox \"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\""
May 27 17:04:11.726458 containerd[2017]: time="2025-05-27T17:04:11.726403446Z" level=info msg="TearDown network for sandbox \"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\" successfully"
May 27 17:04:11.733830 containerd[2017]: time="2025-05-27T17:04:11.733142274Z" level=info msg="Ensure that sandbox 1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03 in task-service has been cleanup successfully"
May 27 17:04:11.752759 containerd[2017]: time="2025-05-27T17:04:11.752671578Z" level=info msg="RemovePodSandbox \"1e224d0c22c68f327014affad7f4eebf93ad35391cf6c34ea60bb88c94a91f03\" returns successfully"
May 27 17:04:12.958653 containerd[2017]: time="2025-05-27T17:04:12.957826436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83de9d814b897b66bacbc097cccc6311387104dec34a851b158799e5ef46092b\" id:\"6904466d565660c781655bbc9773b48a8060533fd6864e8f6c46f92c4d36d456\" pid:5558 exit_status:1 exited_at:{seconds:1748365452 nanos:956771384}"
May 27 17:04:15.244473 containerd[2017]: time="2025-05-27T17:04:15.244236607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83de9d814b897b66bacbc097cccc6311387104dec34a851b158799e5ef46092b\" id:\"637d28338f85db99e491aae3782574fdebc804e9c55b3bd810272aa22bd4e351\" pid:5779 exit_status:1 exited_at:{seconds:1748365455 nanos:243137803}"
May 27 17:04:16.378140 systemd-networkd[1859]: lxc_health: Link UP
May 27 17:04:16.382637 systemd-networkd[1859]: lxc_health: Gained carrier
May 27 17:04:16.387375 (udev-worker)[5995]: Network interface NamePolicy= disabled on kernel command line.
May 27 17:04:17.770427 containerd[2017]: time="2025-05-27T17:04:17.770331504Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83de9d814b897b66bacbc097cccc6311387104dec34a851b158799e5ef46092b\" id:\"d8df2ab3e9587b862f4811757b525b463642a9dae371ddea6be789439f479aae\" pid:6024 exited_at:{seconds:1748365457 nanos:767904012}"
May 27 17:04:17.780848 kubelet[3300]: E0527 17:04:17.780781 3300 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49834->127.0.0.1:45277: write tcp 127.0.0.1:49834->127.0.0.1:45277: write: broken pipe
May 27 17:04:18.011707 kubelet[3300]: I0527 17:04:18.010011 3300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gsxn2" podStartSLOduration=13.009984549 podStartE2EDuration="13.009984549s" podCreationTimestamp="2025-05-27 17:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:04:11.523375457 +0000 UTC m=+120.152884846" watchObservedRunningTime="2025-05-27 17:04:18.009984549 +0000 UTC m=+126.639493962"
May 27 17:04:18.068336 systemd-networkd[1859]: lxc_health: Gained IPv6LL
May 27 17:04:20.050088 containerd[2017]: time="2025-05-27T17:04:20.049165727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83de9d814b897b66bacbc097cccc6311387104dec34a851b158799e5ef46092b\" id:\"2bfff4303273c8f7c17843511ad2bb19d399ffc2ea8fe3492ec96ce158691e36\" pid:6057 exited_at:{seconds:1748365460 nanos:41265503}"
May 27 17:04:20.112399 ntpd[1985]: Listen normally on 14 lxc_health [fe80::6ce9:d4ff:fec8:fd11%14]:123
May 27 17:04:20.112892 ntpd[1985]: 27 May 17:04:20 ntpd[1985]: Listen normally on 14 lxc_health [fe80::6ce9:d4ff:fec8:fd11%14]:123
May 27 17:04:22.349579 containerd[2017]: time="2025-05-27T17:04:22.349493450Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83de9d814b897b66bacbc097cccc6311387104dec34a851b158799e5ef46092b\" id:\"930b976ab9e9ddb05ad6381d3ef6cb030c3f654a567af267b55acb304830f379\" pid:6081 exited_at:{seconds:1748365462 nanos:345978530}"
May 27 17:04:22.386311 sshd[5295]: Connection closed by 139.178.68.195 port 59304
May 27 17:04:22.388376 sshd-session[5249]: pam_unix(sshd:session): session closed for user core
May 27 17:04:22.399811 systemd[1]: sshd@27-172.31.22.21:22-139.178.68.195:59304.service: Deactivated successfully.
May 27 17:04:22.404861 systemd[1]: session-28.scope: Deactivated successfully.
May 27 17:04:22.409501 systemd-logind[1991]: Session 28 logged out. Waiting for processes to exit.
May 27 17:04:22.414567 systemd-logind[1991]: Removed session 28.