May 9 23:57:51.202621 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
May 9 23:57:51.202669 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 9 22:39:45 -00 2025
May 9 23:57:51.202696 kernel: KASLR disabled due to lack of seed
May 9 23:57:51.202714 kernel: efi: EFI v2.7 by EDK II
May 9 23:57:51.202730 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b000a98 MEMRESERVE=0x7852ee18
May 9 23:57:51.202746 kernel: ACPI: Early table checksum verification disabled
May 9 23:57:51.202764 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
May 9 23:57:51.202780 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
May 9 23:57:51.202796 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 9 23:57:51.202811 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
May 9 23:57:51.202832 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 9 23:57:51.202848 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
May 9 23:57:51.202863 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
May 9 23:57:51.202879 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
May 9 23:57:51.202897 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 9 23:57:51.204774 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
May 9 23:57:51.204802 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
May 9 23:57:51.204819 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
May 9 23:57:51.204837 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
May 9 23:57:51.204854 kernel: printk: bootconsole [uart0] enabled
May 9 23:57:51.204870 kernel: NUMA: Failed to initialise from firmware
May 9 23:57:51.204888 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
May 9 23:57:51.204929 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
May 9 23:57:51.204957 kernel: Zone ranges:
May 9 23:57:51.204975 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 9 23:57:51.204992 kernel: DMA32 empty
May 9 23:57:51.205016 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
May 9 23:57:51.205035 kernel: Movable zone start for each node
May 9 23:57:51.205052 kernel: Early memory node ranges
May 9 23:57:51.205071 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
May 9 23:57:51.205088 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
May 9 23:57:51.205106 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
May 9 23:57:51.205124 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
May 9 23:57:51.205142 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
May 9 23:57:51.205159 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
May 9 23:57:51.205177 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
May 9 23:57:51.205194 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
May 9 23:57:51.205212 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
May 9 23:57:51.205236 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
May 9 23:57:51.205254 kernel: psci: probing for conduit method from ACPI.
May 9 23:57:51.205279 kernel: psci: PSCIv1.0 detected in firmware.
May 9 23:57:51.205297 kernel: psci: Using standard PSCI v0.2 function IDs
May 9 23:57:51.205316 kernel: psci: Trusted OS migration not required
May 9 23:57:51.205339 kernel: psci: SMC Calling Convention v1.1
May 9 23:57:51.205357 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 9 23:57:51.205375 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 9 23:57:51.205393 kernel: pcpu-alloc: [0] 0 [0] 1
May 9 23:57:51.205411 kernel: Detected PIPT I-cache on CPU0
May 9 23:57:51.205429 kernel: CPU features: detected: GIC system register CPU interface
May 9 23:57:51.205446 kernel: CPU features: detected: Spectre-v2
May 9 23:57:51.205466 kernel: CPU features: detected: Spectre-v3a
May 9 23:57:51.205484 kernel: CPU features: detected: Spectre-BHB
May 9 23:57:51.205501 kernel: CPU features: detected: ARM erratum 1742098
May 9 23:57:51.205519 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
May 9 23:57:51.205541 kernel: alternatives: applying boot alternatives
May 9 23:57:51.205561 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 9 23:57:51.205581 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 23:57:51.205599 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 23:57:51.205616 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 23:57:51.205635 kernel: Fallback order for Node 0: 0
May 9 23:57:51.205653 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
May 9 23:57:51.205670 kernel: Policy zone: Normal
May 9 23:57:51.205688 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 23:57:51.205706 kernel: software IO TLB: area num 2.
May 9 23:57:51.205724 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
May 9 23:57:51.205749 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved)
May 9 23:57:51.205767 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 9 23:57:51.205784 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 23:57:51.205803 kernel: rcu: RCU event tracing is enabled.
May 9 23:57:51.205821 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 9 23:57:51.205839 kernel: Trampoline variant of Tasks RCU enabled.
May 9 23:57:51.205857 kernel: Tracing variant of Tasks RCU enabled.
May 9 23:57:51.205874 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 23:57:51.205892 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 9 23:57:51.205945 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 9 23:57:51.205967 kernel: GICv3: 96 SPIs implemented
May 9 23:57:51.205992 kernel: GICv3: 0 Extended SPIs implemented
May 9 23:57:51.206010 kernel: Root IRQ handler: gic_handle_irq
May 9 23:57:51.206028 kernel: GICv3: GICv3 features: 16 PPIs
May 9 23:57:51.206046 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
May 9 23:57:51.206064 kernel: ITS [mem 0x10080000-0x1009ffff]
May 9 23:57:51.206081 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
May 9 23:57:51.206099 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
May 9 23:57:51.206117 kernel: GICv3: using LPI property table @0x00000004000d0000
May 9 23:57:51.206134 kernel: ITS: Using hypervisor restricted LPI range [128]
May 9 23:57:51.206152 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
May 9 23:57:51.206169 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 23:57:51.206205 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
May 9 23:57:51.206234 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
May 9 23:57:51.206252 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
May 9 23:57:51.206270 kernel: Console: colour dummy device 80x25
May 9 23:57:51.206289 kernel: printk: console [tty1] enabled
May 9 23:57:51.206308 kernel: ACPI: Core revision 20230628
May 9 23:57:51.206326 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
May 9 23:57:51.206344 kernel: pid_max: default: 32768 minimum: 301
May 9 23:57:51.206363 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 23:57:51.206381 kernel: landlock: Up and running.
May 9 23:57:51.206403 kernel: SELinux: Initializing.
May 9 23:57:51.206422 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:57:51.206440 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:57:51.206459 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 23:57:51.206477 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 23:57:51.206495 kernel: rcu: Hierarchical SRCU implementation.
May 9 23:57:51.206515 kernel: rcu: Max phase no-delay instances is 400.
May 9 23:57:51.206533 kernel: Platform MSI: ITS@0x10080000 domain created
May 9 23:57:51.206551 kernel: PCI/MSI: ITS@0x10080000 domain created
May 9 23:57:51.206574 kernel: Remapping and enabling EFI services.
May 9 23:57:51.206592 kernel: smp: Bringing up secondary CPUs ...
May 9 23:57:51.206610 kernel: Detected PIPT I-cache on CPU1
May 9 23:57:51.206628 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
May 9 23:57:51.206646 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
May 9 23:57:51.206664 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
May 9 23:57:51.206682 kernel: smp: Brought up 1 node, 2 CPUs
May 9 23:57:51.206701 kernel: SMP: Total of 2 processors activated.
May 9 23:57:51.206719 kernel: CPU features: detected: 32-bit EL0 Support
May 9 23:57:51.206742 kernel: CPU features: detected: 32-bit EL1 Support
May 9 23:57:51.206761 kernel: CPU features: detected: CRC32 instructions
May 9 23:57:51.206780 kernel: CPU: All CPU(s) started at EL1
May 9 23:57:51.206809 kernel: alternatives: applying system-wide alternatives
May 9 23:57:51.206833 kernel: devtmpfs: initialized
May 9 23:57:51.206852 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 23:57:51.206871 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 9 23:57:51.206889 kernel: pinctrl core: initialized pinctrl subsystem
May 9 23:57:51.207636 kernel: SMBIOS 3.0.0 present.
May 9 23:57:51.207670 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
May 9 23:57:51.207697 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 23:57:51.207716 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 9 23:57:51.207735 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 9 23:57:51.207753 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 9 23:57:51.207772 kernel: audit: initializing netlink subsys (disabled)
May 9 23:57:51.207791 kernel: audit: type=2000 audit(0.290:1): state=initialized audit_enabled=0 res=1
May 9 23:57:51.207809 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 23:57:51.207832 kernel: cpuidle: using governor menu
May 9 23:57:51.207851 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 9 23:57:51.207869 kernel: ASID allocator initialised with 65536 entries
May 9 23:57:51.207888 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 23:57:51.207930 kernel: Serial: AMBA PL011 UART driver
May 9 23:57:51.207954 kernel: Modules: 17488 pages in range for non-PLT usage
May 9 23:57:51.207973 kernel: Modules: 509008 pages in range for PLT usage
May 9 23:57:51.207992 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 23:57:51.208011 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 9 23:57:51.208036 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 9 23:57:51.208055 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 9 23:57:51.208074 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 23:57:51.208092 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 9 23:57:51.208111 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 9 23:57:51.208129 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 9 23:57:51.208149 kernel: ACPI: Added _OSI(Module Device)
May 9 23:57:51.208167 kernel: ACPI: Added _OSI(Processor Device)
May 9 23:57:51.208186 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 23:57:51.208209 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 23:57:51.208227 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 23:57:51.208246 kernel: ACPI: Interpreter enabled
May 9 23:57:51.208265 kernel: ACPI: Using GIC for interrupt routing
May 9 23:57:51.208283 kernel: ACPI: MCFG table detected, 1 entries
May 9 23:57:51.208302 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
May 9 23:57:51.208615 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 23:57:51.208858 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 9 23:57:51.209146 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 9 23:57:51.209369 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
May 9 23:57:51.209582 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
May 9 23:57:51.209608 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
May 9 23:57:51.209628 kernel: acpiphp: Slot [1] registered
May 9 23:57:51.209646 kernel: acpiphp: Slot [2] registered
May 9 23:57:51.209665 kernel: acpiphp: Slot [3] registered
May 9 23:57:51.209683 kernel: acpiphp: Slot [4] registered
May 9 23:57:51.209709 kernel: acpiphp: Slot [5] registered
May 9 23:57:51.209728 kernel: acpiphp: Slot [6] registered
May 9 23:57:51.209746 kernel: acpiphp: Slot [7] registered
May 9 23:57:51.209764 kernel: acpiphp: Slot [8] registered
May 9 23:57:51.209782 kernel: acpiphp: Slot [9] registered
May 9 23:57:51.209801 kernel: acpiphp: Slot [10] registered
May 9 23:57:51.209819 kernel: acpiphp: Slot [11] registered
May 9 23:57:51.209838 kernel: acpiphp: Slot [12] registered
May 9 23:57:51.209856 kernel: acpiphp: Slot [13] registered
May 9 23:57:51.209874 kernel: acpiphp: Slot [14] registered
May 9 23:57:51.209898 kernel: acpiphp: Slot [15] registered
May 9 23:57:51.210043 kernel: acpiphp: Slot [16] registered
May 9 23:57:51.210063 kernel: acpiphp: Slot [17] registered
May 9 23:57:51.210082 kernel: acpiphp: Slot [18] registered
May 9 23:57:51.210100 kernel: acpiphp: Slot [19] registered
May 9 23:57:51.210119 kernel: acpiphp: Slot [20] registered
May 9 23:57:51.210137 kernel: acpiphp: Slot [21] registered
May 9 23:57:51.210155 kernel: acpiphp: Slot [22] registered
May 9 23:57:51.210174 kernel: acpiphp: Slot [23] registered
May 9 23:57:51.210218 kernel: acpiphp: Slot [24] registered
May 9 23:57:51.210240 kernel: acpiphp: Slot [25] registered
May 9 23:57:51.210259 kernel: acpiphp: Slot [26] registered
May 9 23:57:51.210277 kernel: acpiphp: Slot [27] registered
May 9 23:57:51.210295 kernel: acpiphp: Slot [28] registered
May 9 23:57:51.210313 kernel: acpiphp: Slot [29] registered
May 9 23:57:51.210332 kernel: acpiphp: Slot [30] registered
May 9 23:57:51.210350 kernel: acpiphp: Slot [31] registered
May 9 23:57:51.210368 kernel: PCI host bridge to bus 0000:00
May 9 23:57:51.210615 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
May 9 23:57:51.210842 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 9 23:57:51.211085 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
May 9 23:57:51.211282 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
May 9 23:57:51.211531 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
May 9 23:57:51.211778 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
May 9 23:57:51.212042 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
May 9 23:57:51.212289 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 9 23:57:51.212503 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
May 9 23:57:51.212721 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
May 9 23:57:51.213019 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 9 23:57:51.213259 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
May 9 23:57:51.213478 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
May 9 23:57:51.213702 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
May 9 23:57:51.213969 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
May 9 23:57:51.214254 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
May 9 23:57:51.214509 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
May 9 23:57:51.214754 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
May 9 23:57:51.215005 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
May 9 23:57:51.215227 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
May 9 23:57:51.215432 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
May 9 23:57:51.215619 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 9 23:57:51.215807 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
May 9 23:57:51.215833 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 9 23:57:51.215852 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 9 23:57:51.215871 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 9 23:57:51.215890 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 9 23:57:51.215930 kernel: iommu: Default domain type: Translated
May 9 23:57:51.215953 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 9 23:57:51.215980 kernel: efivars: Registered efivars operations
May 9 23:57:51.215998 kernel: vgaarb: loaded
May 9 23:57:51.216017 kernel: clocksource: Switched to clocksource arch_sys_counter
May 9 23:57:51.216036 kernel: VFS: Disk quotas dquot_6.6.0
May 9 23:57:51.216055 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 23:57:51.216073 kernel: pnp: PnP ACPI init
May 9 23:57:51.216303 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
May 9 23:57:51.216332 kernel: pnp: PnP ACPI: found 1 devices
May 9 23:57:51.216357 kernel: NET: Registered PF_INET protocol family
May 9 23:57:51.216377 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 23:57:51.216397 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 23:57:51.216416 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 23:57:51.216436 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 23:57:51.216455 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 23:57:51.216473 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 23:57:51.216492 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:57:51.216511 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:57:51.216536 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 23:57:51.216555 kernel: PCI: CLS 0 bytes, default 64
May 9 23:57:51.216573 kernel: kvm [1]: HYP mode not available
May 9 23:57:51.216592 kernel: Initialise system trusted keyrings
May 9 23:57:51.216611 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 23:57:51.216630 kernel: Key type asymmetric registered
May 9 23:57:51.216648 kernel: Asymmetric key parser 'x509' registered
May 9 23:57:51.216668 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 9 23:57:51.216686 kernel: io scheduler mq-deadline registered
May 9 23:57:51.216711 kernel: io scheduler kyber registered
May 9 23:57:51.216730 kernel: io scheduler bfq registered
May 9 23:57:51.217031 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
May 9 23:57:51.217068 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 9 23:57:51.217087 kernel: ACPI: button: Power Button [PWRB]
May 9 23:57:51.217107 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
May 9 23:57:51.217126 kernel: ACPI: button: Sleep Button [SLPB]
May 9 23:57:51.217145 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 23:57:51.217174 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 9 23:57:51.217415 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
May 9 23:57:51.217444 kernel: printk: console [ttyS0] disabled
May 9 23:57:51.217464 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
May 9 23:57:51.217483 kernel: printk: console [ttyS0] enabled
May 9 23:57:51.217502 kernel: printk: bootconsole [uart0] disabled
May 9 23:57:51.217521 kernel: thunder_xcv, ver 1.0
May 9 23:57:51.217539 kernel: thunder_bgx, ver 1.0
May 9 23:57:51.217557 kernel: nicpf, ver 1.0
May 9 23:57:51.217583 kernel: nicvf, ver 1.0
May 9 23:57:51.217833 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 9 23:57:51.222237 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T23:57:50 UTC (1746835070)
May 9 23:57:51.222287 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 23:57:51.222308 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
May 9 23:57:51.222329 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 9 23:57:51.222348 kernel: watchdog: Hard watchdog permanently disabled
May 9 23:57:51.222368 kernel: NET: Registered PF_INET6 protocol family
May 9 23:57:51.222402 kernel: Segment Routing with IPv6
May 9 23:57:51.222423 kernel: In-situ OAM (IOAM) with IPv6
May 9 23:57:51.222445 kernel: NET: Registered PF_PACKET protocol family
May 9 23:57:51.222465 kernel: Key type dns_resolver registered
May 9 23:57:51.222485 kernel: registered taskstats version 1
May 9 23:57:51.222505 kernel: Loading compiled-in X.509 certificates
May 9 23:57:51.222525 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 02a1572fa4e3e92c40cffc658d8dbcab2e5537ff'
May 9 23:57:51.222544 kernel: Key type .fscrypt registered
May 9 23:57:51.222563 kernel: Key type fscrypt-provisioning registered
May 9 23:57:51.222594 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 23:57:51.222729 kernel: ima: Allocated hash algorithm: sha1
May 9 23:57:51.222797 kernel: ima: No architecture policies found
May 9 23:57:51.222824 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 9 23:57:51.222843 kernel: clk: Disabling unused clocks
May 9 23:57:51.222862 kernel: Freeing unused kernel memory: 39424K
May 9 23:57:51.222881 kernel: Run /init as init process
May 9 23:57:51.222900 kernel: with arguments:
May 9 23:57:51.222949 kernel: /init
May 9 23:57:51.222970 kernel: with environment:
May 9 23:57:51.222998 kernel: HOME=/
May 9 23:57:51.223017 kernel: TERM=linux
May 9 23:57:51.223036 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 23:57:51.223061 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 23:57:51.223085 systemd[1]: Detected virtualization amazon.
May 9 23:57:51.223107 systemd[1]: Detected architecture arm64.
May 9 23:57:51.223127 systemd[1]: Running in initrd.
May 9 23:57:51.223153 systemd[1]: No hostname configured, using default hostname.
May 9 23:57:51.223175 systemd[1]: Hostname set to .
May 9 23:57:51.223196 systemd[1]: Initializing machine ID from VM UUID.
May 9 23:57:51.223216 systemd[1]: Queued start job for default target initrd.target.
May 9 23:57:51.223236 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:57:51.223257 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:57:51.223279 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 23:57:51.223300 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:57:51.223326 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 23:57:51.223347 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 23:57:51.223371 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 23:57:51.223392 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 23:57:51.223413 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:57:51.223433 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:57:51.223453 systemd[1]: Reached target paths.target - Path Units.
May 9 23:57:51.223479 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:57:51.223499 systemd[1]: Reached target swap.target - Swaps.
May 9 23:57:51.223519 systemd[1]: Reached target timers.target - Timer Units.
May 9 23:57:51.223540 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:57:51.223560 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:57:51.223580 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 23:57:51.223601 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 23:57:51.223622 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:57:51.223642 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:57:51.223668 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:57:51.223688 systemd[1]: Reached target sockets.target - Socket Units.
May 9 23:57:51.223709 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 23:57:51.223730 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:57:51.223750 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 23:57:51.223770 systemd[1]: Starting systemd-fsck-usr.service...
May 9 23:57:51.223790 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:57:51.223811 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:57:51.223835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:57:51.223856 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 23:57:51.223876 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:57:51.223896 systemd[1]: Finished systemd-fsck-usr.service.
May 9 23:57:51.225720 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 23:57:51.225820 systemd-journald[251]: Collecting audit messages is disabled.
May 9 23:57:51.225866 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 23:57:51.225887 systemd-journald[251]: Journal started
May 9 23:57:51.225987 systemd-journald[251]: Runtime Journal (/run/log/journal/ec26dea50f549fee6b9fee95a17f8cde) is 8.0M, max 75.3M, 67.3M free.
May 9 23:57:51.188535 systemd-modules-load[252]: Inserted module 'overlay'
May 9 23:57:51.235871 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:57:51.235981 kernel: Bridge firewalling registered
May 9 23:57:51.232548 systemd-modules-load[252]: Inserted module 'br_netfilter'
May 9 23:57:51.237510 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:57:51.240613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:57:51.248767 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:57:51.262206 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:57:51.268930 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:57:51.277197 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:57:51.281863 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:57:51.319990 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:57:51.327062 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:57:51.345406 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:57:51.357450 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:57:51.365028 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:57:51.387238 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 23:57:51.424969 dracut-cmdline[289]: dracut-dracut-053
May 9 23:57:51.430974 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 9 23:57:51.450578 systemd-resolved[286]: Positive Trust Anchors:
May 9 23:57:51.450601 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:57:51.450666 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:57:51.578975 kernel: SCSI subsystem initialized
May 9 23:57:51.586972 kernel: Loading iSCSI transport class v2.0-870.
May 9 23:57:51.599964 kernel: iscsi: registered transport (tcp)
May 9 23:57:51.623962 kernel: iscsi: registered transport (qla4xxx)
May 9 23:57:51.624032 kernel: QLogic iSCSI HBA Driver
May 9 23:57:51.696957 kernel: random: crng init done
May 9 23:57:51.697465 systemd-resolved[286]: Defaulting to hostname 'linux'.
May 9 23:57:51.701638 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:57:51.706387 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:57:51.730282 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 23:57:51.753389 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 23:57:51.788661 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 23:57:51.788741 kernel: device-mapper: uevent: version 1.0.3 May 9 23:57:51.790962 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 9 23:57:51.861991 kernel: raid6: neonx8 gen() 6687 MB/s May 9 23:57:51.878969 kernel: raid6: neonx4 gen() 6500 MB/s May 9 23:57:51.895966 kernel: raid6: neonx2 gen() 5405 MB/s May 9 23:57:51.912969 kernel: raid6: neonx1 gen() 3906 MB/s May 9 23:57:51.929971 kernel: raid6: int64x8 gen() 3779 MB/s May 9 23:57:51.946961 kernel: raid6: int64x4 gen() 3685 MB/s May 9 23:57:51.963971 kernel: raid6: int64x2 gen() 3593 MB/s May 9 23:57:51.981878 kernel: raid6: int64x1 gen() 2726 MB/s May 9 23:57:51.981989 kernel: raid6: using algorithm neonx8 gen() 6687 MB/s May 9 23:57:51.999970 kernel: raid6: .... xor() 4705 MB/s, rmw enabled May 9 23:57:52.000046 kernel: raid6: using neon recovery algorithm May 9 23:57:52.009519 kernel: xor: measuring software checksum speed May 9 23:57:52.009597 kernel: 8regs : 11034 MB/sec May 9 23:57:52.010739 kernel: 32regs : 11890 MB/sec May 9 23:57:52.013080 kernel: arm64_neon : 8957 MB/sec May 9 23:57:52.013159 kernel: xor: using function: 32regs (11890 MB/sec) May 9 23:57:52.102976 kernel: Btrfs loaded, zoned=no, fsverity=no May 9 23:57:52.126495 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 9 23:57:52.136282 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:57:52.182207 systemd-udevd[471]: Using default interface naming scheme 'v255'. May 9 23:57:52.192665 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:57:52.207286 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 9 23:57:52.248285 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation May 9 23:57:52.314740 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 9 23:57:52.325344 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 23:57:52.459008 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:57:52.481537 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 9 23:57:52.547572 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 9 23:57:52.554758 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 9 23:57:52.555773 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:57:52.573050 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 23:57:52.592263 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 9 23:57:52.634710 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 9 23:57:52.705496 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 23:57:52.708333 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:57:52.727353 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 23:57:52.730061 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 23:57:52.748683 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:57:52.755091 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:57:52.762926 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 9 23:57:52.763006 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) May 9 23:57:52.775564 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 9 23:57:52.777929 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 9 23:57:52.774442 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 9 23:57:52.790957 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:37:a7:27:e4:61 May 9 23:57:52.794811 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 9 23:57:52.794886 kernel: nvme nvme0: pci function 0000:00:04.0 May 9 23:57:52.798588 (udev-worker)[517]: Network interface NamePolicy= disabled on kernel command line. May 9 23:57:52.808938 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 9 23:57:52.820498 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:57:52.828898 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 9 23:57:52.828964 kernel: GPT:9289727 != 16777215 May 9 23:57:52.828990 kernel: GPT:Alternate GPT header not at the end of the disk. May 9 23:57:52.829016 kernel: GPT:9289727 != 16777215 May 9 23:57:52.830975 kernel: GPT: Use GNU Parted to correct GPT errors. May 9 23:57:52.831948 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 9 23:57:52.837343 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 23:57:52.875477 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:57:52.927006 kernel: BTRFS: device fsid 7278434d-1c51-4098-9ab9-92db46b8a354 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (521) May 9 23:57:52.955951 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (519) May 9 23:57:53.048730 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 9 23:57:53.080437 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 9 23:57:53.098965 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. 
May 9 23:57:53.104496 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 9 23:57:53.122422 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 9 23:57:53.137345 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 9 23:57:53.152508 disk-uuid[661]: Primary Header is updated. May 9 23:57:53.152508 disk-uuid[661]: Secondary Entries is updated. May 9 23:57:53.152508 disk-uuid[661]: Secondary Header is updated. May 9 23:57:53.162944 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 9 23:57:53.173942 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 9 23:57:54.180101 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 9 23:57:54.180775 disk-uuid[662]: The operation has completed successfully. May 9 23:57:54.395416 systemd[1]: disk-uuid.service: Deactivated successfully. May 9 23:57:54.395708 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 9 23:57:54.451453 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 9 23:57:54.461792 sh[921]: Success May 9 23:57:54.487034 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 9 23:57:54.625786 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 9 23:57:54.640141 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 9 23:57:54.644380 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 9 23:57:54.686271 kernel: BTRFS info (device dm-0): first mount of filesystem 7278434d-1c51-4098-9ab9-92db46b8a354 May 9 23:57:54.686349 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 9 23:57:54.686376 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 9 23:57:54.688000 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 9 23:57:54.689249 kernel: BTRFS info (device dm-0): using free space tree May 9 23:57:54.718963 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 9 23:57:54.736009 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 9 23:57:54.739636 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 9 23:57:54.757365 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 9 23:57:54.767250 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 9 23:57:54.806895 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 9 23:57:54.806993 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 9 23:57:54.808507 kernel: BTRFS info (device nvme0n1p6): using free space tree May 9 23:57:54.815961 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 9 23:57:54.836078 systemd[1]: mnt-oem.mount: Deactivated successfully. May 9 23:57:54.839355 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 9 23:57:54.850896 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 9 23:57:54.863429 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 9 23:57:54.990328 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 9 23:57:55.006365 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 23:57:55.084808 systemd-networkd[1114]: lo: Link UP May 9 23:57:55.087011 systemd-networkd[1114]: lo: Gained carrier May 9 23:57:55.088077 ignition[1033]: Ignition 2.19.0 May 9 23:57:55.088093 ignition[1033]: Stage: fetch-offline May 9 23:57:55.088636 ignition[1033]: no configs at "/usr/lib/ignition/base.d" May 9 23:57:55.095975 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 9 23:57:55.088661 ignition[1033]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:57:55.097184 systemd-networkd[1114]: Enumeration completed May 9 23:57:55.089518 ignition[1033]: Ignition finished successfully May 9 23:57:55.098126 systemd-networkd[1114]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:57:55.098133 systemd-networkd[1114]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 23:57:55.102111 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 23:57:55.106361 systemd[1]: Reached target network.target - Network. May 9 23:57:55.128357 systemd-networkd[1114]: eth0: Link UP May 9 23:57:55.128377 systemd-networkd[1114]: eth0: Gained carrier May 9 23:57:55.128397 systemd-networkd[1114]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:57:55.129896 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 9 23:57:55.156064 systemd-networkd[1114]: eth0: DHCPv4 address 172.31.24.82/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 9 23:57:55.159210 ignition[1121]: Ignition 2.19.0 May 9 23:57:55.159226 ignition[1121]: Stage: fetch May 9 23:57:55.161330 ignition[1121]: no configs at "/usr/lib/ignition/base.d" May 9 23:57:55.161371 ignition[1121]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:57:55.161578 ignition[1121]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:57:55.162186 ignition[1121]: PUT error: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: network is unreachable May 9 23:57:55.362312 ignition[1121]: PUT http://169.254.169.254/latest/api/token: attempt #2 May 9 23:57:55.384756 ignition[1121]: PUT result: OK May 9 23:57:55.389143 ignition[1121]: parsed url from cmdline: "" May 9 23:57:55.389169 ignition[1121]: no config URL provided May 9 23:57:55.389186 ignition[1121]: reading system config file "/usr/lib/ignition/user.ign" May 9 23:57:55.389214 ignition[1121]: no config at "/usr/lib/ignition/user.ign" May 9 23:57:55.389251 ignition[1121]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:57:55.395084 ignition[1121]: PUT result: OK May 9 23:57:55.395191 ignition[1121]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 9 23:57:55.401098 ignition[1121]: GET result: OK May 9 23:57:55.401276 ignition[1121]: parsing config with SHA512: c18743ba1dc9b542db8bf4b6f55014204c9d1249d4595ddaaf75f6d599213cb45b3213c47cbf40e1ac726a79bee308f8ae42feb97aa0db881704a9cb5abe5e9a May 9 23:57:55.409802 unknown[1121]: fetched base config from "system" May 9 23:57:55.409843 unknown[1121]: fetched base config from "system" May 9 23:57:55.413385 ignition[1121]: fetch: fetch complete May 9 23:57:55.409859 unknown[1121]: fetched user config from "aws" May 9 23:57:55.413410 ignition[1121]: fetch: fetch passed May 9 23:57:55.413548 ignition[1121]: Ignition finished successfully 
May 9 23:57:55.421032 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 9 23:57:55.433297 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 9 23:57:55.471980 ignition[1129]: Ignition 2.19.0 May 9 23:57:55.472014 ignition[1129]: Stage: kargs May 9 23:57:55.473860 ignition[1129]: no configs at "/usr/lib/ignition/base.d" May 9 23:57:55.473890 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:57:55.475141 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:57:55.481933 ignition[1129]: PUT result: OK May 9 23:57:55.487049 ignition[1129]: kargs: kargs passed May 9 23:57:55.487170 ignition[1129]: Ignition finished successfully May 9 23:57:55.492655 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 9 23:57:55.503252 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 9 23:57:55.543095 ignition[1136]: Ignition 2.19.0 May 9 23:57:55.543643 ignition[1136]: Stage: disks May 9 23:57:55.544460 ignition[1136]: no configs at "/usr/lib/ignition/base.d" May 9 23:57:55.544490 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:57:55.544668 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:57:55.547356 ignition[1136]: PUT result: OK May 9 23:57:55.558680 ignition[1136]: disks: disks passed May 9 23:57:55.559372 ignition[1136]: Ignition finished successfully May 9 23:57:55.563899 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 9 23:57:55.568242 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 9 23:57:55.573235 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 9 23:57:55.577942 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 23:57:55.580034 systemd[1]: Reached target sysinit.target - System Initialization. May 9 23:57:55.582150 systemd[1]: Reached target basic.target - Basic System. 
May 9 23:57:55.606438 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 9 23:57:55.652558 systemd-fsck[1145]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 9 23:57:55.661061 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 9 23:57:55.671197 systemd[1]: Mounting sysroot.mount - /sysroot... May 9 23:57:55.755981 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ffdb9517-5190-4050-8f70-de9d48dc1858 r/w with ordered data mode. Quota mode: none. May 9 23:57:55.756499 systemd[1]: Mounted sysroot.mount - /sysroot. May 9 23:57:55.760476 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 9 23:57:55.781154 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 23:57:55.787943 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 9 23:57:55.792678 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 9 23:57:55.801185 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 9 23:57:55.801243 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 9 23:57:55.827152 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1164) May 9 23:57:55.830803 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 9 23:57:55.830887 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 9 23:57:55.830940 kernel: BTRFS info (device nvme0n1p6): using free space tree May 9 23:57:55.834485 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 9 23:57:55.846560 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 9 23:57:55.854535 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 9 23:57:55.865282 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 23:57:55.974432 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory May 9 23:57:55.986567 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory May 9 23:57:55.996813 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory May 9 23:57:56.007593 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory May 9 23:57:56.181079 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 9 23:57:56.191163 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 9 23:57:56.205361 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 9 23:57:56.225042 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 9 23:57:56.227051 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 9 23:57:56.263024 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 9 23:57:56.272533 ignition[1277]: INFO : Ignition 2.19.0 May 9 23:57:56.274463 ignition[1277]: INFO : Stage: mount May 9 23:57:56.276423 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:57:56.276423 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:57:56.280957 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:57:56.284217 ignition[1277]: INFO : PUT result: OK May 9 23:57:56.289481 ignition[1277]: INFO : mount: mount passed May 9 23:57:56.291140 ignition[1277]: INFO : Ignition finished successfully May 9 23:57:56.296004 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 9 23:57:56.315053 systemd[1]: Starting ignition-files.service - Ignition (files)... 
May 9 23:57:56.371132 systemd-networkd[1114]: eth0: Gained IPv6LL May 9 23:57:56.773356 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 23:57:56.795954 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1289) May 9 23:57:56.799866 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 9 23:57:56.799970 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 9 23:57:56.799999 kernel: BTRFS info (device nvme0n1p6): using free space tree May 9 23:57:56.807971 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 9 23:57:56.810316 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 23:57:56.848655 ignition[1306]: INFO : Ignition 2.19.0 May 9 23:57:56.848655 ignition[1306]: INFO : Stage: files May 9 23:57:56.852126 ignition[1306]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:57:56.852126 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:57:56.852126 ignition[1306]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:57:56.858957 ignition[1306]: INFO : PUT result: OK May 9 23:57:56.864003 ignition[1306]: DEBUG : files: compiled without relabeling support, skipping May 9 23:57:56.867066 ignition[1306]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 9 23:57:56.869714 ignition[1306]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 9 23:57:56.877044 ignition[1306]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 9 23:57:56.880059 ignition[1306]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 9 23:57:56.883468 unknown[1306]: wrote ssh authorized keys file for user: core May 9 23:57:56.885703 ignition[1306]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 9 23:57:56.890771 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 9 23:57:56.894602 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 9 23:57:56.985804 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 9 23:57:57.197448 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 9 23:57:57.197448 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 9 23:57:57.204762 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 9 23:57:57.565567 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 9 23:57:57.727653 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 9 23:57:57.731182 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 9 23:57:57.731182 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 9 23:57:57.731182 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 9 23:57:57.731182 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 9 23:57:57.731182 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:57:57.731182 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 9 23:57:57.731182 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 9 23:57:57.731182 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 9 23:57:57.758880 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 9 23:57:57.758880 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 9 23:57:57.758880 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 9 23:57:57.758880 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 9 23:57:57.758880 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 9 23:57:57.758880 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 9 23:57:58.118034 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 9 23:57:58.466228 ignition[1306]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 9 23:57:58.466228 ignition[1306]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 9 23:57:58.473365 ignition[1306]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 9 23:57:58.473365 ignition[1306]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 9 23:57:58.473365 ignition[1306]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 9 23:57:58.473365 ignition[1306]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 9 23:57:58.473365 ignition[1306]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 9 23:57:58.473365 ignition[1306]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 9 23:57:58.473365 ignition[1306]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 9 23:57:58.473365 ignition[1306]: INFO : files: files passed May 9 23:57:58.473365 ignition[1306]: INFO : Ignition finished successfully May 9 23:57:58.498827 systemd[1]: Finished ignition-files.service - Ignition (files). May 9 23:57:58.519192 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 9 23:57:58.526782 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 9 23:57:58.528779 systemd[1]: ignition-quench.service: Deactivated successfully. May 9 23:57:58.530983 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 9 23:57:58.558788 initrd-setup-root-after-ignition[1335]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 23:57:58.558788 initrd-setup-root-after-ignition[1335]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 9 23:57:58.569848 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 23:57:58.569622 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 23:57:58.573238 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 9 23:57:58.590785 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 9 23:57:58.642298 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 9 23:57:58.644107 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 9 23:57:58.648951 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 9 23:57:58.651799 systemd[1]: Reached target initrd.target - Initrd Default Target. May 9 23:57:58.656502 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 9 23:57:58.673212 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 9 23:57:58.703637 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 23:57:58.717232 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 9 23:57:58.744559 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 9 23:57:58.749138 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:57:58.754134 systemd[1]: Stopped target timers.target - Timer Units. May 9 23:57:58.756156 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
May 9 23:57:58.756403 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 23:57:58.765088 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 23:57:58.767746 systemd[1]: Stopped target basic.target - Basic System. May 9 23:57:58.773636 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 23:57:58.776651 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 9 23:57:58.783005 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 9 23:57:58.785806 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 23:57:58.792431 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 23:57:58.795021 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 23:57:58.814460 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 23:57:58.818676 systemd[1]: Stopped target swap.target - Swaps. May 9 23:57:58.822630 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 23:57:58.823149 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 23:57:58.829616 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 23:57:58.832285 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:57:58.839638 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 9 23:57:58.843035 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:57:58.845716 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 23:57:58.846222 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 23:57:58.855858 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 23:57:58.856402 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
May 9 23:57:58.863609 systemd[1]: ignition-files.service: Deactivated successfully. May 9 23:57:58.864418 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 23:57:58.883248 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 23:57:58.890600 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 23:57:58.895103 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 23:57:58.897731 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:57:58.906031 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 23:57:58.910216 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 23:57:58.927308 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 23:57:58.927530 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 9 23:57:58.946201 ignition[1359]: INFO : Ignition 2.19.0 May 9 23:57:58.948750 ignition[1359]: INFO : Stage: umount May 9 23:57:58.950852 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:57:58.950852 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:57:58.955255 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:57:58.958667 ignition[1359]: INFO : PUT result: OK May 9 23:57:58.963341 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 9 23:57:58.967856 ignition[1359]: INFO : umount: umount passed May 9 23:57:58.969679 ignition[1359]: INFO : Ignition finished successfully May 9 23:57:58.972463 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 23:57:58.974364 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 23:57:58.979415 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 23:57:58.983277 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
May 9 23:57:58.987855 systemd[1]: ignition-disks.service: Deactivated successfully.
May 9 23:57:58.988077 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 9 23:57:58.991225 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 9 23:57:58.991334 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 9 23:57:58.994458 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 9 23:57:58.994567 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 9 23:57:59.001307 systemd[1]: Stopped target network.target - Network.
May 9 23:57:59.004327 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 23:57:59.004458 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:57:59.008479 systemd[1]: Stopped target paths.target - Path Units.
May 9 23:57:59.011812 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 23:57:59.026303 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:57:59.028891 systemd[1]: Stopped target slices.target - Slice Units.
May 9 23:57:59.030749 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 23:57:59.032812 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 23:57:59.032938 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:57:59.035072 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 23:57:59.035161 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:57:59.037500 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 23:57:59.037614 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 23:57:59.041306 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 23:57:59.041418 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 23:57:59.045542 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 23:57:59.045669 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 23:57:59.048963 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 23:57:59.053239 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 23:57:59.060894 systemd-networkd[1114]: eth0: DHCPv6 lease lost
May 9 23:57:59.079612 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 23:57:59.083071 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 23:57:59.086025 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 23:57:59.087848 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 23:57:59.096480 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 23:57:59.096599 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:57:59.104256 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 23:57:59.120457 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 23:57:59.120594 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:57:59.123440 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 23:57:59.123546 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 23:57:59.126076 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 23:57:59.126209 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 23:57:59.128877 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 23:57:59.129009 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:57:59.132268 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:57:59.174624 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 23:57:59.176688 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:57:59.182617 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 23:57:59.182804 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 23:57:59.189439 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 23:57:59.189562 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:57:59.194088 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 23:57:59.194257 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:57:59.202588 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 23:57:59.202705 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 23:57:59.205263 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:57:59.205378 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:57:59.227358 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 23:57:59.231014 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 23:57:59.231135 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:57:59.234060 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 9 23:57:59.234186 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:57:59.237395 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 23:57:59.237517 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:57:59.241754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:57:59.241876 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:57:59.276492 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 23:57:59.276703 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 23:57:59.294700 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 23:57:59.295142 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 23:57:59.303393 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 23:57:59.312264 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 23:57:59.342488 systemd[1]: Switching root.
May 9 23:57:59.384379 systemd-journald[251]: Journal stopped
May 9 23:58:01.376934 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
May 9 23:58:01.377077 kernel: SELinux: policy capability network_peer_controls=1
May 9 23:58:01.377123 kernel: SELinux: policy capability open_perms=1
May 9 23:58:01.377154 kernel: SELinux: policy capability extended_socket_class=1
May 9 23:58:01.377185 kernel: SELinux: policy capability always_check_network=0
May 9 23:58:01.377215 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 23:58:01.377246 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 23:58:01.377286 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 23:58:01.377316 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 23:58:01.377346 kernel: audit: type=1403 audit(1746835079.720:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 23:58:01.377390 systemd[1]: Successfully loaded SELinux policy in 50.380ms.
May 9 23:58:01.377443 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.974ms.
May 9 23:58:01.377479 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 23:58:01.377510 systemd[1]: Detected virtualization amazon.
May 9 23:58:01.377542 systemd[1]: Detected architecture arm64.
May 9 23:58:01.377573 systemd[1]: Detected first boot.
May 9 23:58:01.377613 systemd[1]: Initializing machine ID from VM UUID.
May 9 23:58:01.377651 zram_generator::config[1402]: No configuration found.
May 9 23:58:01.377689 systemd[1]: Populated /etc with preset unit settings.
May 9 23:58:01.377726 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 9 23:58:01.377760 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 9 23:58:01.377793 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 9 23:58:01.377824 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 23:58:01.377858 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 23:58:01.377892 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 23:58:01.378279 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 23:58:01.378323 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 23:58:01.378368 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 23:58:01.378403 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 23:58:01.378436 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 23:58:01.378468 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:58:01.378500 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:58:01.378531 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 23:58:01.378563 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 23:58:01.378596 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 23:58:01.378636 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:58:01.378672 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 9 23:58:01.378704 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:58:01.378737 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 9 23:58:01.378771 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 9 23:58:01.378804 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 9 23:58:01.378838 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 23:58:01.378871 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:58:01.378994 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 23:58:01.379043 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:58:01.379078 systemd[1]: Reached target swap.target - Swaps.
May 9 23:58:01.379109 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 23:58:01.379140 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 23:58:01.379170 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:58:01.379202 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:58:01.379234 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:58:01.379267 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 23:58:01.379304 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 23:58:01.379340 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 23:58:01.379374 systemd[1]: Mounting media.mount - External Media Directory...
May 9 23:58:01.379405 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 23:58:01.379439 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 23:58:01.379470 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 23:58:01.379505 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 23:58:01.379536 systemd[1]: Reached target machines.target - Containers.
May 9 23:58:01.379567 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 23:58:01.379603 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:58:01.379634 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:58:01.379664 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 23:58:01.379699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:58:01.379734 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 23:58:01.379768 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 23:58:01.379798 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 23:58:01.379828 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 23:58:01.379867 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 23:58:01.379898 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 9 23:58:01.379959 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 9 23:58:01.379991 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 9 23:58:01.380024 systemd[1]: Stopped systemd-fsck-usr.service.
May 9 23:58:01.380057 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:58:01.380087 kernel: loop: module loaded
May 9 23:58:01.380176 kernel: fuse: init (API version 7.39)
May 9 23:58:01.380217 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:58:01.380258 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 23:58:01.380288 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 23:58:01.380319 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 23:58:01.380351 systemd[1]: verity-setup.service: Deactivated successfully.
May 9 23:58:01.380381 systemd[1]: Stopped verity-setup.service.
May 9 23:58:01.380412 kernel: ACPI: bus type drm_connector registered
May 9 23:58:01.380442 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 23:58:01.380473 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 23:58:01.380502 systemd[1]: Mounted media.mount - External Media Directory.
May 9 23:58:01.380536 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 23:58:01.380569 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 23:58:01.380599 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 23:58:01.380631 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:58:01.380661 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 23:58:01.380699 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 23:58:01.380730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:58:01.380761 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:58:01.380790 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 23:58:01.380877 systemd-journald[1486]: Collecting audit messages is disabled.
May 9 23:58:01.380979 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 23:58:01.381015 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 23:58:01.381053 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 23:58:01.381087 systemd-journald[1486]: Journal started
May 9 23:58:01.381136 systemd-journald[1486]: Runtime Journal (/run/log/journal/ec26dea50f549fee6b9fee95a17f8cde) is 8.0M, max 75.3M, 67.3M free.
May 9 23:58:00.782409 systemd[1]: Queued start job for default target multi-user.target.
May 9 23:58:00.812158 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 9 23:58:00.813069 systemd[1]: systemd-journald.service: Deactivated successfully.
May 9 23:58:01.389208 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:58:01.397321 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 23:58:01.398057 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 23:58:01.403671 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 23:58:01.405293 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 23:58:01.410767 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:58:01.418260 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 23:58:01.425600 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 23:58:01.430857 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 23:58:01.458451 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 23:58:01.470211 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 23:58:01.481158 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 23:58:01.484579 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 23:58:01.484641 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:58:01.491579 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 9 23:58:01.501496 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 23:58:01.510365 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 23:58:01.512727 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:58:01.519308 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 23:58:01.529270 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 23:58:01.531900 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 23:58:01.542548 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 23:58:01.544873 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 23:58:01.550308 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:58:01.569838 systemd-journald[1486]: Time spent on flushing to /var/log/journal/ec26dea50f549fee6b9fee95a17f8cde is 210.991ms for 908 entries.
May 9 23:58:01.569838 systemd-journald[1486]: System Journal (/var/log/journal/ec26dea50f549fee6b9fee95a17f8cde) is 8.0M, max 195.6M, 187.6M free.
May 9 23:58:01.801624 systemd-journald[1486]: Received client request to flush runtime journal.
May 9 23:58:01.801697 kernel: loop0: detected capacity change from 0 to 114432
May 9 23:58:01.801733 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 23:58:01.801766 kernel: loop1: detected capacity change from 0 to 52536
May 9 23:58:01.572420 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 23:58:01.590256 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 23:58:01.597111 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 23:58:01.599695 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 23:58:01.604990 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 23:58:01.676483 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 23:58:01.679876 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 23:58:01.700272 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 9 23:58:01.703310 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:58:01.723622 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 23:58:01.744280 systemd-tmpfiles[1531]: ACLs are not supported, ignoring.
May 9 23:58:01.744305 systemd-tmpfiles[1531]: ACLs are not supported, ignoring.
May 9 23:58:01.763382 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:58:01.778311 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 23:58:01.785018 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:58:01.807184 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 23:58:01.836582 udevadm[1542]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 9 23:58:01.848078 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 23:58:01.853641 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 9 23:58:01.911170 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 23:58:01.922533 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:58:01.922947 kernel: loop2: detected capacity change from 0 to 201592
May 9 23:58:01.986497 kernel: loop3: detected capacity change from 0 to 114328
May 9 23:58:01.998804 systemd-tmpfiles[1554]: ACLs are not supported, ignoring.
May 9 23:58:01.998847 systemd-tmpfiles[1554]: ACLs are not supported, ignoring.
May 9 23:58:02.017172 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:58:02.054980 kernel: loop4: detected capacity change from 0 to 114432
May 9 23:58:02.084963 kernel: loop5: detected capacity change from 0 to 52536
May 9 23:58:02.104945 kernel: loop6: detected capacity change from 0 to 201592
May 9 23:58:02.153577 kernel: loop7: detected capacity change from 0 to 114328
May 9 23:58:02.185225 (sd-merge)[1561]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 9 23:58:02.187427 (sd-merge)[1561]: Merged extensions into '/usr'.
May 9 23:58:02.201898 systemd[1]: Reloading requested from client PID 1530 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 23:58:02.201961 systemd[1]: Reloading...
May 9 23:58:02.407018 zram_generator::config[1588]: No configuration found.
May 9 23:58:02.682112 ldconfig[1525]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 23:58:02.700367 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:58:02.813356 systemd[1]: Reloading finished in 609 ms.
May 9 23:58:02.864048 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 23:58:02.872830 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 23:58:02.885428 systemd[1]: Starting ensure-sysext.service...
May 9 23:58:02.896247 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:58:02.917049 systemd[1]: Reloading requested from client PID 1640 ('systemctl') (unit ensure-sysext.service)...
May 9 23:58:02.917096 systemd[1]: Reloading...
May 9 23:58:02.987720 systemd-tmpfiles[1641]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 23:58:02.988455 systemd-tmpfiles[1641]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 23:58:02.997138 systemd-tmpfiles[1641]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 23:58:02.998659 systemd-tmpfiles[1641]: ACLs are not supported, ignoring.
May 9 23:58:03.002232 systemd-tmpfiles[1641]: ACLs are not supported, ignoring.
May 9 23:58:03.021809 systemd-tmpfiles[1641]: Detected autofs mount point /boot during canonicalization of boot.
May 9 23:58:03.021834 systemd-tmpfiles[1641]: Skipping /boot
May 9 23:58:03.081634 systemd-tmpfiles[1641]: Detected autofs mount point /boot during canonicalization of boot.
May 9 23:58:03.081666 systemd-tmpfiles[1641]: Skipping /boot
May 9 23:58:03.092804 zram_generator::config[1679]: No configuration found.
May 9 23:58:03.325064 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:58:03.436783 systemd[1]: Reloading finished in 519 ms.
May 9 23:58:03.468324 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 23:58:03.475759 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:58:03.493250 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 9 23:58:03.504319 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 23:58:03.515328 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 23:58:03.529229 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:58:03.537343 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:58:03.555163 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 23:58:03.574359 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 23:58:03.582448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:58:03.592374 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:58:03.598105 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 23:58:03.604123 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 23:58:03.606282 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:58:03.612109 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:58:03.612721 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:58:03.616548 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:58:03.622996 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:58:03.638284 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:58:03.658760 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:58:03.663659 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 23:58:03.664177 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:58:03.664595 systemd[1]: Reached target time-set.target - System Time Set.
May 9 23:58:03.681637 systemd[1]: Finished ensure-sysext.service.
May 9 23:58:03.696989 systemd-udevd[1727]: Using default interface naming scheme 'v255'.
May 9 23:58:03.721003 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 23:58:03.742109 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 23:58:03.746195 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 23:58:03.753126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 23:58:03.756147 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 23:58:03.765513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:58:03.766020 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:58:03.771749 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 23:58:03.777614 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 23:58:03.779748 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 23:58:03.787101 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 23:58:03.791740 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 23:58:03.800764 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 23:58:03.822767 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 23:58:03.827253 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 23:58:03.832513 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:58:03.850674 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 23:58:03.878046 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 23:58:03.886028 augenrules[1765]: No rules
May 9 23:58:03.890698 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 9 23:58:03.908516 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 23:58:04.124341 systemd-networkd[1759]: lo: Link UP
May 9 23:58:04.124362 systemd-networkd[1759]: lo: Gained carrier
May 9 23:58:04.129585 systemd-networkd[1759]: Enumeration completed
May 9 23:58:04.129795 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 23:58:04.162093 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 23:58:04.168778 (udev-worker)[1774]: Network interface NamePolicy= disabled on kernel command line.
May 9 23:58:04.171984 systemd-resolved[1725]: Positive Trust Anchors:
May 9 23:58:04.172016 systemd-resolved[1725]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:58:04.172080 systemd-resolved[1725]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:58:04.195051 systemd-resolved[1725]: Defaulting to hostname 'linux'.
May 9 23:58:04.200828 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:58:04.203349 systemd[1]: Reached target network.target - Network.
May 9 23:58:04.205450 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:58:04.211545 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 9 23:58:04.249289 systemd-networkd[1759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:58:04.249318 systemd-networkd[1759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 23:58:04.252370 systemd-networkd[1759]: eth0: Link UP
May 9 23:58:04.252765 systemd-networkd[1759]: eth0: Gained carrier
May 9 23:58:04.252803 systemd-networkd[1759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:58:04.269175 systemd-networkd[1759]: eth0: DHCPv4 address 172.31.24.82/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 9 23:58:04.346990 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1774)
May 9 23:58:04.489492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:58:04.607474 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 9 23:58:04.616244 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 23:58:04.619673 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 23:58:04.632274 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 23:58:04.662399 lvm[1892]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 23:58:04.670038 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:58:04.673194 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 23:58:04.709560 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 23:58:04.712451 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:58:04.714615 systemd[1]: Reached target sysinit.target - System Initialization.
May 9 23:58:04.717004 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 9 23:58:04.719432 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 9 23:58:04.722191 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 9 23:58:04.726494 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 9 23:58:04.728944 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 9 23:58:04.731341 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 9 23:58:04.731406 systemd[1]: Reached target paths.target - Path Units.
May 9 23:58:04.733383 systemd[1]: Reached target timers.target - Timer Units.
May 9 23:58:04.736663 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 9 23:58:04.741409 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 9 23:58:04.751295 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 9 23:58:04.755674 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 23:58:04.759135 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 9 23:58:04.761556 systemd[1]: Reached target sockets.target - Socket Units.
May 9 23:58:04.763633 systemd[1]: Reached target basic.target - Basic System.
May 9 23:58:04.765525 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 9 23:58:04.765576 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 9 23:58:04.776196 systemd[1]: Starting containerd.service - containerd container runtime...
May 9 23:58:04.783237 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 9 23:58:04.791277 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 9 23:58:04.802153 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 9 23:58:04.806971 lvm[1901]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 23:58:04.814272 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 9 23:58:04.816274 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 9 23:58:04.825556 jq[1905]: false
May 9 23:58:04.826289 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 9 23:58:04.838363 systemd[1]: Started ntpd.service - Network Time Service.
May 9 23:58:04.853964 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 9 23:58:04.862196 systemd[1]: Starting setup-oem.service - Setup OEM...
May 9 23:58:04.880333 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 9 23:58:04.889811 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 9 23:58:04.902233 systemd[1]: Starting systemd-logind.service - User Login Management...
May 9 23:58:04.907088 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 9 23:58:04.910226 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 9 23:58:04.932398 systemd[1]: Starting update-engine.service - Update Engine...
May 9 23:58:04.942228 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 9 23:58:04.953567 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 9 23:58:04.954049 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 9 23:58:05.005031 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 9 23:58:05.004569 dbus-daemon[1904]: [system] SELinux support is enabled
May 9 23:58:05.010816 ntpd[1908]: ntpd 4.2.8p17@1.4004-o Fri May 9 22:02:28 UTC 2025 (1): Starting
May 9 23:58:05.040641 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: ntpd 4.2.8p17@1.4004-o Fri May 9 22:02:28 UTC 2025 (1): Starting
May 9 23:58:05.040641 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 9 23:58:05.040641 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: ----------------------------------------------------
May 9 23:58:05.040641 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: ntp-4 is maintained by Network Time Foundation,
May 9 23:58:05.040641 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 9 23:58:05.040641 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: corporation. Support and training for ntp-4 are
May 9 23:58:05.040641 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: available at https://www.nwtime.org/support
May 9 23:58:05.040641 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: ----------------------------------------------------
May 9 23:58:05.040641 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: proto: precision = 0.096 usec (-23)
May 9 23:58:05.040641 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: basedate set to 2025-04-27
May 9 23:58:05.040641 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: gps base set to 2025-04-27 (week 2364)
May 9 23:58:05.010877 ntpd[1908]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 9 23:58:05.041057 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 9 23:58:05.010898 ntpd[1908]: ----------------------------------------------------
May 9 23:58:05.010973 ntpd[1908]: ntp-4 is maintained by Network Time Foundation,
May 9 23:58:05.043024 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 9 23:58:05.010995 ntpd[1908]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 9 23:58:05.011014 ntpd[1908]: corporation. Support and training for ntp-4 are
May 9 23:58:05.011032 ntpd[1908]: available at https://www.nwtime.org/support
May 9 23:58:05.011051 ntpd[1908]: ----------------------------------------------------
May 9 23:58:05.022784 ntpd[1908]: proto: precision = 0.096 usec (-23)
May 9 23:58:05.028967 dbus-daemon[1904]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1759 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 9 23:58:05.035280 ntpd[1908]: basedate set to 2025-04-27
May 9 23:58:05.035315 ntpd[1908]: gps base set to 2025-04-27 (week 2364)
May 9 23:58:05.049817 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 9 23:58:05.066280 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: Listen and drop on 0 v6wildcard [::]:123
May 9 23:58:05.066280 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 9 23:58:05.066280 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: Listen normally on 2 lo 127.0.0.1:123
May 9 23:58:05.066280 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: Listen normally on 3 eth0 172.31.24.82:123
May 9 23:58:05.066280 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: Listen normally on 4 lo [::1]:123
May 9 23:58:05.066280 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: bind(21) AF_INET6 fe80::437:a7ff:fe27:e461%2#123 flags 0x11 failed: Cannot assign requested address
May 9 23:58:05.066280 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: unable to create socket on eth0 (5) for fe80::437:a7ff:fe27:e461%2#123
May 9 23:58:05.066280 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: failed to init interface for address fe80::437:a7ff:fe27:e461%2
May 9 23:58:05.066280 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: Listening on routing socket on fd #21 for interface updates
May 9 23:58:05.050990 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 9 23:58:05.057214 ntpd[1908]: Listen and drop on 0 v6wildcard [::]:123
May 9 23:58:05.051068 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 9 23:58:05.057303 ntpd[1908]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 9 23:58:05.054228 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 9 23:58:05.068048 jq[1919]: true
May 9 23:58:05.065183 ntpd[1908]: Listen normally on 2 lo 127.0.0.1:123
May 9 23:58:05.054275 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 9 23:58:05.065269 ntpd[1908]: Listen normally on 3 eth0 172.31.24.82:123
May 9 23:58:05.065348 ntpd[1908]: Listen normally on 4 lo [::1]:123
May 9 23:58:05.065430 ntpd[1908]: bind(21) AF_INET6 fe80::437:a7ff:fe27:e461%2#123 flags 0x11 failed: Cannot assign requested address
May 9 23:58:05.065469 ntpd[1908]: unable to create socket on eth0 (5) for fe80::437:a7ff:fe27:e461%2#123
May 9 23:58:05.065497 ntpd[1908]: failed to init interface for address fe80::437:a7ff:fe27:e461%2
May 9 23:58:05.065560 ntpd[1908]: Listening on routing socket on fd #21 for interface updates
May 9 23:58:05.102362 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 9 23:58:05.107037 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 9 23:58:05.128570 systemd[1]: motdgen.service: Deactivated successfully.
May 9 23:58:05.130074 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 9 23:58:05.153307 tar[1925]: linux-arm64/LICENSE
May 9 23:58:05.153307 tar[1925]: linux-arm64/helm
May 9 23:58:05.153928 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 9 23:58:05.153928 ntpd[1908]: 9 May 23:58:05 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 9 23:58:05.143499 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 9 23:58:05.143572 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 9 23:58:05.170090 extend-filesystems[1906]: Found loop4
May 9 23:58:05.170090 extend-filesystems[1906]: Found loop5
May 9 23:58:05.170090 extend-filesystems[1906]: Found loop6
May 9 23:58:05.170090 extend-filesystems[1906]: Found loop7
May 9 23:58:05.170090 extend-filesystems[1906]: Found nvme0n1
May 9 23:58:05.170090 extend-filesystems[1906]: Found nvme0n1p1
May 9 23:58:05.170090 extend-filesystems[1906]: Found nvme0n1p2
May 9 23:58:05.170090 extend-filesystems[1906]: Found nvme0n1p3
May 9 23:58:05.170090 extend-filesystems[1906]: Found usr
May 9 23:58:05.170090 extend-filesystems[1906]: Found nvme0n1p4
May 9 23:58:05.170090 extend-filesystems[1906]: Found nvme0n1p6
May 9 23:58:05.170090 extend-filesystems[1906]: Found nvme0n1p7
May 9 23:58:05.170090 extend-filesystems[1906]: Found nvme0n1p9
May 9 23:58:05.170090 extend-filesystems[1906]: Checking size of /dev/nvme0n1p9
May 9 23:58:05.224716 update_engine[1916]: I20250509 23:58:05.192837 1916 main.cc:92] Flatcar Update Engine starting
May 9 23:58:05.222509 systemd[1]: Started update-engine.service - Update Engine.
May 9 23:58:05.257420 update_engine[1916]: I20250509 23:58:05.228425 1916 update_check_scheduler.cc:74] Next update check in 8m51s
May 9 23:58:05.249272 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 9 23:58:05.254577 (ntainerd)[1945]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 9 23:58:05.270256 coreos-metadata[1903]: May 09 23:58:05.268 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 9 23:58:05.273460 coreos-metadata[1903]: May 09 23:58:05.273 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
May 9 23:58:05.278265 coreos-metadata[1903]: May 09 23:58:05.276 INFO Fetch successful
May 9 23:58:05.278265 coreos-metadata[1903]: May 09 23:58:05.277 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
May 9 23:58:05.283934 coreos-metadata[1903]: May 09 23:58:05.280 INFO Fetch successful
May 9 23:58:05.283934 coreos-metadata[1903]: May 09 23:58:05.280 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
May 9 23:58:05.283934 coreos-metadata[1903]: May 09 23:58:05.281 INFO Fetch successful
May 9 23:58:05.283934 coreos-metadata[1903]: May 09 23:58:05.281 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
May 9 23:58:05.283934 coreos-metadata[1903]: May 09 23:58:05.283 INFO Fetch successful
May 9 23:58:05.284292 coreos-metadata[1903]: May 09 23:58:05.284 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
May 9 23:58:05.287020 coreos-metadata[1903]: May 09 23:58:05.286 INFO Fetch failed with 404: resource not found
May 9 23:58:05.287020 coreos-metadata[1903]: May 09 23:58:05.286 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
May 9 23:58:05.288939 coreos-metadata[1903]: May 09 23:58:05.287 INFO Fetch successful
May 9 23:58:05.288939 coreos-metadata[1903]: May 09 23:58:05.287 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
May 9 23:58:05.289150 coreos-metadata[1903]: May 09 23:58:05.289 INFO Fetch successful
May 9 23:58:05.289543 coreos-metadata[1903]: May 09 23:58:05.289 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
May 9 23:58:05.289662 extend-filesystems[1906]: Resized partition /dev/nvme0n1p9
May 9 23:58:05.297166 coreos-metadata[1903]: May 09 23:58:05.296 INFO Fetch successful
May 9 23:58:05.297166 coreos-metadata[1903]: May 09 23:58:05.296 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
May 9 23:58:05.297347 jq[1944]: true
May 9 23:58:05.302772 coreos-metadata[1903]: May 09 23:58:05.300 INFO Fetch successful
May 9 23:58:05.302772 coreos-metadata[1903]: May 09 23:58:05.300 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
May 9 23:58:05.303003 extend-filesystems[1958]: resize2fs 1.47.1 (20-May-2024)
May 9 23:58:05.312941 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
May 9 23:58:05.313039 coreos-metadata[1903]: May 09 23:58:05.311 INFO Fetch successful
May 9 23:58:05.352527 systemd[1]: Finished setup-oem.service - Setup OEM.
May 9 23:58:05.471833 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 9 23:58:05.474542 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 9 23:58:05.481942 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
May 9 23:58:05.502974 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1773)
May 9 23:58:05.509722 extend-filesystems[1958]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 9 23:58:05.509722 extend-filesystems[1958]: old_desc_blocks = 1, new_desc_blocks = 1
May 9 23:58:05.509722 extend-filesystems[1958]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
May 9 23:58:05.536377 extend-filesystems[1906]: Resized filesystem in /dev/nvme0n1p9
May 9 23:58:05.538468 bash[1988]: Updated "/home/core/.ssh/authorized_keys"
May 9 23:58:05.513875 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 9 23:58:05.514413 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 9 23:58:05.546477 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 9 23:58:05.574051 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 9 23:58:05.589030 systemd-networkd[1759]: eth0: Gained IPv6LL
May 9 23:58:05.594027 systemd-logind[1915]: Watching system buttons on /dev/input/event0 (Power Button)
May 9 23:58:05.594066 systemd-logind[1915]: Watching system buttons on /dev/input/event1 (Sleep Button)
May 9 23:58:05.600362 systemd-logind[1915]: New seat seat0.
May 9 23:58:05.676434 systemd[1]: Starting sshkeys.service...
May 9 23:58:05.678242 systemd[1]: Started systemd-logind.service - User Login Management.
May 9 23:58:05.681032 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 9 23:58:05.684887 systemd[1]: Reached target network-online.target - Network is Online.
May 9 23:58:05.695270 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
May 9 23:58:05.700188 locksmithd[1951]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 9 23:58:05.704819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 9 23:58:05.711240 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 9 23:58:05.791568 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 9 23:58:05.796496 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 9 23:58:05.985384 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 9 23:58:06.094496 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 9 23:58:06.094770 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
May 9 23:58:06.102379 amazon-ssm-agent[2023]: Initializing new seelog logger
May 9 23:58:06.102379 amazon-ssm-agent[2023]: New Seelog Logger Creation Complete
May 9 23:58:06.102465 dbus-daemon[1904]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1942 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 9 23:58:06.122927 amazon-ssm-agent[2023]: 2025/05/09 23:58:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:06.122927 amazon-ssm-agent[2023]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:06.122927 amazon-ssm-agent[2023]: 2025/05/09 23:58:06 processing appconfig overrides
May 9 23:58:06.122927 amazon-ssm-agent[2023]: 2025/05/09 23:58:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:06.122927 amazon-ssm-agent[2023]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:06.122927 amazon-ssm-agent[2023]: 2025/05/09 23:58:06 processing appconfig overrides
May 9 23:58:06.122927 amazon-ssm-agent[2023]: 2025/05/09 23:58:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:06.122927 amazon-ssm-agent[2023]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:06.122927 amazon-ssm-agent[2023]: 2025/05/09 23:58:06 processing appconfig overrides
May 9 23:58:06.122927 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO Proxy environment variables:
May 9 23:58:06.129666 systemd[1]: Starting polkit.service - Authorization Manager...
May 9 23:58:06.138979 amazon-ssm-agent[2023]: 2025/05/09 23:58:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:06.138979 amazon-ssm-agent[2023]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 9 23:58:06.141364 amazon-ssm-agent[2023]: 2025/05/09 23:58:06 processing appconfig overrides
May 9 23:58:06.199811 polkitd[2095]: Started polkitd version 121
May 9 23:58:06.225151 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO https_proxy:
May 9 23:58:06.225194 polkitd[2095]: Loading rules from directory /etc/polkit-1/rules.d
May 9 23:58:06.225304 polkitd[2095]: Loading rules from directory /usr/share/polkit-1/rules.d
May 9 23:58:06.234034 polkitd[2095]: Finished loading, compiling and executing 2 rules
May 9 23:58:06.239158 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 9 23:58:06.239897 systemd[1]: Started polkit.service - Authorization Manager.
May 9 23:58:06.246020 polkitd[2095]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 9 23:58:06.310958 containerd[1945]: time="2025-05-09T23:58:06.310639918Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 9 23:58:06.330734 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO http_proxy:
May 9 23:58:06.341522 coreos-metadata[2032]: May 09 23:58:06.341 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 9 23:58:06.344160 systemd-hostnamed[1942]: Hostname set to (transient)
May 9 23:58:06.344330 systemd-resolved[1725]: System hostname changed to 'ip-172-31-24-82'.
May 9 23:58:06.345891 coreos-metadata[2032]: May 09 23:58:06.344 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
May 9 23:58:06.352364 coreos-metadata[2032]: May 09 23:58:06.346 INFO Fetch successful
May 9 23:58:06.352364 coreos-metadata[2032]: May 09 23:58:06.348 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
May 9 23:58:06.352364 coreos-metadata[2032]: May 09 23:58:06.349 INFO Fetch successful
May 9 23:58:06.357104 unknown[2032]: wrote ssh authorized keys file for user: core
May 9 23:58:06.431143 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO no_proxy:
May 9 23:58:06.444774 update-ssh-keys[2121]: Updated "/home/core/.ssh/authorized_keys"
May 9 23:58:06.446482 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 9 23:58:06.455751 systemd[1]: Finished sshkeys.service.
May 9 23:58:06.532468 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO Checking if agent identity type OnPrem can be assumed
May 9 23:58:06.532605 containerd[1945]: time="2025-05-09T23:58:06.532302107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 9 23:58:06.547941 containerd[1945]: time="2025-05-09T23:58:06.546473495Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 9 23:58:06.547941 containerd[1945]: time="2025-05-09T23:58:06.546546011Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 9 23:58:06.547941 containerd[1945]: time="2025-05-09T23:58:06.546582443Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 9 23:58:06.547941 containerd[1945]: time="2025-05-09T23:58:06.546978371Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 9 23:58:06.547941 containerd[1945]: time="2025-05-09T23:58:06.547027415Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 9 23:58:06.547941 containerd[1945]: time="2025-05-09T23:58:06.547193459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 9 23:58:06.547941 containerd[1945]: time="2025-05-09T23:58:06.547233359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 9 23:58:06.547941 containerd[1945]: time="2025-05-09T23:58:06.547573883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 23:58:06.547941 containerd[1945]: time="2025-05-09T23:58:06.547615151Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 9 23:58:06.547941 containerd[1945]: time="2025-05-09T23:58:06.547647251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 9 23:58:06.547941 containerd[1945]: time="2025-05-09T23:58:06.547678703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 9 23:58:06.548495 containerd[1945]: time="2025-05-09T23:58:06.547872599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 9 23:58:06.554965 containerd[1945]: time="2025-05-09T23:58:06.553261007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 9 23:58:06.554965 containerd[1945]: time="2025-05-09T23:58:06.553536455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 9 23:58:06.554965 containerd[1945]: time="2025-05-09T23:58:06.553577435Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 9 23:58:06.554965 containerd[1945]: time="2025-05-09T23:58:06.553784507Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 9 23:58:06.554965 containerd[1945]: time="2025-05-09T23:58:06.553890023Z" level=info msg="metadata content store policy set" policy=shared
May 9 23:58:06.566301 containerd[1945]: time="2025-05-09T23:58:06.563189099Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 9 23:58:06.566301 containerd[1945]: time="2025-05-09T23:58:06.563295443Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 9 23:58:06.566301 containerd[1945]: time="2025-05-09T23:58:06.563342375Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 9 23:58:06.566301 containerd[1945]: time="2025-05-09T23:58:06.563378999Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 9 23:58:06.566301 containerd[1945]: time="2025-05-09T23:58:06.563411099Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 9 23:58:06.566301 containerd[1945]: time="2025-05-09T23:58:06.563711459Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 9 23:58:06.566301 containerd[1945]: time="2025-05-09T23:58:06.564266327Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 9 23:58:06.566301 containerd[1945]: time="2025-05-09T23:58:06.564549839Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 9 23:58:06.566301 containerd[1945]: time="2025-05-09T23:58:06.564594803Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 9 23:58:06.566301 containerd[1945]: time="2025-05-09T23:58:06.564627491Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 9 23:58:06.566301 containerd[1945]: time="2025-05-09T23:58:06.564660203Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 9 23:58:06.566301 containerd[1945]: time="2025-05-09T23:58:06.564721631Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 9 23:58:06.566301 containerd[1945]: time="2025-05-09T23:58:06.564755099Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 9 23:58:06.566301 containerd[1945]: time="2025-05-09T23:58:06.564787595Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 9 23:58:06.567006 containerd[1945]: time="2025-05-09T23:58:06.564820559Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 9 23:58:06.567006 containerd[1945]: time="2025-05-09T23:58:06.564854099Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 9 23:58:06.567006 containerd[1945]: time="2025-05-09T23:58:06.564887171Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 9 23:58:06.570940 containerd[1945]: time="2025-05-09T23:58:06.569028839Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 9 23:58:06.570940 containerd[1945]: time="2025-05-09T23:58:06.569116967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 9 23:58:06.570940 containerd[1945]: time="2025-05-09T23:58:06.569152319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 9 23:58:06.570940 containerd[1945]: time="2025-05-09T23:58:06.569184827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 9 23:58:06.570940 containerd[1945]: time="2025-05-09T23:58:06.569218103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 9 23:58:06.570940 containerd[1945]: time="2025-05-09T23:58:06.569248859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 9 23:58:06.570940 containerd[1945]: time="2025-05-09T23:58:06.569298827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 9 23:58:06.570940 containerd[1945]: time="2025-05-09T23:58:06.569334779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 9 23:58:06.570940 containerd[1945]: time="2025-05-09T23:58:06.569367203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 9 23:58:06.570940 containerd[1945]: time="2025-05-09T23:58:06.569399987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 9 23:58:06.570940 containerd[1945]: time="2025-05-09T23:58:06.569436707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 9 23:58:06.570940 containerd[1945]: time="2025-05-09T23:58:06.569469407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 9 23:58:06.570940 containerd[1945]: time="2025-05-09T23:58:06.569501699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 9 23:58:06.570940 containerd[1945]: time="2025-05-09T23:58:06.569532059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 9 23:58:06.570940 containerd[1945]: time="2025-05-09T23:58:06.569566739Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 9 23:58:06.571674 containerd[1945]: time="2025-05-09T23:58:06.569635571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 9 23:58:06.571674 containerd[1945]: time="2025-05-09T23:58:06.569666387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 9 23:58:06.571674 containerd[1945]: time="2025-05-09T23:58:06.569693699Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 9 23:58:06.571674 containerd[1945]: time="2025-05-09T23:58:06.569831615Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 9 23:58:06.571674 containerd[1945]: time="2025-05-09T23:58:06.569874515Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 9 23:58:06.571674 containerd[1945]: time="2025-05-09T23:58:06.569934239Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 9 23:58:06.571674 containerd[1945]: time="2025-05-09T23:58:06.569976995Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 9 23:58:06.571674 containerd[1945]: time="2025-05-09T23:58:06.570011435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 9 23:58:06.571674 containerd[1945]: time="2025-05-09T23:58:06.570042047Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 9 23:58:06.571674 containerd[1945]: time="2025-05-09T23:58:06.570066875Z" level=info msg="NRI interface is disabled by configuration."
May 9 23:58:06.571674 containerd[1945]: time="2025-05-09T23:58:06.570102155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 May 9 23:58:06.572184 containerd[1945]: time="2025-05-09T23:58:06.570707471Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 23:58:06.572184 containerd[1945]: time="2025-05-09T23:58:06.570829019Z" level=info msg="Connect containerd service" May 9 23:58:06.580958 containerd[1945]: time="2025-05-09T23:58:06.570891947Z" level=info msg="using legacy CRI server" May 9 23:58:06.580958 containerd[1945]: time="2025-05-09T23:58:06.576952883Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 23:58:06.580958 containerd[1945]: time="2025-05-09T23:58:06.577141979Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 23:58:06.580958 containerd[1945]: time="2025-05-09T23:58:06.580303895Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 23:58:06.580958 containerd[1945]: time="2025-05-09T23:58:06.580543691Z" level=info msg="Start subscribing containerd event" May 9 23:58:06.580958 containerd[1945]: time="2025-05-09T23:58:06.580645643Z" level=info msg="Start recovering state" May 9 23:58:06.580958 containerd[1945]: time="2025-05-09T23:58:06.580871351Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 9 23:58:06.580958 containerd[1945]: time="2025-05-09T23:58:06.580875515Z" level=info msg="Start event monitor" May 9 23:58:06.580958 containerd[1945]: time="2025-05-09T23:58:06.580957811Z" level=info msg="Start snapshots syncer" May 9 23:58:06.581409 containerd[1945]: time="2025-05-09T23:58:06.580980035Z" level=info msg="Start cni network conf syncer for default" May 9 23:58:06.581409 containerd[1945]: time="2025-05-09T23:58:06.581000495Z" level=info msg="Start streaming server" May 9 23:58:06.585050 containerd[1945]: time="2025-05-09T23:58:06.584646299Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 23:58:06.584952 systemd[1]: Started containerd.service - containerd container runtime. May 9 23:58:06.594462 containerd[1945]: time="2025-05-09T23:58:06.587639891Z" level=info msg="containerd successfully booted in 0.291910s" May 9 23:58:06.632026 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO Checking if agent identity type EC2 can be assumed May 9 23:58:06.731937 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO Agent will take identity from EC2 May 9 23:58:06.830310 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO [amazon-ssm-agent] using named pipe channel for IPC May 9 23:58:06.929596 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO [amazon-ssm-agent] using named pipe channel for IPC May 9 23:58:07.032934 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO [amazon-ssm-agent] using named pipe channel for IPC May 9 23:58:07.133587 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 9 23:58:07.207539 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 May 9 23:58:07.207539 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO [amazon-ssm-agent] Starting Core Agent May 9 23:58:07.207539 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration May 9 23:58:07.207539 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO [Registrar] Starting registrar module May 9 23:58:07.207539 amazon-ssm-agent[2023]: 2025-05-09 23:58:06 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 9 23:58:07.207975 amazon-ssm-agent[2023]: 2025-05-09 23:58:07 INFO [EC2Identity] EC2 registration was successful. May 9 23:58:07.207975 amazon-ssm-agent[2023]: 2025-05-09 23:58:07 INFO [CredentialRefresher] credentialRefresher has started May 9 23:58:07.207975 amazon-ssm-agent[2023]: 2025-05-09 23:58:07 INFO [CredentialRefresher] Starting credentials refresher loop May 9 23:58:07.207975 amazon-ssm-agent[2023]: 2025-05-09 23:58:07 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 9 23:58:07.233927 amazon-ssm-agent[2023]: 2025-05-09 23:58:07 INFO [CredentialRefresher] Next credential rotation will be in 31.058300366833333 minutes May 9 23:58:07.346958 tar[1925]: linux-arm64/README.md May 9 23:58:07.384733 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 23:58:07.866324 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:07.879549 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:58:08.011635 ntpd[1908]: Listen normally on 6 eth0 [fe80::437:a7ff:fe27:e461%2]:123 May 9 23:58:08.012574 ntpd[1908]: 9 May 23:58:08 ntpd[1908]: Listen normally on 6 eth0 [fe80::437:a7ff:fe27:e461%2]:123 May 9 23:58:08.146666 sshd_keygen[1934]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 23:58:08.191943 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 23:58:08.203512 systemd[1]: Starting issuegen.service - Generate /run/issue... 
May 9 23:58:08.214464 systemd[1]: Started sshd@0-172.31.24.82:22-147.75.109.163:49446.service - OpenSSH per-connection server daemon (147.75.109.163:49446). May 9 23:58:08.246729 systemd[1]: issuegen.service: Deactivated successfully. May 9 23:58:08.247135 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 23:58:08.259974 amazon-ssm-agent[2023]: 2025-05-09 23:58:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 9 23:58:08.261482 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 23:58:08.306050 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 23:58:08.317605 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 23:58:08.329675 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 9 23:58:08.332402 systemd[1]: Reached target getty.target - Login Prompts. May 9 23:58:08.334528 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 23:58:08.336785 systemd[1]: Startup finished in 1.195s (kernel) + 8.920s (initrd) + 8.664s (userspace) = 18.780s. May 9 23:58:08.361069 amazon-ssm-agent[2023]: 2025-05-09 23:58:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2155) started May 9 23:58:08.461766 amazon-ssm-agent[2023]: 2025-05-09 23:58:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 9 23:58:08.482135 sshd[2149]: Accepted publickey for core from 147.75.109.163 port 49446 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:58:08.488370 sshd[2149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:08.513719 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 23:58:08.521491 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
May 9 23:58:08.531665 systemd-logind[1915]: New session 1 of user core. May 9 23:58:08.561681 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 23:58:08.572646 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 23:58:08.593107 (systemd)[2174]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 23:58:08.822420 systemd[2174]: Queued start job for default target default.target. May 9 23:58:08.828853 systemd[2174]: Created slice app.slice - User Application Slice. May 9 23:58:08.829232 systemd[2174]: Reached target paths.target - Paths. May 9 23:58:08.829277 systemd[2174]: Reached target timers.target - Timers. May 9 23:58:08.832157 systemd[2174]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 23:58:08.874173 systemd[2174]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 23:58:08.874465 systemd[2174]: Reached target sockets.target - Sockets. May 9 23:58:08.874519 systemd[2174]: Reached target basic.target - Basic System. May 9 23:58:08.874623 systemd[2174]: Reached target default.target - Main User Target. May 9 23:58:08.874694 systemd[2174]: Startup finished in 268ms. May 9 23:58:08.875443 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 23:58:08.883429 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 23:58:08.908896 kubelet[2135]: E0509 23:58:08.908828 2135 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:58:08.913533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:58:08.913927 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 9 23:58:08.914548 systemd[1]: kubelet.service: Consumed 1.316s CPU time. May 9 23:58:09.044788 systemd[1]: Started sshd@1-172.31.24.82:22-147.75.109.163:60692.service - OpenSSH per-connection server daemon (147.75.109.163:60692). May 9 23:58:09.232331 sshd[2188]: Accepted publickey for core from 147.75.109.163 port 60692 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:58:09.235001 sshd[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:09.243663 systemd-logind[1915]: New session 2 of user core. May 9 23:58:09.250269 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 23:58:09.379807 sshd[2188]: pam_unix(sshd:session): session closed for user core May 9 23:58:09.391752 systemd[1]: sshd@1-172.31.24.82:22-147.75.109.163:60692.service: Deactivated successfully. May 9 23:58:09.394690 systemd[1]: session-2.scope: Deactivated successfully. May 9 23:58:09.398647 systemd-logind[1915]: Session 2 logged out. Waiting for processes to exit. May 9 23:58:09.423749 systemd[1]: Started sshd@2-172.31.24.82:22-147.75.109.163:60704.service - OpenSSH per-connection server daemon (147.75.109.163:60704). May 9 23:58:09.425878 systemd-logind[1915]: Removed session 2. May 9 23:58:09.592951 sshd[2195]: Accepted publickey for core from 147.75.109.163 port 60704 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:58:09.595617 sshd[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:09.606248 systemd-logind[1915]: New session 3 of user core. May 9 23:58:09.621246 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 23:58:09.742671 sshd[2195]: pam_unix(sshd:session): session closed for user core May 9 23:58:09.749534 systemd-logind[1915]: Session 3 logged out. Waiting for processes to exit. May 9 23:58:09.751130 systemd[1]: sshd@2-172.31.24.82:22-147.75.109.163:60704.service: Deactivated successfully. 
May 9 23:58:09.754850 systemd[1]: session-3.scope: Deactivated successfully. May 9 23:58:09.757345 systemd-logind[1915]: Removed session 3. May 9 23:58:09.782579 systemd[1]: Started sshd@3-172.31.24.82:22-147.75.109.163:60716.service - OpenSSH per-connection server daemon (147.75.109.163:60716). May 9 23:58:09.967644 sshd[2202]: Accepted publickey for core from 147.75.109.163 port 60716 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:58:09.970256 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:09.979157 systemd-logind[1915]: New session 4 of user core. May 9 23:58:09.986270 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 23:58:10.114542 sshd[2202]: pam_unix(sshd:session): session closed for user core May 9 23:58:10.120359 systemd[1]: sshd@3-172.31.24.82:22-147.75.109.163:60716.service: Deactivated successfully. May 9 23:58:10.125076 systemd[1]: session-4.scope: Deactivated successfully. May 9 23:58:10.126768 systemd-logind[1915]: Session 4 logged out. Waiting for processes to exit. May 9 23:58:10.128942 systemd-logind[1915]: Removed session 4. May 9 23:58:10.154572 systemd[1]: Started sshd@4-172.31.24.82:22-147.75.109.163:60730.service - OpenSSH per-connection server daemon (147.75.109.163:60730). May 9 23:58:10.324889 sshd[2209]: Accepted publickey for core from 147.75.109.163 port 60730 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:58:10.327436 sshd[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:10.334659 systemd-logind[1915]: New session 5 of user core. May 9 23:58:10.345198 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 9 23:58:10.460595 sudo[2212]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 23:58:10.461477 sudo[2212]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:58:10.477539 sudo[2212]: pam_unix(sudo:session): session closed for user root May 9 23:58:10.501330 sshd[2209]: pam_unix(sshd:session): session closed for user core May 9 23:58:10.508014 systemd[1]: sshd@4-172.31.24.82:22-147.75.109.163:60730.service: Deactivated successfully. May 9 23:58:10.512217 systemd[1]: session-5.scope: Deactivated successfully. May 9 23:58:10.515198 systemd-logind[1915]: Session 5 logged out. Waiting for processes to exit. May 9 23:58:10.517853 systemd-logind[1915]: Removed session 5. May 9 23:58:10.537485 systemd[1]: Started sshd@5-172.31.24.82:22-147.75.109.163:60746.service - OpenSSH per-connection server daemon (147.75.109.163:60746). May 9 23:58:10.707140 sshd[2217]: Accepted publickey for core from 147.75.109.163 port 60746 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:58:10.709968 sshd[2217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:10.718010 systemd-logind[1915]: New session 6 of user core. May 9 23:58:10.725265 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 9 23:58:10.829003 sudo[2221]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 23:58:10.830104 sudo[2221]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:58:10.836574 sudo[2221]: pam_unix(sudo:session): session closed for user root May 9 23:58:10.846646 sudo[2220]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 9 23:58:10.847624 sudo[2220]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:58:10.874821 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 9 23:58:10.877154 auditctl[2224]: No rules May 9 23:58:10.877835 systemd[1]: audit-rules.service: Deactivated successfully. May 9 23:58:10.878246 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 9 23:58:10.888626 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 23:58:10.935710 augenrules[2242]: No rules May 9 23:58:10.938117 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 23:58:10.940376 sudo[2220]: pam_unix(sudo:session): session closed for user root May 9 23:58:10.963775 sshd[2217]: pam_unix(sshd:session): session closed for user core May 9 23:58:10.969569 systemd[1]: sshd@5-172.31.24.82:22-147.75.109.163:60746.service: Deactivated successfully. May 9 23:58:10.972627 systemd[1]: session-6.scope: Deactivated successfully. May 9 23:58:10.976325 systemd-logind[1915]: Session 6 logged out. Waiting for processes to exit. May 9 23:58:10.979402 systemd-logind[1915]: Removed session 6. May 9 23:58:11.008418 systemd[1]: Started sshd@6-172.31.24.82:22-147.75.109.163:60748.service - OpenSSH per-connection server daemon (147.75.109.163:60748). 
May 9 23:58:11.177414 sshd[2250]: Accepted publickey for core from 147.75.109.163 port 60748 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:58:11.180175 sshd[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:11.188995 systemd-logind[1915]: New session 7 of user core. May 9 23:58:11.199266 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 23:58:11.305339 sudo[2253]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 23:58:11.306148 sudo[2253]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:58:11.773433 systemd[1]: Starting docker.service - Docker Application Container Engine... May 9 23:58:11.773727 (dockerd)[2268]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 23:58:12.417332 systemd-resolved[1725]: Clock change detected. Flushing caches. May 9 23:58:12.555281 dockerd[2268]: time="2025-05-09T23:58:12.555076072Z" level=info msg="Starting up" May 9 23:58:12.675846 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3985053372-merged.mount: Deactivated successfully. May 9 23:58:12.764495 systemd[1]: var-lib-docker-metacopy\x2dcheck822837369-merged.mount: Deactivated successfully. May 9 23:58:12.777372 dockerd[2268]: time="2025-05-09T23:58:12.777296741Z" level=info msg="Loading containers: start." May 9 23:58:12.935011 kernel: Initializing XFRM netlink socket May 9 23:58:12.968753 (udev-worker)[2291]: Network interface NamePolicy= disabled on kernel command line. May 9 23:58:13.057083 systemd-networkd[1759]: docker0: Link UP May 9 23:58:13.081697 dockerd[2268]: time="2025-05-09T23:58:13.081552339Z" level=info msg="Loading containers: done." 
May 9 23:58:13.107319 dockerd[2268]: time="2025-05-09T23:58:13.107243439Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 23:58:13.107546 dockerd[2268]: time="2025-05-09T23:58:13.107414415Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 9 23:58:13.107685 dockerd[2268]: time="2025-05-09T23:58:13.107637147Z" level=info msg="Daemon has completed initialization" May 9 23:58:13.166474 dockerd[2268]: time="2025-05-09T23:58:13.166351095Z" level=info msg="API listen on /run/docker.sock" May 9 23:58:13.168090 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 23:58:13.671600 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1714725855-merged.mount: Deactivated successfully. May 9 23:58:14.310832 containerd[1945]: time="2025-05-09T23:58:14.310398725Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 9 23:58:14.911833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2011874301.mount: Deactivated successfully. 
May 9 23:58:16.247030 containerd[1945]: time="2025-05-09T23:58:16.246172819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:16.248691 containerd[1945]: time="2025-05-09T23:58:16.248603095Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233118" May 9 23:58:16.249850 containerd[1945]: time="2025-05-09T23:58:16.249731743Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:16.263679 containerd[1945]: time="2025-05-09T23:58:16.262171447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:16.264803 containerd[1945]: time="2025-05-09T23:58:16.264321691Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.953858562s" May 9 23:58:16.264803 containerd[1945]: time="2025-05-09T23:58:16.264405391Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 9 23:58:16.265458 containerd[1945]: time="2025-05-09T23:58:16.265391779Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 9 23:58:17.643397 containerd[1945]: time="2025-05-09T23:58:17.643329309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:17.645455 containerd[1945]: time="2025-05-09T23:58:17.645379245Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529571" May 9 23:58:17.648094 containerd[1945]: time="2025-05-09T23:58:17.648041817Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:17.654990 containerd[1945]: time="2025-05-09T23:58:17.654592438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:17.658207 containerd[1945]: time="2025-05-09T23:58:17.658146946Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.392684739s" May 9 23:58:17.658349 containerd[1945]: time="2025-05-09T23:58:17.658210870Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 9 23:58:17.659203 containerd[1945]: time="2025-05-09T23:58:17.658855966Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 9 23:58:18.933219 containerd[1945]: time="2025-05-09T23:58:18.933150144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:18.937395 containerd[1945]: time="2025-05-09T23:58:18.937339404Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482173" May 9 23:58:18.939197 containerd[1945]: time="2025-05-09T23:58:18.939134064Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:18.947850 containerd[1945]: time="2025-05-09T23:58:18.947766432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:18.950680 containerd[1945]: time="2025-05-09T23:58:18.950590380Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.291684302s" May 9 23:58:18.950680 containerd[1945]: time="2025-05-09T23:58:18.950664528Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 9 23:58:18.951446 containerd[1945]: time="2025-05-09T23:58:18.951377436Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 9 23:58:19.328259 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 9 23:58:19.347936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:19.769288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 9 23:58:19.785513 (kubelet)[2481]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:58:19.867561 kubelet[2481]: E0509 23:58:19.867368 2481 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:58:19.877590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:58:19.879136 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:58:20.436872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1292787659.mount: Deactivated successfully. May 9 23:58:21.023823 containerd[1945]: time="2025-05-09T23:58:21.023750878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:21.025541 containerd[1945]: time="2025-05-09T23:58:21.025461226Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370351" May 9 23:58:21.026338 containerd[1945]: time="2025-05-09T23:58:21.025750354Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:21.030165 containerd[1945]: time="2025-05-09T23:58:21.030063754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:21.032608 containerd[1945]: time="2025-05-09T23:58:21.031777246Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id 
\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 2.080324798s" May 9 23:58:21.032608 containerd[1945]: time="2025-05-09T23:58:21.031848238Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 9 23:58:21.033993 containerd[1945]: time="2025-05-09T23:58:21.033557530Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 9 23:58:21.579864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548404876.mount: Deactivated successfully. May 9 23:58:22.685754 containerd[1945]: time="2025-05-09T23:58:22.685686147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:22.688020 containerd[1945]: time="2025-05-09T23:58:22.687928083Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" May 9 23:58:22.689048 containerd[1945]: time="2025-05-09T23:58:22.689003091Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:22.698005 containerd[1945]: time="2025-05-09T23:58:22.696053955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:22.702872 containerd[1945]: time="2025-05-09T23:58:22.702776187Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag 
\"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.669149177s" May 9 23:58:22.702872 containerd[1945]: time="2025-05-09T23:58:22.702857031Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 9 23:58:22.705383 containerd[1945]: time="2025-05-09T23:58:22.705327627Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 9 23:58:23.207933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654787171.mount: Deactivated successfully. May 9 23:58:23.214123 containerd[1945]: time="2025-05-09T23:58:23.214053745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:23.215668 containerd[1945]: time="2025-05-09T23:58:23.215618701Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 9 23:58:23.216996 containerd[1945]: time="2025-05-09T23:58:23.216049501Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:23.220416 containerd[1945]: time="2025-05-09T23:58:23.220317757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:23.222344 containerd[1945]: time="2025-05-09T23:58:23.222118693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 516.731654ms" May 9 23:58:23.222344 containerd[1945]: time="2025-05-09T23:58:23.222175849Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 9 23:58:23.223294 containerd[1945]: time="2025-05-09T23:58:23.223012249Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 9 23:58:23.815548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2583769807.mount: Deactivated successfully. May 9 23:58:26.418151 containerd[1945]: time="2025-05-09T23:58:26.418090625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:26.421093 containerd[1945]: time="2025-05-09T23:58:26.421047233Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" May 9 23:58:26.422322 containerd[1945]: time="2025-05-09T23:58:26.422277977Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:26.430098 containerd[1945]: time="2025-05-09T23:58:26.430026245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:26.433061 containerd[1945]: time="2025-05-09T23:58:26.432976301Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.209892364s" May 9 23:58:26.433166 
containerd[1945]: time="2025-05-09T23:58:26.433058285Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 9 23:58:30.077758 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 9 23:58:30.087568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:30.467480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:30.472361 (kubelet)[2632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:58:30.565476 kubelet[2632]: E0509 23:58:30.565406 2632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:58:30.572218 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:58:30.572763 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:58:32.626558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:32.635509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:32.693861 systemd[1]: Reloading requested from client PID 2646 ('systemctl') (unit session-7.scope)... May 9 23:58:32.693901 systemd[1]: Reloading... May 9 23:58:32.899995 zram_generator::config[2687]: No configuration found. May 9 23:58:33.173546 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:58:33.358269 systemd[1]: Reloading finished in 663 ms. 
May 9 23:58:33.469937 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 23:58:33.470214 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 23:58:33.472135 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:33.486544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:33.844346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:33.847523 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 23:58:33.930890 kubelet[2751]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:58:33.930890 kubelet[2751]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 9 23:58:33.930890 kubelet[2751]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 9 23:58:33.931551 kubelet[2751]: I0509 23:58:33.931159 2751 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 23:58:36.083015 kubelet[2751]: I0509 23:58:36.082179 2751 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 9 23:58:36.083015 kubelet[2751]: I0509 23:58:36.082236 2751 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 23:58:36.083015 kubelet[2751]: I0509 23:58:36.082706 2751 server.go:954] "Client rotation is on, will bootstrap in background" May 9 23:58:36.127836 kubelet[2751]: E0509 23:58:36.127757 2751 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.24.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.82:6443: connect: connection refused" logger="UnhandledError" May 9 23:58:36.132347 kubelet[2751]: I0509 23:58:36.132273 2751 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 23:58:36.148116 kubelet[2751]: E0509 23:58:36.148044 2751 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 23:58:36.148116 kubelet[2751]: I0509 23:58:36.148104 2751 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 23:58:36.154250 kubelet[2751]: I0509 23:58:36.154184 2751 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 23:58:36.155808 kubelet[2751]: I0509 23:58:36.155704 2751 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 23:58:36.156164 kubelet[2751]: I0509 23:58:36.155794 2751 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-82","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 23:58:36.156371 kubelet[2751]: I0509 23:58:36.156193 2751 topology_manager.go:138] "Creating topology manager with none 
policy" May 9 23:58:36.156371 kubelet[2751]: I0509 23:58:36.156216 2751 container_manager_linux.go:304] "Creating device plugin manager" May 9 23:58:36.156499 kubelet[2751]: I0509 23:58:36.156481 2751 state_mem.go:36] "Initialized new in-memory state store" May 9 23:58:36.162341 kubelet[2751]: I0509 23:58:36.162110 2751 kubelet.go:446] "Attempting to sync node with API server" May 9 23:58:36.162341 kubelet[2751]: I0509 23:58:36.162162 2751 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 23:58:36.162341 kubelet[2751]: I0509 23:58:36.162203 2751 kubelet.go:352] "Adding apiserver pod source" May 9 23:58:36.162341 kubelet[2751]: I0509 23:58:36.162224 2751 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 23:58:36.164994 kubelet[2751]: W0509 23:58:36.164154 2751 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-82&limit=500&resourceVersion=0": dial tcp 172.31.24.82:6443: connect: connection refused May 9 23:58:36.164994 kubelet[2751]: E0509 23:58:36.164271 2751 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-82&limit=500&resourceVersion=0\": dial tcp 172.31.24.82:6443: connect: connection refused" logger="UnhandledError" May 9 23:58:36.166553 kubelet[2751]: W0509 23:58:36.166472 2751 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.82:6443: connect: connection refused May 9 23:58:36.166679 kubelet[2751]: E0509 23:58:36.166565 2751 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: 
failed to list *v1.Service: Get \"https://172.31.24.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.82:6443: connect: connection refused" logger="UnhandledError" May 9 23:58:36.167302 kubelet[2751]: I0509 23:58:36.167255 2751 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 23:58:36.168122 kubelet[2751]: I0509 23:58:36.168078 2751 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 23:58:36.168264 kubelet[2751]: W0509 23:58:36.168218 2751 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 23:58:36.170359 kubelet[2751]: I0509 23:58:36.169924 2751 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 9 23:58:36.170359 kubelet[2751]: I0509 23:58:36.170028 2751 server.go:1287] "Started kubelet" May 9 23:58:36.178335 kubelet[2751]: I0509 23:58:36.178056 2751 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 23:58:36.180085 kubelet[2751]: E0509 23:58:36.178667 2751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.82:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.82:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-82.183e0148747d9ec1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-82,UID:ip-172-31-24-82,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-82,},FirstTimestamp:2025-05-09 23:58:36.169993921 +0000 UTC m=+2.313217464,LastTimestamp:2025-05-09 23:58:36.169993921 +0000 UTC m=+2.313217464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-82,}" May 9 23:58:36.184680 kubelet[2751]: I0509 23:58:36.184594 2751 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 9 23:58:36.186437 kubelet[2751]: I0509 23:58:36.186382 2751 server.go:490] "Adding debug handlers to kubelet server" May 9 23:58:36.189197 kubelet[2751]: I0509 23:58:36.189089 2751 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 23:58:36.189547 kubelet[2751]: I0509 23:58:36.189501 2751 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 23:58:36.189938 kubelet[2751]: I0509 23:58:36.189891 2751 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 23:58:36.192122 kubelet[2751]: E0509 23:58:36.192063 2751 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 23:58:36.192710 kubelet[2751]: I0509 23:58:36.192638 2751 volume_manager.go:297] "Starting Kubelet Volume Manager" May 9 23:58:36.192939 kubelet[2751]: I0509 23:58:36.192899 2751 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 23:58:36.193060 kubelet[2751]: I0509 23:58:36.193042 2751 reconciler.go:26] "Reconciler: start to sync state" May 9 23:58:36.194417 kubelet[2751]: W0509 23:58:36.193720 2751 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.82:6443: connect: connection refused May 9 23:58:36.194417 kubelet[2751]: E0509 23:58:36.193873 2751 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.82:6443: connect: connection refused" logger="UnhandledError" May 9 23:58:36.194697 kubelet[2751]: I0509 23:58:36.194587 2751 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 23:58:36.196297 kubelet[2751]: E0509 23:58:36.195241 2751 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-24-82\" not found" May 9 23:58:36.197947 kubelet[2751]: I0509 23:58:36.197763 2751 factory.go:221] Registration of the containerd container factory successfully May 9 23:58:36.197947 kubelet[2751]: I0509 23:58:36.197804 2751 factory.go:221] Registration of the systemd container factory successfully May 9 23:58:36.199087 kubelet[2751]: E0509 23:58:36.198905 2751 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://172.31.24.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-82?timeout=10s\": dial tcp 172.31.24.82:6443: connect: connection refused" interval="200ms" May 9 23:58:36.244056 kubelet[2751]: I0509 23:58:36.243558 2751 cpu_manager.go:221] "Starting CPU manager" policy="none" May 9 23:58:36.244056 kubelet[2751]: I0509 23:58:36.243593 2751 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 9 23:58:36.244056 kubelet[2751]: I0509 23:58:36.243626 2751 state_mem.go:36] "Initialized new in-memory state store" May 9 23:58:36.246796 kubelet[2751]: I0509 23:58:36.246732 2751 policy_none.go:49] "None policy: Start" May 9 23:58:36.246796 kubelet[2751]: I0509 23:58:36.246796 2751 memory_manager.go:186] "Starting memorymanager" policy="None" May 9 23:58:36.247051 kubelet[2751]: I0509 23:58:36.246826 2751 state_mem.go:35] "Initializing new in-memory state store" May 9 23:58:36.248267 kubelet[2751]: I0509 23:58:36.248194 2751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 23:58:36.254829 kubelet[2751]: I0509 23:58:36.253284 2751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 9 23:58:36.254829 kubelet[2751]: I0509 23:58:36.253394 2751 status_manager.go:227] "Starting to sync pod status with apiserver" May 9 23:58:36.254829 kubelet[2751]: I0509 23:58:36.253457 2751 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 9 23:58:36.254829 kubelet[2751]: I0509 23:58:36.253476 2751 kubelet.go:2388] "Starting kubelet main sync loop" May 9 23:58:36.254829 kubelet[2751]: E0509 23:58:36.253577 2751 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 23:58:36.255296 kubelet[2751]: W0509 23:58:36.255156 2751 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.82:6443: connect: connection refused May 9 23:58:36.255296 kubelet[2751]: E0509 23:58:36.255217 2751 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.82:6443: connect: connection refused" logger="UnhandledError" May 9 23:58:36.264300 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 9 23:58:36.282941 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 23:58:36.292201 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 9 23:58:36.297201 kubelet[2751]: E0509 23:58:36.297139 2751 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-24-82\" not found" May 9 23:58:36.302918 kubelet[2751]: I0509 23:58:36.302861 2751 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 23:58:36.303341 kubelet[2751]: I0509 23:58:36.303256 2751 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 23:58:36.303341 kubelet[2751]: I0509 23:58:36.303299 2751 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 23:58:36.305587 kubelet[2751]: I0509 23:58:36.304616 2751 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 23:58:36.307767 kubelet[2751]: E0509 23:58:36.307498 2751 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 9 23:58:36.307767 kubelet[2751]: E0509 23:58:36.307677 2751 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-82\" not found" May 9 23:58:36.373801 systemd[1]: Created slice kubepods-burstable-podf98807012ee2c7019f8681f8369e3e24.slice - libcontainer container kubepods-burstable-podf98807012ee2c7019f8681f8369e3e24.slice. 
May 9 23:58:36.394260 kubelet[2751]: I0509 23:58:36.393392 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f98807012ee2c7019f8681f8369e3e24-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-82\" (UID: \"f98807012ee2c7019f8681f8369e3e24\") " pod="kube-system/kube-controller-manager-ip-172-31-24-82" May 9 23:58:36.394260 kubelet[2751]: I0509 23:58:36.393460 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f98807012ee2c7019f8681f8369e3e24-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-82\" (UID: \"f98807012ee2c7019f8681f8369e3e24\") " pod="kube-system/kube-controller-manager-ip-172-31-24-82" May 9 23:58:36.394260 kubelet[2751]: I0509 23:58:36.393499 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f98807012ee2c7019f8681f8369e3e24-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-82\" (UID: \"f98807012ee2c7019f8681f8369e3e24\") " pod="kube-system/kube-controller-manager-ip-172-31-24-82" May 9 23:58:36.394260 kubelet[2751]: I0509 23:58:36.393542 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f98807012ee2c7019f8681f8369e3e24-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-82\" (UID: \"f98807012ee2c7019f8681f8369e3e24\") " pod="kube-system/kube-controller-manager-ip-172-31-24-82" May 9 23:58:36.394260 kubelet[2751]: I0509 23:58:36.393584 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3734bf4f4d053f61c80f403fe8806d7f-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-82\" (UID: 
\"3734bf4f4d053f61c80f403fe8806d7f\") " pod="kube-system/kube-scheduler-ip-172-31-24-82" May 9 23:58:36.394649 kubelet[2751]: I0509 23:58:36.393621 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e25a336c44a332ea7eb25b8a6bde1ec9-ca-certs\") pod \"kube-apiserver-ip-172-31-24-82\" (UID: \"e25a336c44a332ea7eb25b8a6bde1ec9\") " pod="kube-system/kube-apiserver-ip-172-31-24-82" May 9 23:58:36.394649 kubelet[2751]: I0509 23:58:36.393661 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e25a336c44a332ea7eb25b8a6bde1ec9-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-82\" (UID: \"e25a336c44a332ea7eb25b8a6bde1ec9\") " pod="kube-system/kube-apiserver-ip-172-31-24-82" May 9 23:58:36.394649 kubelet[2751]: I0509 23:58:36.393702 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e25a336c44a332ea7eb25b8a6bde1ec9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-82\" (UID: \"e25a336c44a332ea7eb25b8a6bde1ec9\") " pod="kube-system/kube-apiserver-ip-172-31-24-82" May 9 23:58:36.394649 kubelet[2751]: I0509 23:58:36.393746 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f98807012ee2c7019f8681f8369e3e24-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-82\" (UID: \"f98807012ee2c7019f8681f8369e3e24\") " pod="kube-system/kube-controller-manager-ip-172-31-24-82" May 9 23:58:36.395949 kubelet[2751]: E0509 23:58:36.395545 2751 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-82\" not found" node="ip-172-31-24-82" May 9 23:58:36.399898 kubelet[2751]: E0509 23:58:36.399840 
2751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-82?timeout=10s\": dial tcp 172.31.24.82:6443: connect: connection refused" interval="400ms" May 9 23:58:36.402430 systemd[1]: Created slice kubepods-burstable-pod3734bf4f4d053f61c80f403fe8806d7f.slice - libcontainer container kubepods-burstable-pod3734bf4f4d053f61c80f403fe8806d7f.slice. May 9 23:58:36.407820 kubelet[2751]: E0509 23:58:36.407662 2751 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-82\" not found" node="ip-172-31-24-82" May 9 23:58:36.408310 kubelet[2751]: I0509 23:58:36.408159 2751 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-24-82" May 9 23:58:36.408886 kubelet[2751]: E0509 23:58:36.408743 2751 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.24.82:6443/api/v1/nodes\": dial tcp 172.31.24.82:6443: connect: connection refused" node="ip-172-31-24-82" May 9 23:58:36.420230 systemd[1]: Created slice kubepods-burstable-pode25a336c44a332ea7eb25b8a6bde1ec9.slice - libcontainer container kubepods-burstable-pode25a336c44a332ea7eb25b8a6bde1ec9.slice. 
May 9 23:58:36.425008 kubelet[2751]: E0509 23:58:36.424875 2751 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-82\" not found" node="ip-172-31-24-82" May 9 23:58:36.611889 kubelet[2751]: I0509 23:58:36.611384 2751 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-24-82" May 9 23:58:36.612538 kubelet[2751]: E0509 23:58:36.612462 2751 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.24.82:6443/api/v1/nodes\": dial tcp 172.31.24.82:6443: connect: connection refused" node="ip-172-31-24-82" May 9 23:58:36.697697 containerd[1945]: time="2025-05-09T23:58:36.697520380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-82,Uid:f98807012ee2c7019f8681f8369e3e24,Namespace:kube-system,Attempt:0,}" May 9 23:58:36.710139 containerd[1945]: time="2025-05-09T23:58:36.710073868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-82,Uid:3734bf4f4d053f61c80f403fe8806d7f,Namespace:kube-system,Attempt:0,}" May 9 23:58:36.727397 containerd[1945]: time="2025-05-09T23:58:36.727230820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-82,Uid:e25a336c44a332ea7eb25b8a6bde1ec9,Namespace:kube-system,Attempt:0,}" May 9 23:58:36.762743 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
May 9 23:58:36.801754 kubelet[2751]: E0509 23:58:36.801684 2751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-82?timeout=10s\": dial tcp 172.31.24.82:6443: connect: connection refused" interval="800ms" May 9 23:58:37.016223 kubelet[2751]: I0509 23:58:37.015876 2751 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-24-82" May 9 23:58:37.016611 kubelet[2751]: E0509 23:58:37.016466 2751 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.24.82:6443/api/v1/nodes\": dial tcp 172.31.24.82:6443: connect: connection refused" node="ip-172-31-24-82" May 9 23:58:37.219355 kubelet[2751]: W0509 23:58:37.219190 2751 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.82:6443: connect: connection refused May 9 23:58:37.219355 kubelet[2751]: E0509 23:58:37.219294 2751 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.24.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.82:6443: connect: connection refused" logger="UnhandledError" May 9 23:58:37.277284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49282685.mount: Deactivated successfully. 
May 9 23:58:37.293031 containerd[1945]: time="2025-05-09T23:58:37.292893663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:58:37.299875 containerd[1945]: time="2025-05-09T23:58:37.299772903Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 9 23:58:37.301813 containerd[1945]: time="2025-05-09T23:58:37.301742715Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:58:37.304644 containerd[1945]: time="2025-05-09T23:58:37.304419567Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:58:37.308854 containerd[1945]: time="2025-05-09T23:58:37.308760051Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 23:58:37.310075 containerd[1945]: time="2025-05-09T23:58:37.310007847Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:58:37.310786 containerd[1945]: time="2025-05-09T23:58:37.310721703Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 23:58:37.318261 containerd[1945]: time="2025-05-09T23:58:37.318181539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:58:37.321230 
containerd[1945]: time="2025-05-09T23:58:37.320852127Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 623.220087ms" May 9 23:58:37.329329 containerd[1945]: time="2025-05-09T23:58:37.329002503Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 601.620735ms" May 9 23:58:37.331526 containerd[1945]: time="2025-05-09T23:58:37.331446687Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 621.261615ms" May 9 23:58:37.438111 kubelet[2751]: W0509 23:58:37.437915 2751 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.82:6443: connect: connection refused May 9 23:58:37.438111 kubelet[2751]: E0509 23:58:37.438057 2751 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.82:6443: connect: connection refused" logger="UnhandledError" May 9 23:58:37.534182 containerd[1945]: time="2025-05-09T23:58:37.533672872Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:58:37.534182 containerd[1945]: time="2025-05-09T23:58:37.533760256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:58:37.534182 containerd[1945]: time="2025-05-09T23:58:37.533785708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:37.534182 containerd[1945]: time="2025-05-09T23:58:37.533929012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:37.536642 containerd[1945]: time="2025-05-09T23:58:37.536449864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:58:37.538192 containerd[1945]: time="2025-05-09T23:58:37.536576680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:58:37.538458 containerd[1945]: time="2025-05-09T23:58:37.538149484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:37.541743 containerd[1945]: time="2025-05-09T23:58:37.540345616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:58:37.541743 containerd[1945]: time="2025-05-09T23:58:37.540452968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:58:37.541743 containerd[1945]: time="2025-05-09T23:58:37.540491884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:37.541743 containerd[1945]: time="2025-05-09T23:58:37.540782656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:37.542195 containerd[1945]: time="2025-05-09T23:58:37.539866336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:37.588594 systemd[1]: Started cri-containerd-eeed9899d28186bbc35b4138be822922c1518d18a81dc27a9e7a13a880c7f985.scope - libcontainer container eeed9899d28186bbc35b4138be822922c1518d18a81dc27a9e7a13a880c7f985. May 9 23:58:37.603791 systemd[1]: Started cri-containerd-13ddc33de777163cd7955a0ff1626119aa04d44c177833ad22e415f46fbabba3.scope - libcontainer container 13ddc33de777163cd7955a0ff1626119aa04d44c177833ad22e415f46fbabba3. May 9 23:58:37.606333 kubelet[2751]: E0509 23:58:37.606121 2751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-82?timeout=10s\": dial tcp 172.31.24.82:6443: connect: connection refused" interval="1.6s" May 9 23:58:37.632347 systemd[1]: Started cri-containerd-252e841c475593202b1111524b48a7c96dc796ed86d9589be0d77ed19405c566.scope - libcontainer container 252e841c475593202b1111524b48a7c96dc796ed86d9589be0d77ed19405c566. 
May 9 23:58:37.693772 kubelet[2751]: W0509 23:58:37.693503 2751 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-82&limit=500&resourceVersion=0": dial tcp 172.31.24.82:6443: connect: connection refused May 9 23:58:37.693772 kubelet[2751]: E0509 23:58:37.693606 2751 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-82&limit=500&resourceVersion=0\": dial tcp 172.31.24.82:6443: connect: connection refused" logger="UnhandledError" May 9 23:58:37.710972 kubelet[2751]: W0509 23:58:37.710766 2751 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.82:6443: connect: connection refused May 9 23:58:37.710972 kubelet[2751]: E0509 23:58:37.710855 2751 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.82:6443: connect: connection refused" logger="UnhandledError" May 9 23:58:37.723855 containerd[1945]: time="2025-05-09T23:58:37.723678677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-82,Uid:3734bf4f4d053f61c80f403fe8806d7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"13ddc33de777163cd7955a0ff1626119aa04d44c177833ad22e415f46fbabba3\"" May 9 23:58:37.742148 containerd[1945]: time="2025-05-09T23:58:37.741038501Z" level=info msg="CreateContainer within sandbox \"13ddc33de777163cd7955a0ff1626119aa04d44c177833ad22e415f46fbabba3\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 23:58:37.745710 containerd[1945]: time="2025-05-09T23:58:37.745497689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-82,Uid:f98807012ee2c7019f8681f8369e3e24,Namespace:kube-system,Attempt:0,} returns sandbox id \"eeed9899d28186bbc35b4138be822922c1518d18a81dc27a9e7a13a880c7f985\"" May 9 23:58:37.752787 containerd[1945]: time="2025-05-09T23:58:37.752610005Z" level=info msg="CreateContainer within sandbox \"eeed9899d28186bbc35b4138be822922c1518d18a81dc27a9e7a13a880c7f985\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 23:58:37.767512 containerd[1945]: time="2025-05-09T23:58:37.767201261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-82,Uid:e25a336c44a332ea7eb25b8a6bde1ec9,Namespace:kube-system,Attempt:0,} returns sandbox id \"252e841c475593202b1111524b48a7c96dc796ed86d9589be0d77ed19405c566\"" May 9 23:58:37.774809 containerd[1945]: time="2025-05-09T23:58:37.774696965Z" level=info msg="CreateContainer within sandbox \"252e841c475593202b1111524b48a7c96dc796ed86d9589be0d77ed19405c566\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 23:58:37.796096 containerd[1945]: time="2025-05-09T23:58:37.795715746Z" level=info msg="CreateContainer within sandbox \"eeed9899d28186bbc35b4138be822922c1518d18a81dc27a9e7a13a880c7f985\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8ff7d7696e66a7d24d2a815f2fb0fbbe7eb09bc8b1151d8705677199e129fa08\"" May 9 23:58:37.798148 containerd[1945]: time="2025-05-09T23:58:37.798076422Z" level=info msg="StartContainer for \"8ff7d7696e66a7d24d2a815f2fb0fbbe7eb09bc8b1151d8705677199e129fa08\"" May 9 23:58:37.799847 containerd[1945]: time="2025-05-09T23:58:37.799691010Z" level=info msg="CreateContainer within sandbox \"13ddc33de777163cd7955a0ff1626119aa04d44c177833ad22e415f46fbabba3\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cc446c794757b1c2914b181c34ae9458e6489a21abf47433b13d66bf9a2ae229\"" May 9 23:58:37.801288 containerd[1945]: time="2025-05-09T23:58:37.801204894Z" level=info msg="StartContainer for \"cc446c794757b1c2914b181c34ae9458e6489a21abf47433b13d66bf9a2ae229\"" May 9 23:58:37.823365 kubelet[2751]: I0509 23:58:37.823004 2751 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-24-82" May 9 23:58:37.823365 kubelet[2751]: E0509 23:58:37.823585 2751 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.24.82:6443/api/v1/nodes\": dial tcp 172.31.24.82:6443: connect: connection refused" node="ip-172-31-24-82" May 9 23:58:37.840408 containerd[1945]: time="2025-05-09T23:58:37.839702766Z" level=info msg="CreateContainer within sandbox \"252e841c475593202b1111524b48a7c96dc796ed86d9589be0d77ed19405c566\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"53f9296d6d478c3282712478ebe985d00e205e4d80a8b7ee925cbb5f91eb0724\"" May 9 23:58:37.842007 containerd[1945]: time="2025-05-09T23:58:37.840990222Z" level=info msg="StartContainer for \"53f9296d6d478c3282712478ebe985d00e205e4d80a8b7ee925cbb5f91eb0724\"" May 9 23:58:37.871676 systemd[1]: Started cri-containerd-cc446c794757b1c2914b181c34ae9458e6489a21abf47433b13d66bf9a2ae229.scope - libcontainer container cc446c794757b1c2914b181c34ae9458e6489a21abf47433b13d66bf9a2ae229. May 9 23:58:37.894289 systemd[1]: Started cri-containerd-8ff7d7696e66a7d24d2a815f2fb0fbbe7eb09bc8b1151d8705677199e129fa08.scope - libcontainer container 8ff7d7696e66a7d24d2a815f2fb0fbbe7eb09bc8b1151d8705677199e129fa08. May 9 23:58:37.926345 systemd[1]: Started cri-containerd-53f9296d6d478c3282712478ebe985d00e205e4d80a8b7ee925cbb5f91eb0724.scope - libcontainer container 53f9296d6d478c3282712478ebe985d00e205e4d80a8b7ee925cbb5f91eb0724. 
May 9 23:58:38.038564 containerd[1945]: time="2025-05-09T23:58:38.038274939Z" level=info msg="StartContainer for \"cc446c794757b1c2914b181c34ae9458e6489a21abf47433b13d66bf9a2ae229\" returns successfully" May 9 23:58:38.048427 containerd[1945]: time="2025-05-09T23:58:38.048158547Z" level=info msg="StartContainer for \"8ff7d7696e66a7d24d2a815f2fb0fbbe7eb09bc8b1151d8705677199e129fa08\" returns successfully" May 9 23:58:38.060841 containerd[1945]: time="2025-05-09T23:58:38.060566115Z" level=info msg="StartContainer for \"53f9296d6d478c3282712478ebe985d00e205e4d80a8b7ee925cbb5f91eb0724\" returns successfully" May 9 23:58:38.281131 kubelet[2751]: E0509 23:58:38.281078 2751 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-82\" not found" node="ip-172-31-24-82" May 9 23:58:38.296555 kubelet[2751]: E0509 23:58:38.296499 2751 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-82\" not found" node="ip-172-31-24-82" May 9 23:58:38.304206 kubelet[2751]: E0509 23:58:38.302610 2751 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-82\" not found" node="ip-172-31-24-82" May 9 23:58:39.305376 kubelet[2751]: E0509 23:58:39.305325 2751 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-82\" not found" node="ip-172-31-24-82" May 9 23:58:39.305936 kubelet[2751]: E0509 23:58:39.305882 2751 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-82\" not found" node="ip-172-31-24-82" May 9 23:58:39.426329 kubelet[2751]: I0509 23:58:39.426264 2751 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-24-82" May 9 23:58:41.354349 kubelet[2751]: E0509 23:58:41.354095 2751 kubelet.go:3196] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"ip-172-31-24-82\" not found" node="ip-172-31-24-82" May 9 23:58:42.393845 kubelet[2751]: E0509 23:58:42.393529 2751 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-82\" not found" node="ip-172-31-24-82" May 9 23:58:42.427135 kubelet[2751]: E0509 23:58:42.426579 2751 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-82\" not found" node="ip-172-31-24-82" May 9 23:58:42.542325 kubelet[2751]: E0509 23:58:42.542266 2751 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-82\" not found" node="ip-172-31-24-82" May 9 23:58:42.672120 kubelet[2751]: I0509 23:58:42.671720 2751 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-24-82" May 9 23:58:42.696014 kubelet[2751]: I0509 23:58:42.695935 2751 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-82" May 9 23:58:42.708988 kubelet[2751]: E0509 23:58:42.708751 2751 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-82\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-82" May 9 23:58:42.708988 kubelet[2751]: I0509 23:58:42.708824 2751 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-82" May 9 23:58:42.714862 kubelet[2751]: E0509 23:58:42.714529 2751 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-82\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-82" May 9 23:58:42.714862 kubelet[2751]: I0509 23:58:42.714604 2751 kubelet.go:3200] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ip-172-31-24-82" May 9 23:58:42.721554 kubelet[2751]: E0509 23:58:42.721488 2751 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-82\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-82" May 9 23:58:43.171935 kubelet[2751]: I0509 23:58:43.171280 2751 apiserver.go:52] "Watching apiserver" May 9 23:58:43.193408 kubelet[2751]: I0509 23:58:43.193342 2751 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 23:58:44.844615 systemd[1]: Reloading requested from client PID 3030 ('systemctl') (unit session-7.scope)... May 9 23:58:44.844657 systemd[1]: Reloading... May 9 23:58:45.148083 zram_generator::config[3073]: No configuration found. May 9 23:58:45.450855 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:58:45.673599 systemd[1]: Reloading finished in 828 ms. May 9 23:58:45.757273 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:45.775057 systemd[1]: kubelet.service: Deactivated successfully. May 9 23:58:45.776151 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:45.776254 systemd[1]: kubelet.service: Consumed 3.108s CPU time, 124.7M memory peak, 0B memory swap peak. May 9 23:58:45.783552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:46.130225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 9 23:58:46.148932 (kubelet)[3130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 23:58:46.259393 kubelet[3130]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:58:46.259393 kubelet[3130]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 9 23:58:46.259393 kubelet[3130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:58:46.260008 kubelet[3130]: I0509 23:58:46.259552 3130 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 23:58:46.277195 kubelet[3130]: I0509 23:58:46.272344 3130 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 9 23:58:46.277195 kubelet[3130]: I0509 23:58:46.272392 3130 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 23:58:46.277195 kubelet[3130]: I0509 23:58:46.272883 3130 server.go:954] "Client rotation is on, will bootstrap in background" May 9 23:58:46.287201 kubelet[3130]: I0509 23:58:46.284234 3130 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 9 23:58:46.292783 kubelet[3130]: I0509 23:58:46.292711 3130 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 23:58:46.306547 kubelet[3130]: E0509 23:58:46.306314 3130 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 9 23:58:46.307200 kubelet[3130]: I0509 23:58:46.306730 3130 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 9 23:58:46.318667 kubelet[3130]: I0509 23:58:46.318588 3130 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 9 23:58:46.320506 kubelet[3130]: I0509 23:58:46.319699 3130 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 23:58:46.320506 kubelet[3130]: I0509 23:58:46.319786 3130 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-24-82","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 9 23:58:46.320506 kubelet[3130]: I0509 23:58:46.320179 3130 topology_manager.go:138] "Creating topology manager with none policy" May 9 23:58:46.320506 kubelet[3130]: I0509 23:58:46.320202 3130 container_manager_linux.go:304] "Creating device plugin manager" May 9 23:58:46.320923 kubelet[3130]: I0509 23:58:46.320293 3130 state_mem.go:36] "Initialized new in-memory state store" May 9 23:58:46.320923 kubelet[3130]: I0509 23:58:46.320617 3130 kubelet.go:446] 
"Attempting to sync node with API server" May 9 23:58:46.323778 kubelet[3130]: I0509 23:58:46.320650 3130 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 23:58:46.323778 kubelet[3130]: I0509 23:58:46.323130 3130 kubelet.go:352] "Adding apiserver pod source" May 9 23:58:46.323778 kubelet[3130]: I0509 23:58:46.323159 3130 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 23:58:46.326677 kubelet[3130]: I0509 23:58:46.326611 3130 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 23:58:46.331710 kubelet[3130]: I0509 23:58:46.331625 3130 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 23:58:46.334001 kubelet[3130]: I0509 23:58:46.332710 3130 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 9 23:58:46.334001 kubelet[3130]: I0509 23:58:46.332777 3130 server.go:1287] "Started kubelet" May 9 23:58:46.350021 kubelet[3130]: I0509 23:58:46.346650 3130 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 9 23:58:46.354492 kubelet[3130]: I0509 23:58:46.351741 3130 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 23:58:46.353636 sudo[3145]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 9 23:58:46.354574 sudo[3145]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 9 23:58:46.358405 kubelet[3130]: I0509 23:58:46.357799 3130 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 23:58:46.370864 kubelet[3130]: I0509 23:58:46.368926 3130 server.go:490] "Adding debug handlers to kubelet server" May 9 23:58:46.373297 kubelet[3130]: I0509 23:58:46.373234 3130 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 23:58:46.393258 kubelet[3130]: I0509 
23:58:46.392802 3130 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 9 23:58:46.406487 kubelet[3130]: I0509 23:58:46.405211 3130 volume_manager.go:297] "Starting Kubelet Volume Manager" May 9 23:58:46.406487 kubelet[3130]: E0509 23:58:46.405691 3130 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-24-82\" not found" May 9 23:58:46.425974 kubelet[3130]: I0509 23:58:46.424864 3130 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 23:58:46.425974 kubelet[3130]: I0509 23:58:46.425231 3130 reconciler.go:26] "Reconciler: start to sync state" May 9 23:58:46.464403 kubelet[3130]: E0509 23:58:46.464345 3130 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 23:58:46.471339 kubelet[3130]: I0509 23:58:46.471261 3130 factory.go:221] Registration of the systemd container factory successfully May 9 23:58:46.471466 kubelet[3130]: I0509 23:58:46.471440 3130 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 23:58:46.493887 kubelet[3130]: I0509 23:58:46.493830 3130 factory.go:221] Registration of the containerd container factory successfully May 9 23:58:46.523379 kubelet[3130]: I0509 23:58:46.523098 3130 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 23:58:46.536762 kubelet[3130]: I0509 23:58:46.536526 3130 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 23:58:46.536762 kubelet[3130]: I0509 23:58:46.536576 3130 status_manager.go:227] "Starting to sync pod status with apiserver" May 9 23:58:46.536762 kubelet[3130]: I0509 23:58:46.536611 3130 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 9 23:58:46.536762 kubelet[3130]: I0509 23:58:46.536626 3130 kubelet.go:2388] "Starting kubelet main sync loop" May 9 23:58:46.536762 kubelet[3130]: E0509 23:58:46.536698 3130 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 23:58:46.637139 kubelet[3130]: E0509 23:58:46.636829 3130 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 23:58:46.691812 kubelet[3130]: I0509 23:58:46.691431 3130 cpu_manager.go:221] "Starting CPU manager" policy="none" May 9 23:58:46.691812 kubelet[3130]: I0509 23:58:46.691467 3130 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 9 23:58:46.691812 kubelet[3130]: I0509 23:58:46.691501 3130 state_mem.go:36] "Initialized new in-memory state store" May 9 23:58:46.693891 kubelet[3130]: I0509 23:58:46.693490 3130 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 23:58:46.693891 kubelet[3130]: I0509 23:58:46.693581 3130 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 23:58:46.693891 kubelet[3130]: I0509 23:58:46.693623 3130 policy_none.go:49] "None policy: Start" May 9 23:58:46.693891 kubelet[3130]: I0509 23:58:46.693643 3130 memory_manager.go:186] "Starting memorymanager" policy="None" May 9 23:58:46.693891 kubelet[3130]: I0509 23:58:46.693667 3130 state_mem.go:35] "Initializing new in-memory state store" May 9 23:58:46.696298 kubelet[3130]: I0509 23:58:46.694575 3130 state_mem.go:75] "Updated machine memory state" May 9 23:58:46.707050 kubelet[3130]: I0509 23:58:46.706952 3130 
manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 23:58:46.707512 kubelet[3130]: I0509 23:58:46.707489 3130 eviction_manager.go:189] "Eviction manager: starting control loop" May 9 23:58:46.707661 kubelet[3130]: I0509 23:58:46.707611 3130 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 23:58:46.708320 kubelet[3130]: I0509 23:58:46.708155 3130 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 23:58:46.717411 kubelet[3130]: E0509 23:58:46.717337 3130 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 9 23:58:46.839615 kubelet[3130]: I0509 23:58:46.838206 3130 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-82" May 9 23:58:46.839615 kubelet[3130]: I0509 23:58:46.838339 3130 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-82" May 9 23:58:46.839615 kubelet[3130]: I0509 23:58:46.838200 3130 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-82" May 9 23:58:46.844823 kubelet[3130]: I0509 23:58:46.844210 3130 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-24-82" May 9 23:58:46.866012 kubelet[3130]: I0509 23:58:46.864220 3130 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-24-82" May 9 23:58:46.866012 kubelet[3130]: I0509 23:58:46.864342 3130 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-24-82" May 9 23:58:46.932742 kubelet[3130]: I0509 23:58:46.932676 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f98807012ee2c7019f8681f8369e3e24-ca-certs\") pod 
\"kube-controller-manager-ip-172-31-24-82\" (UID: \"f98807012ee2c7019f8681f8369e3e24\") " pod="kube-system/kube-controller-manager-ip-172-31-24-82" May 9 23:58:46.932742 kubelet[3130]: I0509 23:58:46.932741 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f98807012ee2c7019f8681f8369e3e24-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-82\" (UID: \"f98807012ee2c7019f8681f8369e3e24\") " pod="kube-system/kube-controller-manager-ip-172-31-24-82" May 9 23:58:46.932950 kubelet[3130]: I0509 23:58:46.932789 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f98807012ee2c7019f8681f8369e3e24-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-82\" (UID: \"f98807012ee2c7019f8681f8369e3e24\") " pod="kube-system/kube-controller-manager-ip-172-31-24-82" May 9 23:58:46.932950 kubelet[3130]: I0509 23:58:46.932828 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f98807012ee2c7019f8681f8369e3e24-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-82\" (UID: \"f98807012ee2c7019f8681f8369e3e24\") " pod="kube-system/kube-controller-manager-ip-172-31-24-82" May 9 23:58:46.932950 kubelet[3130]: I0509 23:58:46.932870 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e25a336c44a332ea7eb25b8a6bde1ec9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-82\" (UID: \"e25a336c44a332ea7eb25b8a6bde1ec9\") " pod="kube-system/kube-apiserver-ip-172-31-24-82" May 9 23:58:46.932950 kubelet[3130]: I0509 23:58:46.932912 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f98807012ee2c7019f8681f8369e3e24-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-82\" (UID: \"f98807012ee2c7019f8681f8369e3e24\") " pod="kube-system/kube-controller-manager-ip-172-31-24-82" May 9 23:58:46.933270 kubelet[3130]: I0509 23:58:46.932947 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3734bf4f4d053f61c80f403fe8806d7f-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-82\" (UID: \"3734bf4f4d053f61c80f403fe8806d7f\") " pod="kube-system/kube-scheduler-ip-172-31-24-82" May 9 23:58:46.933270 kubelet[3130]: I0509 23:58:46.933013 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e25a336c44a332ea7eb25b8a6bde1ec9-ca-certs\") pod \"kube-apiserver-ip-172-31-24-82\" (UID: \"e25a336c44a332ea7eb25b8a6bde1ec9\") " pod="kube-system/kube-apiserver-ip-172-31-24-82" May 9 23:58:46.933270 kubelet[3130]: I0509 23:58:46.933055 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e25a336c44a332ea7eb25b8a6bde1ec9-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-82\" (UID: \"e25a336c44a332ea7eb25b8a6bde1ec9\") " pod="kube-system/kube-apiserver-ip-172-31-24-82" May 9 23:58:47.326647 kubelet[3130]: I0509 23:58:47.326103 3130 apiserver.go:52] "Watching apiserver" May 9 23:58:47.359539 sudo[3145]: pam_unix(sudo:session): session closed for user root May 9 23:58:47.425080 kubelet[3130]: I0509 23:58:47.425006 3130 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 23:58:47.606245 kubelet[3130]: I0509 23:58:47.605922 3130 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-82" May 9 23:58:47.620824 kubelet[3130]: 
E0509 23:58:47.619911 3130 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-82\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-24-82" May 9 23:58:47.659618 kubelet[3130]: I0509 23:58:47.659522 3130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-82" podStartSLOduration=1.659464923 podStartE2EDuration="1.659464923s" podCreationTimestamp="2025-05-09 23:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:58:47.64808093 +0000 UTC m=+1.487944784" watchObservedRunningTime="2025-05-09 23:58:47.659464923 +0000 UTC m=+1.499328765" May 9 23:58:47.677043 kubelet[3130]: I0509 23:58:47.676059 3130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-82" podStartSLOduration=1.6760360589999999 podStartE2EDuration="1.676036059s" podCreationTimestamp="2025-05-09 23:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:58:47.661028367 +0000 UTC m=+1.500892209" watchObservedRunningTime="2025-05-09 23:58:47.676036059 +0000 UTC m=+1.515899877" May 9 23:58:47.692837 kubelet[3130]: I0509 23:58:47.692547 3130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-82" podStartSLOduration=1.692523459 podStartE2EDuration="1.692523459s" podCreationTimestamp="2025-05-09 23:58:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:58:47.677627991 +0000 UTC m=+1.517491833" watchObservedRunningTime="2025-05-09 23:58:47.692523459 +0000 UTC m=+1.532387289" May 9 23:58:49.513446 kubelet[3130]: I0509 23:58:49.513385 3130 
kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 23:58:49.516040 containerd[1945]: time="2025-05-09T23:58:49.515537368Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 23:58:49.518382 kubelet[3130]: I0509 23:58:49.518319 3130 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 23:58:50.357752 sudo[2253]: pam_unix(sudo:session): session closed for user root May 9 23:58:50.383549 sshd[2250]: pam_unix(sshd:session): session closed for user core May 9 23:58:50.395531 systemd-logind[1915]: Session 7 logged out. Waiting for processes to exit. May 9 23:58:50.396515 systemd[1]: sshd@6-172.31.24.82:22-147.75.109.163:60748.service: Deactivated successfully. May 9 23:58:50.401098 kubelet[3130]: I0509 23:58:50.401019 3130 status_manager.go:890] "Failed to get status for pod" podUID="2929b496-955d-4c46-a3ee-f6356fd7e959" pod="kube-system/kube-proxy-9rxgj" err="pods \"kube-proxy-9rxgj\" is forbidden: User \"system:node:ip-172-31-24-82\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-24-82' and this object" May 9 23:58:50.401676 kubelet[3130]: W0509 23:58:50.401578 3130 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-24-82" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-82' and this object May 9 23:58:50.401676 kubelet[3130]: E0509 23:58:50.401633 3130 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-24-82\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": 
no relationship found between node 'ip-172-31-24-82' and this object" logger="UnhandledError" May 9 23:58:50.402267 kubelet[3130]: W0509 23:58:50.402133 3130 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-24-82" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-82' and this object May 9 23:58:50.402267 kubelet[3130]: E0509 23:58:50.402204 3130 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-24-82\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-24-82' and this object" logger="UnhandledError" May 9 23:58:50.404395 systemd[1]: session-7.scope: Deactivated successfully. May 9 23:58:50.404915 systemd[1]: session-7.scope: Consumed 9.987s CPU time, 150.7M memory peak, 0B memory swap peak. May 9 23:58:50.410352 systemd[1]: Created slice kubepods-besteffort-pod2929b496_955d_4c46_a3ee_f6356fd7e959.slice - libcontainer container kubepods-besteffort-pod2929b496_955d_4c46_a3ee_f6356fd7e959.slice. May 9 23:58:50.413215 systemd-logind[1915]: Removed session 7. May 9 23:58:50.445329 systemd[1]: Created slice kubepods-burstable-pod5c08c227_1eff_4cd7_8d10_21529b9a3a95.slice - libcontainer container kubepods-burstable-pod5c08c227_1eff_4cd7_8d10_21529b9a3a95.slice. 
May 9 23:58:50.456396 kubelet[3130]: I0509 23:58:50.456322 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c08c227-1eff-4cd7-8d10-21529b9a3a95-clustermesh-secrets\") pod \"cilium-w5xq8\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") " pod="kube-system/cilium-w5xq8" May 9 23:58:50.456553 kubelet[3130]: I0509 23:58:50.456403 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-hostproc\") pod \"cilium-w5xq8\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") " pod="kube-system/cilium-w5xq8" May 9 23:58:50.456553 kubelet[3130]: I0509 23:58:50.456444 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cilium-config-path\") pod \"cilium-w5xq8\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") " pod="kube-system/cilium-w5xq8" May 9 23:58:50.456553 kubelet[3130]: I0509 23:58:50.456486 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2929b496-955d-4c46-a3ee-f6356fd7e959-lib-modules\") pod \"kube-proxy-9rxgj\" (UID: \"2929b496-955d-4c46-a3ee-f6356fd7e959\") " pod="kube-system/kube-proxy-9rxgj" May 9 23:58:50.456553 kubelet[3130]: I0509 23:58:50.456523 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-xtables-lock\") pod \"cilium-w5xq8\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") " pod="kube-system/cilium-w5xq8" May 9 23:58:50.456796 kubelet[3130]: I0509 23:58:50.456558 3130 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2929b496-955d-4c46-a3ee-f6356fd7e959-kube-proxy\") pod \"kube-proxy-9rxgj\" (UID: \"2929b496-955d-4c46-a3ee-f6356fd7e959\") " pod="kube-system/kube-proxy-9rxgj" May 9 23:58:50.456796 kubelet[3130]: I0509 23:58:50.456596 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cilium-run\") pod \"cilium-w5xq8\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") " pod="kube-system/cilium-w5xq8" May 9 23:58:50.456796 kubelet[3130]: I0509 23:58:50.456635 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cni-path\") pod \"cilium-w5xq8\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") " pod="kube-system/cilium-w5xq8" May 9 23:58:50.456796 kubelet[3130]: I0509 23:58:50.456673 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-host-proc-sys-kernel\") pod \"cilium-w5xq8\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") " pod="kube-system/cilium-w5xq8" May 9 23:58:50.456796 kubelet[3130]: I0509 23:58:50.456722 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c08c227-1eff-4cd7-8d10-21529b9a3a95-hubble-tls\") pod \"cilium-w5xq8\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") " pod="kube-system/cilium-w5xq8" May 9 23:58:50.456796 kubelet[3130]: I0509 23:58:50.456778 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbvzp\" (UniqueName: 
\"kubernetes.io/projected/5c08c227-1eff-4cd7-8d10-21529b9a3a95-kube-api-access-gbvzp\") pod \"cilium-w5xq8\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") " pod="kube-system/cilium-w5xq8" May 9 23:58:50.457674 kubelet[3130]: I0509 23:58:50.456830 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-etc-cni-netd\") pod \"cilium-w5xq8\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") " pod="kube-system/cilium-w5xq8" May 9 23:58:50.457674 kubelet[3130]: I0509 23:58:50.456868 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2929b496-955d-4c46-a3ee-f6356fd7e959-xtables-lock\") pod \"kube-proxy-9rxgj\" (UID: \"2929b496-955d-4c46-a3ee-f6356fd7e959\") " pod="kube-system/kube-proxy-9rxgj" May 9 23:58:50.457674 kubelet[3130]: I0509 23:58:50.456916 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-bpf-maps\") pod \"cilium-w5xq8\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") " pod="kube-system/cilium-w5xq8" May 9 23:58:50.459517 kubelet[3130]: I0509 23:58:50.459451 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-host-proc-sys-net\") pod \"cilium-w5xq8\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") " pod="kube-system/cilium-w5xq8" May 9 23:58:50.459663 kubelet[3130]: I0509 23:58:50.459561 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfsbs\" (UniqueName: \"kubernetes.io/projected/2929b496-955d-4c46-a3ee-f6356fd7e959-kube-api-access-cfsbs\") pod \"kube-proxy-9rxgj\" (UID: 
\"2929b496-955d-4c46-a3ee-f6356fd7e959\") " pod="kube-system/kube-proxy-9rxgj" May 9 23:58:50.459663 kubelet[3130]: I0509 23:58:50.459607 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cilium-cgroup\") pod \"cilium-w5xq8\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") " pod="kube-system/cilium-w5xq8" May 9 23:58:50.459663 kubelet[3130]: I0509 23:58:50.459647 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-lib-modules\") pod \"cilium-w5xq8\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") " pod="kube-system/cilium-w5xq8" May 9 23:58:50.735997 systemd[1]: Created slice kubepods-besteffort-podbddef3d7_98cc_4f26_8c99_594d985fbcfb.slice - libcontainer container kubepods-besteffort-podbddef3d7_98cc_4f26_8c99_594d985fbcfb.slice. May 9 23:58:50.763193 kubelet[3130]: I0509 23:58:50.762098 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bddef3d7-98cc-4f26-8c99-594d985fbcfb-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-r8vr8\" (UID: \"bddef3d7-98cc-4f26-8c99-594d985fbcfb\") " pod="kube-system/cilium-operator-6c4d7847fc-r8vr8" May 9 23:58:50.763193 kubelet[3130]: I0509 23:58:50.762209 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4kkq\" (UniqueName: \"kubernetes.io/projected/bddef3d7-98cc-4f26-8c99-594d985fbcfb-kube-api-access-b4kkq\") pod \"cilium-operator-6c4d7847fc-r8vr8\" (UID: \"bddef3d7-98cc-4f26-8c99-594d985fbcfb\") " pod="kube-system/cilium-operator-6c4d7847fc-r8vr8" May 9 23:58:50.962482 update_engine[1916]: I20250509 23:58:50.962092 1916 update_attempter.cc:509] Updating boot flags... 
May 9 23:58:51.043129 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3216) May 9 23:58:51.353400 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3218) May 9 23:58:51.645637 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3218) May 9 23:58:51.690186 kubelet[3130]: E0509 23:58:51.688709 3130 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 9 23:58:51.690186 kubelet[3130]: E0509 23:58:51.688764 3130 projected.go:194] Error preparing data for projected volume kube-api-access-cfsbs for pod kube-system/kube-proxy-9rxgj: failed to sync configmap cache: timed out waiting for the condition May 9 23:58:51.690186 kubelet[3130]: E0509 23:58:51.688876 3130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2929b496-955d-4c46-a3ee-f6356fd7e959-kube-api-access-cfsbs podName:2929b496-955d-4c46-a3ee-f6356fd7e959 nodeName:}" failed. No retries permitted until 2025-05-09 23:58:52.188841119 +0000 UTC m=+6.028704949 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cfsbs" (UniqueName: "kubernetes.io/projected/2929b496-955d-4c46-a3ee-f6356fd7e959-kube-api-access-cfsbs") pod "kube-proxy-9rxgj" (UID: "2929b496-955d-4c46-a3ee-f6356fd7e959") : failed to sync configmap cache: timed out waiting for the condition May 9 23:58:51.717623 kubelet[3130]: E0509 23:58:51.715881 3130 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 9 23:58:51.717623 kubelet[3130]: E0509 23:58:51.715993 3130 projected.go:194] Error preparing data for projected volume kube-api-access-gbvzp for pod kube-system/cilium-w5xq8: failed to sync configmap cache: timed out waiting for the condition May 9 23:58:51.717623 kubelet[3130]: E0509 23:58:51.716118 3130 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5c08c227-1eff-4cd7-8d10-21529b9a3a95-kube-api-access-gbvzp podName:5c08c227-1eff-4cd7-8d10-21529b9a3a95 nodeName:}" failed. No retries permitted until 2025-05-09 23:58:52.216079751 +0000 UTC m=+6.055943593 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gbvzp" (UniqueName: "kubernetes.io/projected/5c08c227-1eff-4cd7-8d10-21529b9a3a95-kube-api-access-gbvzp") pod "cilium-w5xq8" (UID: "5c08c227-1eff-4cd7-8d10-21529b9a3a95") : failed to sync configmap cache: timed out waiting for the condition May 9 23:58:51.946928 containerd[1945]: time="2025-05-09T23:58:51.946032704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r8vr8,Uid:bddef3d7-98cc-4f26-8c99-594d985fbcfb,Namespace:kube-system,Attempt:0,}" May 9 23:58:51.995385 containerd[1945]: time="2025-05-09T23:58:51.995163800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:58:51.995385 containerd[1945]: time="2025-05-09T23:58:51.995312792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:58:51.995907 containerd[1945]: time="2025-05-09T23:58:51.995357156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:51.995907 containerd[1945]: time="2025-05-09T23:58:51.995543384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:52.042337 systemd[1]: Started cri-containerd-61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363.scope - libcontainer container 61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363. May 9 23:58:52.109842 containerd[1945]: time="2025-05-09T23:58:52.109776845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-r8vr8,Uid:bddef3d7-98cc-4f26-8c99-594d985fbcfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363\"" May 9 23:58:52.114832 containerd[1945]: time="2025-05-09T23:58:52.114452585Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 9 23:58:52.525373 containerd[1945]: time="2025-05-09T23:58:52.525220339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rxgj,Uid:2929b496-955d-4c46-a3ee-f6356fd7e959,Namespace:kube-system,Attempt:0,}" May 9 23:58:52.564326 containerd[1945]: time="2025-05-09T23:58:52.564060835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5xq8,Uid:5c08c227-1eff-4cd7-8d10-21529b9a3a95,Namespace:kube-system,Attempt:0,}" May 9 23:58:52.571031 containerd[1945]: time="2025-05-09T23:58:52.570517411Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:58:52.571640 containerd[1945]: time="2025-05-09T23:58:52.571530787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:58:52.571769 containerd[1945]: time="2025-05-09T23:58:52.571685851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:52.572062 containerd[1945]: time="2025-05-09T23:58:52.571924579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:52.609295 systemd[1]: Started cri-containerd-b47d1efa27e80b84c529cd2c8893b3c0a0f120003c33c308d7f0e46612a1bebf.scope - libcontainer container b47d1efa27e80b84c529cd2c8893b3c0a0f120003c33c308d7f0e46612a1bebf. May 9 23:58:52.638059 containerd[1945]: time="2025-05-09T23:58:52.637199923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:58:52.639837 containerd[1945]: time="2025-05-09T23:58:52.637994707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:58:52.639837 containerd[1945]: time="2025-05-09T23:58:52.638138335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:52.639837 containerd[1945]: time="2025-05-09T23:58:52.639118135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:52.674793 containerd[1945]: time="2025-05-09T23:58:52.674212363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rxgj,Uid:2929b496-955d-4c46-a3ee-f6356fd7e959,Namespace:kube-system,Attempt:0,} returns sandbox id \"b47d1efa27e80b84c529cd2c8893b3c0a0f120003c33c308d7f0e46612a1bebf\"" May 9 23:58:52.689289 containerd[1945]: time="2025-05-09T23:58:52.689206952Z" level=info msg="CreateContainer within sandbox \"b47d1efa27e80b84c529cd2c8893b3c0a0f120003c33c308d7f0e46612a1bebf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 23:58:52.695293 systemd[1]: Started cri-containerd-5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed.scope - libcontainer container 5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed. May 9 23:58:52.774028 containerd[1945]: time="2025-05-09T23:58:52.772189532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5xq8,Uid:5c08c227-1eff-4cd7-8d10-21529b9a3a95,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\"" May 9 23:58:52.774827 containerd[1945]: time="2025-05-09T23:58:52.774119576Z" level=info msg="CreateContainer within sandbox \"b47d1efa27e80b84c529cd2c8893b3c0a0f120003c33c308d7f0e46612a1bebf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"63e0720c84b3faf31d4f261127111781f00c2d0ef619b6f2b97455e629541a6b\"" May 9 23:58:52.786991 containerd[1945]: time="2025-05-09T23:58:52.784824164Z" level=info msg="StartContainer for \"63e0720c84b3faf31d4f261127111781f00c2d0ef619b6f2b97455e629541a6b\"" May 9 23:58:52.856343 systemd[1]: Started cri-containerd-63e0720c84b3faf31d4f261127111781f00c2d0ef619b6f2b97455e629541a6b.scope - libcontainer container 63e0720c84b3faf31d4f261127111781f00c2d0ef619b6f2b97455e629541a6b. 
May 9 23:58:52.920005 containerd[1945]: time="2025-05-09T23:58:52.918402453Z" level=info msg="StartContainer for \"63e0720c84b3faf31d4f261127111781f00c2d0ef619b6f2b97455e629541a6b\" returns successfully" May 9 23:58:53.678331 kubelet[3130]: I0509 23:58:53.677757 3130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9rxgj" podStartSLOduration=3.677720384 podStartE2EDuration="3.677720384s" podCreationTimestamp="2025-05-09 23:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:58:53.660750848 +0000 UTC m=+7.500614702" watchObservedRunningTime="2025-05-09 23:58:53.677720384 +0000 UTC m=+7.517584202" May 9 23:58:54.491917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4042544309.mount: Deactivated successfully. May 9 23:58:55.422037 containerd[1945]: time="2025-05-09T23:58:55.421301553Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:55.424799 containerd[1945]: time="2025-05-09T23:58:55.424719045Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 9 23:58:55.427267 containerd[1945]: time="2025-05-09T23:58:55.427188177Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:55.430442 containerd[1945]: time="2025-05-09T23:58:55.430231281Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag 
\"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.315681148s" May 9 23:58:55.430442 containerd[1945]: time="2025-05-09T23:58:55.430291485Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 9 23:58:55.432665 containerd[1945]: time="2025-05-09T23:58:55.432264357Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 9 23:58:55.437106 containerd[1945]: time="2025-05-09T23:58:55.436697373Z" level=info msg="CreateContainer within sandbox \"61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 9 23:58:55.469468 containerd[1945]: time="2025-05-09T23:58:55.469329693Z" level=info msg="CreateContainer within sandbox \"61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9\"" May 9 23:58:55.471811 containerd[1945]: time="2025-05-09T23:58:55.471557841Z" level=info msg="StartContainer for \"f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9\"" May 9 23:58:55.528283 systemd[1]: Started cri-containerd-f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9.scope - libcontainer container f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9. 
May 9 23:58:55.582908 containerd[1945]: time="2025-05-09T23:58:55.582835294Z" level=info msg="StartContainer for \"f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9\" returns successfully" May 9 23:58:55.728449 kubelet[3130]: I0509 23:58:55.728083 3130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-r8vr8" podStartSLOduration=2.4096449189999998 podStartE2EDuration="5.728058251s" podCreationTimestamp="2025-05-09 23:58:50 +0000 UTC" firstStartedPulling="2025-05-09 23:58:52.113524385 +0000 UTC m=+5.953388215" lastFinishedPulling="2025-05-09 23:58:55.431937729 +0000 UTC m=+9.271801547" observedRunningTime="2025-05-09 23:58:55.668528242 +0000 UTC m=+9.508392096" watchObservedRunningTime="2025-05-09 23:58:55.728058251 +0000 UTC m=+9.567922105" May 9 23:59:07.666201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684143240.mount: Deactivated successfully. May 9 23:59:10.384694 containerd[1945]: time="2025-05-09T23:59:10.384609419Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:10.386705 containerd[1945]: time="2025-05-09T23:59:10.386625383Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 9 23:59:10.389324 containerd[1945]: time="2025-05-09T23:59:10.389206727Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:10.392762 containerd[1945]: time="2025-05-09T23:59:10.392525279Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 14.96019227s" May 9 23:59:10.392762 containerd[1945]: time="2025-05-09T23:59:10.392594579Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 9 23:59:10.401030 containerd[1945]: time="2025-05-09T23:59:10.400598592Z" level=info msg="CreateContainer within sandbox \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 23:59:10.429584 containerd[1945]: time="2025-05-09T23:59:10.429502128Z" level=info msg="CreateContainer within sandbox \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6\"" May 9 23:59:10.432067 containerd[1945]: time="2025-05-09T23:59:10.431322636Z" level=info msg="StartContainer for \"e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6\"" May 9 23:59:10.490317 systemd[1]: Started cri-containerd-e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6.scope - libcontainer container e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6. May 9 23:59:10.554427 containerd[1945]: time="2025-05-09T23:59:10.554324520Z" level=info msg="StartContainer for \"e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6\" returns successfully" May 9 23:59:10.574510 systemd[1]: cri-containerd-e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6.scope: Deactivated successfully. 
May 9 23:59:11.419656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6-rootfs.mount: Deactivated successfully. May 9 23:59:11.479628 containerd[1945]: time="2025-05-09T23:59:11.479506609Z" level=info msg="shim disconnected" id=e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6 namespace=k8s.io May 9 23:59:11.479628 containerd[1945]: time="2025-05-09T23:59:11.479586673Z" level=warning msg="cleaning up after shim disconnected" id=e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6 namespace=k8s.io May 9 23:59:11.479628 containerd[1945]: time="2025-05-09T23:59:11.479607385Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:59:11.713434 containerd[1945]: time="2025-05-09T23:59:11.713271062Z" level=info msg="CreateContainer within sandbox \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 23:59:11.748178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821903762.mount: Deactivated successfully. May 9 23:59:11.752791 containerd[1945]: time="2025-05-09T23:59:11.752694146Z" level=info msg="CreateContainer within sandbox \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad\"" May 9 23:59:11.757684 containerd[1945]: time="2025-05-09T23:59:11.754129070Z" level=info msg="StartContainer for \"a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad\"" May 9 23:59:11.814275 systemd[1]: Started cri-containerd-a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad.scope - libcontainer container a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad. 
May 9 23:59:11.858996 containerd[1945]: time="2025-05-09T23:59:11.858894711Z" level=info msg="StartContainer for \"a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad\" returns successfully"
May 9 23:59:11.883852 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 23:59:11.886199 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 23:59:11.886333 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 9 23:59:11.894576 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:59:11.895067 systemd[1]: cri-containerd-a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad.scope: Deactivated successfully.
May 9 23:59:11.937655 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:59:11.949664 containerd[1945]: time="2025-05-09T23:59:11.949271151Z" level=info msg="shim disconnected" id=a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad namespace=k8s.io
May 9 23:59:11.949664 containerd[1945]: time="2025-05-09T23:59:11.949423623Z" level=warning msg="cleaning up after shim disconnected" id=a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad namespace=k8s.io
May 9 23:59:11.949664 containerd[1945]: time="2025-05-09T23:59:11.949445223Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 23:59:12.418118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad-rootfs.mount: Deactivated successfully.
May 9 23:59:12.722596 containerd[1945]: time="2025-05-09T23:59:12.722073123Z" level=info msg="CreateContainer within sandbox \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 9 23:59:12.765033 containerd[1945]: time="2025-05-09T23:59:12.764556339Z" level=info msg="CreateContainer within sandbox \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e\""
May 9 23:59:12.766632 containerd[1945]: time="2025-05-09T23:59:12.766560771Z" level=info msg="StartContainer for \"4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e\""
May 9 23:59:12.822533 systemd[1]: Started cri-containerd-4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e.scope - libcontainer container 4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e.
May 9 23:59:12.877900 containerd[1945]: time="2025-05-09T23:59:12.877464856Z" level=info msg="StartContainer for \"4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e\" returns successfully"
May 9 23:59:12.882287 systemd[1]: cri-containerd-4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e.scope: Deactivated successfully.
May 9 23:59:12.937278 containerd[1945]: time="2025-05-09T23:59:12.937190440Z" level=info msg="shim disconnected" id=4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e namespace=k8s.io
May 9 23:59:12.937543 containerd[1945]: time="2025-05-09T23:59:12.937300588Z" level=warning msg="cleaning up after shim disconnected" id=4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e namespace=k8s.io
May 9 23:59:12.937543 containerd[1945]: time="2025-05-09T23:59:12.937322920Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 23:59:13.417888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e-rootfs.mount: Deactivated successfully.
May 9 23:59:13.724180 containerd[1945]: time="2025-05-09T23:59:13.723421972Z" level=info msg="CreateContainer within sandbox \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 9 23:59:13.760731 containerd[1945]: time="2025-05-09T23:59:13.759612064Z" level=info msg="CreateContainer within sandbox \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23\""
May 9 23:59:13.762732 containerd[1945]: time="2025-05-09T23:59:13.761005276Z" level=info msg="StartContainer for \"d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23\""
May 9 23:59:13.820310 systemd[1]: Started cri-containerd-d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23.scope - libcontainer container d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23.
May 9 23:59:13.868207 systemd[1]: cri-containerd-d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23.scope: Deactivated successfully.
May 9 23:59:13.872506 containerd[1945]: time="2025-05-09T23:59:13.872283389Z" level=info msg="StartContainer for \"d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23\" returns successfully"
May 9 23:59:13.915651 containerd[1945]: time="2025-05-09T23:59:13.915512537Z" level=info msg="shim disconnected" id=d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23 namespace=k8s.io
May 9 23:59:13.915651 containerd[1945]: time="2025-05-09T23:59:13.915588533Z" level=warning msg="cleaning up after shim disconnected" id=d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23 namespace=k8s.io
May 9 23:59:13.915651 containerd[1945]: time="2025-05-09T23:59:13.915610421Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 9 23:59:14.417868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23-rootfs.mount: Deactivated successfully.
May 9 23:59:14.730698 containerd[1945]: time="2025-05-09T23:59:14.730514381Z" level=info msg="CreateContainer within sandbox \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 9 23:59:14.766575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1407333793.mount: Deactivated successfully.
May 9 23:59:14.773470 containerd[1945]: time="2025-05-09T23:59:14.773390465Z" level=info msg="CreateContainer within sandbox \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348\""
May 9 23:59:14.774463 containerd[1945]: time="2025-05-09T23:59:14.774404105Z" level=info msg="StartContainer for \"5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348\""
May 9 23:59:14.836283 systemd[1]: Started cri-containerd-5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348.scope - libcontainer container 5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348.
May 9 23:59:14.899312 containerd[1945]: time="2025-05-09T23:59:14.899241534Z" level=info msg="StartContainer for \"5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348\" returns successfully"
May 9 23:59:15.103140 kubelet[3130]: I0509 23:59:15.103086 3130 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
May 9 23:59:15.162806 kubelet[3130]: I0509 23:59:15.161567 3130 status_manager.go:890] "Failed to get status for pod" podUID="417dc51c-990c-48b0-a7b7-e619c8460e2d" pod="kube-system/coredns-668d6bf9bc-4zfk5" err="pods \"coredns-668d6bf9bc-4zfk5\" is forbidden: User \"system:node:ip-172-31-24-82\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-24-82' and this object"
May 9 23:59:15.169188 kubelet[3130]: I0509 23:59:15.169122 3130 status_manager.go:890] "Failed to get status for pod" podUID="2ae349c8-814f-498f-b873-866a1e3ae0e7" pod="kube-system/coredns-668d6bf9bc-k2g9k" err="pods \"coredns-668d6bf9bc-k2g9k\" is forbidden: User \"system:node:ip-172-31-24-82\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-24-82' and this object"
May 9 23:59:15.178593 systemd[1]: Created slice kubepods-burstable-pod417dc51c_990c_48b0_a7b7_e619c8460e2d.slice - libcontainer container kubepods-burstable-pod417dc51c_990c_48b0_a7b7_e619c8460e2d.slice.
May 9 23:59:15.193575 systemd[1]: Created slice kubepods-burstable-pod2ae349c8_814f_498f_b873_866a1e3ae0e7.slice - libcontainer container kubepods-burstable-pod2ae349c8_814f_498f_b873_866a1e3ae0e7.slice.
May 9 23:59:15.250389 kubelet[3130]: I0509 23:59:15.250310 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ae349c8-814f-498f-b873-866a1e3ae0e7-config-volume\") pod \"coredns-668d6bf9bc-k2g9k\" (UID: \"2ae349c8-814f-498f-b873-866a1e3ae0e7\") " pod="kube-system/coredns-668d6bf9bc-k2g9k"
May 9 23:59:15.250571 kubelet[3130]: I0509 23:59:15.250403 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8kwv\" (UniqueName: \"kubernetes.io/projected/417dc51c-990c-48b0-a7b7-e619c8460e2d-kube-api-access-g8kwv\") pod \"coredns-668d6bf9bc-4zfk5\" (UID: \"417dc51c-990c-48b0-a7b7-e619c8460e2d\") " pod="kube-system/coredns-668d6bf9bc-4zfk5"
May 9 23:59:15.250571 kubelet[3130]: I0509 23:59:15.250449 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzq5x\" (UniqueName: \"kubernetes.io/projected/2ae349c8-814f-498f-b873-866a1e3ae0e7-kube-api-access-xzq5x\") pod \"coredns-668d6bf9bc-k2g9k\" (UID: \"2ae349c8-814f-498f-b873-866a1e3ae0e7\") " pod="kube-system/coredns-668d6bf9bc-k2g9k"
May 9 23:59:15.250571 kubelet[3130]: I0509 23:59:15.250502 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/417dc51c-990c-48b0-a7b7-e619c8460e2d-config-volume\") pod \"coredns-668d6bf9bc-4zfk5\" (UID: \"417dc51c-990c-48b0-a7b7-e619c8460e2d\") " pod="kube-system/coredns-668d6bf9bc-4zfk5"
May 9 23:59:15.489148 containerd[1945]: time="2025-05-09T23:59:15.488545781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4zfk5,Uid:417dc51c-990c-48b0-a7b7-e619c8460e2d,Namespace:kube-system,Attempt:0,}"
May 9 23:59:15.501465 containerd[1945]: time="2025-05-09T23:59:15.501399785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k2g9k,Uid:2ae349c8-814f-498f-b873-866a1e3ae0e7,Namespace:kube-system,Attempt:0,}"
May 9 23:59:17.822516 systemd-networkd[1759]: cilium_host: Link UP
May 9 23:59:17.826403 systemd-networkd[1759]: cilium_net: Link UP
May 9 23:59:17.826435 (udev-worker)[4199]: Network interface NamePolicy= disabled on kernel command line.
May 9 23:59:17.826800 systemd-networkd[1759]: cilium_net: Gained carrier
May 9 23:59:17.827205 systemd-networkd[1759]: cilium_host: Gained carrier
May 9 23:59:17.828787 (udev-worker)[4200]: Network interface NamePolicy= disabled on kernel command line.
May 9 23:59:18.009175 systemd-networkd[1759]: cilium_vxlan: Link UP
May 9 23:59:18.010243 systemd-networkd[1759]: cilium_vxlan: Gained carrier
May 9 23:59:18.160146 systemd-networkd[1759]: cilium_host: Gained IPv6LL
May 9 23:59:18.500017 kernel: NET: Registered PF_ALG protocol family
May 9 23:59:18.697602 systemd-networkd[1759]: cilium_net: Gained IPv6LL
May 9 23:59:19.866312 systemd-networkd[1759]: lxc_health: Link UP
May 9 23:59:19.870559 (udev-worker)[4247]: Network interface NamePolicy= disabled on kernel command line.
May 9 23:59:19.874987 systemd-networkd[1759]: lxc_health: Gained carrier
May 9 23:59:19.976191 systemd-networkd[1759]: cilium_vxlan: Gained IPv6LL
May 9 23:59:20.608456 kubelet[3130]: I0509 23:59:20.607368 3130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w5xq8" podStartSLOduration=13.005530719 podStartE2EDuration="30.607342906s" podCreationTimestamp="2025-05-09 23:58:50 +0000 UTC" firstStartedPulling="2025-05-09 23:58:52.792900992 +0000 UTC m=+6.632764822" lastFinishedPulling="2025-05-09 23:59:10.394713191 +0000 UTC m=+24.234577009" observedRunningTime="2025-05-09 23:59:15.80280567 +0000 UTC m=+29.642669524" watchObservedRunningTime="2025-05-09 23:59:20.607342906 +0000 UTC m=+34.447206724"
May 9 23:59:20.676056 kernel: eth0: renamed from tmp7a933
May 9 23:59:20.691088 systemd-networkd[1759]: lxc12c1c793bfe3: Link UP
May 9 23:59:20.708674 systemd-networkd[1759]: lxc12c1c793bfe3: Gained carrier
May 9 23:59:20.716378 systemd-networkd[1759]: lxcf947f5e3edfd: Link UP
May 9 23:59:20.723622 (udev-worker)[4246]: Network interface NamePolicy= disabled on kernel command line.
May 9 23:59:20.728110 kernel: eth0: renamed from tmp20e85
May 9 23:59:20.737551 systemd-networkd[1759]: lxcf947f5e3edfd: Gained carrier
May 9 23:59:21.832677 systemd-networkd[1759]: lxc_health: Gained IPv6LL
May 9 23:59:21.960195 systemd-networkd[1759]: lxc12c1c793bfe3: Gained IPv6LL
May 9 23:59:22.088725 systemd-networkd[1759]: lxcf947f5e3edfd: Gained IPv6LL
May 9 23:59:24.416790 ntpd[1908]: Listen normally on 7 cilium_host 192.168.0.237:123
May 9 23:59:24.416931 ntpd[1908]: Listen normally on 8 cilium_net [fe80::f852:9ff:fe82:9185%4]:123
May 9 23:59:24.417413 ntpd[1908]: 9 May 23:59:24 ntpd[1908]: Listen normally on 7 cilium_host 192.168.0.237:123
May 9 23:59:24.417413 ntpd[1908]: 9 May 23:59:24 ntpd[1908]: Listen normally on 8 cilium_net [fe80::f852:9ff:fe82:9185%4]:123
May 9 23:59:24.417413 ntpd[1908]: 9 May 23:59:24 ntpd[1908]: Listen normally on 9 cilium_host [fe80::c42b:ffff:fea7:3fab%5]:123
May 9 23:59:24.417413 ntpd[1908]: 9 May 23:59:24 ntpd[1908]: Listen normally on 10 cilium_vxlan [fe80::5c69:cbff:fee0:db4b%6]:123
May 9 23:59:24.417413 ntpd[1908]: 9 May 23:59:24 ntpd[1908]: Listen normally on 11 lxc_health [fe80::cc97:1ff:fed9:5e8%8]:123
May 9 23:59:24.417413 ntpd[1908]: 9 May 23:59:24 ntpd[1908]: Listen normally on 12 lxc12c1c793bfe3 [fe80::5c2c:e9ff:fe98:a81a%10]:123
May 9 23:59:24.417413 ntpd[1908]: 9 May 23:59:24 ntpd[1908]: Listen normally on 13 lxcf947f5e3edfd [fe80::d0d4:8dff:fe30:dba%12]:123
May 9 23:59:24.417057 ntpd[1908]: Listen normally on 9 cilium_host [fe80::c42b:ffff:fea7:3fab%5]:123
May 9 23:59:24.417131 ntpd[1908]: Listen normally on 10 cilium_vxlan [fe80::5c69:cbff:fee0:db4b%6]:123
May 9 23:59:24.417200 ntpd[1908]: Listen normally on 11 lxc_health [fe80::cc97:1ff:fed9:5e8%8]:123
May 9 23:59:24.417275 ntpd[1908]: Listen normally on 12 lxc12c1c793bfe3 [fe80::5c2c:e9ff:fe98:a81a%10]:123
May 9 23:59:24.417344 ntpd[1908]: Listen normally on 13 lxcf947f5e3edfd [fe80::d0d4:8dff:fe30:dba%12]:123
May 9 23:59:26.602180 systemd[1]: Started sshd@7-172.31.24.82:22-147.75.109.163:40302.service - OpenSSH per-connection server daemon (147.75.109.163:40302).
May 9 23:59:26.804363 sshd[4606]: Accepted publickey for core from 147.75.109.163 port 40302 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:59:26.807468 sshd[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:59:26.818148 systemd-logind[1915]: New session 8 of user core.
May 9 23:59:26.825293 systemd[1]: Started session-8.scope - Session 8 of User core.
May 9 23:59:27.179119 sshd[4606]: pam_unix(sshd:session): session closed for user core
May 9 23:59:27.188281 systemd[1]: sshd@7-172.31.24.82:22-147.75.109.163:40302.service: Deactivated successfully.
May 9 23:59:27.194757 systemd[1]: session-8.scope: Deactivated successfully.
May 9 23:59:27.198552 systemd-logind[1915]: Session 8 logged out. Waiting for processes to exit.
May 9 23:59:27.201950 systemd-logind[1915]: Removed session 8.
May 9 23:59:29.378461 containerd[1945]: time="2025-05-09T23:59:29.376944522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 23:59:29.378461 containerd[1945]: time="2025-05-09T23:59:29.377100102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 23:59:29.378461 containerd[1945]: time="2025-05-09T23:59:29.377127510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:59:29.381797 containerd[1945]: time="2025-05-09T23:59:29.381185790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:59:29.459454 systemd[1]: Started cri-containerd-20e85a43e16a2d2065414e84b0a216d6d83d80868d9df7e7459bad49a16cd48c.scope - libcontainer container 20e85a43e16a2d2065414e84b0a216d6d83d80868d9df7e7459bad49a16cd48c.
May 9 23:59:29.474598 containerd[1945]: time="2025-05-09T23:59:29.472849554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 9 23:59:29.474598 containerd[1945]: time="2025-05-09T23:59:29.472985286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 9 23:59:29.474598 containerd[1945]: time="2025-05-09T23:59:29.473025966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:59:29.474598 containerd[1945]: time="2025-05-09T23:59:29.473194710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 9 23:59:29.541700 systemd[1]: Started cri-containerd-7a933b80171178d6f91589e007e2c3f252393b29b6bcce5482d15d1fb0bc623f.scope - libcontainer container 7a933b80171178d6f91589e007e2c3f252393b29b6bcce5482d15d1fb0bc623f.
May 9 23:59:29.610947 containerd[1945]: time="2025-05-09T23:59:29.610858087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4zfk5,Uid:417dc51c-990c-48b0-a7b7-e619c8460e2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"20e85a43e16a2d2065414e84b0a216d6d83d80868d9df7e7459bad49a16cd48c\""
May 9 23:59:29.629590 containerd[1945]: time="2025-05-09T23:59:29.628666543Z" level=info msg="CreateContainer within sandbox \"20e85a43e16a2d2065414e84b0a216d6d83d80868d9df7e7459bad49a16cd48c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 9 23:59:29.675951 containerd[1945]: time="2025-05-09T23:59:29.675689575Z" level=info msg="CreateContainer within sandbox \"20e85a43e16a2d2065414e84b0a216d6d83d80868d9df7e7459bad49a16cd48c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"29a6049f1de3fef6bbf4952cc9ae601b730e9c1f9a8ab8c3aee021106edf347e\""
May 9 23:59:29.677068 containerd[1945]: time="2025-05-09T23:59:29.677000755Z" level=info msg="StartContainer for \"29a6049f1de3fef6bbf4952cc9ae601b730e9c1f9a8ab8c3aee021106edf347e\""
May 9 23:59:29.769543 containerd[1945]: time="2025-05-09T23:59:29.769462184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k2g9k,Uid:2ae349c8-814f-498f-b873-866a1e3ae0e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a933b80171178d6f91589e007e2c3f252393b29b6bcce5482d15d1fb0bc623f\""
May 9 23:59:29.770308 systemd[1]: Started cri-containerd-29a6049f1de3fef6bbf4952cc9ae601b730e9c1f9a8ab8c3aee021106edf347e.scope - libcontainer container 29a6049f1de3fef6bbf4952cc9ae601b730e9c1f9a8ab8c3aee021106edf347e.
May 9 23:59:29.784595 containerd[1945]: time="2025-05-09T23:59:29.784526696Z" level=info msg="CreateContainer within sandbox \"7a933b80171178d6f91589e007e2c3f252393b29b6bcce5482d15d1fb0bc623f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 9 23:59:29.829899 containerd[1945]: time="2025-05-09T23:59:29.829840904Z" level=info msg="CreateContainer within sandbox \"7a933b80171178d6f91589e007e2c3f252393b29b6bcce5482d15d1fb0bc623f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1540fb8e887c3d166628a02cb13359a2212d51e5cd10e9e2b863add6f03400c\""
May 9 23:59:29.839975 containerd[1945]: time="2025-05-09T23:59:29.839884232Z" level=info msg="StartContainer for \"d1540fb8e887c3d166628a02cb13359a2212d51e5cd10e9e2b863add6f03400c\""
May 9 23:59:29.865911 containerd[1945]: time="2025-05-09T23:59:29.865829096Z" level=info msg="StartContainer for \"29a6049f1de3fef6bbf4952cc9ae601b730e9c1f9a8ab8c3aee021106edf347e\" returns successfully"
May 9 23:59:29.940460 systemd[1]: Started cri-containerd-d1540fb8e887c3d166628a02cb13359a2212d51e5cd10e9e2b863add6f03400c.scope - libcontainer container d1540fb8e887c3d166628a02cb13359a2212d51e5cd10e9e2b863add6f03400c.
May 9 23:59:30.016878 containerd[1945]: time="2025-05-09T23:59:30.016460897Z" level=info msg="StartContainer for \"d1540fb8e887c3d166628a02cb13359a2212d51e5cd10e9e2b863add6f03400c\" returns successfully"
May 9 23:59:30.846991 kubelet[3130]: I0509 23:59:30.846452 3130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k2g9k" podStartSLOduration=40.846429069 podStartE2EDuration="40.846429069s" podCreationTimestamp="2025-05-09 23:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:59:30.846030957 +0000 UTC m=+44.685894835" watchObservedRunningTime="2025-05-09 23:59:30.846429069 +0000 UTC m=+44.686292899"
May 9 23:59:30.906018 kubelet[3130]: I0509 23:59:30.905583 3130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4zfk5" podStartSLOduration=40.905558793 podStartE2EDuration="40.905558793s" podCreationTimestamp="2025-05-09 23:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:59:30.868541649 +0000 UTC m=+44.708405515" watchObservedRunningTime="2025-05-09 23:59:30.905558793 +0000 UTC m=+44.745422635"
May 9 23:59:32.219112 systemd[1]: Started sshd@8-172.31.24.82:22-147.75.109.163:51158.service - OpenSSH per-connection server daemon (147.75.109.163:51158).
May 9 23:59:32.408864 sshd[4786]: Accepted publickey for core from 147.75.109.163 port 51158 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:59:32.411762 sshd[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:59:32.420852 systemd-logind[1915]: New session 9 of user core.
May 9 23:59:32.429257 systemd[1]: Started session-9.scope - Session 9 of User core.
May 9 23:59:32.685555 sshd[4786]: pam_unix(sshd:session): session closed for user core
May 9 23:59:32.692508 systemd[1]: sshd@8-172.31.24.82:22-147.75.109.163:51158.service: Deactivated successfully.
May 9 23:59:32.697656 systemd[1]: session-9.scope: Deactivated successfully.
May 9 23:59:32.700730 systemd-logind[1915]: Session 9 logged out. Waiting for processes to exit.
May 9 23:59:32.703768 systemd-logind[1915]: Removed session 9.
May 9 23:59:37.728497 systemd[1]: Started sshd@9-172.31.24.82:22-147.75.109.163:54990.service - OpenSSH per-connection server daemon (147.75.109.163:54990).
May 9 23:59:37.904028 sshd[4803]: Accepted publickey for core from 147.75.109.163 port 54990 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:59:37.906707 sshd[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:59:37.914774 systemd-logind[1915]: New session 10 of user core.
May 9 23:59:37.921248 systemd[1]: Started session-10.scope - Session 10 of User core.
May 9 23:59:38.157612 sshd[4803]: pam_unix(sshd:session): session closed for user core
May 9 23:59:38.168815 systemd-logind[1915]: Session 10 logged out. Waiting for processes to exit.
May 9 23:59:38.169539 systemd[1]: sshd@9-172.31.24.82:22-147.75.109.163:54990.service: Deactivated successfully.
May 9 23:59:38.173874 systemd[1]: session-10.scope: Deactivated successfully.
May 9 23:59:38.180452 systemd-logind[1915]: Removed session 10.
May 9 23:59:43.199444 systemd[1]: Started sshd@10-172.31.24.82:22-147.75.109.163:54996.service - OpenSSH per-connection server daemon (147.75.109.163:54996).
May 9 23:59:43.389858 sshd[4817]: Accepted publickey for core from 147.75.109.163 port 54996 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:59:43.392706 sshd[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:59:43.402205 systemd-logind[1915]: New session 11 of user core.
May 9 23:59:43.409332 systemd[1]: Started session-11.scope - Session 11 of User core.
May 9 23:59:43.664387 sshd[4817]: pam_unix(sshd:session): session closed for user core
May 9 23:59:43.675718 systemd[1]: sshd@10-172.31.24.82:22-147.75.109.163:54996.service: Deactivated successfully.
May 9 23:59:43.680533 systemd[1]: session-11.scope: Deactivated successfully.
May 9 23:59:43.683299 systemd-logind[1915]: Session 11 logged out. Waiting for processes to exit.
May 9 23:59:43.711569 systemd[1]: Started sshd@11-172.31.24.82:22-147.75.109.163:55002.service - OpenSSH per-connection server daemon (147.75.109.163:55002).
May 9 23:59:43.713659 systemd-logind[1915]: Removed session 11.
May 9 23:59:43.896440 sshd[4831]: Accepted publickey for core from 147.75.109.163 port 55002 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:59:43.899682 sshd[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:59:43.908778 systemd-logind[1915]: New session 12 of user core.
May 9 23:59:43.918257 systemd[1]: Started session-12.scope - Session 12 of User core.
May 9 23:59:44.250789 sshd[4831]: pam_unix(sshd:session): session closed for user core
May 9 23:59:44.262334 systemd[1]: sshd@11-172.31.24.82:22-147.75.109.163:55002.service: Deactivated successfully.
May 9 23:59:44.269539 systemd[1]: session-12.scope: Deactivated successfully.
May 9 23:59:44.279130 systemd-logind[1915]: Session 12 logged out. Waiting for processes to exit.
May 9 23:59:44.305540 systemd[1]: Started sshd@12-172.31.24.82:22-147.75.109.163:55006.service - OpenSSH per-connection server daemon (147.75.109.163:55006).
May 9 23:59:44.308576 systemd-logind[1915]: Removed session 12.
May 9 23:59:44.489796 sshd[4841]: Accepted publickey for core from 147.75.109.163 port 55006 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:59:44.492927 sshd[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:59:44.505336 systemd-logind[1915]: New session 13 of user core.
May 9 23:59:44.513268 systemd[1]: Started session-13.scope - Session 13 of User core.
May 9 23:59:44.772157 sshd[4841]: pam_unix(sshd:session): session closed for user core
May 9 23:59:44.777167 systemd[1]: sshd@12-172.31.24.82:22-147.75.109.163:55006.service: Deactivated successfully.
May 9 23:59:44.780330 systemd[1]: session-13.scope: Deactivated successfully.
May 9 23:59:44.784784 systemd-logind[1915]: Session 13 logged out. Waiting for processes to exit.
May 9 23:59:44.787506 systemd-logind[1915]: Removed session 13.
May 9 23:59:49.817579 systemd[1]: Started sshd@13-172.31.24.82:22-147.75.109.163:34318.service - OpenSSH per-connection server daemon (147.75.109.163:34318).
May 9 23:59:49.993638 sshd[4857]: Accepted publickey for core from 147.75.109.163 port 34318 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:59:49.996317 sshd[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:59:50.004471 systemd-logind[1915]: New session 14 of user core.
May 9 23:59:50.011279 systemd[1]: Started session-14.scope - Session 14 of User core.
May 9 23:59:50.251007 sshd[4857]: pam_unix(sshd:session): session closed for user core
May 9 23:59:50.256615 systemd-logind[1915]: Session 14 logged out. Waiting for processes to exit.
May 9 23:59:50.257203 systemd[1]: sshd@13-172.31.24.82:22-147.75.109.163:34318.service: Deactivated successfully.
May 9 23:59:50.261568 systemd[1]: session-14.scope: Deactivated successfully.
May 9 23:59:50.266766 systemd-logind[1915]: Removed session 14.
May 9 23:59:55.296517 systemd[1]: Started sshd@14-172.31.24.82:22-147.75.109.163:34332.service - OpenSSH per-connection server daemon (147.75.109.163:34332).
May 9 23:59:55.478402 sshd[4871]: Accepted publickey for core from 147.75.109.163 port 34332 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 9 23:59:55.481381 sshd[4871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 9 23:59:55.490692 systemd-logind[1915]: New session 15 of user core.
May 9 23:59:55.506275 systemd[1]: Started session-15.scope - Session 15 of User core.
May 9 23:59:55.759334 sshd[4871]: pam_unix(sshd:session): session closed for user core
May 9 23:59:55.765751 systemd[1]: sshd@14-172.31.24.82:22-147.75.109.163:34332.service: Deactivated successfully.
May 9 23:59:55.769472 systemd[1]: session-15.scope: Deactivated successfully.
May 9 23:59:55.771639 systemd-logind[1915]: Session 15 logged out. Waiting for processes to exit.
May 9 23:59:55.773745 systemd-logind[1915]: Removed session 15.
May 10 00:00:00.799458 systemd[1]: Started logrotate.service - Rotate and Compress System Logs.
May 10 00:00:00.808694 systemd[1]: Started sshd@15-172.31.24.82:22-147.75.109.163:53248.service - OpenSSH per-connection server daemon (147.75.109.163:53248).
May 10 00:00:00.822350 systemd[1]: logrotate.service: Deactivated successfully.
May 10 00:00:00.988730 sshd[4885]: Accepted publickey for core from 147.75.109.163 port 53248 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:00.991403 sshd[4885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:00.999315 systemd-logind[1915]: New session 16 of user core.
May 10 00:00:01.009278 systemd[1]: Started session-16.scope - Session 16 of User core.
May 10 00:00:01.251885 sshd[4885]: pam_unix(sshd:session): session closed for user core
May 10 00:00:01.256790 systemd[1]: sshd@15-172.31.24.82:22-147.75.109.163:53248.service: Deactivated successfully.
May 10 00:00:01.260910 systemd[1]: session-16.scope: Deactivated successfully.
May 10 00:00:01.265035 systemd-logind[1915]: Session 16 logged out. Waiting for processes to exit.
May 10 00:00:01.267023 systemd-logind[1915]: Removed session 16.
May 10 00:00:06.294504 systemd[1]: Started sshd@16-172.31.24.82:22-147.75.109.163:53260.service - OpenSSH per-connection server daemon (147.75.109.163:53260).
May 10 00:00:06.476401 sshd[4899]: Accepted publickey for core from 147.75.109.163 port 53260 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:06.479068 sshd[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:06.486714 systemd-logind[1915]: New session 17 of user core.
May 10 00:00:06.495256 systemd[1]: Started session-17.scope - Session 17 of User core.
May 10 00:00:06.751061 sshd[4899]: pam_unix(sshd:session): session closed for user core
May 10 00:00:06.757387 systemd[1]: sshd@16-172.31.24.82:22-147.75.109.163:53260.service: Deactivated successfully.
May 10 00:00:06.760950 systemd[1]: session-17.scope: Deactivated successfully.
May 10 00:00:06.763242 systemd-logind[1915]: Session 17 logged out. Waiting for processes to exit.
May 10 00:00:06.765701 systemd-logind[1915]: Removed session 17.
May 10 00:00:06.791667 systemd[1]: Started sshd@17-172.31.24.82:22-147.75.109.163:45794.service - OpenSSH per-connection server daemon (147.75.109.163:45794).
May 10 00:00:06.963480 sshd[4911]: Accepted publickey for core from 147.75.109.163 port 45794 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:06.966230 sshd[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:06.973477 systemd-logind[1915]: New session 18 of user core.
May 10 00:00:06.979220 systemd[1]: Started session-18.scope - Session 18 of User core.
May 10 00:00:07.299814 sshd[4911]: pam_unix(sshd:session): session closed for user core
May 10 00:00:07.305278 systemd-logind[1915]: Session 18 logged out. Waiting for processes to exit.
May 10 00:00:07.305731 systemd[1]: sshd@17-172.31.24.82:22-147.75.109.163:45794.service: Deactivated successfully.
May 10 00:00:07.310097 systemd[1]: session-18.scope: Deactivated successfully.
May 10 00:00:07.314871 systemd-logind[1915]: Removed session 18.
May 10 00:00:07.333437 systemd[1]: Started sshd@18-172.31.24.82:22-147.75.109.163:45802.service - OpenSSH per-connection server daemon (147.75.109.163:45802).
May 10 00:00:07.516574 sshd[4921]: Accepted publickey for core from 147.75.109.163 port 45802 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:07.519266 sshd[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:07.527199 systemd-logind[1915]: New session 19 of user core.
May 10 00:00:07.533797 systemd[1]: Started session-19.scope - Session 19 of User core.
May 10 00:00:08.849667 sshd[4921]: pam_unix(sshd:session): session closed for user core
May 10 00:00:08.863183 systemd[1]: sshd@18-172.31.24.82:22-147.75.109.163:45802.service: Deactivated successfully.
May 10 00:00:08.869500 systemd[1]: session-19.scope: Deactivated successfully.
May 10 00:00:08.875720 systemd-logind[1915]: Session 19 logged out. Waiting for processes to exit.
May 10 00:00:08.899610 systemd[1]: Started sshd@19-172.31.24.82:22-147.75.109.163:45814.service - OpenSSH per-connection server daemon (147.75.109.163:45814).
May 10 00:00:08.904361 systemd-logind[1915]: Removed session 19.
May 10 00:00:09.083241 sshd[4940]: Accepted publickey for core from 147.75.109.163 port 45814 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:09.086053 sshd[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:09.095105 systemd-logind[1915]: New session 20 of user core.
May 10 00:00:09.100244 systemd[1]: Started session-20.scope - Session 20 of User core.
May 10 00:00:09.623128 sshd[4940]: pam_unix(sshd:session): session closed for user core
May 10 00:00:09.633576 systemd[1]: sshd@19-172.31.24.82:22-147.75.109.163:45814.service: Deactivated successfully.
May 10 00:00:09.639141 systemd[1]: session-20.scope: Deactivated successfully.
May 10 00:00:09.640658 systemd-logind[1915]: Session 20 logged out. Waiting for processes to exit.
May 10 00:00:09.644699 systemd-logind[1915]: Removed session 20.
May 10 00:00:09.669173 systemd[1]: Started sshd@20-172.31.24.82:22-147.75.109.163:45830.service - OpenSSH per-connection server daemon (147.75.109.163:45830).
May 10 00:00:09.850366 sshd[4951]: Accepted publickey for core from 147.75.109.163 port 45830 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:09.853295 sshd[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:09.862294 systemd-logind[1915]: New session 21 of user core.
May 10 00:00:09.867247 systemd[1]: Started session-21.scope - Session 21 of User core.
May 10 00:00:10.109324 sshd[4951]: pam_unix(sshd:session): session closed for user core
May 10 00:00:10.116146 systemd[1]: sshd@20-172.31.24.82:22-147.75.109.163:45830.service: Deactivated successfully.
May 10 00:00:10.122123 systemd[1]: session-21.scope: Deactivated successfully.
May 10 00:00:10.123540 systemd-logind[1915]: Session 21 logged out. Waiting for processes to exit.
May 10 00:00:10.125482 systemd-logind[1915]: Removed session 21.
May 10 00:00:15.148543 systemd[1]: Started sshd@21-172.31.24.82:22-147.75.109.163:45840.service - OpenSSH per-connection server daemon (147.75.109.163:45840).
May 10 00:00:15.323604 sshd[4965]: Accepted publickey for core from 147.75.109.163 port 45840 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:15.326284 sshd[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:15.334185 systemd-logind[1915]: New session 22 of user core.
May 10 00:00:15.346263 systemd[1]: Started session-22.scope - Session 22 of User core.
May 10 00:00:15.590675 sshd[4965]: pam_unix(sshd:session): session closed for user core
May 10 00:00:15.596886 systemd[1]: sshd@21-172.31.24.82:22-147.75.109.163:45840.service: Deactivated successfully.
May 10 00:00:15.600815 systemd[1]: session-22.scope: Deactivated successfully.
May 10 00:00:15.602512 systemd-logind[1915]: Session 22 logged out. Waiting for processes to exit.
May 10 00:00:15.604610 systemd-logind[1915]: Removed session 22.
May 10 00:00:20.639501 systemd[1]: Started sshd@22-172.31.24.82:22-147.75.109.163:53704.service - OpenSSH per-connection server daemon (147.75.109.163:53704).
May 10 00:00:20.829320 sshd[4980]: Accepted publickey for core from 147.75.109.163 port 53704 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:20.833624 sshd[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:20.845822 systemd-logind[1915]: New session 23 of user core.
May 10 00:00:20.856249 systemd[1]: Started session-23.scope - Session 23 of User core.
May 10 00:00:21.111329 sshd[4980]: pam_unix(sshd:session): session closed for user core
May 10 00:00:21.118151 systemd-logind[1915]: Session 23 logged out. Waiting for processes to exit.
May 10 00:00:21.119180 systemd[1]: sshd@22-172.31.24.82:22-147.75.109.163:53704.service: Deactivated successfully.
May 10 00:00:21.122454 systemd[1]: session-23.scope: Deactivated successfully.
May 10 00:00:21.128095 systemd-logind[1915]: Removed session 23.
May 10 00:00:26.148475 systemd[1]: Started sshd@23-172.31.24.82:22-147.75.109.163:53718.service - OpenSSH per-connection server daemon (147.75.109.163:53718).
May 10 00:00:26.323364 sshd[4995]: Accepted publickey for core from 147.75.109.163 port 53718 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:26.326436 sshd[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:26.336089 systemd-logind[1915]: New session 24 of user core.
May 10 00:00:26.342263 systemd[1]: Started session-24.scope - Session 24 of User core.
May 10 00:00:26.578134 sshd[4995]: pam_unix(sshd:session): session closed for user core
May 10 00:00:26.583642 systemd-logind[1915]: Session 24 logged out. Waiting for processes to exit.
May 10 00:00:26.584474 systemd[1]: sshd@23-172.31.24.82:22-147.75.109.163:53718.service: Deactivated successfully.
May 10 00:00:26.588187 systemd[1]: session-24.scope: Deactivated successfully.
May 10 00:00:26.591866 systemd-logind[1915]: Removed session 24.
May 10 00:00:31.622497 systemd[1]: Started sshd@24-172.31.24.82:22-147.75.109.163:43456.service - OpenSSH per-connection server daemon (147.75.109.163:43456).
May 10 00:00:31.803865 sshd[5008]: Accepted publickey for core from 147.75.109.163 port 43456 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:31.806776 sshd[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:31.814314 systemd-logind[1915]: New session 25 of user core.
May 10 00:00:31.826252 systemd[1]: Started session-25.scope - Session 25 of User core.
May 10 00:00:32.067503 sshd[5008]: pam_unix(sshd:session): session closed for user core
May 10 00:00:32.075252 systemd[1]: sshd@24-172.31.24.82:22-147.75.109.163:43456.service: Deactivated successfully.
May 10 00:00:32.080497 systemd[1]: session-25.scope: Deactivated successfully.
May 10 00:00:32.084000 systemd-logind[1915]: Session 25 logged out. Waiting for processes to exit.
May 10 00:00:32.086045 systemd-logind[1915]: Removed session 25.
May 10 00:00:32.104512 systemd[1]: Started sshd@25-172.31.24.82:22-147.75.109.163:43462.service - OpenSSH per-connection server daemon (147.75.109.163:43462).
May 10 00:00:32.289163 sshd[5021]: Accepted publickey for core from 147.75.109.163 port 43462 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:32.291796 sshd[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:32.300133 systemd-logind[1915]: New session 26 of user core.
May 10 00:00:32.304243 systemd[1]: Started session-26.scope - Session 26 of User core.
May 10 00:00:35.160716 containerd[1945]: time="2025-05-10T00:00:35.156223520Z" level=info msg="StopContainer for \"f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9\" with timeout 30 (s)"
May 10 00:00:35.160716 containerd[1945]: time="2025-05-10T00:00:35.157025300Z" level=info msg="Stop container \"f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9\" with signal terminated"
May 10 00:00:35.192019 systemd[1]: cri-containerd-f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9.scope: Deactivated successfully.
May 10 00:00:35.197434 containerd[1945]: time="2025-05-10T00:00:35.197364177Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 10 00:00:35.219648 containerd[1945]: time="2025-05-10T00:00:35.219581781Z" level=info msg="StopContainer for \"5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348\" with timeout 2 (s)"
May 10 00:00:35.220424 containerd[1945]: time="2025-05-10T00:00:35.220254081Z" level=info msg="Stop container \"5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348\" with signal terminated"
May 10 00:00:35.235945 systemd-networkd[1759]: lxc_health: Link DOWN
May 10 00:00:35.236835 systemd-networkd[1759]: lxc_health: Lost carrier
May 10 00:00:35.266937 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9-rootfs.mount: Deactivated successfully.
May 10 00:00:35.269281 systemd[1]: cri-containerd-5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348.scope: Deactivated successfully.
May 10 00:00:35.271220 systemd[1]: cri-containerd-5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348.scope: Consumed 14.656s CPU time.
May 10 00:00:35.289610 containerd[1945]: time="2025-05-10T00:00:35.289379913Z" level=info msg="shim disconnected" id=f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9 namespace=k8s.io
May 10 00:00:35.289610 containerd[1945]: time="2025-05-10T00:00:35.289485237Z" level=warning msg="cleaning up after shim disconnected" id=f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9 namespace=k8s.io
May 10 00:00:35.289610 containerd[1945]: time="2025-05-10T00:00:35.289529133Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:35.325775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348-rootfs.mount: Deactivated successfully.
May 10 00:00:35.333250 containerd[1945]: time="2025-05-10T00:00:35.332874717Z" level=info msg="StopContainer for \"f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9\" returns successfully"
May 10 00:00:35.334307 containerd[1945]: time="2025-05-10T00:00:35.333838737Z" level=info msg="StopPodSandbox for \"61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363\""
May 10 00:00:35.334557 containerd[1945]: time="2025-05-10T00:00:35.334424253Z" level=info msg="shim disconnected" id=5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348 namespace=k8s.io
May 10 00:00:35.334557 containerd[1945]: time="2025-05-10T00:00:35.334514901Z" level=warning msg="cleaning up after shim disconnected" id=5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348 namespace=k8s.io
May 10 00:00:35.334557 containerd[1945]: time="2025-05-10T00:00:35.334535733Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:35.335387 containerd[1945]: time="2025-05-10T00:00:35.334427289Z" level=info msg="Container to stop \"f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:35.340553 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363-shm.mount: Deactivated successfully.
May 10 00:00:35.355019 systemd[1]: cri-containerd-61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363.scope: Deactivated successfully.
May 10 00:00:35.378430 containerd[1945]: time="2025-05-10T00:00:35.378328054Z" level=info msg="StopContainer for \"5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348\" returns successfully"
May 10 00:00:35.379860 containerd[1945]: time="2025-05-10T00:00:35.379229338Z" level=info msg="StopPodSandbox for \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\""
May 10 00:00:35.379860 containerd[1945]: time="2025-05-10T00:00:35.379302850Z" level=info msg="Container to stop \"4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:35.379860 containerd[1945]: time="2025-05-10T00:00:35.379330126Z" level=info msg="Container to stop \"d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:35.379860 containerd[1945]: time="2025-05-10T00:00:35.379354582Z" level=info msg="Container to stop \"5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:35.379860 containerd[1945]: time="2025-05-10T00:00:35.379378162Z" level=info msg="Container to stop \"e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:35.379860 containerd[1945]: time="2025-05-10T00:00:35.379400434Z" level=info msg="Container to stop \"a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:35.387870 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed-shm.mount: Deactivated successfully.
May 10 00:00:35.407464 systemd[1]: cri-containerd-5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed.scope: Deactivated successfully.
May 10 00:00:35.419269 containerd[1945]: time="2025-05-10T00:00:35.418997122Z" level=info msg="shim disconnected" id=61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363 namespace=k8s.io
May 10 00:00:35.419269 containerd[1945]: time="2025-05-10T00:00:35.419098150Z" level=warning msg="cleaning up after shim disconnected" id=61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363 namespace=k8s.io
May 10 00:00:35.419269 containerd[1945]: time="2025-05-10T00:00:35.419121454Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:35.452225 containerd[1945]: time="2025-05-10T00:00:35.452156602Z" level=info msg="TearDown network for sandbox \"61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363\" successfully"
May 10 00:00:35.453029 containerd[1945]: time="2025-05-10T00:00:35.452871310Z" level=info msg="StopPodSandbox for \"61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363\" returns successfully"
May 10 00:00:35.469400 containerd[1945]: time="2025-05-10T00:00:35.469305982Z" level=info msg="shim disconnected" id=5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed namespace=k8s.io
May 10 00:00:35.470186 containerd[1945]: time="2025-05-10T00:00:35.469708066Z" level=warning msg="cleaning up after shim disconnected" id=5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed namespace=k8s.io
May 10 00:00:35.470186 containerd[1945]: time="2025-05-10T00:00:35.469748578Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:35.501380 containerd[1945]: time="2025-05-10T00:00:35.501175822Z" level=info msg="TearDown network for sandbox \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" successfully"
May 10 00:00:35.501380 containerd[1945]: time="2025-05-10T00:00:35.501231958Z" level=info msg="StopPodSandbox for \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" returns successfully"
May 10 00:00:35.605021 kubelet[3130]: I0510 00:00:35.604286 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cni-path" (OuterVolumeSpecName: "cni-path") pod "5c08c227-1eff-4cd7-8d10-21529b9a3a95" (UID: "5c08c227-1eff-4cd7-8d10-21529b9a3a95"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 10 00:00:35.605021 kubelet[3130]: I0510 00:00:35.604312 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cni-path\") pod \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") "
May 10 00:00:35.605021 kubelet[3130]: I0510 00:00:35.604403 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbvzp\" (UniqueName: \"kubernetes.io/projected/5c08c227-1eff-4cd7-8d10-21529b9a3a95-kube-api-access-gbvzp\") pod \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") "
May 10 00:00:35.605021 kubelet[3130]: I0510 00:00:35.604535 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-bpf-maps\") pod \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") "
May 10 00:00:35.605021 kubelet[3130]: I0510 00:00:35.604572 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-host-proc-sys-net\") pod \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") "
May 10 00:00:35.605021 kubelet[3130]: I0510 00:00:35.604609 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-etc-cni-netd\") pod \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") "
May 10 00:00:35.605862 kubelet[3130]: I0510 00:00:35.604646 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-host-proc-sys-kernel\") pod \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") "
May 10 00:00:35.605862 kubelet[3130]: I0510 00:00:35.604685 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c08c227-1eff-4cd7-8d10-21529b9a3a95-hubble-tls\") pod \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") "
May 10 00:00:35.605862 kubelet[3130]: I0510 00:00:35.604721 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-hostproc\") pod \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") "
May 10 00:00:35.605862 kubelet[3130]: I0510 00:00:35.604760 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4kkq\" (UniqueName: \"kubernetes.io/projected/bddef3d7-98cc-4f26-8c99-594d985fbcfb-kube-api-access-b4kkq\") pod \"bddef3d7-98cc-4f26-8c99-594d985fbcfb\" (UID: \"bddef3d7-98cc-4f26-8c99-594d985fbcfb\") "
May 10 00:00:35.605862 kubelet[3130]: I0510 00:00:35.604820 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bddef3d7-98cc-4f26-8c99-594d985fbcfb-cilium-config-path\") pod \"bddef3d7-98cc-4f26-8c99-594d985fbcfb\" (UID: \"bddef3d7-98cc-4f26-8c99-594d985fbcfb\") "
May 10 00:00:35.605862 kubelet[3130]: I0510 00:00:35.604859 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-lib-modules\") pod \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") "
May 10 00:00:35.606237 kubelet[3130]: I0510 00:00:35.604900 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c08c227-1eff-4cd7-8d10-21529b9a3a95-clustermesh-secrets\") pod \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") "
May 10 00:00:35.606237 kubelet[3130]: I0510 00:00:35.604938 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cilium-config-path\") pod \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") "
May 10 00:00:35.608906 kubelet[3130]: I0510 00:00:35.606442 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-xtables-lock\") pod \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") "
May 10 00:00:35.608906 kubelet[3130]: I0510 00:00:35.606598 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cilium-run\") pod \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") "
May 10 00:00:35.608906 kubelet[3130]: I0510 00:00:35.606669 3130 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cilium-cgroup\") pod \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\" (UID: \"5c08c227-1eff-4cd7-8d10-21529b9a3a95\") "
May 10 00:00:35.608906 kubelet[3130]: I0510 00:00:35.606785 3130 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cni-path\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.609428 kubelet[3130]: I0510 00:00:35.608199 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5c08c227-1eff-4cd7-8d10-21529b9a3a95" (UID: "5c08c227-1eff-4cd7-8d10-21529b9a3a95"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 10 00:00:35.610362 kubelet[3130]: I0510 00:00:35.610301 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5c08c227-1eff-4cd7-8d10-21529b9a3a95" (UID: "5c08c227-1eff-4cd7-8d10-21529b9a3a95"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 10 00:00:35.611284 kubelet[3130]: I0510 00:00:35.611141 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5c08c227-1eff-4cd7-8d10-21529b9a3a95" (UID: "5c08c227-1eff-4cd7-8d10-21529b9a3a95"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 10 00:00:35.611284 kubelet[3130]: I0510 00:00:35.611216 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5c08c227-1eff-4cd7-8d10-21529b9a3a95" (UID: "5c08c227-1eff-4cd7-8d10-21529b9a3a95"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 10 00:00:35.611284 kubelet[3130]: I0510 00:00:35.611255 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5c08c227-1eff-4cd7-8d10-21529b9a3a95" (UID: "5c08c227-1eff-4cd7-8d10-21529b9a3a95"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 10 00:00:35.611284 kubelet[3130]: I0510 00:00:35.611290 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5c08c227-1eff-4cd7-8d10-21529b9a3a95" (UID: "5c08c227-1eff-4cd7-8d10-21529b9a3a95"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 10 00:00:35.614245 kubelet[3130]: I0510 00:00:35.613730 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5c08c227-1eff-4cd7-8d10-21529b9a3a95" (UID: "5c08c227-1eff-4cd7-8d10-21529b9a3a95"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 10 00:00:35.614245 kubelet[3130]: I0510 00:00:35.613827 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5c08c227-1eff-4cd7-8d10-21529b9a3a95" (UID: "5c08c227-1eff-4cd7-8d10-21529b9a3a95"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 10 00:00:35.618443 kubelet[3130]: I0510 00:00:35.618124 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-hostproc" (OuterVolumeSpecName: "hostproc") pod "5c08c227-1eff-4cd7-8d10-21529b9a3a95" (UID: "5c08c227-1eff-4cd7-8d10-21529b9a3a95"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 10 00:00:35.620778 kubelet[3130]: I0510 00:00:35.620708 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c08c227-1eff-4cd7-8d10-21529b9a3a95-kube-api-access-gbvzp" (OuterVolumeSpecName: "kube-api-access-gbvzp") pod "5c08c227-1eff-4cd7-8d10-21529b9a3a95" (UID: "5c08c227-1eff-4cd7-8d10-21529b9a3a95"). InnerVolumeSpecName "kube-api-access-gbvzp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 10 00:00:35.627667 kubelet[3130]: I0510 00:00:35.627535 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bddef3d7-98cc-4f26-8c99-594d985fbcfb-kube-api-access-b4kkq" (OuterVolumeSpecName: "kube-api-access-b4kkq") pod "bddef3d7-98cc-4f26-8c99-594d985fbcfb" (UID: "bddef3d7-98cc-4f26-8c99-594d985fbcfb"). InnerVolumeSpecName "kube-api-access-b4kkq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 10 00:00:35.631074 kubelet[3130]: I0510 00:00:35.631002 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bddef3d7-98cc-4f26-8c99-594d985fbcfb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bddef3d7-98cc-4f26-8c99-594d985fbcfb" (UID: "bddef3d7-98cc-4f26-8c99-594d985fbcfb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 10 00:00:35.631388 kubelet[3130]: I0510 00:00:35.630913 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5c08c227-1eff-4cd7-8d10-21529b9a3a95" (UID: "5c08c227-1eff-4cd7-8d10-21529b9a3a95"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 10 00:00:35.632660 kubelet[3130]: I0510 00:00:35.632603 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c08c227-1eff-4cd7-8d10-21529b9a3a95-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5c08c227-1eff-4cd7-8d10-21529b9a3a95" (UID: "5c08c227-1eff-4cd7-8d10-21529b9a3a95"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 10 00:00:35.634326 kubelet[3130]: I0510 00:00:35.634211 3130 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c08c227-1eff-4cd7-8d10-21529b9a3a95-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5c08c227-1eff-4cd7-8d10-21529b9a3a95" (UID: "5c08c227-1eff-4cd7-8d10-21529b9a3a95"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 10 00:00:35.708478 kubelet[3130]: I0510 00:00:35.708006 3130 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-hostproc\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.708478 kubelet[3130]: I0510 00:00:35.708058 3130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b4kkq\" (UniqueName: \"kubernetes.io/projected/bddef3d7-98cc-4f26-8c99-594d985fbcfb-kube-api-access-b4kkq\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.708478 kubelet[3130]: I0510 00:00:35.708086 3130 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bddef3d7-98cc-4f26-8c99-594d985fbcfb-cilium-config-path\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.708478 kubelet[3130]: I0510 00:00:35.708114 3130 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-lib-modules\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.708478 kubelet[3130]: I0510 00:00:35.708137 3130 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c08c227-1eff-4cd7-8d10-21529b9a3a95-clustermesh-secrets\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.708478 kubelet[3130]: I0510 00:00:35.708160 3130 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cilium-config-path\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.708478 kubelet[3130]: I0510 00:00:35.708182 3130 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-xtables-lock\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.708478 kubelet[3130]: I0510 00:00:35.708202 3130 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cilium-run\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.709267 kubelet[3130]: I0510 00:00:35.708222 3130 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-cilium-cgroup\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.709267 kubelet[3130]: I0510 00:00:35.708244 3130 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gbvzp\" (UniqueName: \"kubernetes.io/projected/5c08c227-1eff-4cd7-8d10-21529b9a3a95-kube-api-access-gbvzp\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.709267 kubelet[3130]: I0510 00:00:35.708265 3130 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-bpf-maps\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.709267 kubelet[3130]: I0510 00:00:35.708286 3130 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-host-proc-sys-net\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.709267 kubelet[3130]: I0510 00:00:35.708310 3130 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-etc-cni-netd\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.709267 kubelet[3130]: I0510 00:00:35.708331 3130 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c08c227-1eff-4cd7-8d10-21529b9a3a95-host-proc-sys-kernel\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.709267 kubelet[3130]: I0510 00:00:35.708350 3130 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c08c227-1eff-4cd7-8d10-21529b9a3a95-hubble-tls\") on node \"ip-172-31-24-82\" DevicePath \"\""
May 10 00:00:35.991071 kubelet[3130]: I0510 00:00:35.989517 3130 scope.go:117] "RemoveContainer" containerID="f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9"
May 10 00:00:35.998499 containerd[1945]: time="2025-05-10T00:00:35.998423137Z" level=info msg="RemoveContainer for \"f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9\""
May 10 00:00:36.001408 systemd[1]: Removed slice kubepods-besteffort-podbddef3d7_98cc_4f26_8c99_594d985fbcfb.slice - libcontainer container kubepods-besteffort-podbddef3d7_98cc_4f26_8c99_594d985fbcfb.slice.
May 10 00:00:36.018026 containerd[1945]: time="2025-05-10T00:00:36.017027193Z" level=info msg="RemoveContainer for \"f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9\" returns successfully"
May 10 00:00:36.018162 kubelet[3130]: I0510 00:00:36.017560 3130 scope.go:117] "RemoveContainer" containerID="f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9"
May 10 00:00:36.019055 containerd[1945]: time="2025-05-10T00:00:36.018896049Z" level=error msg="ContainerStatus for \"f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9\": not found"
May 10 00:00:36.019788 kubelet[3130]: E0510 00:00:36.019750 3130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9\": not found" containerID="f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9"
May 10 00:00:36.020441 kubelet[3130]: I0510 00:00:36.020137 3130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9"} err="failed to get container status \"f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"f666936194007e7cd1f1f1cc150833527153ad6ebc50690ce539f75434ddf3f9\": not found"
May 10 00:00:36.020654 kubelet[3130]: I0510 00:00:36.020527 3130 scope.go:117] "RemoveContainer" containerID="5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348"
May 10 00:00:36.028035 systemd[1]: Removed slice kubepods-burstable-pod5c08c227_1eff_4cd7_8d10_21529b9a3a95.slice - libcontainer container kubepods-burstable-pod5c08c227_1eff_4cd7_8d10_21529b9a3a95.slice.
May 10 00:00:36.028283 systemd[1]: kubepods-burstable-pod5c08c227_1eff_4cd7_8d10_21529b9a3a95.slice: Consumed 14.806s CPU time.
May 10 00:00:36.030195 containerd[1945]: time="2025-05-10T00:00:36.029243469Z" level=info msg="RemoveContainer for \"5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348\""
May 10 00:00:36.037040 containerd[1945]: time="2025-05-10T00:00:36.036834837Z" level=info msg="RemoveContainer for \"5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348\" returns successfully"
May 10 00:00:36.040110 kubelet[3130]: I0510 00:00:36.040038 3130 scope.go:117] "RemoveContainer" containerID="d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23"
May 10 00:00:36.044818 containerd[1945]: time="2025-05-10T00:00:36.044746461Z" level=info msg="RemoveContainer for \"d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23\""
May 10 00:00:36.054382 containerd[1945]: time="2025-05-10T00:00:36.054255777Z" level=info msg="RemoveContainer for \"d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23\" returns successfully"
May 10 00:00:36.056018 kubelet[3130]: I0510 00:00:36.055687 3130 scope.go:117] "RemoveContainer" containerID="4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e"
May 10 00:00:36.058883 containerd[1945]: time="2025-05-10T00:00:36.058814949Z" level=info msg="RemoveContainer for \"4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e\""
May 10 00:00:36.068850 containerd[1945]: time="2025-05-10T00:00:36.067812309Z" level=info msg="RemoveContainer for \"4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e\" returns successfully"
May 10 00:00:36.069300 kubelet[3130]: I0510 00:00:36.069239 3130 scope.go:117] "RemoveContainer" containerID="a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad"
May 10 00:00:36.074645 containerd[1945]: time="2025-05-10T00:00:36.074591685Z" level=info msg="RemoveContainer for \"a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad\""
May 10 00:00:36.081712 containerd[1945]: time="2025-05-10T00:00:36.081658701Z" level=info msg="RemoveContainer for \"a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad\" returns successfully"
May 10 00:00:36.082249 kubelet[3130]: I0510 00:00:36.082209 3130 scope.go:117] "RemoveContainer" containerID="e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6"
May 10 00:00:36.084218 containerd[1945]: time="2025-05-10T00:00:36.084113001Z" level=info msg="RemoveContainer for \"e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6\""
May 10 00:00:36.089849 containerd[1945]: time="2025-05-10T00:00:36.089779797Z" level=info msg="RemoveContainer for \"e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6\" returns successfully"
May 10 00:00:36.090408 kubelet[3130]: I0510 00:00:36.090249 3130 scope.go:117] "RemoveContainer" containerID="5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348"
May 10 00:00:36.090699 containerd[1945]: time="2025-05-10T00:00:36.090625005Z" level=error msg="ContainerStatus for \"5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348\" failed" error="rpc error: code = 
NotFound desc = an error occurred when try to find container \"5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348\": not found" May 10 00:00:36.090944 kubelet[3130]: E0510 00:00:36.090878 3130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348\": not found" containerID="5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348" May 10 00:00:36.091072 kubelet[3130]: I0510 00:00:36.090936 3130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348"} err="failed to get container status \"5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348\": rpc error: code = NotFound desc = an error occurred when try to find container \"5487e09017558b580bdf9fcff8d064cbe47ac7929b9d502f86ce857c91a77348\": not found" May 10 00:00:36.091072 kubelet[3130]: I0510 00:00:36.091014 3130 scope.go:117] "RemoveContainer" containerID="d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23" May 10 00:00:36.091682 containerd[1945]: time="2025-05-10T00:00:36.091546749Z" level=error msg="ContainerStatus for \"d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23\": not found" May 10 00:00:36.091868 kubelet[3130]: E0510 00:00:36.091805 3130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23\": not found" containerID="d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23" May 10 00:00:36.091982 kubelet[3130]: I0510 00:00:36.091856 3130 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23"} err="failed to get container status \"d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23\": rpc error: code = NotFound desc = an error occurred when try to find container \"d333e677da11c4f8a4eb0de02b69960279a89e3df2447ba2ad7e9a40d4885c23\": not found" May 10 00:00:36.091982 kubelet[3130]: I0510 00:00:36.091890 3130 scope.go:117] "RemoveContainer" containerID="4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e" May 10 00:00:36.092640 containerd[1945]: time="2025-05-10T00:00:36.092561673Z" level=error msg="ContainerStatus for \"4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e\": not found" May 10 00:00:36.092842 kubelet[3130]: E0510 00:00:36.092788 3130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e\": not found" containerID="4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e" May 10 00:00:36.092919 kubelet[3130]: I0510 00:00:36.092840 3130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e"} err="failed to get container status \"4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a749722d3ec27e62d6281638d96fa5bb40f23a21967b526673de96954b7003e\": not found" May 10 00:00:36.092919 kubelet[3130]: I0510 00:00:36.092874 3130 scope.go:117] "RemoveContainer" 
containerID="a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad" May 10 00:00:36.093297 containerd[1945]: time="2025-05-10T00:00:36.093243177Z" level=error msg="ContainerStatus for \"a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad\": not found" May 10 00:00:36.093493 kubelet[3130]: E0510 00:00:36.093445 3130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad\": not found" containerID="a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad" May 10 00:00:36.093600 kubelet[3130]: I0510 00:00:36.093493 3130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad"} err="failed to get container status \"a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7c40cc0160d5bc3fd808736692f954574e523c535fc873942cb6a5e44b949ad\": not found" May 10 00:00:36.093600 kubelet[3130]: I0510 00:00:36.093526 3130 scope.go:117] "RemoveContainer" containerID="e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6" May 10 00:00:36.094128 containerd[1945]: time="2025-05-10T00:00:36.093908193Z" level=error msg="ContainerStatus for \"e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6\": not found" May 10 00:00:36.094402 kubelet[3130]: E0510 00:00:36.094312 3130 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6\": not found" containerID="e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6" May 10 00:00:36.094402 kubelet[3130]: I0510 00:00:36.094366 3130 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6"} err="failed to get container status \"e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1f493f0fbd63a34b4eebb5bb121af0369770b1f6c90ef6b9166b19b9d54e8b6\": not found" May 10 00:00:36.153494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed-rootfs.mount: Deactivated successfully. May 10 00:00:36.153668 systemd[1]: var-lib-kubelet-pods-5c08c227\x2d1eff\x2d4cd7\x2d8d10\x2d21529b9a3a95-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgbvzp.mount: Deactivated successfully. May 10 00:00:36.153806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363-rootfs.mount: Deactivated successfully. May 10 00:00:36.153940 systemd[1]: var-lib-kubelet-pods-bddef3d7\x2d98cc\x2d4f26\x2d8c99\x2d594d985fbcfb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db4kkq.mount: Deactivated successfully. May 10 00:00:36.154106 systemd[1]: var-lib-kubelet-pods-5c08c227\x2d1eff\x2d4cd7\x2d8d10\x2d21529b9a3a95-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:00:36.154251 systemd[1]: var-lib-kubelet-pods-5c08c227\x2d1eff\x2d4cd7\x2d8d10\x2d21529b9a3a95-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 10 00:00:36.541202 kubelet[3130]: I0510 00:00:36.541154 3130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c08c227-1eff-4cd7-8d10-21529b9a3a95" path="/var/lib/kubelet/pods/5c08c227-1eff-4cd7-8d10-21529b9a3a95/volumes" May 10 00:00:36.542814 kubelet[3130]: I0510 00:00:36.542775 3130 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bddef3d7-98cc-4f26-8c99-594d985fbcfb" path="/var/lib/kubelet/pods/bddef3d7-98cc-4f26-8c99-594d985fbcfb/volumes" May 10 00:00:36.751680 kubelet[3130]: E0510 00:00:36.751612 3130 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:00:37.079235 sshd[5021]: pam_unix(sshd:session): session closed for user core May 10 00:00:37.086777 systemd[1]: sshd@25-172.31.24.82:22-147.75.109.163:43462.service: Deactivated successfully. May 10 00:00:37.092560 systemd[1]: session-26.scope: Deactivated successfully. May 10 00:00:37.093278 systemd[1]: session-26.scope: Consumed 2.068s CPU time. May 10 00:00:37.094737 systemd-logind[1915]: Session 26 logged out. Waiting for processes to exit. May 10 00:00:37.097157 systemd-logind[1915]: Removed session 26. May 10 00:00:37.118530 systemd[1]: Started sshd@26-172.31.24.82:22-147.75.109.163:59618.service - OpenSSH per-connection server daemon (147.75.109.163:59618). May 10 00:00:37.305631 sshd[5183]: Accepted publickey for core from 147.75.109.163 port 59618 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:37.308729 sshd[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:37.316416 systemd-logind[1915]: New session 27 of user core. May 10 00:00:37.325282 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 10 00:00:37.416759 ntpd[1908]: Deleting interface #11 lxc_health, fe80::cc97:1ff:fed9:5e8%8#123, interface stats: received=0, sent=0, dropped=0, active_time=73 secs May 10 00:00:37.417340 ntpd[1908]: 10 May 00:00:37 ntpd[1908]: Deleting interface #11 lxc_health, fe80::cc97:1ff:fed9:5e8%8#123, interface stats: received=0, sent=0, dropped=0, active_time=73 secs May 10 00:00:38.539375 sshd[5183]: pam_unix(sshd:session): session closed for user core May 10 00:00:38.553857 systemd[1]: session-27.scope: Deactivated successfully. May 10 00:00:38.554182 systemd[1]: session-27.scope: Consumed 1.014s CPU time. May 10 00:00:38.560853 systemd[1]: sshd@26-172.31.24.82:22-147.75.109.163:59618.service: Deactivated successfully. May 10 00:00:38.588629 systemd-logind[1915]: Session 27 logged out. Waiting for processes to exit. May 10 00:00:38.596484 systemd[1]: Started sshd@27-172.31.24.82:22-147.75.109.163:59628.service - OpenSSH per-connection server daemon (147.75.109.163:59628). May 10 00:00:38.602737 kubelet[3130]: I0510 00:00:38.602321 3130 memory_manager.go:355] "RemoveStaleState removing state" podUID="bddef3d7-98cc-4f26-8c99-594d985fbcfb" containerName="cilium-operator" May 10 00:00:38.602737 kubelet[3130]: I0510 00:00:38.602392 3130 memory_manager.go:355] "RemoveStaleState removing state" podUID="5c08c227-1eff-4cd7-8d10-21529b9a3a95" containerName="cilium-agent" May 10 00:00:38.607427 systemd-logind[1915]: Removed session 27. 
May 10 00:00:38.617804 kubelet[3130]: I0510 00:00:38.617732 3130 status_manager.go:890] "Failed to get status for pod" podUID="312bf3ff-a99d-4d4f-b584-cf190de6f82b" pod="kube-system/cilium-8zl5m" err="pods \"cilium-8zl5m\" is forbidden: User \"system:node:ip-172-31-24-82\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-24-82' and this object" May 10 00:00:38.647784 systemd[1]: Created slice kubepods-burstable-pod312bf3ff_a99d_4d4f_b584_cf190de6f82b.slice - libcontainer container kubepods-burstable-pod312bf3ff_a99d_4d4f_b584_cf190de6f82b.slice. May 10 00:00:38.731206 kubelet[3130]: I0510 00:00:38.730413 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/312bf3ff-a99d-4d4f-b584-cf190de6f82b-cilium-run\") pod \"cilium-8zl5m\" (UID: \"312bf3ff-a99d-4d4f-b584-cf190de6f82b\") " pod="kube-system/cilium-8zl5m" May 10 00:00:38.731206 kubelet[3130]: I0510 00:00:38.730499 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/312bf3ff-a99d-4d4f-b584-cf190de6f82b-etc-cni-netd\") pod \"cilium-8zl5m\" (UID: \"312bf3ff-a99d-4d4f-b584-cf190de6f82b\") " pod="kube-system/cilium-8zl5m" May 10 00:00:38.731206 kubelet[3130]: I0510 00:00:38.730541 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/312bf3ff-a99d-4d4f-b584-cf190de6f82b-cilium-ipsec-secrets\") pod \"cilium-8zl5m\" (UID: \"312bf3ff-a99d-4d4f-b584-cf190de6f82b\") " pod="kube-system/cilium-8zl5m" May 10 00:00:38.731206 kubelet[3130]: I0510 00:00:38.730583 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/312bf3ff-a99d-4d4f-b584-cf190de6f82b-host-proc-sys-kernel\") pod \"cilium-8zl5m\" (UID: \"312bf3ff-a99d-4d4f-b584-cf190de6f82b\") " pod="kube-system/cilium-8zl5m" May 10 00:00:38.731206 kubelet[3130]: I0510 00:00:38.730622 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/312bf3ff-a99d-4d4f-b584-cf190de6f82b-cni-path\") pod \"cilium-8zl5m\" (UID: \"312bf3ff-a99d-4d4f-b584-cf190de6f82b\") " pod="kube-system/cilium-8zl5m" May 10 00:00:38.731206 kubelet[3130]: I0510 00:00:38.730658 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/312bf3ff-a99d-4d4f-b584-cf190de6f82b-cilium-config-path\") pod \"cilium-8zl5m\" (UID: \"312bf3ff-a99d-4d4f-b584-cf190de6f82b\") " pod="kube-system/cilium-8zl5m" May 10 00:00:38.731634 kubelet[3130]: I0510 00:00:38.730695 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/312bf3ff-a99d-4d4f-b584-cf190de6f82b-xtables-lock\") pod \"cilium-8zl5m\" (UID: \"312bf3ff-a99d-4d4f-b584-cf190de6f82b\") " pod="kube-system/cilium-8zl5m" May 10 00:00:38.731634 kubelet[3130]: I0510 00:00:38.730735 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/312bf3ff-a99d-4d4f-b584-cf190de6f82b-hubble-tls\") pod \"cilium-8zl5m\" (UID: \"312bf3ff-a99d-4d4f-b584-cf190de6f82b\") " pod="kube-system/cilium-8zl5m" May 10 00:00:38.731634 kubelet[3130]: I0510 00:00:38.730772 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5mxt\" (UniqueName: \"kubernetes.io/projected/312bf3ff-a99d-4d4f-b584-cf190de6f82b-kube-api-access-p5mxt\") pod \"cilium-8zl5m\" (UID: 
\"312bf3ff-a99d-4d4f-b584-cf190de6f82b\") " pod="kube-system/cilium-8zl5m" May 10 00:00:38.731634 kubelet[3130]: I0510 00:00:38.730812 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/312bf3ff-a99d-4d4f-b584-cf190de6f82b-lib-modules\") pod \"cilium-8zl5m\" (UID: \"312bf3ff-a99d-4d4f-b584-cf190de6f82b\") " pod="kube-system/cilium-8zl5m" May 10 00:00:38.731634 kubelet[3130]: I0510 00:00:38.730852 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/312bf3ff-a99d-4d4f-b584-cf190de6f82b-cilium-cgroup\") pod \"cilium-8zl5m\" (UID: \"312bf3ff-a99d-4d4f-b584-cf190de6f82b\") " pod="kube-system/cilium-8zl5m" May 10 00:00:38.731634 kubelet[3130]: I0510 00:00:38.730890 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/312bf3ff-a99d-4d4f-b584-cf190de6f82b-host-proc-sys-net\") pod \"cilium-8zl5m\" (UID: \"312bf3ff-a99d-4d4f-b584-cf190de6f82b\") " pod="kube-system/cilium-8zl5m" May 10 00:00:38.731937 kubelet[3130]: I0510 00:00:38.730929 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/312bf3ff-a99d-4d4f-b584-cf190de6f82b-bpf-maps\") pod \"cilium-8zl5m\" (UID: \"312bf3ff-a99d-4d4f-b584-cf190de6f82b\") " pod="kube-system/cilium-8zl5m" May 10 00:00:38.731937 kubelet[3130]: I0510 00:00:38.730992 3130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/312bf3ff-a99d-4d4f-b584-cf190de6f82b-hostproc\") pod \"cilium-8zl5m\" (UID: \"312bf3ff-a99d-4d4f-b584-cf190de6f82b\") " pod="kube-system/cilium-8zl5m" May 10 00:00:38.731937 kubelet[3130]: I0510 00:00:38.731034 3130 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/312bf3ff-a99d-4d4f-b584-cf190de6f82b-clustermesh-secrets\") pod \"cilium-8zl5m\" (UID: \"312bf3ff-a99d-4d4f-b584-cf190de6f82b\") " pod="kube-system/cilium-8zl5m" May 10 00:00:38.825353 sshd[5195]: Accepted publickey for core from 147.75.109.163 port 59628 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:38.828697 sshd[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:38.842362 systemd-logind[1915]: New session 28 of user core. May 10 00:00:38.894309 systemd[1]: Started session-28.scope - Session 28 of User core. May 10 00:00:38.973366 containerd[1945]: time="2025-05-10T00:00:38.973314387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8zl5m,Uid:312bf3ff-a99d-4d4f-b584-cf190de6f82b,Namespace:kube-system,Attempt:0,}" May 10 00:00:39.023594 containerd[1945]: time="2025-05-10T00:00:39.023206968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:00:39.023594 containerd[1945]: time="2025-05-10T00:00:39.023320104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:00:39.023594 containerd[1945]: time="2025-05-10T00:00:39.023389188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:39.024082 containerd[1945]: time="2025-05-10T00:00:39.023627844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:39.031868 sshd[5195]: pam_unix(sshd:session): session closed for user core May 10 00:00:39.041886 systemd[1]: sshd@27-172.31.24.82:22-147.75.109.163:59628.service: Deactivated successfully. May 10 00:00:39.049658 systemd[1]: session-28.scope: Deactivated successfully. May 10 00:00:39.052310 systemd-logind[1915]: Session 28 logged out. Waiting for processes to exit. May 10 00:00:39.068713 systemd-logind[1915]: Removed session 28. May 10 00:00:39.074275 systemd[1]: Started cri-containerd-fa4447a6844a8ea9edc7eaba13e1f0fd3350c88dfda427ee8f59d040ac750eaf.scope - libcontainer container fa4447a6844a8ea9edc7eaba13e1f0fd3350c88dfda427ee8f59d040ac750eaf. May 10 00:00:39.084149 systemd[1]: Started sshd@28-172.31.24.82:22-147.75.109.163:59644.service - OpenSSH per-connection server daemon (147.75.109.163:59644). May 10 00:00:39.129142 containerd[1945]: time="2025-05-10T00:00:39.128985336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8zl5m,Uid:312bf3ff-a99d-4d4f-b584-cf190de6f82b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa4447a6844a8ea9edc7eaba13e1f0fd3350c88dfda427ee8f59d040ac750eaf\"" May 10 00:00:39.135951 containerd[1945]: time="2025-05-10T00:00:39.135795240Z" level=info msg="CreateContainer within sandbox \"fa4447a6844a8ea9edc7eaba13e1f0fd3350c88dfda427ee8f59d040ac750eaf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:00:39.159430 containerd[1945]: time="2025-05-10T00:00:39.159361560Z" level=info msg="CreateContainer within sandbox \"fa4447a6844a8ea9edc7eaba13e1f0fd3350c88dfda427ee8f59d040ac750eaf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"11a15314522cb110e151e21b66c704cd450b9e2a0238cc6cf53a7a9a0c621855\"" May 10 00:00:39.162458 containerd[1945]: time="2025-05-10T00:00:39.162065124Z" level=info msg="StartContainer for \"11a15314522cb110e151e21b66c704cd450b9e2a0238cc6cf53a7a9a0c621855\"" May 10 
00:00:39.208281 systemd[1]: Started cri-containerd-11a15314522cb110e151e21b66c704cd450b9e2a0238cc6cf53a7a9a0c621855.scope - libcontainer container 11a15314522cb110e151e21b66c704cd450b9e2a0238cc6cf53a7a9a0c621855. May 10 00:00:39.263192 containerd[1945]: time="2025-05-10T00:00:39.263058709Z" level=info msg="StartContainer for \"11a15314522cb110e151e21b66c704cd450b9e2a0238cc6cf53a7a9a0c621855\" returns successfully" May 10 00:00:39.267440 sshd[5235]: Accepted publickey for core from 147.75.109.163 port 59644 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:39.271330 sshd[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:39.281291 systemd-logind[1915]: New session 29 of user core. May 10 00:00:39.289722 systemd[1]: Started session-29.scope - Session 29 of User core. May 10 00:00:39.290383 kubelet[3130]: I0510 00:00:39.290309 3130 setters.go:602] "Node became not ready" node="ip-172-31-24-82" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T00:00:39Z","lastTransitionTime":"2025-05-10T00:00:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 10 00:00:39.291100 systemd[1]: cri-containerd-11a15314522cb110e151e21b66c704cd450b9e2a0238cc6cf53a7a9a0c621855.scope: Deactivated successfully. 
May 10 00:00:39.365000 containerd[1945]: time="2025-05-10T00:00:39.364769725Z" level=info msg="shim disconnected" id=11a15314522cb110e151e21b66c704cd450b9e2a0238cc6cf53a7a9a0c621855 namespace=k8s.io May 10 00:00:39.365000 containerd[1945]: time="2025-05-10T00:00:39.364855441Z" level=warning msg="cleaning up after shim disconnected" id=11a15314522cb110e151e21b66c704cd450b9e2a0238cc6cf53a7a9a0c621855 namespace=k8s.io May 10 00:00:39.365000 containerd[1945]: time="2025-05-10T00:00:39.364878877Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:00:40.029776 containerd[1945]: time="2025-05-10T00:00:40.029686417Z" level=info msg="CreateContainer within sandbox \"fa4447a6844a8ea9edc7eaba13e1f0fd3350c88dfda427ee8f59d040ac750eaf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:00:40.059721 containerd[1945]: time="2025-05-10T00:00:40.059567941Z" level=info msg="CreateContainer within sandbox \"fa4447a6844a8ea9edc7eaba13e1f0fd3350c88dfda427ee8f59d040ac750eaf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7ac1f8bc310432c16e2990fa014eca50c11c355205fc23cd5739ad8d3284bfea\"" May 10 00:00:40.060580 containerd[1945]: time="2025-05-10T00:00:40.060533221Z" level=info msg="StartContainer for \"7ac1f8bc310432c16e2990fa014eca50c11c355205fc23cd5739ad8d3284bfea\"" May 10 00:00:40.120283 systemd[1]: Started cri-containerd-7ac1f8bc310432c16e2990fa014eca50c11c355205fc23cd5739ad8d3284bfea.scope - libcontainer container 7ac1f8bc310432c16e2990fa014eca50c11c355205fc23cd5739ad8d3284bfea. May 10 00:00:40.173645 containerd[1945]: time="2025-05-10T00:00:40.173563417Z" level=info msg="StartContainer for \"7ac1f8bc310432c16e2990fa014eca50c11c355205fc23cd5739ad8d3284bfea\" returns successfully" May 10 00:00:40.185517 systemd[1]: cri-containerd-7ac1f8bc310432c16e2990fa014eca50c11c355205fc23cd5739ad8d3284bfea.scope: Deactivated successfully. 
May 10 00:00:40.226612 containerd[1945]: time="2025-05-10T00:00:40.226526678Z" level=info msg="shim disconnected" id=7ac1f8bc310432c16e2990fa014eca50c11c355205fc23cd5739ad8d3284bfea namespace=k8s.io May 10 00:00:40.227264 containerd[1945]: time="2025-05-10T00:00:40.226901246Z" level=warning msg="cleaning up after shim disconnected" id=7ac1f8bc310432c16e2990fa014eca50c11c355205fc23cd5739ad8d3284bfea namespace=k8s.io May 10 00:00:40.227264 containerd[1945]: time="2025-05-10T00:00:40.226932434Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:00:40.844042 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ac1f8bc310432c16e2990fa014eca50c11c355205fc23cd5739ad8d3284bfea-rootfs.mount: Deactivated successfully. May 10 00:00:41.038539 containerd[1945]: time="2025-05-10T00:00:41.038470106Z" level=info msg="CreateContainer within sandbox \"fa4447a6844a8ea9edc7eaba13e1f0fd3350c88dfda427ee8f59d040ac750eaf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:00:41.081232 containerd[1945]: time="2025-05-10T00:00:41.081159110Z" level=info msg="CreateContainer within sandbox \"fa4447a6844a8ea9edc7eaba13e1f0fd3350c88dfda427ee8f59d040ac750eaf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"996f0dda31f5caa11b809fb67ebd4ecd0010f1a50b63c4d47cd79d7bb30adfbd\"" May 10 00:00:41.084016 containerd[1945]: time="2025-05-10T00:00:41.082200050Z" level=info msg="StartContainer for \"996f0dda31f5caa11b809fb67ebd4ecd0010f1a50b63c4d47cd79d7bb30adfbd\"" May 10 00:00:41.171492 systemd[1]: Started cri-containerd-996f0dda31f5caa11b809fb67ebd4ecd0010f1a50b63c4d47cd79d7bb30adfbd.scope - libcontainer container 996f0dda31f5caa11b809fb67ebd4ecd0010f1a50b63c4d47cd79d7bb30adfbd. 
May 10 00:00:41.253746 containerd[1945]: time="2025-05-10T00:00:41.253685679Z" level=info msg="StartContainer for \"996f0dda31f5caa11b809fb67ebd4ecd0010f1a50b63c4d47cd79d7bb30adfbd\" returns successfully" May 10 00:00:41.263883 systemd[1]: cri-containerd-996f0dda31f5caa11b809fb67ebd4ecd0010f1a50b63c4d47cd79d7bb30adfbd.scope: Deactivated successfully. May 10 00:00:41.315742 containerd[1945]: time="2025-05-10T00:00:41.315643623Z" level=info msg="shim disconnected" id=996f0dda31f5caa11b809fb67ebd4ecd0010f1a50b63c4d47cd79d7bb30adfbd namespace=k8s.io May 10 00:00:41.316130 containerd[1945]: time="2025-05-10T00:00:41.316095447Z" level=warning msg="cleaning up after shim disconnected" id=996f0dda31f5caa11b809fb67ebd4ecd0010f1a50b63c4d47cd79d7bb30adfbd namespace=k8s.io May 10 00:00:41.316245 containerd[1945]: time="2025-05-10T00:00:41.316217967Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:00:41.752802 kubelet[3130]: E0510 00:00:41.752711 3130 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:00:41.844240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-996f0dda31f5caa11b809fb67ebd4ecd0010f1a50b63c4d47cd79d7bb30adfbd-rootfs.mount: Deactivated successfully. 
May 10 00:00:42.043578 containerd[1945]: time="2025-05-10T00:00:42.043410615Z" level=info msg="CreateContainer within sandbox \"fa4447a6844a8ea9edc7eaba13e1f0fd3350c88dfda427ee8f59d040ac750eaf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 00:00:42.076142 containerd[1945]: time="2025-05-10T00:00:42.074547123Z" level=info msg="CreateContainer within sandbox \"fa4447a6844a8ea9edc7eaba13e1f0fd3350c88dfda427ee8f59d040ac750eaf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bf08648acbf4b08b48eab8155de09dd0d0b061380a8ef7c228bb77213055be47\"" May 10 00:00:42.079887 containerd[1945]: time="2025-05-10T00:00:42.079840035Z" level=info msg="StartContainer for \"bf08648acbf4b08b48eab8155de09dd0d0b061380a8ef7c228bb77213055be47\"" May 10 00:00:42.151289 systemd[1]: Started cri-containerd-bf08648acbf4b08b48eab8155de09dd0d0b061380a8ef7c228bb77213055be47.scope - libcontainer container bf08648acbf4b08b48eab8155de09dd0d0b061380a8ef7c228bb77213055be47. May 10 00:00:42.215608 containerd[1945]: time="2025-05-10T00:00:42.215515024Z" level=info msg="StartContainer for \"bf08648acbf4b08b48eab8155de09dd0d0b061380a8ef7c228bb77213055be47\" returns successfully" May 10 00:00:42.220172 systemd[1]: cri-containerd-bf08648acbf4b08b48eab8155de09dd0d0b061380a8ef7c228bb77213055be47.scope: Deactivated successfully. 
May 10 00:00:42.277062 containerd[1945]: time="2025-05-10T00:00:42.276886324Z" level=info msg="shim disconnected" id=bf08648acbf4b08b48eab8155de09dd0d0b061380a8ef7c228bb77213055be47 namespace=k8s.io
May 10 00:00:42.277062 containerd[1945]: time="2025-05-10T00:00:42.277019020Z" level=warning msg="cleaning up after shim disconnected" id=bf08648acbf4b08b48eab8155de09dd0d0b061380a8ef7c228bb77213055be47 namespace=k8s.io
May 10 00:00:42.277062 containerd[1945]: time="2025-05-10T00:00:42.277039540Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:42.844210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf08648acbf4b08b48eab8155de09dd0d0b061380a8ef7c228bb77213055be47-rootfs.mount: Deactivated successfully.
May 10 00:00:43.051385 containerd[1945]: time="2025-05-10T00:00:43.051310600Z" level=info msg="CreateContainer within sandbox \"fa4447a6844a8ea9edc7eaba13e1f0fd3350c88dfda427ee8f59d040ac750eaf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 10 00:00:43.093285 containerd[1945]: time="2025-05-10T00:00:43.092012632Z" level=info msg="CreateContainer within sandbox \"fa4447a6844a8ea9edc7eaba13e1f0fd3350c88dfda427ee8f59d040ac750eaf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d37ed16cad4523ab2f381f84765804353a89f649520159370323b94c90b5fd61\""
May 10 00:00:43.096126 containerd[1945]: time="2025-05-10T00:00:43.094524304Z" level=info msg="StartContainer for \"d37ed16cad4523ab2f381f84765804353a89f649520159370323b94c90b5fd61\""
May 10 00:00:43.151282 systemd[1]: Started cri-containerd-d37ed16cad4523ab2f381f84765804353a89f649520159370323b94c90b5fd61.scope - libcontainer container d37ed16cad4523ab2f381f84765804353a89f649520159370323b94c90b5fd61.
May 10 00:00:43.210037 containerd[1945]: time="2025-05-10T00:00:43.209928160Z" level=info msg="StartContainer for \"d37ed16cad4523ab2f381f84765804353a89f649520159370323b94c90b5fd61\" returns successfully"
May 10 00:00:43.537596 kubelet[3130]: E0510 00:00:43.537421 3130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-k2g9k" podUID="2ae349c8-814f-498f-b873-866a1e3ae0e7"
May 10 00:00:43.972312 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 10 00:00:44.100214 kubelet[3130]: I0510 00:00:44.100104 3130 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8zl5m" podStartSLOduration=6.099917705 podStartE2EDuration="6.099917705s" podCreationTimestamp="2025-05-10 00:00:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:00:44.097880393 +0000 UTC m=+117.937744295" watchObservedRunningTime="2025-05-10 00:00:44.099917705 +0000 UTC m=+117.939781595"
May 10 00:00:45.537814 kubelet[3130]: E0510 00:00:45.537733 3130 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-k2g9k" podUID="2ae349c8-814f-498f-b873-866a1e3ae0e7"
May 10 00:00:46.479362 containerd[1945]: time="2025-05-10T00:00:46.478726269Z" level=info msg="StopPodSandbox for \"61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363\""
May 10 00:00:46.479362 containerd[1945]: time="2025-05-10T00:00:46.478891425Z" level=info msg="TearDown network for sandbox \"61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363\" successfully"
May 10 00:00:46.479362 containerd[1945]: time="2025-05-10T00:00:46.478916457Z" level=info msg="StopPodSandbox for \"61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363\" returns successfully"
May 10 00:00:46.481262 containerd[1945]: time="2025-05-10T00:00:46.479910849Z" level=info msg="RemovePodSandbox for \"61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363\""
May 10 00:00:46.481262 containerd[1945]: time="2025-05-10T00:00:46.480021597Z" level=info msg="Forcibly stopping sandbox \"61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363\""
May 10 00:00:46.481262 containerd[1945]: time="2025-05-10T00:00:46.480234273Z" level=info msg="TearDown network for sandbox \"61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363\" successfully"
May 10 00:00:46.488674 containerd[1945]: time="2025-05-10T00:00:46.488575413Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:00:46.488893 containerd[1945]: time="2025-05-10T00:00:46.488692089Z" level=info msg="RemovePodSandbox \"61f1f86d94772c41e340821ae358017a1ff69efab7f930807f1c774ed9470363\" returns successfully"
May 10 00:00:46.489506 containerd[1945]: time="2025-05-10T00:00:46.489448317Z" level=info msg="StopPodSandbox for \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\""
May 10 00:00:46.489644 containerd[1945]: time="2025-05-10T00:00:46.489593145Z" level=info msg="TearDown network for sandbox \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" successfully"
May 10 00:00:46.489644 containerd[1945]: time="2025-05-10T00:00:46.489618405Z" level=info msg="StopPodSandbox for \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" returns successfully"
May 10 00:00:46.490594 containerd[1945]: time="2025-05-10T00:00:46.490525917Z" level=info msg="RemovePodSandbox for \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\""
May 10 00:00:46.490741 containerd[1945]: time="2025-05-10T00:00:46.490637985Z" level=info msg="Forcibly stopping sandbox \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\""
May 10 00:00:46.492018 containerd[1945]: time="2025-05-10T00:00:46.490803237Z" level=info msg="TearDown network for sandbox \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" successfully"
May 10 00:00:46.497575 containerd[1945]: time="2025-05-10T00:00:46.497444601Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:00:46.497739 containerd[1945]: time="2025-05-10T00:00:46.497594169Z" level=info msg="RemovePodSandbox \"5c29c5e591b2d94ab65d51ced5ea36dca1354786ec8861550ed5c217af13aeed\" returns successfully"
May 10 00:00:48.386897 systemd-networkd[1759]: lxc_health: Link UP
May 10 00:00:48.399185 systemd-networkd[1759]: lxc_health: Gained carrier
May 10 00:00:48.403937 (udev-worker)[6049]: Network interface NamePolicy= disabled on kernel command line.
May 10 00:00:49.640218 systemd-networkd[1759]: lxc_health: Gained IPv6LL
May 10 00:00:50.454097 kubelet[3130]: E0510 00:00:50.453687 3130 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:42662->127.0.0.1:34883: write tcp 127.0.0.1:42662->127.0.0.1:34883: write: broken pipe
May 10 00:00:52.416826 ntpd[1908]: Listen normally on 14 lxc_health [fe80::6c91:beff:feeb:320b%14]:123
May 10 00:00:52.417377 ntpd[1908]: 10 May 00:00:52 ntpd[1908]: Listen normally on 14 lxc_health [fe80::6c91:beff:feeb:320b%14]:123
May 10 00:00:52.636933 systemd[1]: run-containerd-runc-k8s.io-d37ed16cad4523ab2f381f84765804353a89f649520159370323b94c90b5fd61-runc.YUB6Jv.mount: Deactivated successfully.
May 10 00:00:55.009355 sshd[5235]: pam_unix(sshd:session): session closed for user core
May 10 00:00:55.016812 systemd[1]: sshd@28-172.31.24.82:22-147.75.109.163:59644.service: Deactivated successfully.
May 10 00:00:55.024719 systemd[1]: session-29.scope: Deactivated successfully.
May 10 00:00:55.030582 systemd-logind[1915]: Session 29 logged out. Waiting for processes to exit.
May 10 00:00:55.036608 systemd-logind[1915]: Removed session 29.
May 10 00:01:09.142090 kubelet[3130]: E0510 00:01:09.141881 3130 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-82?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 10 00:01:09.324431 systemd[1]: cri-containerd-8ff7d7696e66a7d24d2a815f2fb0fbbe7eb09bc8b1151d8705677199e129fa08.scope: Deactivated successfully.
May 10 00:01:09.325077 systemd[1]: cri-containerd-8ff7d7696e66a7d24d2a815f2fb0fbbe7eb09bc8b1151d8705677199e129fa08.scope: Consumed 5.260s CPU time, 17.5M memory peak, 0B memory swap peak.
May 10 00:01:09.367811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ff7d7696e66a7d24d2a815f2fb0fbbe7eb09bc8b1151d8705677199e129fa08-rootfs.mount: Deactivated successfully.
May 10 00:01:09.378006 containerd[1945]: time="2025-05-10T00:01:09.377626362Z" level=info msg="shim disconnected" id=8ff7d7696e66a7d24d2a815f2fb0fbbe7eb09bc8b1151d8705677199e129fa08 namespace=k8s.io
May 10 00:01:09.378006 containerd[1945]: time="2025-05-10T00:01:09.377706006Z" level=warning msg="cleaning up after shim disconnected" id=8ff7d7696e66a7d24d2a815f2fb0fbbe7eb09bc8b1151d8705677199e129fa08 namespace=k8s.io
May 10 00:01:09.378006 containerd[1945]: time="2025-05-10T00:01:09.377727990Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:01:10.135423 kubelet[3130]: I0510 00:01:10.135072 3130 scope.go:117] "RemoveContainer" containerID="8ff7d7696e66a7d24d2a815f2fb0fbbe7eb09bc8b1151d8705677199e129fa08"
May 10 00:01:10.138376 containerd[1945]: time="2025-05-10T00:01:10.138306522Z" level=info msg="CreateContainer within sandbox \"eeed9899d28186bbc35b4138be822922c1518d18a81dc27a9e7a13a880c7f985\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 10 00:01:10.166666 containerd[1945]: time="2025-05-10T00:01:10.166570962Z" level=info msg="CreateContainer within sandbox \"eeed9899d28186bbc35b4138be822922c1518d18a81dc27a9e7a13a880c7f985\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"79d74b71426be2e5d0b760b93aaeb8a9c836d5f9ce4473f89b1655316a9aa5c0\""
May 10 00:01:10.167744 containerd[1945]: time="2025-05-10T00:01:10.167660550Z" level=info msg="StartContainer for \"79d74b71426be2e5d0b760b93aaeb8a9c836d5f9ce4473f89b1655316a9aa5c0\""
May 10 00:01:10.231289 systemd[1]: Started cri-containerd-79d74b71426be2e5d0b760b93aaeb8a9c836d5f9ce4473f89b1655316a9aa5c0.scope - libcontainer container 79d74b71426be2e5d0b760b93aaeb8a9c836d5f9ce4473f89b1655316a9aa5c0.
May 10 00:01:10.296835 containerd[1945]: time="2025-05-10T00:01:10.296434627Z" level=info msg="StartContainer for \"79d74b71426be2e5d0b760b93aaeb8a9c836d5f9ce4473f89b1655316a9aa5c0\" returns successfully"
May 10 00:01:14.640492 systemd[1]: cri-containerd-cc446c794757b1c2914b181c34ae9458e6489a21abf47433b13d66bf9a2ae229.scope: Deactivated successfully.
May 10 00:01:14.641211 systemd[1]: cri-containerd-cc446c794757b1c2914b181c34ae9458e6489a21abf47433b13d66bf9a2ae229.scope: Consumed 4.520s CPU time, 16.3M memory peak, 0B memory swap peak.
May 10 00:01:14.681614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc446c794757b1c2914b181c34ae9458e6489a21abf47433b13d66bf9a2ae229-rootfs.mount: Deactivated successfully.
May 10 00:01:14.697451 containerd[1945]: time="2025-05-10T00:01:14.697289593Z" level=info msg="shim disconnected" id=cc446c794757b1c2914b181c34ae9458e6489a21abf47433b13d66bf9a2ae229 namespace=k8s.io
May 10 00:01:14.697451 containerd[1945]: time="2025-05-10T00:01:14.697441801Z" level=warning msg="cleaning up after shim disconnected" id=cc446c794757b1c2914b181c34ae9458e6489a21abf47433b13d66bf9a2ae229 namespace=k8s.io
May 10 00:01:14.699879 containerd[1945]: time="2025-05-10T00:01:14.697463677Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:01:15.153202 kubelet[3130]: I0510 00:01:15.153134 3130 scope.go:117] "RemoveContainer" containerID="cc446c794757b1c2914b181c34ae9458e6489a21abf47433b13d66bf9a2ae229"
May 10 00:01:15.156562 containerd[1945]: time="2025-05-10T00:01:15.156497795Z" level=info msg="CreateContainer within sandbox \"13ddc33de777163cd7955a0ff1626119aa04d44c177833ad22e415f46fbabba3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 10 00:01:15.186746 containerd[1945]: time="2025-05-10T00:01:15.186660791Z" level=info msg="CreateContainer within sandbox \"13ddc33de777163cd7955a0ff1626119aa04d44c177833ad22e415f46fbabba3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ea5be937ae619188612e630239a4dc8020577996c843791409c4177e07d57f6b\""
May 10 00:01:15.187690 containerd[1945]: time="2025-05-10T00:01:15.187605983Z" level=info msg="StartContainer for \"ea5be937ae619188612e630239a4dc8020577996c843791409c4177e07d57f6b\""
May 10 00:01:15.244294 systemd[1]: Started cri-containerd-ea5be937ae619188612e630239a4dc8020577996c843791409c4177e07d57f6b.scope - libcontainer container ea5be937ae619188612e630239a4dc8020577996c843791409c4177e07d57f6b.
May 10 00:01:15.307327 containerd[1945]: time="2025-05-10T00:01:15.307153020Z" level=info msg="StartContainer for \"ea5be937ae619188612e630239a4dc8020577996c843791409c4177e07d57f6b\" returns successfully"
May 10 00:01:19.142986 kubelet[3130]: E0510 00:01:19.142776 3130 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-82?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 10 00:01:29.143841 kubelet[3130]: E0510 00:01:29.143719 3130 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-82?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"