May 7 23:44:51.175949 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
May 7 23:44:51.175994 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 7 22:21:35 -00 2025
May 7 23:44:51.176019 kernel: KASLR disabled due to lack of seed
May 7 23:44:51.176035 kernel: efi: EFI v2.7 by EDK II
May 7 23:44:51.176051 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a733a98 MEMRESERVE=0x78557598
May 7 23:44:51.176066 kernel: secureboot: Secure boot disabled
May 7 23:44:51.176083 kernel: ACPI: Early table checksum verification disabled
May 7 23:44:51.176099 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
May 7 23:44:51.176114 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
May 7 23:44:51.176129 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 7 23:44:51.176150 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
May 7 23:44:51.176165 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 7 23:44:51.176181 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
May 7 23:44:51.176196 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
May 7 23:44:51.176214 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
May 7 23:44:51.176235 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 7 23:44:51.176941 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
May 7 23:44:51.176970 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
May 7 23:44:51.176989 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
May 7 23:44:51.177008 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
May 7 23:44:51.177026 kernel: printk: bootconsole [uart0] enabled
May 7 23:44:51.177044 kernel: NUMA: Failed to initialise from firmware
May 7 23:44:51.177063 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
May 7 23:44:51.177080 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
May 7 23:44:51.177096 kernel: Zone ranges:
May 7 23:44:51.177113 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 7 23:44:51.177141 kernel: DMA32 empty
May 7 23:44:51.177157 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
May 7 23:44:51.177174 kernel: Movable zone start for each node
May 7 23:44:51.177191 kernel: Early memory node ranges
May 7 23:44:51.177207 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
May 7 23:44:51.177224 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
May 7 23:44:51.177240 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
May 7 23:44:51.177305 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
May 7 23:44:51.177324 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
May 7 23:44:51.177341 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
May 7 23:44:51.177358 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
May 7 23:44:51.177374 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
May 7 23:44:51.177398 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
May 7 23:44:51.177416 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
May 7 23:44:51.177441 kernel: psci: probing for conduit method from ACPI.
May 7 23:44:51.177458 kernel: psci: PSCIv1.0 detected in firmware.
May 7 23:44:51.177476 kernel: psci: Using standard PSCI v0.2 function IDs
May 7 23:44:51.177498 kernel: psci: Trusted OS migration not required
May 7 23:44:51.177516 kernel: psci: SMC Calling Convention v1.1
May 7 23:44:51.177533 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
May 7 23:44:51.177550 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
May 7 23:44:51.177568 kernel: pcpu-alloc: [0] 0 [0] 1
May 7 23:44:51.177585 kernel: Detected PIPT I-cache on CPU0
May 7 23:44:51.177602 kernel: CPU features: detected: GIC system register CPU interface
May 7 23:44:51.177618 kernel: CPU features: detected: Spectre-v2
May 7 23:44:51.177635 kernel: CPU features: detected: Spectre-v3a
May 7 23:44:51.177652 kernel: CPU features: detected: Spectre-BHB
May 7 23:44:51.177668 kernel: CPU features: detected: ARM erratum 1742098
May 7 23:44:51.177685 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
May 7 23:44:51.177707 kernel: alternatives: applying boot alternatives
May 7 23:44:51.177725 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=82f9441f083668f7b43f8fe99c3dc9ee441b8a3ef2f63ecd1e548de4dde5b207
May 7 23:44:51.177744 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 7 23:44:51.177761 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 7 23:44:51.177778 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 7 23:44:51.177795 kernel: Fallback order for Node 0: 0
May 7 23:44:51.177812 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
May 7 23:44:51.177828 kernel: Policy zone: Normal
May 7 23:44:51.177845 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 7 23:44:51.177862 kernel: software IO TLB: area num 2.
May 7 23:44:51.177883 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
May 7 23:44:51.177901 kernel: Memory: 3821176K/4030464K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 209288K reserved, 0K cma-reserved)
May 7 23:44:51.177917 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 7 23:44:51.177934 kernel: rcu: Preemptible hierarchical RCU implementation.
May 7 23:44:51.177952 kernel: rcu: RCU event tracing is enabled.
May 7 23:44:51.177970 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 7 23:44:51.177987 kernel: Trampoline variant of Tasks RCU enabled.
May 7 23:44:51.178004 kernel: Tracing variant of Tasks RCU enabled.
May 7 23:44:51.178021 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 7 23:44:51.178038 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 7 23:44:51.178054 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 7 23:44:51.178075 kernel: GICv3: 96 SPIs implemented
May 7 23:44:51.178093 kernel: GICv3: 0 Extended SPIs implemented
May 7 23:44:51.178109 kernel: Root IRQ handler: gic_handle_irq
May 7 23:44:51.178126 kernel: GICv3: GICv3 features: 16 PPIs
May 7 23:44:51.178143 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
May 7 23:44:51.178159 kernel: ITS [mem 0x10080000-0x1009ffff]
May 7 23:44:51.178177 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
May 7 23:44:51.178194 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
May 7 23:44:51.178211 kernel: GICv3: using LPI property table @0x00000004000d0000
May 7 23:44:51.178228 kernel: ITS: Using hypervisor restricted LPI range [128]
May 7 23:44:51.178245 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
May 7 23:44:51.178284 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 7 23:44:51.178308 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
May 7 23:44:51.178326 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
May 7 23:44:51.178344 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
May 7 23:44:51.178361 kernel: Console: colour dummy device 80x25
May 7 23:44:51.178378 kernel: printk: console [tty1] enabled
May 7 23:44:51.178395 kernel: ACPI: Core revision 20230628
May 7 23:44:51.178413 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
May 7 23:44:51.178430 kernel: pid_max: default: 32768 minimum: 301
May 7 23:44:51.178447 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 7 23:44:51.178464 kernel: landlock: Up and running.
May 7 23:44:51.178486 kernel: SELinux: Initializing.
May 7 23:44:51.178503 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 7 23:44:51.178521 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 7 23:44:51.178538 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 7 23:44:51.178555 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 7 23:44:51.178572 kernel: rcu: Hierarchical SRCU implementation.
May 7 23:44:51.178590 kernel: rcu: Max phase no-delay instances is 400.
May 7 23:44:51.178607 kernel: Platform MSI: ITS@0x10080000 domain created
May 7 23:44:51.178628 kernel: PCI/MSI: ITS@0x10080000 domain created
May 7 23:44:51.178645 kernel: Remapping and enabling EFI services.
May 7 23:44:51.178662 kernel: smp: Bringing up secondary CPUs ...
May 7 23:44:51.178679 kernel: Detected PIPT I-cache on CPU1
May 7 23:44:51.178696 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
May 7 23:44:51.178713 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
May 7 23:44:51.178731 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
May 7 23:44:51.178747 kernel: smp: Brought up 1 node, 2 CPUs
May 7 23:44:51.178764 kernel: SMP: Total of 2 processors activated.
May 7 23:44:51.178781 kernel: CPU features: detected: 32-bit EL0 Support
May 7 23:44:51.178803 kernel: CPU features: detected: 32-bit EL1 Support
May 7 23:44:51.178821 kernel: CPU features: detected: CRC32 instructions
May 7 23:44:51.178850 kernel: CPU: All CPU(s) started at EL1
May 7 23:44:51.178873 kernel: alternatives: applying system-wide alternatives
May 7 23:44:51.178891 kernel: devtmpfs: initialized
May 7 23:44:51.178910 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 7 23:44:51.178928 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 7 23:44:51.178945 kernel: pinctrl core: initialized pinctrl subsystem
May 7 23:44:51.178964 kernel: SMBIOS 3.0.0 present.
May 7 23:44:51.178987 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
May 7 23:44:51.179005 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 7 23:44:51.179023 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 7 23:44:51.179042 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 7 23:44:51.179060 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 7 23:44:51.179078 kernel: audit: initializing netlink subsys (disabled)
May 7 23:44:51.179096 kernel: audit: type=2000 audit(0.218:1): state=initialized audit_enabled=0 res=1
May 7 23:44:51.179118 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 7 23:44:51.179137 kernel: cpuidle: using governor menu
May 7 23:44:51.179155 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 7 23:44:51.179173 kernel: ASID allocator initialised with 65536 entries
May 7 23:44:51.179191 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 7 23:44:51.179208 kernel: Serial: AMBA PL011 UART driver
May 7 23:44:51.179226 kernel: Modules: 17744 pages in range for non-PLT usage
May 7 23:44:51.179244 kernel: Modules: 509264 pages in range for PLT usage
May 7 23:44:51.179297 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 7 23:44:51.179323 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 7 23:44:51.179342 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 7 23:44:51.179361 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 7 23:44:51.179384 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 7 23:44:51.179426 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 7 23:44:51.179485 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 7 23:44:51.179522 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 7 23:44:51.179543 kernel: ACPI: Added _OSI(Module Device)
May 7 23:44:51.179580 kernel: ACPI: Added _OSI(Processor Device)
May 7 23:44:51.179607 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 7 23:44:51.179625 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 7 23:44:51.179644 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 7 23:44:51.179661 kernel: ACPI: Interpreter enabled
May 7 23:44:51.179679 kernel: ACPI: Using GIC for interrupt routing
May 7 23:44:51.179697 kernel: ACPI: MCFG table detected, 1 entries
May 7 23:44:51.179715 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
May 7 23:44:51.180046 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 7 23:44:51.180340 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 7 23:44:51.180547 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 7 23:44:51.180747 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
May 7 23:44:51.180945 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
May 7 23:44:51.180971 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
May 7 23:44:51.180989 kernel: acpiphp: Slot [1] registered
May 7 23:44:51.181008 kernel: acpiphp: Slot [2] registered
May 7 23:44:51.181025 kernel: acpiphp: Slot [3] registered
May 7 23:44:51.181050 kernel: acpiphp: Slot [4] registered
May 7 23:44:51.181068 kernel: acpiphp: Slot [5] registered
May 7 23:44:51.181086 kernel: acpiphp: Slot [6] registered
May 7 23:44:51.181105 kernel: acpiphp: Slot [7] registered
May 7 23:44:51.181122 kernel: acpiphp: Slot [8] registered
May 7 23:44:51.181140 kernel: acpiphp: Slot [9] registered
May 7 23:44:51.181158 kernel: acpiphp: Slot [10] registered
May 7 23:44:51.181176 kernel: acpiphp: Slot [11] registered
May 7 23:44:51.181194 kernel: acpiphp: Slot [12] registered
May 7 23:44:51.181211 kernel: acpiphp: Slot [13] registered
May 7 23:44:51.181234 kernel: acpiphp: Slot [14] registered
May 7 23:44:51.182452 kernel: acpiphp: Slot [15] registered
May 7 23:44:51.182489 kernel: acpiphp: Slot [16] registered
May 7 23:44:51.182508 kernel: acpiphp: Slot [17] registered
May 7 23:44:51.182526 kernel: acpiphp: Slot [18] registered
May 7 23:44:51.182544 kernel: acpiphp: Slot [19] registered
May 7 23:44:51.182562 kernel: acpiphp: Slot [20] registered
May 7 23:44:51.182580 kernel: acpiphp: Slot [21] registered
May 7 23:44:51.182598 kernel: acpiphp: Slot [22] registered
May 7 23:44:51.182625 kernel: acpiphp: Slot [23] registered
May 7 23:44:51.182644 kernel: acpiphp: Slot [24] registered
May 7 23:44:51.182661 kernel: acpiphp: Slot [25] registered
May 7 23:44:51.182680 kernel: acpiphp: Slot [26] registered
May 7 23:44:51.182697 kernel: acpiphp: Slot [27] registered
May 7 23:44:51.182715 kernel: acpiphp: Slot [28] registered
May 7 23:44:51.182733 kernel: acpiphp: Slot [29] registered
May 7 23:44:51.182751 kernel: acpiphp: Slot [30] registered
May 7 23:44:51.182768 kernel: acpiphp: Slot [31] registered
May 7 23:44:51.182787 kernel: PCI host bridge to bus 0000:00
May 7 23:44:51.183039 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
May 7 23:44:51.183237 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 7 23:44:51.183467 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
May 7 23:44:51.183679 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
May 7 23:44:51.183922 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
May 7 23:44:51.184157 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
May 7 23:44:51.188664 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
May 7 23:44:51.188932 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 7 23:44:51.189139 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
May 7 23:44:51.189370 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
May 7 23:44:51.189586 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 7 23:44:51.189794 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
May 7 23:44:51.190009 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
May 7 23:44:51.190227 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
May 7 23:44:51.192717 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
May 7 23:44:51.192939 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
May 7 23:44:51.193143 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
May 7 23:44:51.193388 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
May 7 23:44:51.193600 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
May 7 23:44:51.193805 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
May 7 23:44:51.194000 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
May 7 23:44:51.194178 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 7 23:44:51.196601 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
May 7 23:44:51.196645 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 7 23:44:51.196665 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 7 23:44:51.196684 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 7 23:44:51.196702 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 7 23:44:51.196721 kernel: iommu: Default domain type: Translated
May 7 23:44:51.196749 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 7 23:44:51.196768 kernel: efivars: Registered efivars operations
May 7 23:44:51.196785 kernel: vgaarb: loaded
May 7 23:44:51.196804 kernel: clocksource: Switched to clocksource arch_sys_counter
May 7 23:44:51.196822 kernel: VFS: Disk quotas dquot_6.6.0
May 7 23:44:51.196840 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 7 23:44:51.196858 kernel: pnp: PnP ACPI init
May 7 23:44:51.197069 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
May 7 23:44:51.197101 kernel: pnp: PnP ACPI: found 1 devices
May 7 23:44:51.197120 kernel: NET: Registered PF_INET protocol family
May 7 23:44:51.197139 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 7 23:44:51.197158 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 7 23:44:51.197176 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 7 23:44:51.197958 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 7 23:44:51.197980 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 7 23:44:51.197999 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 7 23:44:51.198017 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 7 23:44:51.198042 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 7 23:44:51.198061 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 7 23:44:51.198079 kernel: PCI: CLS 0 bytes, default 64
May 7 23:44:51.198097 kernel: kvm [1]: HYP mode not available
May 7 23:44:51.198115 kernel: Initialise system trusted keyrings
May 7 23:44:51.198133 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 7 23:44:51.198151 kernel: Key type asymmetric registered
May 7 23:44:51.198169 kernel: Asymmetric key parser 'x509' registered
May 7 23:44:51.198186 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 7 23:44:51.198209 kernel: io scheduler mq-deadline registered
May 7 23:44:51.198227 kernel: io scheduler kyber registered
May 7 23:44:51.198245 kernel: io scheduler bfq registered
May 7 23:44:51.198501 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
May 7 23:44:51.198528 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 7 23:44:51.198547 kernel: ACPI: button: Power Button [PWRB]
May 7 23:44:51.198565 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
May 7 23:44:51.198583 kernel: ACPI: button: Sleep Button [SLPB]
May 7 23:44:51.198611 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 7 23:44:51.198630 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 7 23:44:51.199649 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
May 7 23:44:51.199685 kernel: printk: console [ttyS0] disabled
May 7 23:44:51.199704 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
May 7 23:44:51.199723 kernel: printk: console [ttyS0] enabled
May 7 23:44:51.199741 kernel: printk: bootconsole [uart0] disabled
May 7 23:44:51.199759 kernel: thunder_xcv, ver 1.0
May 7 23:44:51.199777 kernel: thunder_bgx, ver 1.0
May 7 23:44:51.199795 kernel: nicpf, ver 1.0
May 7 23:44:51.199820 kernel: nicvf, ver 1.0
May 7 23:44:51.200050 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 7 23:44:51.200603 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-07T23:44:50 UTC (1746661490)
May 7 23:44:51.200635 kernel: hid: raw HID events driver (C) Jiri Kosina
May 7 23:44:51.200655 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
May 7 23:44:51.200673 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 7 23:44:51.200691 kernel: watchdog: Hard watchdog permanently disabled
May 7 23:44:51.200717 kernel: NET: Registered PF_INET6 protocol family
May 7 23:44:51.200735 kernel: Segment Routing with IPv6
May 7 23:44:51.200753 kernel: In-situ OAM (IOAM) with IPv6
May 7 23:44:51.200771 kernel: NET: Registered PF_PACKET protocol family
May 7 23:44:51.200789 kernel: Key type dns_resolver registered
May 7 23:44:51.200807 kernel: registered taskstats version 1
May 7 23:44:51.200825 kernel: Loading compiled-in X.509 certificates
May 7 23:44:51.200843 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: f45666b1b2057b901dda15e57012558a26abdeb0'
May 7 23:44:51.200861 kernel: Key type .fscrypt registered
May 7 23:44:51.200927 kernel: Key type fscrypt-provisioning registered
May 7 23:44:51.200956 kernel: ima: No TPM chip found, activating TPM-bypass!
May 7 23:44:51.200975 kernel: ima: Allocated hash algorithm: sha1
May 7 23:44:51.200993 kernel: ima: No architecture policies found
May 7 23:44:51.203080 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 7 23:44:51.203103 kernel: clk: Disabling unused clocks
May 7 23:44:51.203121 kernel: Freeing unused kernel memory: 38336K
May 7 23:44:51.203139 kernel: Run /init as init process
May 7 23:44:51.203157 kernel: with arguments:
May 7 23:44:51.203175 kernel: /init
May 7 23:44:51.203203 kernel: with environment:
May 7 23:44:51.203221 kernel: HOME=/
May 7 23:44:51.203238 kernel: TERM=linux
May 7 23:44:51.203285 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 7 23:44:51.203309 systemd[1]: Successfully made /usr/ read-only.
May 7 23:44:51.203334 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 7 23:44:51.203355 systemd[1]: Detected virtualization amazon.
May 7 23:44:51.203380 systemd[1]: Detected architecture arm64.
May 7 23:44:51.203400 systemd[1]: Running in initrd.
May 7 23:44:51.203419 systemd[1]: No hostname configured, using default hostname.
May 7 23:44:51.203439 systemd[1]: Hostname set to .
May 7 23:44:51.203459 systemd[1]: Initializing machine ID from VM UUID.
May 7 23:44:51.203478 systemd[1]: Queued start job for default target initrd.target.
May 7 23:44:51.203498 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 7 23:44:51.203518 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 7 23:44:51.203539 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 7 23:44:51.203579 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 7 23:44:51.203602 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 7 23:44:51.203623 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 7 23:44:51.203645 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 7 23:44:51.203665 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 7 23:44:51.203685 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 7 23:44:51.203711 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 7 23:44:51.203731 systemd[1]: Reached target paths.target - Path Units.
May 7 23:44:51.203750 systemd[1]: Reached target slices.target - Slice Units.
May 7 23:44:51.203770 systemd[1]: Reached target swap.target - Swaps.
May 7 23:44:51.203789 systemd[1]: Reached target timers.target - Timer Units.
May 7 23:44:51.203808 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 7 23:44:51.203828 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 7 23:44:51.203848 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 7 23:44:51.203867 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 7 23:44:51.203891 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 7 23:44:51.203911 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 7 23:44:51.203930 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 7 23:44:51.203950 systemd[1]: Reached target sockets.target - Socket Units.
May 7 23:44:51.203969 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 7 23:44:51.203989 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 7 23:44:51.204008 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 7 23:44:51.204028 systemd[1]: Starting systemd-fsck-usr.service...
May 7 23:44:51.204052 systemd[1]: Starting systemd-journald.service - Journal Service...
May 7 23:44:51.204071 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 7 23:44:51.204091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 7 23:44:51.204111 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 7 23:44:51.204131 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 7 23:44:51.204151 systemd[1]: Finished systemd-fsck-usr.service.
May 7 23:44:51.204176 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 7 23:44:51.204240 systemd-journald[252]: Collecting audit messages is disabled.
May 7 23:44:51.207391 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:44:51.207421 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 7 23:44:51.207442 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 7 23:44:51.207462 kernel: Bridge firewalling registered
May 7 23:44:51.207482 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 7 23:44:51.207502 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 7 23:44:51.207523 systemd-journald[252]: Journal started
May 7 23:44:51.207577 systemd-journald[252]: Runtime Journal (/run/log/journal/ec297ec5cb7e4bdad6f6f4631e157132) is 8M, max 75.3M, 67.3M free.
May 7 23:44:51.154381 systemd-modules-load[253]: Inserted module 'overlay'
May 7 23:44:51.189657 systemd-modules-load[253]: Inserted module 'br_netfilter'
May 7 23:44:51.212807 systemd[1]: Started systemd-journald.service - Journal Service.
May 7 23:44:51.222595 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 7 23:44:51.224991 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 7 23:44:51.236588 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 7 23:44:51.270338 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 7 23:44:51.278469 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 7 23:44:51.295535 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 7 23:44:51.309564 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 7 23:44:51.313217 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 7 23:44:51.327653 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 7 23:44:51.347450 dracut-cmdline[287]: dracut-dracut-053
May 7 23:44:51.354412 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=82f9441f083668f7b43f8fe99c3dc9ee441b8a3ef2f63ecd1e548de4dde5b207
May 7 23:44:51.421662 systemd-resolved[291]: Positive Trust Anchors:
May 7 23:44:51.421698 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 7 23:44:51.421758 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 7 23:44:51.498291 kernel: SCSI subsystem initialized
May 7 23:44:51.508274 kernel: Loading iSCSI transport class v2.0-870.
May 7 23:44:51.519295 kernel: iscsi: registered transport (tcp)
May 7 23:44:51.540900 kernel: iscsi: registered transport (qla4xxx)
May 7 23:44:51.540977 kernel: QLogic iSCSI HBA Driver
May 7 23:44:51.650288 kernel: random: crng init done
May 7 23:44:51.650595 systemd-resolved[291]: Defaulting to hostname 'linux'.
May 7 23:44:51.654029 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 7 23:44:51.657524 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 7 23:44:51.682347 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 7 23:44:51.692610 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 7 23:44:51.735127 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 7 23:44:51.735218 kernel: device-mapper: uevent: version 1.0.3 May 7 23:44:51.737286 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 7 23:44:51.801295 kernel: raid6: neonx8 gen() 6634 MB/s May 7 23:44:51.818282 kernel: raid6: neonx4 gen() 6580 MB/s May 7 23:44:51.835283 kernel: raid6: neonx2 gen() 5472 MB/s May 7 23:44:51.852282 kernel: raid6: neonx1 gen() 3974 MB/s May 7 23:44:51.869282 kernel: raid6: int64x8 gen() 3647 MB/s May 7 23:44:51.886282 kernel: raid6: int64x4 gen() 3723 MB/s May 7 23:44:51.903282 kernel: raid6: int64x2 gen() 3631 MB/s May 7 23:44:51.921108 kernel: raid6: int64x1 gen() 2777 MB/s May 7 23:44:51.921140 kernel: raid6: using algorithm neonx8 gen() 6634 MB/s May 7 23:44:51.939074 kernel: raid6: .... xor() 4781 MB/s, rmw enabled May 7 23:44:51.939118 kernel: raid6: using neon recovery algorithm May 7 23:44:51.946286 kernel: xor: measuring software checksum speed May 7 23:44:51.946341 kernel: 8regs : 11947 MB/sec May 7 23:44:51.949665 kernel: 32regs : 12008 MB/sec May 7 23:44:51.949697 kernel: arm64_neon : 9544 MB/sec May 7 23:44:51.949722 kernel: xor: using function: 32regs (12008 MB/sec) May 7 23:44:52.032299 kernel: Btrfs loaded, zoned=no, fsverity=no May 7 23:44:52.051029 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 7 23:44:52.060642 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 7 23:44:52.108722 systemd-udevd[471]: Using default interface naming scheme 'v255'. May 7 23:44:52.119782 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 7 23:44:52.139597 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 7 23:44:52.179915 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation May 7 23:44:52.236339 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
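Editor's note: the `raid6:` lines above show the kernel benchmarking every available parity-generation routine at boot and keeping the fastest one (here `neonx8`), and similarly picking a xor checksum routine. The selection logic, reduced to its essence with the throughput numbers copied from this log, looks like:

```python
# gen() throughputs (MB/s) exactly as benchmarked in the log above
gen_speeds = {
    "neonx8": 6634, "neonx4": 6580, "neonx2": 5472, "neonx1": 3974,
    "int64x8": 3647, "int64x4": 3723, "int64x2": 3631, "int64x1": 2777,
}

# The kernel keeps whichever implementation measured fastest
best = max(gen_speeds, key=gen_speeds.get)
print(f"raid6: using algorithm {best} gen() {gen_speeds[best]} MB/s")
```

The same explains the xor line: `32regs` (12008 MB/s) beats `arm64_neon` (9544 MB/s) on this instance, so the scalar routine wins despite NEON being available.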
May 7 23:44:52.249546 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 7 23:44:52.362558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 7 23:44:52.373523 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 7 23:44:52.420023 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 7 23:44:52.423172 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 7 23:44:52.429506 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 7 23:44:52.433572 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 7 23:44:52.453615 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 7 23:44:52.490410 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 7 23:44:52.571830 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 7 23:44:52.571894 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) May 7 23:44:52.599192 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 7 23:44:52.600495 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 7 23:44:52.600748 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:2c:30:ef:4a:a1 May 7 23:44:52.600978 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 7 23:44:52.601006 kernel: nvme nvme0: pci function 0000:00:04.0 May 7 23:44:52.582540 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 7 23:44:52.582649 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 7 23:44:52.586727 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 7 23:44:52.591395 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 7 23:44:52.591523 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 7 23:44:52.596502 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 7 23:44:52.603972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 7 23:44:52.624507 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 7 23:44:52.614638 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 7 23:44:52.639068 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 7 23:44:52.639135 kernel: GPT:9289727 != 16777215 May 7 23:44:52.639161 kernel: GPT:Alternate GPT header not at the end of the disk. May 7 23:44:52.639938 kernel: GPT:9289727 != 16777215 May 7 23:44:52.640973 kernel: GPT: Use GNU Parted to correct GPT errors. May 7 23:44:52.641825 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 7 23:44:52.646984 (udev-worker)[523]: Network interface NamePolicy= disabled on kernel command line. May 7 23:44:52.651962 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 7 23:44:52.660821 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 7 23:44:52.714357 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 7 23:44:52.769816 kernel: BTRFS: device fsid a4d66dad-2d34-4ed0-87a7-f6519531b08f devid 1 transid 42 /dev/nvme0n1p3 scanned by (udev-worker) (524) May 7 23:44:52.788298 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (538) May 7 23:44:52.890529 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 7 23:44:52.916801 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 7 23:44:52.940443 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
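Editor's note: the `GPT:9289727 != 16777215` warnings above are the usual symptom of a disk image written for a smaller disk than the volume it was attached to: a valid GPT keeps its backup header at the very last LBA, but here it still sits where the original image ended. The arithmetic behind the two logged numbers (interpreting them, per the GPT layout, as the backup header's LBA and the disk's last LBA, with 512-byte sectors):

```python
SECTOR = 512
backup_header_lba = 9_289_727    # where the image left its backup GPT header
disk_last_lba = 16_777_215       # actual last LBA of the provisioned EBS volume

# A well-formed GPT keeps the backup header at the last LBA; the mismatch
# below is exactly what produced the kernel warning in the log.
assert backup_header_lba != disk_last_lba
print(f"image ≈ {(backup_header_lba + 1) * SECTOR / 2**30:.1f} GiB, "
      f"disk = {(disk_last_lba + 1) * SECTOR / 2**30:.0f} GiB")
```

This is benign on first boot: tools such as `sgdisk -e` relocate the backup header to the end of the disk, and Flatcar's first-boot partition resize handles it automatically.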
May 7 23:44:52.961046 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. May 7 23:44:52.961428 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 7 23:44:52.984498 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 7 23:44:52.995146 disk-uuid[663]: Primary Header is updated. May 7 23:44:52.995146 disk-uuid[663]: Secondary Entries is updated. May 7 23:44:52.995146 disk-uuid[663]: Secondary Header is updated. May 7 23:44:53.006294 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 7 23:44:54.024321 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 7 23:44:54.025829 disk-uuid[664]: The operation has completed successfully. May 7 23:44:54.193510 systemd[1]: disk-uuid.service: Deactivated successfully. May 7 23:44:54.197314 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 7 23:44:54.304558 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 7 23:44:54.321218 sh[925]: Success May 7 23:44:54.339327 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 7 23:44:54.450173 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 7 23:44:54.454154 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 7 23:44:54.469513 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
May 7 23:44:54.502828 kernel: BTRFS info (device dm-0): first mount of filesystem a4d66dad-2d34-4ed0-87a7-f6519531b08f May 7 23:44:54.502896 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 7 23:44:54.502923 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 7 23:44:54.504685 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 7 23:44:54.505977 kernel: BTRFS info (device dm-0): using free space tree May 7 23:44:54.633295 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 7 23:44:54.647591 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 7 23:44:54.651516 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 7 23:44:54.663605 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 7 23:44:54.671633 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 7 23:44:54.710905 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 7 23:44:54.710973 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 7 23:44:54.712223 kernel: BTRFS info (device nvme0n1p6): using free space tree May 7 23:44:54.728290 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 7 23:44:54.737311 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 7 23:44:54.741499 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 7 23:44:54.751625 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 7 23:44:54.846915 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 7 23:44:54.884158 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 7 23:44:54.933754 systemd-networkd[1126]: lo: Link UP May 7 23:44:54.933775 systemd-networkd[1126]: lo: Gained carrier May 7 23:44:54.938992 systemd-networkd[1126]: Enumeration completed May 7 23:44:54.940483 systemd[1]: Started systemd-networkd.service - Network Configuration. May 7 23:44:54.942793 systemd-networkd[1126]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 7 23:44:54.942801 systemd-networkd[1126]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 7 23:44:54.946103 systemd[1]: Reached target network.target - Network. May 7 23:44:54.956405 systemd-networkd[1126]: eth0: Link UP May 7 23:44:54.956424 systemd-networkd[1126]: eth0: Gained carrier May 7 23:44:54.956442 systemd-networkd[1126]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 7 23:44:54.973320 systemd-networkd[1126]: eth0: DHCPv4 address 172.31.25.188/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 7 23:44:55.091632 ignition[1046]: Ignition 2.20.0 May 7 23:44:55.091663 ignition[1046]: Stage: fetch-offline May 7 23:44:55.092094 ignition[1046]: no configs at "/usr/lib/ignition/base.d" May 7 23:44:55.098101 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 7 23:44:55.092169 ignition[1046]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 7 23:44:55.093688 ignition[1046]: Ignition finished successfully May 7 23:44:55.114629 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
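Editor's note: the DHCPv4 line above reports both the leased address (`172.31.25.188/20`) and the gateway (`172.31.16.1`). A quick sanity check with the values as logged confirms the gateway is on-link for that /20, which is why networkd can install the default route directly:

```python
import ipaddress

# Address and gateway exactly as systemd-networkd logged them
lease = ipaddress.ip_interface("172.31.25.188/20")
gateway = ipaddress.ip_address("172.31.16.1")

# /20 places the instance in 172.31.16.0/20, so the gateway is reachable
# without any intermediate route
assert gateway in lease.network
print(lease.network)
```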
May 7 23:44:55.139425 ignition[1137]: Ignition 2.20.0 May 7 23:44:55.139450 ignition[1137]: Stage: fetch May 7 23:44:55.140031 ignition[1137]: no configs at "/usr/lib/ignition/base.d" May 7 23:44:55.140056 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 7 23:44:55.140213 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 7 23:44:55.176247 ignition[1137]: PUT result: OK May 7 23:44:55.180062 ignition[1137]: parsed url from cmdline: "" May 7 23:44:55.180085 ignition[1137]: no config URL provided May 7 23:44:55.180100 ignition[1137]: reading system config file "/usr/lib/ignition/user.ign" May 7 23:44:55.180154 ignition[1137]: no config at "/usr/lib/ignition/user.ign" May 7 23:44:55.180188 ignition[1137]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 7 23:44:55.183779 ignition[1137]: PUT result: OK May 7 23:44:55.184818 ignition[1137]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 7 23:44:55.187930 ignition[1137]: GET result: OK May 7 23:44:55.189157 ignition[1137]: parsing config with SHA512: 08941edb601e89c816bac1b7fe3fde948fece716771f269388740667c424d0bb16834cff7a1104c00af2a5d9047a667587321133f98d4e899c56a63d42884b14 May 7 23:44:55.202548 unknown[1137]: fetched base config from "system" May 7 23:44:55.202576 unknown[1137]: fetched base config from "system" May 7 23:44:55.202590 unknown[1137]: fetched user config from "aws" May 7 23:44:55.207671 ignition[1137]: fetch: fetch complete May 7 23:44:55.207694 ignition[1137]: fetch: fetch passed May 7 23:44:55.208957 ignition[1137]: Ignition finished successfully May 7 23:44:55.216314 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 7 23:44:55.231498 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
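Editor's note: the `PUT http://169.254.169.254/latest/api/token` followed by an authenticated `GET .../user-data` above is the IMDSv2 flow: a session token is obtained via PUT, then presented on every metadata read. A minimal sketch of the two requests Ignition issues (request construction only; on an instance these would be executed with `urllib.request.urlopen`):

```python
import urllib.request

IMDS = "http://169.254.169.254"  # link-local metadata endpoint seen in the log

def token_request(ttl: int = 21600) -> urllib.request.Request:
    """Build the IMDSv2 session-token request (the PUT in the log)."""
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )

def user_data_request(token: str) -> urllib.request.Request:
    """Build the authenticated user-data GET that follows it."""
    return urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
```

The link-local address is only reachable from within the instance, which is also why the log shows no DNS lookup for it.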
May 7 23:44:55.261006 ignition[1143]: Ignition 2.20.0 May 7 23:44:55.261040 ignition[1143]: Stage: kargs May 7 23:44:55.262375 ignition[1143]: no configs at "/usr/lib/ignition/base.d" May 7 23:44:55.262401 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 7 23:44:55.262555 ignition[1143]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 7 23:44:55.266032 ignition[1143]: PUT result: OK May 7 23:44:55.279793 ignition[1143]: kargs: kargs passed May 7 23:44:55.279898 ignition[1143]: Ignition finished successfully May 7 23:44:55.284885 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 7 23:44:55.294600 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 7 23:44:55.323580 ignition[1150]: Ignition 2.20.0 May 7 23:44:55.323611 ignition[1150]: Stage: disks May 7 23:44:55.324708 ignition[1150]: no configs at "/usr/lib/ignition/base.d" May 7 23:44:55.324736 ignition[1150]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 7 23:44:55.324885 ignition[1150]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 7 23:44:55.326644 ignition[1150]: PUT result: OK May 7 23:44:55.336196 ignition[1150]: disks: disks passed May 7 23:44:55.337539 ignition[1150]: Ignition finished successfully May 7 23:44:55.341205 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 7 23:44:55.345759 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 7 23:44:55.349778 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 7 23:44:55.352064 systemd[1]: Reached target local-fs.target - Local File Systems. May 7 23:44:55.355053 systemd[1]: Reached target sysinit.target - System Initialization. May 7 23:44:55.361611 systemd[1]: Reached target basic.target - Basic System. May 7 23:44:55.374093 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 7 23:44:55.421317 systemd-fsck[1158]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 7 23:44:55.425352 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 7 23:44:55.438555 systemd[1]: Mounting sysroot.mount - /sysroot... May 7 23:44:55.522300 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f291ddc8-664e-45dc-bbf9-8344dca1a297 r/w with ordered data mode. Quota mode: none. May 7 23:44:55.523374 systemd[1]: Mounted sysroot.mount - /sysroot. May 7 23:44:55.526388 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 7 23:44:55.542539 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 7 23:44:55.547449 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 7 23:44:55.551904 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 7 23:44:55.552578 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 7 23:44:55.552630 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 7 23:44:55.571022 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 7 23:44:55.580546 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 7 23:44:55.591288 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1177) May 7 23:44:55.595208 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 7 23:44:55.595278 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 7 23:44:55.595318 kernel: BTRFS info (device nvme0n1p6): using free space tree May 7 23:44:55.608588 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 7 23:44:55.610318 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 7 23:44:56.013589 initrd-setup-root[1201]: cut: /sysroot/etc/passwd: No such file or directory May 7 23:44:56.033704 initrd-setup-root[1208]: cut: /sysroot/etc/group: No such file or directory May 7 23:44:56.042745 initrd-setup-root[1215]: cut: /sysroot/etc/shadow: No such file or directory May 7 23:44:56.061705 initrd-setup-root[1222]: cut: /sysroot/etc/gshadow: No such file or directory May 7 23:44:56.355398 systemd-networkd[1126]: eth0: Gained IPv6LL May 7 23:44:56.393020 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 7 23:44:56.403444 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 7 23:44:56.415081 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 7 23:44:56.433364 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 7 23:44:56.433352 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 7 23:44:56.470072 ignition[1290]: INFO : Ignition 2.20.0 May 7 23:44:56.470072 ignition[1290]: INFO : Stage: mount May 7 23:44:56.473414 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 7 23:44:56.475200 ignition[1290]: INFO : no configs at "/usr/lib/ignition/base.d" May 7 23:44:56.475200 ignition[1290]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 7 23:44:56.481747 ignition[1290]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 7 23:44:56.481747 ignition[1290]: INFO : PUT result: OK May 7 23:44:56.487932 ignition[1290]: INFO : mount: mount passed May 7 23:44:56.489537 ignition[1290]: INFO : Ignition finished successfully May 7 23:44:56.493213 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 7 23:44:56.505134 systemd[1]: Starting ignition-files.service - Ignition (files)... May 7 23:44:56.531619 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 7 23:44:56.562397 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1301) May 7 23:44:56.565947 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28594331-30e6-4c58-8ddc-9d8448a320bb May 7 23:44:56.565985 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 7 23:44:56.566011 kernel: BTRFS info (device nvme0n1p6): using free space tree May 7 23:44:56.573308 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 7 23:44:56.575228 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 7 23:44:56.608284 ignition[1318]: INFO : Ignition 2.20.0 May 7 23:44:56.611356 ignition[1318]: INFO : Stage: files May 7 23:44:56.611356 ignition[1318]: INFO : no configs at "/usr/lib/ignition/base.d" May 7 23:44:56.611356 ignition[1318]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 7 23:44:56.611356 ignition[1318]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 7 23:44:56.619339 ignition[1318]: INFO : PUT result: OK May 7 23:44:56.623219 ignition[1318]: DEBUG : files: compiled without relabeling support, skipping May 7 23:44:56.626242 ignition[1318]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 7 23:44:56.626242 ignition[1318]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 7 23:44:56.662412 ignition[1318]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 7 23:44:56.665265 ignition[1318]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 7 23:44:56.668119 unknown[1318]: wrote ssh authorized keys file for user: core May 7 23:44:56.670219 ignition[1318]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 7 23:44:56.691171 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 7 
23:44:56.695018 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 7 23:44:56.824606 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 7 23:44:57.056402 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 7 23:44:57.056402 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 7 23:44:57.063205 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 7 23:44:57.405573 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 7 23:44:57.545591 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 7 23:44:57.550722 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 7 23:44:57.550722 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 7 23:44:57.550722 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 7 23:44:57.550722 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 7 23:44:57.550722 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 7 23:44:57.550722 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 7 23:44:57.550722 
ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 7 23:44:57.550722 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 7 23:44:57.550722 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 7 23:44:57.550722 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 7 23:44:57.550722 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 7 23:44:57.550722 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 7 23:44:57.550722 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 7 23:44:57.550722 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 7 23:44:57.845393 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 7 23:44:58.191534 ignition[1318]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 7 23:44:58.195754 ignition[1318]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 7 23:44:58.195754 ignition[1318]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" 
May 7 23:44:58.195754 ignition[1318]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 7 23:44:58.195754 ignition[1318]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 7 23:44:58.195754 ignition[1318]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 7 23:44:58.195754 ignition[1318]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 7 23:44:58.195754 ignition[1318]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 7 23:44:58.195754 ignition[1318]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 7 23:44:58.195754 ignition[1318]: INFO : files: files passed May 7 23:44:58.195754 ignition[1318]: INFO : Ignition finished successfully May 7 23:44:58.223188 systemd[1]: Finished ignition-files.service - Ignition (files). May 7 23:44:58.234566 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 7 23:44:58.239568 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 7 23:44:58.250075 systemd[1]: ignition-quench.service: Deactivated successfully. May 7 23:44:58.252304 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 7 23:44:58.276181 initrd-setup-root-after-ignition[1346]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 7 23:44:58.276181 initrd-setup-root-after-ignition[1346]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 7 23:44:58.282556 initrd-setup-root-after-ignition[1350]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 7 23:44:58.288054 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
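Editor's note: the files stage above (ssh keys for `core`, the helm and cilium downloads, the kubernetes sysext link, the `prepare-helm.service` unit) is driven by the user-provided Ignition config fetched from IMDS. A hypothetical Butane reconstruction of part of that config, using only artifacts named in this log (key names per the Butane `flatcar` variant spec; the ssh key and unit body are placeholders, not recovered from the log):

```yaml
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...   # placeholder; actual key not in the log
storage:
  files:
    - path: /opt/helm-v3.13.2-linux-arm64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw
      contents:
        source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
      # contents: | ... (unit body elided; the log only shows it being written)
```

Butane renders this to the Ignition JSON whose SHA512 the fetch stage logged before applying it.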
May 7 23:44:58.293869 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 7 23:44:58.310060 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 7 23:44:58.353526 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 7 23:44:58.353928 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 7 23:44:58.359714 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 7 23:44:58.362793 systemd[1]: Reached target initrd.target - Initrd Default Target. May 7 23:44:58.364784 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 7 23:44:58.384040 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 7 23:44:58.411340 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 7 23:44:58.420617 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 7 23:44:58.447377 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 7 23:44:58.452380 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 7 23:44:58.455162 systemd[1]: Stopped target timers.target - Timer Units. May 7 23:44:58.461276 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 7 23:44:58.461688 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 7 23:44:58.468712 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 7 23:44:58.470996 systemd[1]: Stopped target basic.target - Basic System. May 7 23:44:58.476384 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 7 23:44:58.479112 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 7 23:44:58.485109 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
May 7 23:44:58.487663 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 7 23:44:58.493230 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 7 23:44:58.495712 systemd[1]: Stopped target sysinit.target - System Initialization. May 7 23:44:58.497805 systemd[1]: Stopped target local-fs.target - Local File Systems. May 7 23:44:58.499842 systemd[1]: Stopped target swap.target - Swaps. May 7 23:44:58.501991 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 7 23:44:58.502210 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 7 23:44:58.508766 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 7 23:44:58.512699 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 7 23:44:58.515301 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 7 23:44:58.516308 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 7 23:44:58.518700 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 7 23:44:58.518914 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 7 23:44:58.522957 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 7 23:44:58.523189 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 7 23:44:58.527610 systemd[1]: ignition-files.service: Deactivated successfully. May 7 23:44:58.527811 systemd[1]: Stopped ignition-files.service - Ignition (files). May 7 23:44:58.543171 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 7 23:44:58.573720 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 7 23:44:58.577413 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 7 23:44:58.579718 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 7 23:44:58.597002 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 7 23:44:58.597233 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 7 23:44:58.609131 ignition[1370]: INFO : Ignition 2.20.0 May 7 23:44:58.609131 ignition[1370]: INFO : Stage: umount May 7 23:44:58.614922 ignition[1370]: INFO : no configs at "/usr/lib/ignition/base.d" May 7 23:44:58.614922 ignition[1370]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 7 23:44:58.614922 ignition[1370]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 7 23:44:58.622434 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 7 23:44:58.622616 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 7 23:44:58.627134 ignition[1370]: INFO : PUT result: OK May 7 23:44:58.637040 ignition[1370]: INFO : umount: umount passed May 7 23:44:58.637040 ignition[1370]: INFO : Ignition finished successfully May 7 23:44:58.641090 systemd[1]: ignition-mount.service: Deactivated successfully. May 7 23:44:58.641302 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 7 23:44:58.649422 systemd[1]: ignition-disks.service: Deactivated successfully. May 7 23:44:58.649522 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 7 23:44:58.653141 systemd[1]: ignition-kargs.service: Deactivated successfully. May 7 23:44:58.653244 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 7 23:44:58.657453 systemd[1]: ignition-fetch.service: Deactivated successfully. May 7 23:44:58.657540 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 7 23:44:58.659953 systemd[1]: Stopped target network.target - Network. May 7 23:44:58.661658 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 7 23:44:58.661760 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
May 7 23:44:58.664027 systemd[1]: Stopped target paths.target - Path Units. May 7 23:44:58.667330 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 7 23:44:58.674734 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 7 23:44:58.677420 systemd[1]: Stopped target slices.target - Slice Units. May 7 23:44:58.680538 systemd[1]: Stopped target sockets.target - Socket Units. May 7 23:44:58.684120 systemd[1]: iscsid.socket: Deactivated successfully. May 7 23:44:58.684728 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 7 23:44:58.688560 systemd[1]: iscsiuio.socket: Deactivated successfully. May 7 23:44:58.688633 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 7 23:44:58.691627 systemd[1]: ignition-setup.service: Deactivated successfully. May 7 23:44:58.691718 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 7 23:44:58.694054 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 7 23:44:58.694136 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 7 23:44:58.706671 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 7 23:44:58.710270 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 7 23:44:58.713838 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 7 23:44:58.714764 systemd[1]: sysroot-boot.service: Deactivated successfully. May 7 23:44:58.716302 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 7 23:44:58.718791 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 7 23:44:58.718955 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 7 23:44:58.729793 systemd[1]: systemd-networkd.service: Deactivated successfully. May 7 23:44:58.730394 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
May 7 23:44:58.736581 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 7 23:44:58.737011 systemd[1]: systemd-resolved.service: Deactivated successfully. May 7 23:44:58.737226 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 7 23:44:58.744411 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 7 23:44:58.745947 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 7 23:44:58.746064 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 7 23:44:58.762596 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 7 23:44:58.768463 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 7 23:44:58.768582 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 7 23:44:58.770909 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 7 23:44:58.770987 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 7 23:44:58.780607 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 7 23:44:58.780697 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 7 23:44:58.782766 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 7 23:44:58.782845 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 7 23:44:58.787411 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 7 23:44:58.791852 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 7 23:44:58.798110 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 7 23:44:58.843658 systemd[1]: systemd-udevd.service: Deactivated successfully. May 7 23:44:58.843936 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 7 23:44:58.849822 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 7 23:44:58.849907 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 7 23:44:58.854715 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 7 23:44:58.854785 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 7 23:44:58.857100 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 7 23:44:58.857191 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 7 23:44:58.861511 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 7 23:44:58.861601 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 7 23:44:58.865027 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 7 23:44:58.865111 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 7 23:44:58.877077 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 7 23:44:58.900465 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 7 23:44:58.900714 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 7 23:44:58.907711 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 7 23:44:58.907806 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 7 23:44:58.910817 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 7 23:44:58.910896 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 7 23:44:58.913220 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 7 23:44:58.913328 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 7 23:44:58.923340 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
May 7 23:44:58.923455 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 7 23:44:58.924180 systemd[1]: network-cleanup.service: Deactivated successfully. May 7 23:44:58.926683 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 7 23:44:58.939721 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 7 23:44:58.939888 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 7 23:44:58.949868 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 7 23:44:58.966143 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 7 23:44:58.981489 systemd[1]: Switching root. May 7 23:44:59.016167 systemd-journald[252]: Journal stopped May 7 23:45:01.557712 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). May 7 23:45:01.557849 kernel: SELinux: policy capability network_peer_controls=1 May 7 23:45:01.557886 kernel: SELinux: policy capability open_perms=1 May 7 23:45:01.557918 kernel: SELinux: policy capability extended_socket_class=1 May 7 23:45:01.557948 kernel: SELinux: policy capability always_check_network=0 May 7 23:45:01.557977 kernel: SELinux: policy capability cgroup_seclabel=1 May 7 23:45:01.558008 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 7 23:45:01.558035 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 7 23:45:01.558065 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 7 23:45:01.558095 kernel: audit: type=1403 audit(1746661499.494:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 7 23:45:01.558136 systemd[1]: Successfully loaded SELinux policy in 59.296ms. May 7 23:45:01.558191 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.170ms. 
May 7 23:45:01.558225 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 7 23:45:01.558306 systemd[1]: Detected virtualization amazon. May 7 23:45:01.558344 systemd[1]: Detected architecture arm64. May 7 23:45:01.558377 systemd[1]: Detected first boot. May 7 23:45:01.558418 systemd[1]: Initializing machine ID from VM UUID. May 7 23:45:01.558448 zram_generator::config[1415]: No configuration found. May 7 23:45:01.558487 kernel: NET: Registered PF_VSOCK protocol family May 7 23:45:01.558518 systemd[1]: Populated /etc with preset unit settings. May 7 23:45:01.558552 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 7 23:45:01.558584 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 7 23:45:01.558616 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 7 23:45:01.558649 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 7 23:45:01.558681 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 7 23:45:01.558713 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 7 23:45:01.558746 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 7 23:45:01.558781 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 7 23:45:01.558815 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 7 23:45:01.558848 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 7 23:45:01.558881 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
May 7 23:45:01.558912 systemd[1]: Created slice user.slice - User and Session Slice. May 7 23:45:01.558946 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 7 23:45:01.558976 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 7 23:45:01.559007 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 7 23:45:01.559035 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 7 23:45:01.559072 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 7 23:45:01.559106 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 7 23:45:01.559139 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 7 23:45:01.559171 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 7 23:45:01.559202 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 7 23:45:01.559231 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 7 23:45:01.559318 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 7 23:45:01.559363 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 7 23:45:01.559395 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 7 23:45:01.559426 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 7 23:45:01.559474 systemd[1]: Reached target slices.target - Slice Units. May 7 23:45:01.559511 systemd[1]: Reached target swap.target - Swaps. May 7 23:45:01.559542 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 7 23:45:01.559574 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
May 7 23:45:01.559604 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 7 23:45:01.559639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 7 23:45:01.559668 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 7 23:45:01.559701 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 7 23:45:01.559732 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 7 23:45:01.559761 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 7 23:45:01.559789 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 7 23:45:01.559817 systemd[1]: Mounting media.mount - External Media Directory... May 7 23:45:01.559848 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 7 23:45:01.559877 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 7 23:45:01.559908 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 7 23:45:01.559943 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 7 23:45:01.559972 systemd[1]: Reached target machines.target - Containers. May 7 23:45:01.560001 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 7 23:45:01.560031 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 7 23:45:01.560060 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 7 23:45:01.560091 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 7 23:45:01.560119 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 7 23:45:01.560147 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 7 23:45:01.560176 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 7 23:45:01.560209 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 7 23:45:01.560241 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 7 23:45:01.560298 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 7 23:45:01.560333 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 7 23:45:01.560367 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 7 23:45:01.560396 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 7 23:45:01.560427 systemd[1]: Stopped systemd-fsck-usr.service. May 7 23:45:01.560460 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 7 23:45:01.560496 systemd[1]: Starting systemd-journald.service - Journal Service... May 7 23:45:01.560525 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 7 23:45:01.560553 kernel: loop: module loaded May 7 23:45:01.560581 kernel: fuse: init (API version 7.39) May 7 23:45:01.560610 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 7 23:45:01.560645 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 7 23:45:01.560675 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 7 23:45:01.560706 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 7 23:45:01.560744 systemd[1]: verity-setup.service: Deactivated successfully. May 7 23:45:01.560773 systemd[1]: Stopped verity-setup.service. 
May 7 23:45:01.560804 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 7 23:45:01.560833 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 7 23:45:01.560864 systemd[1]: Mounted media.mount - External Media Directory. May 7 23:45:01.560893 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 7 23:45:01.560930 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 7 23:45:01.560960 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 7 23:45:01.560994 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 7 23:45:01.561022 kernel: ACPI: bus type drm_connector registered May 7 23:45:01.561050 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 7 23:45:01.561083 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 7 23:45:01.561113 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 7 23:45:01.561142 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 7 23:45:01.561171 systemd[1]: modprobe@drm.service: Deactivated successfully. May 7 23:45:01.561200 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 7 23:45:01.561228 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 7 23:45:01.561282 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 7 23:45:01.561316 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 7 23:45:01.561369 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 7 23:45:01.561413 systemd[1]: modprobe@loop.service: Deactivated successfully. May 7 23:45:01.561443 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 7 23:45:01.561475 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
May 7 23:45:01.561505 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 7 23:45:01.561539 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 7 23:45:01.561568 systemd[1]: Reached target network-pre.target - Preparation for Network. May 7 23:45:01.561655 systemd-journald[1501]: Collecting audit messages is disabled. May 7 23:45:01.561724 systemd-journald[1501]: Journal started May 7 23:45:01.561775 systemd-journald[1501]: Runtime Journal (/run/log/journal/ec297ec5cb7e4bdad6f6f4631e157132) is 8M, max 75.3M, 67.3M free. May 7 23:45:00.956667 systemd[1]: Queued start job for default target multi-user.target. May 7 23:45:00.970513 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. May 7 23:45:00.971367 systemd[1]: systemd-journald.service: Deactivated successfully. May 7 23:45:01.571379 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 7 23:45:01.586719 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 7 23:45:01.593517 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 7 23:45:01.593650 systemd[1]: Reached target local-fs.target - Local File Systems. May 7 23:45:01.599338 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 7 23:45:01.617108 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 7 23:45:01.631740 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 7 23:45:01.631830 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 7 23:45:01.647581 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
May 7 23:45:01.647684 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 7 23:45:01.659490 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 7 23:45:01.664314 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 7 23:45:01.682101 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 7 23:45:01.686666 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 7 23:45:01.715688 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 7 23:45:01.715768 systemd[1]: Started systemd-journald.service - Journal Service. May 7 23:45:01.725449 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 7 23:45:01.729408 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 7 23:45:01.736315 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 7 23:45:01.740985 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 7 23:45:01.744007 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 7 23:45:01.750344 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 7 23:45:01.782301 kernel: loop0: detected capacity change from 0 to 53784 May 7 23:45:01.792008 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 7 23:45:01.802731 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 7 23:45:01.816579 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 7 23:45:01.837088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 7 23:45:01.864460 systemd-journald[1501]: Time spent on flushing to /var/log/journal/ec297ec5cb7e4bdad6f6f4631e157132 is 93.916ms for 928 entries. May 7 23:45:01.864460 systemd-journald[1501]: System Journal (/var/log/journal/ec297ec5cb7e4bdad6f6f4631e157132) is 8M, max 195.6M, 187.6M free. May 7 23:45:01.976943 systemd-journald[1501]: Received client request to flush runtime journal. May 7 23:45:01.977055 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 7 23:45:01.977091 kernel: loop1: detected capacity change from 0 to 123192 May 7 23:45:01.885106 systemd-tmpfiles[1532]: ACLs are not supported, ignoring. May 7 23:45:01.885130 systemd-tmpfiles[1532]: ACLs are not supported, ignoring. May 7 23:45:01.902979 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 7 23:45:01.925971 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 7 23:45:01.938568 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 7 23:45:01.976090 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 7 23:45:01.981357 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 7 23:45:01.992685 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 7 23:45:02.005465 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 7 23:45:02.031908 udevadm[1570]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 7 23:45:02.055335 kernel: loop2: detected capacity change from 0 to 194096 May 7 23:45:02.063743 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 7 23:45:02.076982 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
May 7 23:45:02.126009 systemd-tmpfiles[1573]: ACLs are not supported, ignoring. May 7 23:45:02.126051 systemd-tmpfiles[1573]: ACLs are not supported, ignoring. May 7 23:45:02.142501 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 7 23:45:02.401436 kernel: loop3: detected capacity change from 0 to 113512 May 7 23:45:02.518319 kernel: loop4: detected capacity change from 0 to 53784 May 7 23:45:02.539305 kernel: loop5: detected capacity change from 0 to 123192 May 7 23:45:02.551299 kernel: loop6: detected capacity change from 0 to 194096 May 7 23:45:02.589301 kernel: loop7: detected capacity change from 0 to 113512 May 7 23:45:02.601818 (sd-merge)[1578]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. May 7 23:45:02.602866 (sd-merge)[1578]: Merged extensions into '/usr'. May 7 23:45:02.615005 systemd[1]: Reload requested from client PID 1531 ('systemd-sysext') (unit systemd-sysext.service)... May 7 23:45:02.615039 systemd[1]: Reloading... May 7 23:45:02.841296 zram_generator::config[1609]: No configuration found. May 7 23:45:03.123855 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 7 23:45:03.270540 systemd[1]: Reloading finished in 654 ms. May 7 23:45:03.293322 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 7 23:45:03.296495 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 7 23:45:03.307818 ldconfig[1527]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 7 23:45:03.318745 systemd[1]: Starting ensure-sysext.service... May 7 23:45:03.323649 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
May 7 23:45:03.334617 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 7 23:45:03.363582 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 7 23:45:03.373519 systemd[1]: Reload requested from client PID 1658 ('systemctl') (unit ensure-sysext.service)... May 7 23:45:03.373555 systemd[1]: Reloading... May 7 23:45:03.418613 systemd-tmpfiles[1659]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 7 23:45:03.419108 systemd-tmpfiles[1659]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 7 23:45:03.423140 systemd-tmpfiles[1659]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 7 23:45:03.427838 systemd-tmpfiles[1659]: ACLs are not supported, ignoring. May 7 23:45:03.427991 systemd-tmpfiles[1659]: ACLs are not supported, ignoring. May 7 23:45:03.456051 systemd-tmpfiles[1659]: Detected autofs mount point /boot during canonicalization of boot. May 7 23:45:03.456080 systemd-tmpfiles[1659]: Skipping /boot May 7 23:45:03.470301 systemd-udevd[1660]: Using default interface naming scheme 'v255'. May 7 23:45:03.494831 systemd-tmpfiles[1659]: Detected autofs mount point /boot during canonicalization of boot. May 7 23:45:03.494860 systemd-tmpfiles[1659]: Skipping /boot May 7 23:45:03.604465 zram_generator::config[1692]: No configuration found. May 7 23:45:03.818421 (udev-worker)[1699]: Network interface NamePolicy= disabled on kernel command line. May 7 23:45:03.984677 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (1700) May 7 23:45:03.989574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 7 23:45:04.184996 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 7 23:45:04.185763 systemd[1]: Reloading finished in 811 ms. May 7 23:45:04.205212 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 7 23:45:04.238491 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 7 23:45:04.298308 systemd[1]: Finished ensure-sysext.service. May 7 23:45:04.371312 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 7 23:45:04.388586 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 7 23:45:04.407170 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 7 23:45:04.420555 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 7 23:45:04.423073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 7 23:45:04.433489 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 7 23:45:04.438380 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 7 23:45:04.443200 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 7 23:45:04.449362 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 7 23:45:04.457122 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 7 23:45:04.459504 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 7 23:45:04.465171 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 7 23:45:04.467477 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 7 23:45:04.470458 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 7 23:45:04.477228 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 7 23:45:04.484615 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 7 23:45:04.486746 systemd[1]: Reached target time-set.target - System Time Set. May 7 23:45:04.501629 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 7 23:45:04.510380 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 7 23:45:04.514130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 7 23:45:04.514649 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 7 23:45:04.525680 systemd[1]: modprobe@drm.service: Deactivated successfully. May 7 23:45:04.526696 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 7 23:45:04.550161 systemd[1]: modprobe@loop.service: Deactivated successfully. May 7 23:45:04.550691 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 7 23:45:04.555967 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 7 23:45:04.558421 lvm[1866]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 7 23:45:04.565643 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 7 23:45:04.604885 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 7 23:45:04.606444 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
May 7 23:45:04.610658 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 7 23:45:04.649762 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 7 23:45:04.654365 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 7 23:45:04.660460 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 7 23:45:04.663831 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 7 23:45:04.676012 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 7 23:45:04.688605 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 7 23:45:04.700679 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 7 23:45:04.718907 lvm[1901]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 7 23:45:04.721626 augenrules[1904]: No rules May 7 23:45:04.727578 systemd[1]: audit-rules.service: Deactivated successfully. May 7 23:45:04.728067 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 7 23:45:04.752156 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 7 23:45:04.756655 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 7 23:45:04.766480 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 7 23:45:04.783007 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 7 23:45:04.791386 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
May 7 23:45:04.822531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 7 23:45:04.926096 systemd-networkd[1874]: lo: Link UP
May 7 23:45:04.926636 systemd-networkd[1874]: lo: Gained carrier
May 7 23:45:04.929975 systemd-networkd[1874]: Enumeration completed
May 7 23:45:04.930363 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 7 23:45:04.932845 systemd-networkd[1874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 7 23:45:04.932863 systemd-networkd[1874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 7 23:45:04.936045 systemd-resolved[1875]: Positive Trust Anchors:
May 7 23:45:04.936082 systemd-resolved[1875]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 7 23:45:04.936145 systemd-resolved[1875]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 7 23:45:04.937133 systemd-networkd[1874]: eth0: Link UP
May 7 23:45:04.937547 systemd-networkd[1874]: eth0: Gained carrier
May 7 23:45:04.937582 systemd-networkd[1874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 7 23:45:04.942623 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 7 23:45:04.948383 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 7 23:45:04.955830 systemd-networkd[1874]: eth0: DHCPv4 address 172.31.25.188/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 7 23:45:04.963854 systemd-resolved[1875]: Defaulting to hostname 'linux'.
May 7 23:45:04.970600 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 7 23:45:04.974008 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 7 23:45:04.976892 systemd[1]: Reached target network.target - Network.
May 7 23:45:04.978873 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 7 23:45:04.981179 systemd[1]: Reached target sysinit.target - System Initialization.
May 7 23:45:04.983320 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 7 23:45:04.985870 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 7 23:45:04.988554 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 7 23:45:04.990815 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 7 23:45:04.993185 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 7 23:45:04.995484 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 7 23:45:04.995536 systemd[1]: Reached target paths.target - Path Units.
May 7 23:45:04.997215 systemd[1]: Reached target timers.target - Timer Units.
May 7 23:45:05.000939 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 7 23:45:05.005602 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 7 23:45:05.012314 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 7 23:45:05.015533 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 7 23:45:05.017951 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 7 23:45:05.023649 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 7 23:45:05.026402 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 7 23:45:05.029828 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 7 23:45:05.032449 systemd[1]: Reached target sockets.target - Socket Units.
May 7 23:45:05.034669 systemd[1]: Reached target basic.target - Basic System.
May 7 23:45:05.036573 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 7 23:45:05.036621 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 7 23:45:05.038685 systemd[1]: Starting containerd.service - containerd container runtime...
May 7 23:45:05.046615 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 7 23:45:05.051799 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 7 23:45:05.058451 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 7 23:45:05.072581 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 7 23:45:05.074541 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 7 23:45:05.079649 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 7 23:45:05.087378 systemd[1]: Started ntpd.service - Network Time Service.
May 7 23:45:05.094243 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 7 23:45:05.110504 systemd[1]: Starting setup-oem.service - Setup OEM...
May 7 23:45:05.126701 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 7 23:45:05.133195 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 7 23:45:05.143405 systemd[1]: Starting systemd-logind.service - User Login Management...
May 7 23:45:05.148725 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 7 23:45:05.150759 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 7 23:45:05.152449 jq[1932]: false
May 7 23:45:05.158996 systemd[1]: Starting update-engine.service - Update Engine...
May 7 23:45:05.163712 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 7 23:45:05.172641 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 7 23:45:05.177127 extend-filesystems[1933]: Found loop4
May 7 23:45:05.177127 extend-filesystems[1933]: Found loop5
May 7 23:45:05.177127 extend-filesystems[1933]: Found loop6
May 7 23:45:05.177127 extend-filesystems[1933]: Found loop7
May 7 23:45:05.177127 extend-filesystems[1933]: Found nvme0n1
May 7 23:45:05.177127 extend-filesystems[1933]: Found nvme0n1p1
May 7 23:45:05.177127 extend-filesystems[1933]: Found nvme0n1p2
May 7 23:45:05.173933 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 7 23:45:05.214643 extend-filesystems[1933]: Found nvme0n1p3
May 7 23:45:05.214643 extend-filesystems[1933]: Found usr
May 7 23:45:05.214643 extend-filesystems[1933]: Found nvme0n1p4
May 7 23:45:05.214643 extend-filesystems[1933]: Found nvme0n1p6
May 7 23:45:05.214643 extend-filesystems[1933]: Found nvme0n1p7
May 7 23:45:05.214643 extend-filesystems[1933]: Found nvme0n1p9
May 7 23:45:05.214643 extend-filesystems[1933]: Checking size of /dev/nvme0n1p9
May 7 23:45:05.256485 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 7 23:45:05.259448 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 7 23:45:05.276284 jq[1943]: true
May 7 23:45:05.268436 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 7 23:45:05.266831 dbus-daemon[1931]: [system] SELinux support is enabled
May 7 23:45:05.276679 dbus-daemon[1931]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1874 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 7 23:45:05.280104 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 7 23:45:05.282881 dbus-daemon[1931]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 7 23:45:05.283578 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 7 23:45:05.286116 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 7 23:45:05.286154 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 7 23:45:05.308631 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 7 23:45:05.332371 extend-filesystems[1933]: Resized partition /dev/nvme0n1p9
May 7 23:45:05.335245 (ntainerd)[1959]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 7 23:45:05.345105 tar[1946]: linux-arm64/helm
May 7 23:45:05.372306 extend-filesystems[1974]: resize2fs 1.47.1 (20-May-2024)
May 7 23:45:05.397718 update_engine[1942]: I20250507 23:45:05.397564 1942 main.cc:92] Flatcar Update Engine starting
May 7 23:45:05.404187 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
May 7 23:45:05.403961 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 7 23:45:05.408777 jq[1960]: true
May 7 23:45:05.418165 systemd[1]: Started update-engine.service - Update Engine.
May 7 23:45:05.424634 update_engine[1942]: I20250507 23:45:05.420293 1942 update_check_scheduler.cc:74] Next update check in 11m55s
May 7 23:45:05.425880 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 7 23:45:05.436850 systemd[1]: motdgen.service: Deactivated successfully.
May 7 23:45:05.438116 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 7 23:45:05.466208 ntpd[1935]: ntpd 4.2.8p17@1.4004-o Wed May 7 21:39:07 UTC 2025 (1): Starting
May 7 23:45:05.474440 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: ntpd 4.2.8p17@1.4004-o Wed May 7 21:39:07 UTC 2025 (1): Starting
May 7 23:45:05.474834 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 7 23:45:05.474834 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: ----------------------------------------------------
May 7 23:45:05.474834 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: ntp-4 is maintained by Network Time Foundation,
May 7 23:45:05.474834 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 7 23:45:05.474834 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: corporation. Support and training for ntp-4 are
May 7 23:45:05.474834 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: available at https://www.nwtime.org/support
May 7 23:45:05.474834 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: ----------------------------------------------------
May 7 23:45:05.474440 ntpd[1935]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 7 23:45:05.474468 ntpd[1935]: ----------------------------------------------------
May 7 23:45:05.474487 ntpd[1935]: ntp-4 is maintained by Network Time Foundation,
May 7 23:45:05.474505 ntpd[1935]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 7 23:45:05.474528 ntpd[1935]: corporation. Support and training for ntp-4 are
May 7 23:45:05.474545 ntpd[1935]: available at https://www.nwtime.org/support
May 7 23:45:05.474563 ntpd[1935]: ----------------------------------------------------
May 7 23:45:05.486831 ntpd[1935]: proto: precision = 0.096 usec (-23)
May 7 23:45:05.492489 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: proto: precision = 0.096 usec (-23)
May 7 23:45:05.497999 ntpd[1935]: basedate set to 2025-04-25
May 7 23:45:05.498324 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: basedate set to 2025-04-25
May 7 23:45:05.498324 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: gps base set to 2025-04-27 (week 2364)
May 7 23:45:05.498047 ntpd[1935]: gps base set to 2025-04-27 (week 2364)
May 7 23:45:05.521414 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: Listen and drop on 0 v6wildcard [::]:123
May 7 23:45:05.521414 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 7 23:45:05.521414 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: Listen normally on 2 lo 127.0.0.1:123
May 7 23:45:05.518829 ntpd[1935]: Listen and drop on 0 v6wildcard [::]:123
May 7 23:45:05.518916 ntpd[1935]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 7 23:45:05.519187 ntpd[1935]: Listen normally on 2 lo 127.0.0.1:123
May 7 23:45:05.529303 ntpd[1935]: Listen normally on 3 eth0 172.31.25.188:123
May 7 23:45:05.530080 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: Listen normally on 3 eth0 172.31.25.188:123
May 7 23:45:05.530080 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: Listen normally on 4 lo [::1]:123
May 7 23:45:05.530080 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: bind(21) AF_INET6 fe80::42c:30ff:feef:4aa1%2#123 flags 0x11 failed: Cannot assign requested address
May 7 23:45:05.530080 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: unable to create socket on eth0 (5) for fe80::42c:30ff:feef:4aa1%2#123
May 7 23:45:05.530080 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: failed to init interface for address fe80::42c:30ff:feef:4aa1%2
May 7 23:45:05.530080 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: Listening on routing socket on fd #21 for interface updates
May 7 23:45:05.529445 ntpd[1935]: Listen normally on 4 lo [::1]:123
May 7 23:45:05.529529 ntpd[1935]: bind(21) AF_INET6 fe80::42c:30ff:feef:4aa1%2#123 flags 0x11 failed: Cannot assign requested address
May 7 23:45:05.529623 ntpd[1935]: unable to create socket on eth0 (5) for fe80::42c:30ff:feef:4aa1%2#123
May 7 23:45:05.529651 ntpd[1935]: failed to init interface for address fe80::42c:30ff:feef:4aa1%2
May 7 23:45:05.529711 ntpd[1935]: Listening on routing socket on fd #21 for interface updates
May 7 23:45:05.561487 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
May 7 23:45:05.553929 ntpd[1935]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 7 23:45:05.561671 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 7 23:45:05.561671 ntpd[1935]: 7 May 23:45:05 ntpd[1935]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 7 23:45:05.554729 ntpd[1935]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 7 23:45:05.580819 systemd[1]: Finished setup-oem.service - Setup OEM.
May 7 23:45:05.594305 extend-filesystems[1974]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 7 23:45:05.594305 extend-filesystems[1974]: old_desc_blocks = 1, new_desc_blocks = 1
May 7 23:45:05.594305 extend-filesystems[1974]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
May 7 23:45:05.587726 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 7 23:45:05.614398 extend-filesystems[1933]: Resized filesystem in /dev/nvme0n1p9
May 7 23:45:05.588146 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 7 23:45:05.648666 coreos-metadata[1930]: May 07 23:45:05.648 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 7 23:45:05.663133 coreos-metadata[1930]: May 07 23:45:05.656 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
May 7 23:45:05.666420 coreos-metadata[1930]: May 07 23:45:05.666 INFO Fetch successful
May 7 23:45:05.666420 coreos-metadata[1930]: May 07 23:45:05.666 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
May 7 23:45:05.670816 coreos-metadata[1930]: May 07 23:45:05.670 INFO Fetch successful
May 7 23:45:05.670816 coreos-metadata[1930]: May 07 23:45:05.670 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
May 7 23:45:05.681684 coreos-metadata[1930]: May 07 23:45:05.681 INFO Fetch successful
May 7 23:45:05.681684 coreos-metadata[1930]: May 07 23:45:05.681 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
May 7 23:45:05.683016 coreos-metadata[1930]: May 07 23:45:05.682 INFO Fetch successful
May 7 23:45:05.683016 coreos-metadata[1930]: May 07 23:45:05.682 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
May 7 23:45:05.686277 coreos-metadata[1930]: May 07 23:45:05.683 INFO Fetch failed with 404: resource not found
May 7 23:45:05.686277 coreos-metadata[1930]: May 07 23:45:05.683 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
May 7 23:45:05.696306 coreos-metadata[1930]: May 07 23:45:05.691 INFO Fetch successful
May 7 23:45:05.696306 coreos-metadata[1930]: May 07 23:45:05.691 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
May 7 23:45:05.698817 coreos-metadata[1930]: May 07 23:45:05.698 INFO Fetch successful
May 7 23:45:05.698817 coreos-metadata[1930]: May 07 23:45:05.698 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
May 7 23:45:05.699446 coreos-metadata[1930]: May 07 23:45:05.699 INFO Fetch successful
May 7 23:45:05.700643 coreos-metadata[1930]: May 07 23:45:05.700 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
May 7 23:45:05.703352 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (1705)
May 7 23:45:05.708841 coreos-metadata[1930]: May 07 23:45:05.706 INFO Fetch successful
May 7 23:45:05.708841 coreos-metadata[1930]: May 07 23:45:05.706 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
May 7 23:45:05.717558 coreos-metadata[1930]: May 07 23:45:05.713 INFO Fetch successful
May 7 23:45:05.759860 bash[2022]: Updated "/home/core/.ssh/authorized_keys"
May 7 23:45:05.763440 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 7 23:45:05.772710 systemd[1]: Starting sshkeys.service...
May 7 23:45:05.802222 locksmithd[1981]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 7 23:45:05.817102 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 7 23:45:05.819714 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
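The coreos-metadata sequence above follows the EC2 IMDSv2 pattern: first a PUT to `/latest/api/token` to obtain a session token, then GETs with that token in a header. A sketch of building those requests (`token_request` and `metadata_request` are hypothetical helper names; actually sending them only works from inside an EC2 instance, so the requests are constructed here but not sent):

```python
import urllib.request

IMDS = "http://169.254.169.254"

def token_request(ttl_seconds: int = 21600) -> urllib.request.Request:
    # Step 1: PUT /latest/api/token with a TTL header starts an IMDSv2 session.
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def metadata_request(path: str, token: str) -> urllib.request.Request:
    # Step 2: GET a metadata path (e.g. "meta-data/instance-id") with the token.
    return urllib.request.Request(
        f"{IMDS}/2021-01-03/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
```

On an instance, `urllib.request.urlopen(token_request())` would return the token body; a 404 such as the `meta-data/ipv6` fetch above simply means the instance has no value at that path.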
May 7 23:45:05.898467 systemd-logind[1941]: Watching system buttons on /dev/input/event0 (Power Button)
May 7 23:45:05.898511 systemd-logind[1941]: Watching system buttons on /dev/input/event1 (Sleep Button)
May 7 23:45:05.903705 systemd-logind[1941]: New seat seat0.
May 7 23:45:05.904760 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 7 23:45:06.000770 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 7 23:45:06.004819 systemd[1]: Started systemd-logind.service - User Login Management.
May 7 23:45:06.072081 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
May 7 23:45:06.075378 dbus-daemon[1931]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 7 23:45:06.077147 dbus-daemon[1931]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1966 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 7 23:45:06.146079 systemd[1]: Starting polkit.service - Authorization Manager...
May 7 23:45:06.223655 polkitd[2093]: Started polkitd version 121
May 7 23:45:06.283818 containerd[1959]: time="2025-05-07T23:45:06.282183466Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 7 23:45:06.298324 polkitd[2093]: Loading rules from directory /etc/polkit-1/rules.d
May 7 23:45:06.298444 polkitd[2093]: Loading rules from directory /usr/share/polkit-1/rules.d
May 7 23:45:06.305785 polkitd[2093]: Finished loading, compiling and executing 2 rules
May 7 23:45:06.314082 dbus-daemon[1931]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 7 23:45:06.315421 systemd[1]: Started polkit.service - Authorization Manager.
May 7 23:45:06.324790 polkitd[2093]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 7 23:45:06.405639 coreos-metadata[2075]: May 07 23:45:06.403 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 7 23:45:06.407690 coreos-metadata[2075]: May 07 23:45:06.406 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
May 7 23:45:06.414478 coreos-metadata[2075]: May 07 23:45:06.410 INFO Fetch successful
May 7 23:45:06.414478 coreos-metadata[2075]: May 07 23:45:06.410 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
May 7 23:45:06.415018 systemd-hostnamed[1966]: Hostname set to (transient)
May 7 23:45:06.415805 coreos-metadata[2075]: May 07 23:45:06.415 INFO Fetch successful
May 7 23:45:06.415982 systemd-resolved[1875]: System hostname changed to 'ip-172-31-25-188'.
May 7 23:45:06.419559 unknown[2075]: wrote ssh authorized keys file for user: core
May 7 23:45:06.452427 containerd[1959]: time="2025-05-07T23:45:06.452171375Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 7 23:45:06.457982 containerd[1959]: time="2025-05-07T23:45:06.457903643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 7 23:45:06.457982 containerd[1959]: time="2025-05-07T23:45:06.457970711Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 7 23:45:06.458134 containerd[1959]: time="2025-05-07T23:45:06.458007731Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 7 23:45:06.459864 containerd[1959]: time="2025-05-07T23:45:06.459803807Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 7 23:45:06.459970 containerd[1959]: time="2025-05-07T23:45:06.459865775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 7 23:45:06.460096 containerd[1959]: time="2025-05-07T23:45:06.460048535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 7 23:45:06.460151 containerd[1959]: time="2025-05-07T23:45:06.460092143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 7 23:45:06.460581 containerd[1959]: time="2025-05-07T23:45:06.460528103Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 7 23:45:06.460581 containerd[1959]: time="2025-05-07T23:45:06.460572935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 7 23:45:06.460714 containerd[1959]: time="2025-05-07T23:45:06.460608131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 7 23:45:06.460714 containerd[1959]: time="2025-05-07T23:45:06.460632335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 7 23:45:06.460847 containerd[1959]: time="2025-05-07T23:45:06.460807199Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 7 23:45:06.462089 containerd[1959]: time="2025-05-07T23:45:06.461228339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 7 23:45:06.462969 containerd[1959]: time="2025-05-07T23:45:06.462909959Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 7 23:45:06.462969 containerd[1959]: time="2025-05-07T23:45:06.462961859Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 7 23:45:06.463312 containerd[1959]: time="2025-05-07T23:45:06.463172087Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 7 23:45:06.463373 containerd[1959]: time="2025-05-07T23:45:06.463307123Z" level=info msg="metadata content store policy set" policy=shared
May 7 23:45:06.469323 update-ssh-keys[2126]: Updated "/home/core/.ssh/authorized_keys"
May 7 23:45:06.472519 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 7 23:45:06.476764 ntpd[1935]: 7 May 23:45:06 ntpd[1935]: bind(24) AF_INET6 fe80::42c:30ff:feef:4aa1%2#123 flags 0x11 failed: Cannot assign requested address
May 7 23:45:06.476764 ntpd[1935]: 7 May 23:45:06 ntpd[1935]: unable to create socket on eth0 (6) for fe80::42c:30ff:feef:4aa1%2#123
May 7 23:45:06.476764 ntpd[1935]: 7 May 23:45:06 ntpd[1935]: failed to init interface for address fe80::42c:30ff:feef:4aa1%2
May 7 23:45:06.476337 ntpd[1935]: bind(24) AF_INET6 fe80::42c:30ff:feef:4aa1%2#123 flags 0x11 failed: Cannot assign requested address
May 7 23:45:06.476392 ntpd[1935]: unable to create socket on eth0 (6) for fe80::42c:30ff:feef:4aa1%2#123
May 7 23:45:06.476420 ntpd[1935]: failed to init interface for address fe80::42c:30ff:feef:4aa1%2
May 7 23:45:06.479298 containerd[1959]: time="2025-05-07T23:45:06.478797959Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 7 23:45:06.479298 containerd[1959]: time="2025-05-07T23:45:06.478903679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 7 23:45:06.479298 containerd[1959]: time="2025-05-07T23:45:06.478938947Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 7 23:45:06.479298 containerd[1959]: time="2025-05-07T23:45:06.478991759Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 7 23:45:06.479298 containerd[1959]: time="2025-05-07T23:45:06.479027723Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 7 23:45:06.479596 containerd[1959]: time="2025-05-07T23:45:06.479305271Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 7 23:45:06.483335 containerd[1959]: time="2025-05-07T23:45:06.481778555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 7 23:45:06.484601 containerd[1959]: time="2025-05-07T23:45:06.484549619Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 7 23:45:06.485288 containerd[1959]: time="2025-05-07T23:45:06.484606931Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 7 23:45:06.486717 containerd[1959]: time="2025-05-07T23:45:06.485238815Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 7 23:45:06.486717 containerd[1959]: time="2025-05-07T23:45:06.486505535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 7 23:45:06.487051 containerd[1959]: time="2025-05-07T23:45:06.486567047Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 7 23:45:06.489567 containerd[1959]: time="2025-05-07T23:45:06.487066751Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 7 23:45:06.489683 containerd[1959]: time="2025-05-07T23:45:06.489580883Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 7 23:45:06.489683 containerd[1959]: time="2025-05-07T23:45:06.489647735Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 7 23:45:06.489769 containerd[1959]: time="2025-05-07T23:45:06.489703979Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 7 23:45:06.489769 containerd[1959]: time="2025-05-07T23:45:06.489737555Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 7 23:45:06.489850 containerd[1959]: time="2025-05-07T23:45:06.489789179Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 7 23:45:06.489897 containerd[1959]: time="2025-05-07T23:45:06.489842087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 7 23:45:06.489953 containerd[1959]: time="2025-05-07T23:45:06.489902519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 7 23:45:06.490001 containerd[1959]: time="2025-05-07T23:45:06.489959807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 7 23:45:06.490130 containerd[1959]: time="2025-05-07T23:45:06.489995627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 7 23:45:06.496519 containerd[1959]: time="2025-05-07T23:45:06.491344163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 7 23:45:06.496519 containerd[1959]: time="2025-05-07T23:45:06.491444915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 7 23:45:06.496519 containerd[1959]: time="2025-05-07T23:45:06.491501675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 7 23:45:06.496519 containerd[1959]: time="2025-05-07T23:45:06.491542499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 7 23:45:06.496519 containerd[1959]: time="2025-05-07T23:45:06.491599175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 7 23:45:06.496519 containerd[1959]: time="2025-05-07T23:45:06.491637203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 7 23:45:06.496519 containerd[1959]: time="2025-05-07T23:45:06.491728307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 7 23:45:06.496519 containerd[1959]: time="2025-05-07T23:45:06.493453115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 7 23:45:06.496519 containerd[1959]: time="2025-05-07T23:45:06.494490659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 7 23:45:06.496519 containerd[1959]: time="2025-05-07T23:45:06.494572967Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 7 23:45:06.491844 systemd[1]: Finished sshkeys.service.
May 7 23:45:06.497398 containerd[1959]: time="2025-05-07T23:45:06.497326319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 7 23:45:06.497457 containerd[1959]: time="2025-05-07T23:45:06.497416595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 7 23:45:06.500405 containerd[1959]: time="2025-05-07T23:45:06.499421531Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 7 23:45:06.500405 containerd[1959]: time="2025-05-07T23:45:06.499684331Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 7 23:45:06.500405 containerd[1959]: time="2025-05-07T23:45:06.499748879Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 7 23:45:06.500405 containerd[1959]: time="2025-05-07T23:45:06.499777499Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 7 23:45:06.500405 containerd[1959]: time="2025-05-07T23:45:06.499835999Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 7 23:45:06.500405 containerd[1959]: time="2025-05-07T23:45:06.499863167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 7 23:45:06.500405 containerd[1959]: time="2025-05-07T23:45:06.499926455Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 7 23:45:06.500405 containerd[1959]: time="2025-05-07T23:45:06.499953335Z" level=info msg="NRI interface is disabled by configuration."
May 7 23:45:06.500405 containerd[1959]: time="2025-05-07T23:45:06.500006183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 7 23:45:06.504314 containerd[1959]: time="2025-05-07T23:45:06.503286803Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 7 23:45:06.504314 containerd[1959]: time="2025-05-07T23:45:06.503437163Z" level=info msg="Connect containerd service"
May 7 23:45:06.504314 containerd[1959]: time="2025-05-07T23:45:06.503540819Z" level=info msg="using legacy CRI server"
May 7 23:45:06.504314 containerd[1959]: time="2025-05-07T23:45:06.503560727Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 7 23:45:06.504314 containerd[1959]: time="2025-05-07T23:45:06.503871587Z"
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 7 23:45:06.506775 containerd[1959]: time="2025-05-07T23:45:06.505014875Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 7 23:45:06.506775 containerd[1959]: time="2025-05-07T23:45:06.505426079Z" level=info msg="Start subscribing containerd event" May 7 23:45:06.506775 containerd[1959]: time="2025-05-07T23:45:06.505503023Z" level=info msg="Start recovering state" May 7 23:45:06.506775 containerd[1959]: time="2025-05-07T23:45:06.505669271Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 7 23:45:06.506775 containerd[1959]: time="2025-05-07T23:45:06.505671911Z" level=info msg="Start event monitor" May 7 23:45:06.506775 containerd[1959]: time="2025-05-07T23:45:06.505730531Z" level=info msg="Start snapshots syncer" May 7 23:45:06.506775 containerd[1959]: time="2025-05-07T23:45:06.505752467Z" level=info msg="Start cni network conf syncer for default" May 7 23:45:06.506775 containerd[1959]: time="2025-05-07T23:45:06.505767347Z" level=info msg=serving... address=/run/containerd/containerd.sock May 7 23:45:06.506775 containerd[1959]: time="2025-05-07T23:45:06.505772603Z" level=info msg="Start streaming server" May 7 23:45:06.506775 containerd[1959]: time="2025-05-07T23:45:06.506608871Z" level=info msg="containerd successfully booted in 0.232092s" May 7 23:45:06.506089 systemd[1]: Started containerd.service - containerd container runtime. May 7 23:45:06.517388 sshd_keygen[1976]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 7 23:45:06.574454 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 7 23:45:06.590204 systemd[1]: Starting issuegen.service - Generate /run/issue... 
May 7 23:45:06.603480 systemd[1]: Started sshd@0-172.31.25.188:22-147.75.109.163:49926.service - OpenSSH per-connection server daemon (147.75.109.163:49926).
May 7 23:45:06.620720 systemd[1]: issuegen.service: Deactivated successfully.
May 7 23:45:06.621228 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 7 23:45:06.635805 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 7 23:45:06.670334 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 7 23:45:06.685538 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 7 23:45:06.696035 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 7 23:45:06.698955 systemd[1]: Reached target getty.target - Login Prompts.
May 7 23:45:06.896332 sshd[2144]: Accepted publickey for core from 147.75.109.163 port 49926 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:45:06.897463 sshd-session[2144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:45:06.912598 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 7 23:45:06.925708 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 7 23:45:06.949345 systemd-logind[1941]: New session 1 of user core.
May 7 23:45:06.962653 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 7 23:45:06.975503 tar[1946]: linux-arm64/LICENSE
May 7 23:45:06.977603 tar[1946]: linux-arm64/README.md
May 7 23:45:06.979583 systemd-networkd[1874]: eth0: Gained IPv6LL
May 7 23:45:06.979769 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 7 23:45:06.988458 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 7 23:45:06.995321 systemd[1]: Reached target network-online.target - Network is Online.
May 7 23:45:07.007762 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
May 7 23:45:07.010972 (systemd)[2155]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 7 23:45:07.027728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:45:07.040457 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 7 23:45:07.044470 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 7 23:45:07.072364 systemd-logind[1941]: New session c1 of user core.
May 7 23:45:07.117308 amazon-ssm-agent[2159]: Initializing new seelog logger
May 7 23:45:07.118720 amazon-ssm-agent[2159]: New Seelog Logger Creation Complete
May 7 23:45:07.118822 amazon-ssm-agent[2159]: 2025/05/07 23:45:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 7 23:45:07.118822 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 7 23:45:07.120300 amazon-ssm-agent[2159]: 2025/05/07 23:45:07 processing appconfig overrides
May 7 23:45:07.125527 amazon-ssm-agent[2159]: 2025/05/07 23:45:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 7 23:45:07.125527 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 7 23:45:07.125527 amazon-ssm-agent[2159]: 2025/05/07 23:45:07 processing appconfig overrides
May 7 23:45:07.125527 amazon-ssm-agent[2159]: 2025/05/07 23:45:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 7 23:45:07.125527 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 7 23:45:07.125527 amazon-ssm-agent[2159]: 2025/05/07 23:45:07 processing appconfig overrides
May 7 23:45:07.125527 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO Proxy environment variables:
May 7 23:45:07.130975 amazon-ssm-agent[2159]: 2025/05/07 23:45:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 7 23:45:07.133429 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 7 23:45:07.133429 amazon-ssm-agent[2159]: 2025/05/07 23:45:07 processing appconfig overrides
May 7 23:45:07.143766 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 7 23:45:07.224020 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO https_proxy:
May 7 23:45:07.324547 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO http_proxy:
May 7 23:45:07.402804 systemd[2155]: Queued start job for default target default.target.
May 7 23:45:07.419159 systemd[2155]: Created slice app.slice - User Application Slice.
May 7 23:45:07.419757 systemd[2155]: Reached target paths.target - Paths.
May 7 23:45:07.419960 systemd[2155]: Reached target timers.target - Timers.
May 7 23:45:07.424288 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO no_proxy:
May 7 23:45:07.424612 systemd[2155]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 7 23:45:07.451217 systemd[2155]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 7 23:45:07.451670 systemd[2155]: Reached target sockets.target - Sockets.
May 7 23:45:07.451754 systemd[2155]: Reached target basic.target - Basic System.
May 7 23:45:07.451837 systemd[2155]: Reached target default.target - Main User Target.
May 7 23:45:07.451896 systemd[2155]: Startup finished in 363ms.
May 7 23:45:07.452472 systemd[1]: Started user@500.service - User Manager for UID 500.
May 7 23:45:07.472564 systemd[1]: Started session-1.scope - Session 1 of User core.
May 7 23:45:07.521918 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO Checking if agent identity type OnPrem can be assumed
May 7 23:45:07.622704 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO Checking if agent identity type EC2 can be assumed
May 7 23:45:07.639785 systemd[1]: Started sshd@1-172.31.25.188:22-147.75.109.163:35308.service - OpenSSH per-connection server daemon (147.75.109.163:35308).
May 7 23:45:07.721284 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO Agent will take identity from EC2
May 7 23:45:07.820034 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 7 23:45:07.891842 sshd[2189]: Accepted publickey for core from 147.75.109.163 port 35308 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:45:07.895131 sshd-session[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:45:07.904612 systemd-logind[1941]: New session 2 of user core.
May 7 23:45:07.914514 systemd[1]: Started session-2.scope - Session 2 of User core.
May 7 23:45:07.919967 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 7 23:45:07.970789 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 7 23:45:07.970976 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
May 7 23:45:07.971355 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
May 7 23:45:07.971355 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO [amazon-ssm-agent] Starting Core Agent
May 7 23:45:07.971355 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO [amazon-ssm-agent] registrar detected. Attempting registration
May 7 23:45:07.971355 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO [Registrar] Starting registrar module
May 7 23:45:07.971355 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
May 7 23:45:07.971355 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO [EC2Identity] EC2 registration was successful.
May 7 23:45:07.971355 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO [CredentialRefresher] credentialRefresher has started
May 7 23:45:07.971355 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO [CredentialRefresher] Starting credentials refresher loop
May 7 23:45:07.971355 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO EC2RoleProvider Successfully connected with instance profile role credentials
May 7 23:45:08.019422 amazon-ssm-agent[2159]: 2025-05-07 23:45:07 INFO [CredentialRefresher] Next credential rotation will be in 32.23331843463333 minutes
May 7 23:45:08.046759 sshd[2191]: Connection closed by 147.75.109.163 port 35308
May 7 23:45:08.047842 sshd-session[2189]: pam_unix(sshd:session): session closed for user core
May 7 23:45:08.054419 systemd[1]: sshd@1-172.31.25.188:22-147.75.109.163:35308.service: Deactivated successfully.
May 7 23:45:08.058131 systemd[1]: session-2.scope: Deactivated successfully.
May 7 23:45:08.060910 systemd-logind[1941]: Session 2 logged out. Waiting for processes to exit.
May 7 23:45:08.063198 systemd-logind[1941]: Removed session 2.
May 7 23:45:08.094700 systemd[1]: Started sshd@2-172.31.25.188:22-147.75.109.163:35314.service - OpenSSH per-connection server daemon (147.75.109.163:35314).
May 7 23:45:08.280218 sshd[2197]: Accepted publickey for core from 147.75.109.163 port 35314 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:45:08.282704 sshd-session[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:45:08.293369 systemd-logind[1941]: New session 3 of user core.
May 7 23:45:08.299738 systemd[1]: Started session-3.scope - Session 3 of User core.
May 7 23:45:08.426852 sshd[2199]: Connection closed by 147.75.109.163 port 35314
May 7 23:45:08.427665 sshd-session[2197]: pam_unix(sshd:session): session closed for user core
May 7 23:45:08.433412 systemd[1]: sshd@2-172.31.25.188:22-147.75.109.163:35314.service: Deactivated successfully.
May 7 23:45:08.436066 systemd[1]: session-3.scope: Deactivated successfully.
May 7 23:45:08.438975 systemd-logind[1941]: Session 3 logged out. Waiting for processes to exit.
May 7 23:45:08.441371 systemd-logind[1941]: Removed session 3.
May 7 23:45:09.001367 amazon-ssm-agent[2159]: 2025-05-07 23:45:09 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
May 7 23:45:09.103640 amazon-ssm-agent[2159]: 2025-05-07 23:45:09 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2205) started
May 7 23:45:09.204436 amazon-ssm-agent[2159]: 2025-05-07 23:45:09 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
May 7 23:45:09.475148 ntpd[1935]: Listen normally on 7 eth0 [fe80::42c:30ff:feef:4aa1%2]:123
May 7 23:45:09.476655 ntpd[1935]: 7 May 23:45:09 ntpd[1935]: Listen normally on 7 eth0 [fe80::42c:30ff:feef:4aa1%2]:123
May 7 23:45:10.525744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:45:10.529009 systemd[1]: Reached target multi-user.target - Multi-User System.
May 7 23:45:10.531442 systemd[1]: Startup finished in 1.064s (kernel) + 8.704s (initrd) + 11.094s (userspace) = 20.863s.
May 7 23:45:10.537944 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 7 23:45:11.995678 kubelet[2220]: E0507 23:45:11.995590 2220 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 7 23:45:12.000291 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 7 23:45:12.000636 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 7 23:45:12.001362 systemd[1]: kubelet.service: Consumed 1.278s CPU time, 243.2M memory peak.
May 7 23:45:18.471717 systemd[1]: Started sshd@3-172.31.25.188:22-147.75.109.163:38010.service - OpenSSH per-connection server daemon (147.75.109.163:38010).
May 7 23:45:18.659197 sshd[2233]: Accepted publickey for core from 147.75.109.163 port 38010 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:45:18.661690 sshd-session[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:45:18.670874 systemd-logind[1941]: New session 4 of user core.
May 7 23:45:18.680616 systemd[1]: Started session-4.scope - Session 4 of User core.
May 7 23:45:18.810806 sshd[2235]: Connection closed by 147.75.109.163 port 38010
May 7 23:45:18.809565 sshd-session[2233]: pam_unix(sshd:session): session closed for user core
May 7 23:45:18.815312 systemd[1]: sshd@3-172.31.25.188:22-147.75.109.163:38010.service: Deactivated successfully.
May 7 23:45:18.815619 systemd-logind[1941]: Session 4 logged out. Waiting for processes to exit.
May 7 23:45:18.818522 systemd[1]: session-4.scope: Deactivated successfully.
May 7 23:45:18.822800 systemd-logind[1941]: Removed session 4.
May 7 23:45:18.854759 systemd[1]: Started sshd@4-172.31.25.188:22-147.75.109.163:38018.service - OpenSSH per-connection server daemon (147.75.109.163:38018).
May 7 23:45:19.040302 sshd[2241]: Accepted publickey for core from 147.75.109.163 port 38018 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:45:19.042678 sshd-session[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:45:19.050542 systemd-logind[1941]: New session 5 of user core.
May 7 23:45:19.064549 systemd[1]: Started session-5.scope - Session 5 of User core.
May 7 23:45:19.185380 sshd[2243]: Connection closed by 147.75.109.163 port 38018
May 7 23:45:19.186223 sshd-session[2241]: pam_unix(sshd:session): session closed for user core
May 7 23:45:19.192657 systemd[1]: sshd@4-172.31.25.188:22-147.75.109.163:38018.service: Deactivated successfully.
May 7 23:45:19.195711 systemd[1]: session-5.scope: Deactivated successfully.
May 7 23:45:19.196966 systemd-logind[1941]: Session 5 logged out. Waiting for processes to exit.
May 7 23:45:19.199092 systemd-logind[1941]: Removed session 5.
May 7 23:45:19.228246 systemd[1]: Started sshd@5-172.31.25.188:22-147.75.109.163:38020.service - OpenSSH per-connection server daemon (147.75.109.163:38020).
May 7 23:45:19.419639 sshd[2249]: Accepted publickey for core from 147.75.109.163 port 38020 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:45:19.422121 sshd-session[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:45:19.432709 systemd-logind[1941]: New session 6 of user core.
May 7 23:45:19.439554 systemd[1]: Started session-6.scope - Session 6 of User core.
May 7 23:45:19.566544 sshd[2251]: Connection closed by 147.75.109.163 port 38020
May 7 23:45:19.567483 sshd-session[2249]: pam_unix(sshd:session): session closed for user core
May 7 23:45:19.572660 systemd-logind[1941]: Session 6 logged out. Waiting for processes to exit.
May 7 23:45:19.573288 systemd[1]: sshd@5-172.31.25.188:22-147.75.109.163:38020.service: Deactivated successfully.
May 7 23:45:19.576461 systemd[1]: session-6.scope: Deactivated successfully.
May 7 23:45:19.582043 systemd-logind[1941]: Removed session 6.
May 7 23:45:19.614733 systemd[1]: Started sshd@6-172.31.25.188:22-147.75.109.163:38026.service - OpenSSH per-connection server daemon (147.75.109.163:38026).
May 7 23:45:19.799124 sshd[2257]: Accepted publickey for core from 147.75.109.163 port 38026 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:45:19.801632 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:45:19.812748 systemd-logind[1941]: New session 7 of user core.
May 7 23:45:19.819603 systemd[1]: Started session-7.scope - Session 7 of User core.
May 7 23:45:19.938695 sudo[2260]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 7 23:45:19.939991 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 7 23:45:19.959704 sudo[2260]: pam_unix(sudo:session): session closed for user root
May 7 23:45:19.983564 sshd[2259]: Connection closed by 147.75.109.163 port 38026
May 7 23:45:19.984703 sshd-session[2257]: pam_unix(sshd:session): session closed for user core
May 7 23:45:19.990734 systemd-logind[1941]: Session 7 logged out. Waiting for processes to exit.
May 7 23:45:19.992401 systemd[1]: sshd@6-172.31.25.188:22-147.75.109.163:38026.service: Deactivated successfully.
May 7 23:45:19.995582 systemd[1]: session-7.scope: Deactivated successfully.
May 7 23:45:20.000033 systemd-logind[1941]: Removed session 7.
May 7 23:45:20.029106 systemd[1]: Started sshd@7-172.31.25.188:22-147.75.109.163:38032.service - OpenSSH per-connection server daemon (147.75.109.163:38032).
May 7 23:45:20.217421 sshd[2266]: Accepted publickey for core from 147.75.109.163 port 38032 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:45:20.220598 sshd-session[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:45:20.229674 systemd-logind[1941]: New session 8 of user core.
May 7 23:45:20.236595 systemd[1]: Started session-8.scope - Session 8 of User core.
May 7 23:45:20.343424 sudo[2270]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 7 23:45:20.344102 sudo[2270]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 7 23:45:20.350827 sudo[2270]: pam_unix(sudo:session): session closed for user root
May 7 23:45:20.361383 sudo[2269]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 7 23:45:20.361998 sudo[2269]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 7 23:45:20.387885 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 7 23:45:20.436921 augenrules[2292]: No rules
May 7 23:45:20.439988 systemd[1]: audit-rules.service: Deactivated successfully.
May 7 23:45:20.442344 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 7 23:45:20.444113 sudo[2269]: pam_unix(sudo:session): session closed for user root
May 7 23:45:20.468096 sshd[2268]: Connection closed by 147.75.109.163 port 38032
May 7 23:45:20.468942 sshd-session[2266]: pam_unix(sshd:session): session closed for user core
May 7 23:45:20.474508 systemd[1]: sshd@7-172.31.25.188:22-147.75.109.163:38032.service: Deactivated successfully.
May 7 23:45:20.479127 systemd[1]: session-8.scope: Deactivated successfully.
May 7 23:45:20.481922 systemd-logind[1941]: Session 8 logged out. Waiting for processes to exit.
May 7 23:45:20.484022 systemd-logind[1941]: Removed session 8.
May 7 23:45:20.509761 systemd[1]: Started sshd@8-172.31.25.188:22-147.75.109.163:38044.service - OpenSSH per-connection server daemon (147.75.109.163:38044).
May 7 23:45:20.703225 sshd[2301]: Accepted publickey for core from 147.75.109.163 port 38044 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:45:20.705623 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:45:20.715125 systemd-logind[1941]: New session 9 of user core.
May 7 23:45:20.719565 systemd[1]: Started session-9.scope - Session 9 of User core.
May 7 23:45:20.823964 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 7 23:45:20.824670 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 7 23:45:21.382140 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 7 23:45:21.385434 (dockerd)[2322]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 7 23:45:21.733331 dockerd[2322]: time="2025-05-07T23:45:21.730882469Z" level=info msg="Starting up"
May 7 23:45:21.837095 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1316292616-merged.mount: Deactivated successfully.
May 7 23:45:21.858598 systemd[1]: var-lib-docker-metacopy\x2dcheck4211548797-merged.mount: Deactivated successfully.
May 7 23:45:21.876590 dockerd[2322]: time="2025-05-07T23:45:21.876497042Z" level=info msg="Loading containers: start."
May 7 23:45:22.130400 kernel: Initializing XFRM netlink socket
May 7 23:45:22.163893 (udev-worker)[2345]: Network interface NamePolicy= disabled on kernel command line.
May 7 23:45:22.179651 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 7 23:45:22.190076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:45:22.273653 systemd-networkd[1874]: docker0: Link UP
May 7 23:45:22.326807 dockerd[2322]: time="2025-05-07T23:45:22.326739881Z" level=info msg="Loading containers: done."
May 7 23:45:22.360301 dockerd[2322]: time="2025-05-07T23:45:22.359594506Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 7 23:45:22.360301 dockerd[2322]: time="2025-05-07T23:45:22.359743004Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 7 23:45:22.360301 dockerd[2322]: time="2025-05-07T23:45:22.359975699Z" level=info msg="Daemon has completed initialization"
May 7 23:45:22.435517 dockerd[2322]: time="2025-05-07T23:45:22.434355099Z" level=info msg="API listen on /run/docker.sock"
May 7 23:45:22.436176 systemd[1]: Started docker.service - Docker Application Container Engine.
May 7 23:45:22.562564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:45:22.577846 (kubelet)[2514]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 7 23:45:22.692173 kubelet[2514]: E0507 23:45:22.691976 2514 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 7 23:45:22.701106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 7 23:45:22.702468 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 7 23:45:22.704392 systemd[1]: kubelet.service: Consumed 311ms CPU time, 94.3M memory peak.
May 7 23:45:22.832942 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3322062738-merged.mount: Deactivated successfully.
May 7 23:45:23.705550 containerd[1959]: time="2025-05-07T23:45:23.705476188Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 7 23:45:24.346928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330530695.mount: Deactivated successfully.
May 7 23:45:25.877080 containerd[1959]: time="2025-05-07T23:45:25.876810309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:25.878448 containerd[1959]: time="2025-05-07T23:45:25.878329673Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150"
May 7 23:45:25.879779 containerd[1959]: time="2025-05-07T23:45:25.879666356Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:25.885355 containerd[1959]: time="2025-05-07T23:45:25.885216232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:25.887920 containerd[1959]: time="2025-05-07T23:45:25.887623463Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.182076139s"
May 7 23:45:25.887920 containerd[1959]: time="2025-05-07T23:45:25.887692609Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 7 23:45:25.928098 containerd[1959]: time="2025-05-07T23:45:25.928046086Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 7 23:45:27.528562 containerd[1959]: time="2025-05-07T23:45:27.528224222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:27.529856 containerd[1959]: time="2025-05-07T23:45:27.529740035Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550"
May 7 23:45:27.530907 containerd[1959]: time="2025-05-07T23:45:27.530818368Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:27.537472 containerd[1959]: time="2025-05-07T23:45:27.537399500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:27.539301 containerd[1959]: time="2025-05-07T23:45:27.538956393Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.610851044s"
May 7 23:45:27.539301 containerd[1959]: time="2025-05-07T23:45:27.539012069Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 7 23:45:27.578465 containerd[1959]: time="2025-05-07T23:45:27.578294530Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 7 23:45:28.725974 containerd[1959]: time="2025-05-07T23:45:28.725891872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:28.727989 containerd[1959]: time="2025-05-07T23:45:28.727906816Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945"
May 7 23:45:28.730281 containerd[1959]: time="2025-05-07T23:45:28.729692147Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:28.736555 containerd[1959]: time="2025-05-07T23:45:28.736499401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:28.738903 containerd[1959]: time="2025-05-07T23:45:28.738848618Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.160495989s"
May 7 23:45:28.739075 containerd[1959]: time="2025-05-07T23:45:28.739046411Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 7 23:45:28.778097 containerd[1959]: time="2025-05-07T23:45:28.778047649Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 7 23:45:30.098058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921369054.mount: Deactivated successfully.
May 7 23:45:30.600168 containerd[1959]: time="2025-05-07T23:45:30.600103619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:30.607775 containerd[1959]: time="2025-05-07T23:45:30.607670837Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705"
May 7 23:45:30.607935 containerd[1959]: time="2025-05-07T23:45:30.607877794Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:30.611973 containerd[1959]: time="2025-05-07T23:45:30.611912995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:30.613519 containerd[1959]: time="2025-05-07T23:45:30.613471231Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.835145141s"
May 7 23:45:30.613693 containerd[1959]: time="2025-05-07T23:45:30.613661408Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 7 23:45:30.655997 containerd[1959]: time="2025-05-07T23:45:30.655937524Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 7 23:45:31.226238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2444716041.mount: Deactivated successfully.
May 7 23:45:32.390695 containerd[1959]: time="2025-05-07T23:45:32.390610355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:32.394013 containerd[1959]: time="2025-05-07T23:45:32.393922222Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
May 7 23:45:32.397033 containerd[1959]: time="2025-05-07T23:45:32.396961166Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:32.402587 containerd[1959]: time="2025-05-07T23:45:32.402505104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:32.405498 containerd[1959]: time="2025-05-07T23:45:32.404846117Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.748840336s"
May 7 23:45:32.405498 containerd[1959]: time="2025-05-07T23:45:32.404909901Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 7 23:45:32.443798 containerd[1959]: time="2025-05-07T23:45:32.443739253Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 7 23:45:32.745147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 7 23:45:32.754622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:45:33.004792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2318467030.mount: Deactivated successfully.
May 7 23:45:33.022721 containerd[1959]: time="2025-05-07T23:45:33.022647668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:33.024945 containerd[1959]: time="2025-05-07T23:45:33.024857023Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
May 7 23:45:33.026592 containerd[1959]: time="2025-05-07T23:45:33.026517112Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:33.035373 containerd[1959]: time="2025-05-07T23:45:33.033039629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:33.035887 containerd[1959]: time="2025-05-07T23:45:33.035816156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 591.873077ms"
May 7 23:45:33.037869 containerd[1959]: time="2025-05-07T23:45:33.037800336Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 7 23:45:33.066659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:45:33.070431 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 7 23:45:33.088401 containerd[1959]: time="2025-05-07T23:45:33.088060773Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 7 23:45:33.157189 kubelet[2684]: E0507 23:45:33.157105 2684 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 7 23:45:33.162583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 7 23:45:33.163095 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 7 23:45:33.164434 systemd[1]: kubelet.service: Consumed 283ms CPU time, 96.6M memory peak.
May 7 23:45:33.670183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount506427705.mount: Deactivated successfully.
May 7 23:45:35.717835 containerd[1959]: time="2025-05-07T23:45:35.717775434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:35.722754 containerd[1959]: time="2025-05-07T23:45:35.722657208Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
May 7 23:45:35.727838 containerd[1959]: time="2025-05-07T23:45:35.727690766Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:35.736432 containerd[1959]: time="2025-05-07T23:45:35.736332431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:45:35.739064 containerd[1959]: time="2025-05-07T23:45:35.738869905Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.650750373s"
May 7 23:45:35.739064 containerd[1959]: time="2025-05-07T23:45:35.738927729Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 7 23:45:36.449863 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 7 23:45:43.245179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 7 23:45:43.254485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:45:43.579650 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:45:43.587865 (kubelet)[2807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 7 23:45:43.670012 kubelet[2807]: E0507 23:45:43.669927 2807 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 7 23:45:43.674678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 7 23:45:43.674985 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 7 23:45:43.675827 systemd[1]: kubelet.service: Consumed 265ms CPU time, 94.4M memory peak.
May 7 23:45:44.213531 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:45:44.213884 systemd[1]: kubelet.service: Consumed 265ms CPU time, 94.4M memory peak.
May 7 23:45:44.228760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:45:44.270436 systemd[1]: Reload requested from client PID 2821 ('systemctl') (unit session-9.scope)...
May 7 23:45:44.270908 systemd[1]: Reloading...
May 7 23:45:44.545294 zram_generator::config[2869]: No configuration found.
May 7 23:45:44.774717 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 7 23:45:45.004460 systemd[1]: Reloading finished in 732 ms.
May 7 23:45:45.105581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:45:45.111389 (kubelet)[2920]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 7 23:45:45.115551 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:45:45.118025 systemd[1]: kubelet.service: Deactivated successfully.
May 7 23:45:45.118573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:45:45.118671 systemd[1]: kubelet.service: Consumed 200ms CPU time, 81.4M memory peak.
May 7 23:45:45.126869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 7 23:45:45.406382 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 7 23:45:45.426215 (kubelet)[2933]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 7 23:45:45.500806 kubelet[2933]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 7 23:45:45.502324 kubelet[2933]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 7 23:45:45.502324 kubelet[2933]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 7 23:45:45.502324 kubelet[2933]: I0507 23:45:45.501473 2933 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 7 23:45:47.293087 kubelet[2933]: I0507 23:45:47.293012 2933 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 7 23:45:47.293087 kubelet[2933]: I0507 23:45:47.293071 2933 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 7 23:45:47.293697 kubelet[2933]: I0507 23:45:47.293438 2933 server.go:927] "Client rotation is on, will bootstrap in background"
May 7 23:45:47.320306 kubelet[2933]: E0507 23:45:47.318766 2933 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.25.188:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.25.188:6443: connect: connection refused
May 7 23:45:47.320306 kubelet[2933]: I0507 23:45:47.319878 2933 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 7 23:45:47.337144 kubelet[2933]: I0507 23:45:47.337093 2933 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 7 23:45:47.339573 kubelet[2933]: I0507 23:45:47.339488 2933 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 7 23:45:47.339869 kubelet[2933]: I0507 23:45:47.339567 2933 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-188","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 7 23:45:47.340057 kubelet[2933]: I0507 23:45:47.339889 2933 topology_manager.go:138] "Creating topology manager with none policy"
May 7 23:45:47.340057 kubelet[2933]: I0507 23:45:47.339910 2933 container_manager_linux.go:301] "Creating device plugin manager"
May 7 23:45:47.340171 kubelet[2933]: I0507 23:45:47.340145 2933 state_mem.go:36] "Initialized new in-memory state store"
May 7 23:45:47.342768 kubelet[2933]: W0507 23:45:47.342620 2933 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-188&limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused
May 7 23:45:47.342768 kubelet[2933]: E0507 23:45:47.342728 2933 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.25.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-188&limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused
May 7 23:45:47.343545 kubelet[2933]: I0507 23:45:47.343499 2933 kubelet.go:400] "Attempting to sync node with API server"
May 7 23:45:47.343642 kubelet[2933]: I0507 23:45:47.343557 2933 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 7 23:45:47.343709 kubelet[2933]: I0507 23:45:47.343652 2933 kubelet.go:312] "Adding apiserver pod source"
May 7 23:45:47.343767 kubelet[2933]: I0507 23:45:47.343708 2933 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 7 23:45:47.347289 kubelet[2933]: I0507 23:45:47.345351 2933 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 7 23:45:47.347289 kubelet[2933]: I0507 23:45:47.345692 2933 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 7 23:45:47.347289 kubelet[2933]: W0507 23:45:47.345782 2933 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 7 23:45:47.347289 kubelet[2933]: I0507 23:45:47.346856 2933 server.go:1264] "Started kubelet"
May 7 23:45:47.347289 kubelet[2933]: W0507 23:45:47.347042 2933 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.188:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused
May 7 23:45:47.347289 kubelet[2933]: E0507 23:45:47.347112 2933 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.25.188:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused
May 7 23:45:47.356706 kubelet[2933]: E0507 23:45:47.356406 2933 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.188:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.188:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-188.183d636c5070fd95 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-188,UID:ip-172-31-25-188,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-188,},FirstTimestamp:2025-05-07 23:45:47.346820501 +0000 UTC m=+1.914009459,LastTimestamp:2025-05-07 23:45:47.346820501 +0000 UTC m=+1.914009459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-188,}"
May 7 23:45:47.358149 kubelet[2933]: I0507 23:45:47.358107 2933 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 7 23:45:47.362723 kubelet[2933]: I0507 23:45:47.362626 2933 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 7 23:45:47.364402 kubelet[2933]: I0507 23:45:47.364351 2933 server.go:455] "Adding debug handlers to kubelet server"
May 7 23:45:47.366124 kubelet[2933]: I0507 23:45:47.366018 2933 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 7 23:45:47.366638 kubelet[2933]: I0507 23:45:47.366598 2933 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 7 23:45:47.368382 kubelet[2933]: I0507 23:45:47.367681 2933 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 7 23:45:47.368382 kubelet[2933]: I0507 23:45:47.367962 2933 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 7 23:45:47.371716 kubelet[2933]: I0507 23:45:47.371656 2933 reconciler.go:26] "Reconciler: start to sync state"
May 7 23:45:47.372486 kubelet[2933]: E0507 23:45:47.372348 2933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-188?timeout=10s\": dial tcp 172.31.25.188:6443: connect: connection refused" interval="200ms"
May 7 23:45:47.372486 kubelet[2933]: W0507 23:45:47.372376 2933 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused
May 7 23:45:47.372486 kubelet[2933]: E0507 23:45:47.372459 2933 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.25.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused
May 7 23:45:47.372759 kubelet[2933]: I0507 23:45:47.372724 2933 factory.go:221] Registration of the systemd container factory successfully
May 7 23:45:47.373663 kubelet[2933]: I0507 23:45:47.372888 2933 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 7 23:45:47.373663 kubelet[2933]: E0507 23:45:47.373211 2933 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 7 23:45:47.378282 kubelet[2933]: I0507 23:45:47.377900 2933 factory.go:221] Registration of the containerd container factory successfully
May 7 23:45:47.412587 kubelet[2933]: I0507 23:45:47.412507 2933 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 7 23:45:47.416174 kubelet[2933]: I0507 23:45:47.416118 2933 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 7 23:45:47.417073 kubelet[2933]: I0507 23:45:47.416460 2933 status_manager.go:217] "Starting to sync pod status with apiserver"
May 7 23:45:47.417073 kubelet[2933]: I0507 23:45:47.416517 2933 kubelet.go:2337] "Starting kubelet main sync loop"
May 7 23:45:47.417073 kubelet[2933]: E0507 23:45:47.416594 2933 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 7 23:45:47.427616 kubelet[2933]: W0507 23:45:47.427538 2933 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused
May 7 23:45:47.427833 kubelet[2933]: E0507 23:45:47.427809 2933 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.25.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused
May 7 23:45:47.430516 kubelet[2933]: I0507 23:45:47.430480 2933 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 7 23:45:47.430881 kubelet[2933]: I0507 23:45:47.430854 2933 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 7 23:45:47.431018 kubelet[2933]: I0507 23:45:47.431000 2933 state_mem.go:36] "Initialized new in-memory state store"
May 7 23:45:47.437400 kubelet[2933]: I0507 23:45:47.437365 2933 policy_none.go:49] "None policy: Start"
May 7 23:45:47.439067 kubelet[2933]: I0507 23:45:47.438565 2933 memory_manager.go:170] "Starting memorymanager" policy="None"
May 7 23:45:47.439067 kubelet[2933]: I0507 23:45:47.438607 2933 state_mem.go:35] "Initializing new in-memory state store"
May 7 23:45:47.455590 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 7 23:45:47.470696 kubelet[2933]: I0507 23:45:47.470048 2933 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-188"
May 7 23:45:47.471434 kubelet[2933]: E0507 23:45:47.471102 2933 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.25.188:6443/api/v1/nodes\": dial tcp 172.31.25.188:6443: connect: connection refused" node="ip-172-31-25-188"
May 7 23:45:47.473960 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 7 23:45:47.479914 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 7 23:45:47.493203 kubelet[2933]: I0507 23:45:47.493016 2933 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 7 23:45:47.495457 kubelet[2933]: I0507 23:45:47.494847 2933 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 7 23:45:47.495457 kubelet[2933]: I0507 23:45:47.495063 2933 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 7 23:45:47.498354 kubelet[2933]: E0507 23:45:47.498166 2933 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-188\" not found"
May 7 23:45:47.517639 kubelet[2933]: I0507 23:45:47.517545 2933 topology_manager.go:215] "Topology Admit Handler" podUID="921de854697e06a5911c3d95c41f8259" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-25-188"
May 7 23:45:47.520369 kubelet[2933]: I0507 23:45:47.519939 2933 topology_manager.go:215] "Topology Admit Handler" podUID="6f55b0a90d4b3ca98934ff95ee5cd688" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-25-188"
May 7 23:45:47.524867 kubelet[2933]: I0507 23:45:47.523128 2933 topology_manager.go:215] "Topology Admit Handler" podUID="6ce3d1380305c7f21eef189c24518256" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-25-188"
May 7 23:45:47.535991 systemd[1]: Created slice kubepods-burstable-pod921de854697e06a5911c3d95c41f8259.slice - libcontainer container kubepods-burstable-pod921de854697e06a5911c3d95c41f8259.slice.
May 7 23:45:47.557555 systemd[1]: Created slice kubepods-burstable-pod6f55b0a90d4b3ca98934ff95ee5cd688.slice - libcontainer container kubepods-burstable-pod6f55b0a90d4b3ca98934ff95ee5cd688.slice.
May 7 23:45:47.567698 systemd[1]: Created slice kubepods-burstable-pod6ce3d1380305c7f21eef189c24518256.slice - libcontainer container kubepods-burstable-pod6ce3d1380305c7f21eef189c24518256.slice.
May 7 23:45:47.572552 kubelet[2933]: I0507 23:45:47.572486 2933 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f55b0a90d4b3ca98934ff95ee5cd688-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"6f55b0a90d4b3ca98934ff95ee5cd688\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188"
May 7 23:45:47.572700 kubelet[2933]: I0507 23:45:47.572558 2933 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ce3d1380305c7f21eef189c24518256-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-188\" (UID: \"6ce3d1380305c7f21eef189c24518256\") " pod="kube-system/kube-scheduler-ip-172-31-25-188"
May 7 23:45:47.572700 kubelet[2933]: I0507 23:45:47.572599 2933 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f55b0a90d4b3ca98934ff95ee5cd688-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"6f55b0a90d4b3ca98934ff95ee5cd688\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188"
May 7 23:45:47.572700 kubelet[2933]: I0507 23:45:47.572636 2933 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6f55b0a90d4b3ca98934ff95ee5cd688-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"6f55b0a90d4b3ca98934ff95ee5cd688\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188"
May 7 23:45:47.572700 kubelet[2933]: I0507 23:45:47.572682 2933 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f55b0a90d4b3ca98934ff95ee5cd688-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"6f55b0a90d4b3ca98934ff95ee5cd688\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188"
May 7 23:45:47.572902 kubelet[2933]: I0507 23:45:47.572717 2933 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f55b0a90d4b3ca98934ff95ee5cd688-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"6f55b0a90d4b3ca98934ff95ee5cd688\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188"
May 7 23:45:47.572902 kubelet[2933]: I0507 23:45:47.572758 2933 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/921de854697e06a5911c3d95c41f8259-ca-certs\") pod \"kube-apiserver-ip-172-31-25-188\" (UID: \"921de854697e06a5911c3d95c41f8259\") " pod="kube-system/kube-apiserver-ip-172-31-25-188"
May 7 23:45:47.572902 kubelet[2933]: I0507 23:45:47.572826 2933 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/921de854697e06a5911c3d95c41f8259-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-188\" (UID: \"921de854697e06a5911c3d95c41f8259\") " pod="kube-system/kube-apiserver-ip-172-31-25-188"
May 7 23:45:47.572902 kubelet[2933]: I0507 23:45:47.572864 2933 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/921de854697e06a5911c3d95c41f8259-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-188\" (UID: \"921de854697e06a5911c3d95c41f8259\") " pod="kube-system/kube-apiserver-ip-172-31-25-188"
May 7 23:45:47.573572 kubelet[2933]: E0507 23:45:47.573484 2933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-188?timeout=10s\": dial tcp 172.31.25.188:6443: connect: connection refused" interval="400ms"
May 7 23:45:47.674149 kubelet[2933]: I0507 23:45:47.674026 2933 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-188"
May 7 23:45:47.674611 kubelet[2933]: E0507 23:45:47.674549 2933 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.25.188:6443/api/v1/nodes\": dial tcp 172.31.25.188:6443: connect: connection refused" node="ip-172-31-25-188"
May 7 23:45:47.848831 containerd[1959]: time="2025-05-07T23:45:47.848665559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-188,Uid:921de854697e06a5911c3d95c41f8259,Namespace:kube-system,Attempt:0,}"
May 7 23:45:47.868406 containerd[1959]: time="2025-05-07T23:45:47.868345314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-188,Uid:6f55b0a90d4b3ca98934ff95ee5cd688,Namespace:kube-system,Attempt:0,}"
May 7 23:45:47.873276 containerd[1959]: time="2025-05-07T23:45:47.873134482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-188,Uid:6ce3d1380305c7f21eef189c24518256,Namespace:kube-system,Attempt:0,}"
May 7 23:45:47.974796 kubelet[2933]: E0507 23:45:47.974710 2933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-188?timeout=10s\": dial tcp 172.31.25.188:6443: connect: connection refused" interval="800ms"
May 7 23:45:48.077466 kubelet[2933]: I0507 23:45:48.077388 2933 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-188"
May 7 23:45:48.077961 kubelet[2933]: E0507 23:45:48.077882 2933 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.25.188:6443/api/v1/nodes\": dial tcp 172.31.25.188:6443: connect: connection refused" node="ip-172-31-25-188"
May 7 23:45:48.392917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2360425149.mount: Deactivated successfully.
May 7 23:45:48.406330 containerd[1959]: time="2025-05-07T23:45:48.405437667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 7 23:45:48.409963 containerd[1959]: time="2025-05-07T23:45:48.409873360Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
May 7 23:45:48.412199 kubelet[2933]: W0507 23:45:48.412112 2933 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused
May 7 23:45:48.412769 kubelet[2933]: E0507 23:45:48.412205 2933 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.25.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused
May 7 23:45:48.416654 containerd[1959]: time="2025-05-07T23:45:48.416559104Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 7 23:45:48.422716 containerd[1959]: time="2025-05-07T23:45:48.422636871Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 7 23:45:48.424277 containerd[1959]: time="2025-05-07T23:45:48.424171671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 7 23:45:48.430278 containerd[1959]:
time="2025-05-07T23:45:48.428478405Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:45:48.430684 containerd[1959]: time="2025-05-07T23:45:48.430614884Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 7 23:45:48.433083 containerd[1959]: time="2025-05-07T23:45:48.433001750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 7 23:45:48.439700 containerd[1959]: time="2025-05-07T23:45:48.439629347Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 590.850829ms" May 7 23:45:48.444374 containerd[1959]: time="2025-05-07T23:45:48.444314947Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 571.073383ms" May 7 23:45:48.444830 containerd[1959]: time="2025-05-07T23:45:48.444790773Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 576.337321ms" May 7 23:45:48.633782 
containerd[1959]: time="2025-05-07T23:45:48.633140639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:45:48.633782 containerd[1959]: time="2025-05-07T23:45:48.633277359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:45:48.633782 containerd[1959]: time="2025-05-07T23:45:48.633305377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:45:48.633782 containerd[1959]: time="2025-05-07T23:45:48.633445071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:45:48.642505 containerd[1959]: time="2025-05-07T23:45:48.640913915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:45:48.642887 containerd[1959]: time="2025-05-07T23:45:48.642499773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:45:48.642887 containerd[1959]: time="2025-05-07T23:45:48.642539102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:45:48.644034 containerd[1959]: time="2025-05-07T23:45:48.642907437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:45:48.649014 containerd[1959]: time="2025-05-07T23:45:48.648484120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:45:48.649014 containerd[1959]: time="2025-05-07T23:45:48.648581667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:45:48.649014 containerd[1959]: time="2025-05-07T23:45:48.648618860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:45:48.649014 containerd[1959]: time="2025-05-07T23:45:48.648778153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:45:48.656858 kubelet[2933]: W0507 23:45:48.656713 2933 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-188&limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused May 7 23:45:48.656858 kubelet[2933]: E0507 23:45:48.656820 2933 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.25.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-188&limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused May 7 23:45:48.687017 systemd[1]: Started cri-containerd-efb4b20aa15169e1669b47a705c586e3b599ab397ac19149a289e753883e8856.scope - libcontainer container efb4b20aa15169e1669b47a705c586e3b599ab397ac19149a289e753883e8856. May 7 23:45:48.700147 systemd[1]: Started cri-containerd-bfa778508d0c3c054cdb46b5ae41b31c638e7443e9cafd614d7630d0d4e6c210.scope - libcontainer container bfa778508d0c3c054cdb46b5ae41b31c638e7443e9cafd614d7630d0d4e6c210. May 7 23:45:48.723845 systemd[1]: Started cri-containerd-56f1d7efaff03117e9d6450d52303bb155d89984275e143cd56e0cc38994294c.scope - libcontainer container 56f1d7efaff03117e9d6450d52303bb155d89984275e143cd56e0cc38994294c. 
May 7 23:45:48.738583 kubelet[2933]: W0507 23:45:48.738493 2933 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.188:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused May 7 23:45:48.738728 kubelet[2933]: E0507 23:45:48.738592 2933 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.25.188:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused May 7 23:45:48.776013 kubelet[2933]: E0507 23:45:48.775942 2933 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-188?timeout=10s\": dial tcp 172.31.25.188:6443: connect: connection refused" interval="1.6s" May 7 23:45:48.780444 kubelet[2933]: W0507 23:45:48.780339 2933 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused May 7 23:45:48.780444 kubelet[2933]: E0507 23:45:48.780409 2933 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.25.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.188:6443: connect: connection refused May 7 23:45:48.818816 containerd[1959]: time="2025-05-07T23:45:48.818540066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-188,Uid:6f55b0a90d4b3ca98934ff95ee5cd688,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfa778508d0c3c054cdb46b5ae41b31c638e7443e9cafd614d7630d0d4e6c210\"" May 7 23:45:48.829862 containerd[1959]: 
time="2025-05-07T23:45:48.829778072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-188,Uid:921de854697e06a5911c3d95c41f8259,Namespace:kube-system,Attempt:0,} returns sandbox id \"efb4b20aa15169e1669b47a705c586e3b599ab397ac19149a289e753883e8856\"" May 7 23:45:48.838301 containerd[1959]: time="2025-05-07T23:45:48.837883137Z" level=info msg="CreateContainer within sandbox \"efb4b20aa15169e1669b47a705c586e3b599ab397ac19149a289e753883e8856\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 7 23:45:48.838301 containerd[1959]: time="2025-05-07T23:45:48.837924697Z" level=info msg="CreateContainer within sandbox \"bfa778508d0c3c054cdb46b5ae41b31c638e7443e9cafd614d7630d0d4e6c210\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 7 23:45:48.857648 containerd[1959]: time="2025-05-07T23:45:48.857487558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-188,Uid:6ce3d1380305c7f21eef189c24518256,Namespace:kube-system,Attempt:0,} returns sandbox id \"56f1d7efaff03117e9d6450d52303bb155d89984275e143cd56e0cc38994294c\"" May 7 23:45:48.863913 containerd[1959]: time="2025-05-07T23:45:48.863517686Z" level=info msg="CreateContainer within sandbox \"56f1d7efaff03117e9d6450d52303bb155d89984275e143cd56e0cc38994294c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 7 23:45:48.880952 kubelet[2933]: I0507 23:45:48.880910 2933 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-188" May 7 23:45:48.881734 kubelet[2933]: E0507 23:45:48.881664 2933 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.25.188:6443/api/v1/nodes\": dial tcp 172.31.25.188:6443: connect: connection refused" node="ip-172-31-25-188" May 7 23:45:48.887962 containerd[1959]: time="2025-05-07T23:45:48.887896954Z" level=info msg="CreateContainer within sandbox 
\"bfa778508d0c3c054cdb46b5ae41b31c638e7443e9cafd614d7630d0d4e6c210\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"06609b840c9f561a8c3781259e0fa65779cba1d8877d610d11be9a7a4bba107d\"" May 7 23:45:48.889174 containerd[1959]: time="2025-05-07T23:45:48.889112930Z" level=info msg="StartContainer for \"06609b840c9f561a8c3781259e0fa65779cba1d8877d610d11be9a7a4bba107d\"" May 7 23:45:48.896337 containerd[1959]: time="2025-05-07T23:45:48.894161576Z" level=info msg="CreateContainer within sandbox \"efb4b20aa15169e1669b47a705c586e3b599ab397ac19149a289e753883e8856\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8bb02452258ca9c194f03bd2fafc40b94e30268f602dfb7108ff68c0ba05364b\"" May 7 23:45:48.899807 containerd[1959]: time="2025-05-07T23:45:48.899732933Z" level=info msg="StartContainer for \"8bb02452258ca9c194f03bd2fafc40b94e30268f602dfb7108ff68c0ba05364b\"" May 7 23:45:48.930680 containerd[1959]: time="2025-05-07T23:45:48.930606509Z" level=info msg="CreateContainer within sandbox \"56f1d7efaff03117e9d6450d52303bb155d89984275e143cd56e0cc38994294c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7a66a89709b9c37e8c7378566eecdbd112306441df990d11f4b93c029376b771\"" May 7 23:45:48.931889 containerd[1959]: time="2025-05-07T23:45:48.931841879Z" level=info msg="StartContainer for \"7a66a89709b9c37e8c7378566eecdbd112306441df990d11f4b93c029376b771\"" May 7 23:45:48.969630 systemd[1]: Started cri-containerd-06609b840c9f561a8c3781259e0fa65779cba1d8877d610d11be9a7a4bba107d.scope - libcontainer container 06609b840c9f561a8c3781259e0fa65779cba1d8877d610d11be9a7a4bba107d. May 7 23:45:48.983601 systemd[1]: Started cri-containerd-8bb02452258ca9c194f03bd2fafc40b94e30268f602dfb7108ff68c0ba05364b.scope - libcontainer container 8bb02452258ca9c194f03bd2fafc40b94e30268f602dfb7108ff68c0ba05364b. 
May 7 23:45:49.022872 systemd[1]: Started cri-containerd-7a66a89709b9c37e8c7378566eecdbd112306441df990d11f4b93c029376b771.scope - libcontainer container 7a66a89709b9c37e8c7378566eecdbd112306441df990d11f4b93c029376b771. May 7 23:45:49.100233 containerd[1959]: time="2025-05-07T23:45:49.100006299Z" level=info msg="StartContainer for \"8bb02452258ca9c194f03bd2fafc40b94e30268f602dfb7108ff68c0ba05364b\" returns successfully" May 7 23:45:49.130627 containerd[1959]: time="2025-05-07T23:45:49.130545830Z" level=info msg="StartContainer for \"06609b840c9f561a8c3781259e0fa65779cba1d8877d610d11be9a7a4bba107d\" returns successfully" May 7 23:45:49.184580 containerd[1959]: time="2025-05-07T23:45:49.184419699Z" level=info msg="StartContainer for \"7a66a89709b9c37e8c7378566eecdbd112306441df990d11f4b93c029376b771\" returns successfully" May 7 23:45:50.484142 kubelet[2933]: I0507 23:45:50.484093 2933 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-188" May 7 23:45:50.670291 update_engine[1942]: I20250507 23:45:50.669492 1942 update_attempter.cc:509] Updating boot flags... 
May 7 23:45:50.819295 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3222) May 7 23:45:51.282319 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3221) May 7 23:45:53.101853 kubelet[2933]: E0507 23:45:53.101783 2933 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-188\" not found" node="ip-172-31-25-188" May 7 23:45:53.154277 kubelet[2933]: E0507 23:45:53.153985 2933 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-25-188.183d636c5070fd95 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-188,UID:ip-172-31-25-188,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-188,},FirstTimestamp:2025-05-07 23:45:47.346820501 +0000 UTC m=+1.914009459,LastTimestamp:2025-05-07 23:45:47.346820501 +0000 UTC m=+1.914009459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-188,}" May 7 23:45:53.210802 kubelet[2933]: E0507 23:45:53.210643 2933 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-25-188.183d636c5203749f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-188,UID:ip-172-31-25-188,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-25-188,},FirstTimestamp:2025-05-07 23:45:47.373196447 +0000 UTC m=+1.940385429,LastTimestamp:2025-05-07 23:45:47.373196447 +0000 UTC 
m=+1.940385429,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-188,}" May 7 23:45:53.266170 kubelet[2933]: I0507 23:45:53.266103 2933 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-25-188" May 7 23:45:53.300508 kubelet[2933]: E0507 23:45:53.300072 2933 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-25-188.183d636c54f0d67f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-188,UID:ip-172-31-25-188,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-25-188 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-25-188,},FirstTimestamp:2025-05-07 23:45:47.422307967 +0000 UTC m=+1.989496925,LastTimestamp:2025-05-07 23:45:47.422307967 +0000 UTC m=+1.989496925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-188,}" May 7 23:45:53.350391 kubelet[2933]: I0507 23:45:53.350034 2933 apiserver.go:52] "Watching apiserver" May 7 23:45:53.368709 kubelet[2933]: I0507 23:45:53.368538 2933 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 7 23:45:53.376057 kubelet[2933]: E0507 23:45:53.375882 2933 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-25-188.183d636c54f103cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-188,UID:ip-172-31-25-188,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-172-31-25-188 status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-172-31-25-188,},FirstTimestamp:2025-05-07 23:45:47.422319565 +0000 UTC m=+1.989508523,LastTimestamp:2025-05-07 23:45:47.422319565 +0000 UTC m=+1.989508523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-188,}" May 7 23:45:55.376767 systemd[1]: Reload requested from client PID 3392 ('systemctl') (unit session-9.scope)... May 7 23:45:55.376794 systemd[1]: Reloading... May 7 23:45:55.588301 zram_generator::config[3440]: No configuration found. May 7 23:45:55.822616 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 7 23:45:56.087433 systemd[1]: Reloading finished in 709 ms. May 7 23:45:56.144906 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:45:56.161559 systemd[1]: kubelet.service: Deactivated successfully. May 7 23:45:56.161990 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:45:56.162075 systemd[1]: kubelet.service: Consumed 2.690s CPU time, 114.1M memory peak. May 7 23:45:56.171870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 7 23:45:56.495586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 7 23:45:56.498655 (kubelet)[3497]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 7 23:45:56.606281 kubelet[3497]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 7 23:45:56.606281 kubelet[3497]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 7 23:45:56.606281 kubelet[3497]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 7 23:45:56.606281 kubelet[3497]: I0507 23:45:56.605626 3497 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 7 23:45:56.616042 kubelet[3497]: I0507 23:45:56.614219 3497 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 7 23:45:56.616042 kubelet[3497]: I0507 23:45:56.614390 3497 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 7 23:45:56.616042 kubelet[3497]: I0507 23:45:56.614822 3497 server.go:927] "Client rotation is on, will bootstrap in background" May 7 23:45:56.617759 kubelet[3497]: I0507 23:45:56.617701 3497 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 7 23:45:56.623196 kubelet[3497]: I0507 23:45:56.623063 3497 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 7 23:45:56.640889 kubelet[3497]: I0507 23:45:56.640816 3497 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 7 23:45:56.642371 kubelet[3497]: I0507 23:45:56.641684 3497 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 7 23:45:56.642371 kubelet[3497]: I0507 23:45:56.641742 3497 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-188","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 7 23:45:56.642371 kubelet[3497]: I0507 23:45:56.642042 3497 topology_manager.go:138] "Creating topology manager with none policy" May 7 
23:45:56.642371 kubelet[3497]: I0507 23:45:56.642062 3497 container_manager_linux.go:301] "Creating device plugin manager" May 7 23:45:56.642371 kubelet[3497]: I0507 23:45:56.642128 3497 state_mem.go:36] "Initialized new in-memory state store" May 7 23:45:56.642364 sudo[3510]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 7 23:45:56.643028 sudo[3510]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 7 23:45:56.645233 kubelet[3497]: I0507 23:45:56.643540 3497 kubelet.go:400] "Attempting to sync node with API server" May 7 23:45:56.645233 kubelet[3497]: I0507 23:45:56.645170 3497 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 7 23:45:56.647061 kubelet[3497]: I0507 23:45:56.645501 3497 kubelet.go:312] "Adding apiserver pod source" May 7 23:45:56.647061 kubelet[3497]: I0507 23:45:56.646947 3497 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 7 23:45:56.654649 kubelet[3497]: I0507 23:45:56.654609 3497 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 7 23:45:56.655131 kubelet[3497]: I0507 23:45:56.655107 3497 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 7 23:45:56.656068 kubelet[3497]: I0507 23:45:56.655913 3497 server.go:1264] "Started kubelet" May 7 23:45:56.673278 kubelet[3497]: I0507 23:45:56.671415 3497 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 7 23:45:56.692105 kubelet[3497]: I0507 23:45:56.692040 3497 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 7 23:45:56.699971 kubelet[3497]: I0507 23:45:56.699724 3497 server.go:455] "Adding debug handlers to kubelet server" May 7 23:45:56.703738 kubelet[3497]: I0507 23:45:56.702715 3497 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 7 23:45:56.703738 
kubelet[3497]: I0507 23:45:56.703099 3497 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 7 23:45:56.707222 kubelet[3497]: I0507 23:45:56.707183 3497 volume_manager.go:291] "Starting Kubelet Volume Manager" May 7 23:45:56.709727 kubelet[3497]: I0507 23:45:56.709667 3497 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 7 23:45:56.711171 kubelet[3497]: I0507 23:45:56.711137 3497 reconciler.go:26] "Reconciler: start to sync state" May 7 23:45:56.724692 kubelet[3497]: I0507 23:45:56.724630 3497 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 7 23:45:56.728682 kubelet[3497]: I0507 23:45:56.728028 3497 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 7 23:45:56.728682 kubelet[3497]: I0507 23:45:56.728097 3497 status_manager.go:217] "Starting to sync pod status with apiserver" May 7 23:45:56.728682 kubelet[3497]: I0507 23:45:56.728132 3497 kubelet.go:2337] "Starting kubelet main sync loop" May 7 23:45:56.728682 kubelet[3497]: E0507 23:45:56.728203 3497 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 7 23:45:56.752351 kubelet[3497]: I0507 23:45:56.748921 3497 factory.go:221] Registration of the systemd container factory successfully May 7 23:45:56.752351 kubelet[3497]: I0507 23:45:56.749124 3497 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 7 23:45:56.762744 kubelet[3497]: I0507 23:45:56.761359 3497 factory.go:221] Registration of the containerd container factory successfully May 7 23:45:56.821058 kubelet[3497]: I0507 23:45:56.820616 3497 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-188" May 7 23:45:56.828580 
kubelet[3497]: E0507 23:45:56.828537 3497 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 7 23:45:56.852883 kubelet[3497]: I0507 23:45:56.847795 3497 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-25-188" May 7 23:45:56.852883 kubelet[3497]: I0507 23:45:56.849397 3497 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-25-188" May 7 23:45:56.889541 kubelet[3497]: I0507 23:45:56.889480 3497 cpu_manager.go:214] "Starting CPU manager" policy="none" May 7 23:45:56.889541 kubelet[3497]: I0507 23:45:56.889515 3497 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 7 23:45:56.889755 kubelet[3497]: I0507 23:45:56.889567 3497 state_mem.go:36] "Initialized new in-memory state store" May 7 23:45:56.890836 kubelet[3497]: I0507 23:45:56.889826 3497 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 7 23:45:56.890836 kubelet[3497]: I0507 23:45:56.889858 3497 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 7 23:45:56.890836 kubelet[3497]: I0507 23:45:56.889897 3497 policy_none.go:49] "None policy: Start" May 7 23:45:56.891162 kubelet[3497]: I0507 23:45:56.891104 3497 memory_manager.go:170] "Starting memorymanager" policy="None" May 7 23:45:56.891162 kubelet[3497]: I0507 23:45:56.891151 3497 state_mem.go:35] "Initializing new in-memory state store" May 7 23:45:56.891529 kubelet[3497]: I0507 23:45:56.891497 3497 state_mem.go:75] "Updated machine memory state" May 7 23:45:56.900922 kubelet[3497]: I0507 23:45:56.900864 3497 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 7 23:45:56.901643 kubelet[3497]: I0507 23:45:56.901161 3497 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 7 23:45:56.902375 kubelet[3497]: I0507 23:45:56.902141 3497 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 7 
23:45:57.031324 kubelet[3497]: I0507 23:45:57.029738 3497 topology_manager.go:215] "Topology Admit Handler" podUID="6f55b0a90d4b3ca98934ff95ee5cd688" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-25-188" May 7 23:45:57.031324 kubelet[3497]: I0507 23:45:57.029926 3497 topology_manager.go:215] "Topology Admit Handler" podUID="6ce3d1380305c7f21eef189c24518256" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-25-188" May 7 23:45:57.031324 kubelet[3497]: I0507 23:45:57.029996 3497 topology_manager.go:215] "Topology Admit Handler" podUID="921de854697e06a5911c3d95c41f8259" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-25-188" May 7 23:45:57.121915 kubelet[3497]: I0507 23:45:57.121702 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6f55b0a90d4b3ca98934ff95ee5cd688-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"6f55b0a90d4b3ca98934ff95ee5cd688\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188" May 7 23:45:57.123292 kubelet[3497]: I0507 23:45:57.123218 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f55b0a90d4b3ca98934ff95ee5cd688-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"6f55b0a90d4b3ca98934ff95ee5cd688\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188" May 7 23:45:57.124613 kubelet[3497]: I0507 23:45:57.123812 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6f55b0a90d4b3ca98934ff95ee5cd688-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"6f55b0a90d4b3ca98934ff95ee5cd688\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188" May 7 23:45:57.124613 kubelet[3497]: 
I0507 23:45:57.123881 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/921de854697e06a5911c3d95c41f8259-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-188\" (UID: \"921de854697e06a5911c3d95c41f8259\") " pod="kube-system/kube-apiserver-ip-172-31-25-188" May 7 23:45:57.124613 kubelet[3497]: I0507 23:45:57.123918 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6f55b0a90d4b3ca98934ff95ee5cd688-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"6f55b0a90d4b3ca98934ff95ee5cd688\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188" May 7 23:45:57.124613 kubelet[3497]: I0507 23:45:57.123954 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ce3d1380305c7f21eef189c24518256-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-188\" (UID: \"6ce3d1380305c7f21eef189c24518256\") " pod="kube-system/kube-scheduler-ip-172-31-25-188" May 7 23:45:57.124613 kubelet[3497]: I0507 23:45:57.123988 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/921de854697e06a5911c3d95c41f8259-ca-certs\") pod \"kube-apiserver-ip-172-31-25-188\" (UID: \"921de854697e06a5911c3d95c41f8259\") " pod="kube-system/kube-apiserver-ip-172-31-25-188" May 7 23:45:57.124977 kubelet[3497]: I0507 23:45:57.124034 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/921de854697e06a5911c3d95c41f8259-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-188\" (UID: \"921de854697e06a5911c3d95c41f8259\") " pod="kube-system/kube-apiserver-ip-172-31-25-188" May 7 
23:45:57.124977 kubelet[3497]: I0507 23:45:57.124074 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6f55b0a90d4b3ca98934ff95ee5cd688-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-188\" (UID: \"6f55b0a90d4b3ca98934ff95ee5cd688\") " pod="kube-system/kube-controller-manager-ip-172-31-25-188" May 7 23:45:57.583697 sudo[3510]: pam_unix(sudo:session): session closed for user root May 7 23:45:57.655349 kubelet[3497]: I0507 23:45:57.653425 3497 apiserver.go:52] "Watching apiserver" May 7 23:45:57.711899 kubelet[3497]: I0507 23:45:57.711801 3497 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 7 23:45:57.756368 kubelet[3497]: I0507 23:45:57.756212 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-188" podStartSLOduration=0.75616151 podStartE2EDuration="756.16151ms" podCreationTimestamp="2025-05-07 23:45:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:45:57.732501666 +0000 UTC m=+1.222350896" watchObservedRunningTime="2025-05-07 23:45:57.75616151 +0000 UTC m=+1.246010764" May 7 23:45:57.785928 kubelet[3497]: I0507 23:45:57.785845 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-188" podStartSLOduration=0.785822216 podStartE2EDuration="785.822216ms" podCreationTimestamp="2025-05-07 23:45:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:45:57.758852688 +0000 UTC m=+1.248701918" watchObservedRunningTime="2025-05-07 23:45:57.785822216 +0000 UTC m=+1.275671470" May 7 23:45:57.838622 kubelet[3497]: I0507 23:45:57.838283 3497 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-188" podStartSLOduration=0.838229196 podStartE2EDuration="838.229196ms" podCreationTimestamp="2025-05-07 23:45:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:45:57.787206108 +0000 UTC m=+1.277055362" watchObservedRunningTime="2025-05-07 23:45:57.838229196 +0000 UTC m=+1.328078438" May 7 23:46:00.318800 sudo[2304]: pam_unix(sudo:session): session closed for user root May 7 23:46:00.343442 sshd[2303]: Connection closed by 147.75.109.163 port 38044 May 7 23:46:00.342561 sshd-session[2301]: pam_unix(sshd:session): session closed for user core May 7 23:46:00.349415 systemd-logind[1941]: Session 9 logged out. Waiting for processes to exit. May 7 23:46:00.351924 systemd[1]: sshd@8-172.31.25.188:22-147.75.109.163:38044.service: Deactivated successfully. May 7 23:46:00.356302 systemd[1]: session-9.scope: Deactivated successfully. May 7 23:46:00.358549 systemd[1]: session-9.scope: Consumed 12.384s CPU time, 292.8M memory peak. May 7 23:46:00.362740 systemd-logind[1941]: Removed session 9. May 7 23:46:10.940884 kubelet[3497]: I0507 23:46:10.940835 3497 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 7 23:46:10.942115 containerd[1959]: time="2025-05-07T23:46:10.941945191Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 7 23:46:10.943538 kubelet[3497]: I0507 23:46:10.943114 3497 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 7 23:46:11.565062 kubelet[3497]: I0507 23:46:11.564327 3497 topology_manager.go:215] "Topology Admit Handler" podUID="b5bbdeff-173f-4f1d-9b58-71bbc2eab690" podNamespace="kube-system" podName="kube-proxy-x62bt" May 7 23:46:11.581955 kubelet[3497]: W0507 23:46:11.581886 3497 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object May 7 23:46:11.581955 kubelet[3497]: E0507 23:46:11.581953 3497 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object May 7 23:46:11.588409 systemd[1]: Created slice kubepods-besteffort-podb5bbdeff_173f_4f1d_9b58_71bbc2eab690.slice - libcontainer container kubepods-besteffort-podb5bbdeff_173f_4f1d_9b58_71bbc2eab690.slice. 
May 7 23:46:11.611271 kubelet[3497]: I0507 23:46:11.611184 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5bbdeff-173f-4f1d-9b58-71bbc2eab690-lib-modules\") pod \"kube-proxy-x62bt\" (UID: \"b5bbdeff-173f-4f1d-9b58-71bbc2eab690\") " pod="kube-system/kube-proxy-x62bt" May 7 23:46:11.611434 kubelet[3497]: I0507 23:46:11.611277 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhckt\" (UniqueName: \"kubernetes.io/projected/b5bbdeff-173f-4f1d-9b58-71bbc2eab690-kube-api-access-dhckt\") pod \"kube-proxy-x62bt\" (UID: \"b5bbdeff-173f-4f1d-9b58-71bbc2eab690\") " pod="kube-system/kube-proxy-x62bt" May 7 23:46:11.611434 kubelet[3497]: I0507 23:46:11.611320 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5bbdeff-173f-4f1d-9b58-71bbc2eab690-xtables-lock\") pod \"kube-proxy-x62bt\" (UID: \"b5bbdeff-173f-4f1d-9b58-71bbc2eab690\") " pod="kube-system/kube-proxy-x62bt" May 7 23:46:11.611434 kubelet[3497]: I0507 23:46:11.611358 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b5bbdeff-173f-4f1d-9b58-71bbc2eab690-kube-proxy\") pod \"kube-proxy-x62bt\" (UID: \"b5bbdeff-173f-4f1d-9b58-71bbc2eab690\") " pod="kube-system/kube-proxy-x62bt" May 7 23:46:11.616238 kubelet[3497]: I0507 23:46:11.616172 3497 topology_manager.go:215] "Topology Admit Handler" podUID="b9652676-af76-490e-bdad-d776ffd569a0" podNamespace="kube-system" podName="cilium-wdhsz" May 7 23:46:11.634034 systemd[1]: Created slice kubepods-burstable-podb9652676_af76_490e_bdad_d776ffd569a0.slice - libcontainer container kubepods-burstable-podb9652676_af76_490e_bdad_d776ffd569a0.slice. 
May 7 23:46:11.715292 kubelet[3497]: I0507 23:46:11.712474 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-etc-cni-netd\") pod \"cilium-wdhsz\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " pod="kube-system/cilium-wdhsz" May 7 23:46:11.715292 kubelet[3497]: I0507 23:46:11.712564 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-cilium-cgroup\") pod \"cilium-wdhsz\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " pod="kube-system/cilium-wdhsz" May 7 23:46:11.715292 kubelet[3497]: I0507 23:46:11.712604 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-lib-modules\") pod \"cilium-wdhsz\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " pod="kube-system/cilium-wdhsz" May 7 23:46:11.715292 kubelet[3497]: I0507 23:46:11.712645 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-xtables-lock\") pod \"cilium-wdhsz\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " pod="kube-system/cilium-wdhsz" May 7 23:46:11.715292 kubelet[3497]: I0507 23:46:11.712683 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-host-proc-sys-net\") pod \"cilium-wdhsz\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " pod="kube-system/cilium-wdhsz" May 7 23:46:11.715292 kubelet[3497]: I0507 23:46:11.712721 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9652676-af76-490e-bdad-d776ffd569a0-hubble-tls\") pod \"cilium-wdhsz\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " pod="kube-system/cilium-wdhsz" May 7 23:46:11.715757 kubelet[3497]: I0507 23:46:11.712826 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-cilium-run\") pod \"cilium-wdhsz\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " pod="kube-system/cilium-wdhsz" May 7 23:46:11.715757 kubelet[3497]: I0507 23:46:11.712860 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-hostproc\") pod \"cilium-wdhsz\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " pod="kube-system/cilium-wdhsz" May 7 23:46:11.715757 kubelet[3497]: I0507 23:46:11.712892 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9652676-af76-490e-bdad-d776ffd569a0-clustermesh-secrets\") pod \"cilium-wdhsz\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " pod="kube-system/cilium-wdhsz" May 7 23:46:11.715757 kubelet[3497]: I0507 23:46:11.712925 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv78g\" (UniqueName: \"kubernetes.io/projected/b9652676-af76-490e-bdad-d776ffd569a0-kube-api-access-sv78g\") pod \"cilium-wdhsz\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " pod="kube-system/cilium-wdhsz" May 7 23:46:11.715757 kubelet[3497]: I0507 23:46:11.712986 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-bpf-maps\") pod 
\"cilium-wdhsz\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " pod="kube-system/cilium-wdhsz" May 7 23:46:11.715757 kubelet[3497]: I0507 23:46:11.713023 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-cni-path\") pod \"cilium-wdhsz\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " pod="kube-system/cilium-wdhsz" May 7 23:46:11.716078 kubelet[3497]: I0507 23:46:11.713060 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-host-proc-sys-kernel\") pod \"cilium-wdhsz\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " pod="kube-system/cilium-wdhsz" May 7 23:46:11.716078 kubelet[3497]: I0507 23:46:11.713108 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9652676-af76-490e-bdad-d776ffd569a0-cilium-config-path\") pod \"cilium-wdhsz\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " pod="kube-system/cilium-wdhsz" May 7 23:46:11.725870 kubelet[3497]: E0507 23:46:11.725815 3497 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 7 23:46:11.725870 kubelet[3497]: E0507 23:46:11.725867 3497 projected.go:200] Error preparing data for projected volume kube-api-access-dhckt for pod kube-system/kube-proxy-x62bt: configmap "kube-root-ca.crt" not found May 7 23:46:11.726128 kubelet[3497]: E0507 23:46:11.725966 3497 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b5bbdeff-173f-4f1d-9b58-71bbc2eab690-kube-api-access-dhckt podName:b5bbdeff-173f-4f1d-9b58-71bbc2eab690 nodeName:}" failed. 
No retries permitted until 2025-05-07 23:46:12.225933735 +0000 UTC m=+15.715782977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dhckt" (UniqueName: "kubernetes.io/projected/b5bbdeff-173f-4f1d-9b58-71bbc2eab690-kube-api-access-dhckt") pod "kube-proxy-x62bt" (UID: "b5bbdeff-173f-4f1d-9b58-71bbc2eab690") : configmap "kube-root-ca.crt" not found May 7 23:46:11.945564 containerd[1959]: time="2025-05-07T23:46:11.944958653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wdhsz,Uid:b9652676-af76-490e-bdad-d776ffd569a0,Namespace:kube-system,Attempt:0,}" May 7 23:46:11.961109 kubelet[3497]: I0507 23:46:11.961021 3497 topology_manager.go:215] "Topology Admit Handler" podUID="9be17240-3ab4-46d4-8a63-b1eeffa423dd" podNamespace="kube-system" podName="cilium-operator-599987898-ncb74" May 7 23:46:11.999193 systemd[1]: Created slice kubepods-besteffort-pod9be17240_3ab4_46d4_8a63_b1eeffa423dd.slice - libcontainer container kubepods-besteffort-pod9be17240_3ab4_46d4_8a63_b1eeffa423dd.slice. 
May 7 23:46:12.015951 kubelet[3497]: I0507 23:46:12.014931 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9be17240-3ab4-46d4-8a63-b1eeffa423dd-cilium-config-path\") pod \"cilium-operator-599987898-ncb74\" (UID: \"9be17240-3ab4-46d4-8a63-b1eeffa423dd\") " pod="kube-system/cilium-operator-599987898-ncb74" May 7 23:46:12.015951 kubelet[3497]: I0507 23:46:12.015004 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5gqh\" (UniqueName: \"kubernetes.io/projected/9be17240-3ab4-46d4-8a63-b1eeffa423dd-kube-api-access-f5gqh\") pod \"cilium-operator-599987898-ncb74\" (UID: \"9be17240-3ab4-46d4-8a63-b1eeffa423dd\") " pod="kube-system/cilium-operator-599987898-ncb74" May 7 23:46:12.041528 containerd[1959]: time="2025-05-07T23:46:12.041380878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:46:12.041893 containerd[1959]: time="2025-05-07T23:46:12.041723810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:46:12.042026 containerd[1959]: time="2025-05-07T23:46:12.041918509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:46:12.044044 containerd[1959]: time="2025-05-07T23:46:12.042387978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:46:12.105022 systemd[1]: Started cri-containerd-167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f.scope - libcontainer container 167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f. 
May 7 23:46:12.233716 containerd[1959]: time="2025-05-07T23:46:12.232528249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wdhsz,Uid:b9652676-af76-490e-bdad-d776ffd569a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\"" May 7 23:46:12.240067 containerd[1959]: time="2025-05-07T23:46:12.240001722Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 7 23:46:12.321750 containerd[1959]: time="2025-05-07T23:46:12.321599579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ncb74,Uid:9be17240-3ab4-46d4-8a63-b1eeffa423dd,Namespace:kube-system,Attempt:0,}" May 7 23:46:12.365837 containerd[1959]: time="2025-05-07T23:46:12.365549037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:46:12.365837 containerd[1959]: time="2025-05-07T23:46:12.365672624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:46:12.365837 containerd[1959]: time="2025-05-07T23:46:12.365725385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:46:12.366530 containerd[1959]: time="2025-05-07T23:46:12.366354866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:46:12.398563 systemd[1]: Started cri-containerd-49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3.scope - libcontainer container 49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3. 
May 7 23:46:12.458733 containerd[1959]: time="2025-05-07T23:46:12.458680493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ncb74,Uid:9be17240-3ab4-46d4-8a63-b1eeffa423dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3\"" May 7 23:46:12.713478 kubelet[3497]: E0507 23:46:12.713340 3497 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition May 7 23:46:12.713478 kubelet[3497]: E0507 23:46:12.713472 3497 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b5bbdeff-173f-4f1d-9b58-71bbc2eab690-kube-proxy podName:b5bbdeff-173f-4f1d-9b58-71bbc2eab690 nodeName:}" failed. No retries permitted until 2025-05-07 23:46:13.213444472 +0000 UTC m=+16.703293714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b5bbdeff-173f-4f1d-9b58-71bbc2eab690-kube-proxy") pod "kube-proxy-x62bt" (UID: "b5bbdeff-173f-4f1d-9b58-71bbc2eab690") : failed to sync configmap cache: timed out waiting for the condition May 7 23:46:13.401434 containerd[1959]: time="2025-05-07T23:46:13.401309321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x62bt,Uid:b5bbdeff-173f-4f1d-9b58-71bbc2eab690,Namespace:kube-system,Attempt:0,}" May 7 23:46:13.452618 containerd[1959]: time="2025-05-07T23:46:13.451709057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 7 23:46:13.452618 containerd[1959]: time="2025-05-07T23:46:13.451821525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 7 23:46:13.452618 containerd[1959]: time="2025-05-07T23:46:13.452301645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:46:13.452618 containerd[1959]: time="2025-05-07T23:46:13.452524433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 7 23:46:13.496585 systemd[1]: Started cri-containerd-4dfdb2031734c3af4dcace312bb9be5b0b7665820da3e241695807f7e2109501.scope - libcontainer container 4dfdb2031734c3af4dcace312bb9be5b0b7665820da3e241695807f7e2109501. May 7 23:46:13.544142 containerd[1959]: time="2025-05-07T23:46:13.543972590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x62bt,Uid:b5bbdeff-173f-4f1d-9b58-71bbc2eab690,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dfdb2031734c3af4dcace312bb9be5b0b7665820da3e241695807f7e2109501\"" May 7 23:46:13.551177 containerd[1959]: time="2025-05-07T23:46:13.551097434Z" level=info msg="CreateContainer within sandbox \"4dfdb2031734c3af4dcace312bb9be5b0b7665820da3e241695807f7e2109501\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 7 23:46:13.589093 containerd[1959]: time="2025-05-07T23:46:13.589012615Z" level=info msg="CreateContainer within sandbox \"4dfdb2031734c3af4dcace312bb9be5b0b7665820da3e241695807f7e2109501\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cf00e9b1c13d3d11c80d070d014b45fd8d14768ac14deb50afb43db0ef477e84\"" May 7 23:46:13.590651 containerd[1959]: time="2025-05-07T23:46:13.590604831Z" level=info msg="StartContainer for \"cf00e9b1c13d3d11c80d070d014b45fd8d14768ac14deb50afb43db0ef477e84\"" May 7 23:46:13.642548 systemd[1]: Started cri-containerd-cf00e9b1c13d3d11c80d070d014b45fd8d14768ac14deb50afb43db0ef477e84.scope - libcontainer container cf00e9b1c13d3d11c80d070d014b45fd8d14768ac14deb50afb43db0ef477e84. 
May 7 23:46:13.722244 containerd[1959]: time="2025-05-07T23:46:13.721926382Z" level=info msg="StartContainer for \"cf00e9b1c13d3d11c80d070d014b45fd8d14768ac14deb50afb43db0ef477e84\" returns successfully" May 7 23:46:13.906123 kubelet[3497]: I0507 23:46:13.904594 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x62bt" podStartSLOduration=2.904354822 podStartE2EDuration="2.904354822s" podCreationTimestamp="2025-05-07 23:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:46:13.901335261 +0000 UTC m=+17.391184527" watchObservedRunningTime="2025-05-07 23:46:13.904354822 +0000 UTC m=+17.394204100" May 7 23:46:18.852171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2703066636.mount: Deactivated successfully. May 7 23:46:21.409513 containerd[1959]: time="2025-05-07T23:46:21.409447901Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:46:21.412186 containerd[1959]: time="2025-05-07T23:46:21.412090947Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 7 23:46:21.413102 containerd[1959]: time="2025-05-07T23:46:21.412653525Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 7 23:46:21.416737 containerd[1959]: time="2025-05-07T23:46:21.416090322Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.176015557s" May 7 23:46:21.416737 containerd[1959]: time="2025-05-07T23:46:21.416155929Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 7 23:46:21.420398 containerd[1959]: time="2025-05-07T23:46:21.419972636Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 7 23:46:21.422506 containerd[1959]: time="2025-05-07T23:46:21.422433241Z" level=info msg="CreateContainer within sandbox \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 7 23:46:21.444212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142011185.mount: Deactivated successfully. May 7 23:46:21.449336 containerd[1959]: time="2025-05-07T23:46:21.449235969Z" level=info msg="CreateContainer within sandbox \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218\"" May 7 23:46:21.453696 containerd[1959]: time="2025-05-07T23:46:21.450096227Z" level=info msg="StartContainer for \"1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218\"" May 7 23:46:21.509875 systemd[1]: Started cri-containerd-1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218.scope - libcontainer container 1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218. 
May 7 23:46:21.564575 containerd[1959]: time="2025-05-07T23:46:21.564478459Z" level=info msg="StartContainer for \"1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218\" returns successfully" May 7 23:46:21.589805 systemd[1]: cri-containerd-1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218.scope: Deactivated successfully. May 7 23:46:22.436475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218-rootfs.mount: Deactivated successfully. May 7 23:46:22.617466 containerd[1959]: time="2025-05-07T23:46:22.617367830Z" level=info msg="shim disconnected" id=1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218 namespace=k8s.io May 7 23:46:22.617466 containerd[1959]: time="2025-05-07T23:46:22.617451093Z" level=warning msg="cleaning up after shim disconnected" id=1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218 namespace=k8s.io May 7 23:46:22.618237 containerd[1959]: time="2025-05-07T23:46:22.617472790Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:46:22.896359 containerd[1959]: time="2025-05-07T23:46:22.896307647Z" level=info msg="CreateContainer within sandbox \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 7 23:46:22.927271 containerd[1959]: time="2025-05-07T23:46:22.927067616Z" level=info msg="CreateContainer within sandbox \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4\"" May 7 23:46:22.929499 containerd[1959]: time="2025-05-07T23:46:22.929316575Z" level=info msg="StartContainer for \"9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4\"" May 7 23:46:22.984771 systemd[1]: Started 
cri-containerd-9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4.scope - libcontainer container 9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4. May 7 23:46:23.035764 containerd[1959]: time="2025-05-07T23:46:23.035686707Z" level=info msg="StartContainer for \"9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4\" returns successfully" May 7 23:46:23.061809 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 7 23:46:23.062754 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 7 23:46:23.063104 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 7 23:46:23.073829 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 7 23:46:23.079280 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 7 23:46:23.080966 systemd[1]: cri-containerd-9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4.scope: Deactivated successfully. May 7 23:46:23.119050 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 7 23:46:23.142291 containerd[1959]: time="2025-05-07T23:46:23.141879665Z" level=info msg="shim disconnected" id=9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4 namespace=k8s.io May 7 23:46:23.142291 containerd[1959]: time="2025-05-07T23:46:23.142222609Z" level=warning msg="cleaning up after shim disconnected" id=9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4 namespace=k8s.io May 7 23:46:23.142291 containerd[1959]: time="2025-05-07T23:46:23.142245530Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:46:23.437780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4-rootfs.mount: Deactivated successfully. May 7 23:46:23.885298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3594510065.mount: Deactivated successfully. 
May 7 23:46:23.905143 containerd[1959]: time="2025-05-07T23:46:23.904585179Z" level=info msg="CreateContainer within sandbox \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 7 23:46:23.958347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1091602734.mount: Deactivated successfully.
May 7 23:46:23.973705 containerd[1959]: time="2025-05-07T23:46:23.973629960Z" level=info msg="CreateContainer within sandbox \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c\""
May 7 23:46:23.982528 containerd[1959]: time="2025-05-07T23:46:23.982226684Z" level=info msg="StartContainer for \"5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c\""
May 7 23:46:24.042624 systemd[1]: Started cri-containerd-5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c.scope - libcontainer container 5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c.
May 7 23:46:24.106798 containerd[1959]: time="2025-05-07T23:46:24.104576794Z" level=info msg="StartContainer for \"5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c\" returns successfully"
May 7 23:46:24.116195 systemd[1]: cri-containerd-5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c.scope: Deactivated successfully.
May 7 23:46:24.192914 containerd[1959]: time="2025-05-07T23:46:24.192429486Z" level=info msg="shim disconnected" id=5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c namespace=k8s.io
May 7 23:46:24.192914 containerd[1959]: time="2025-05-07T23:46:24.192521972Z" level=warning msg="cleaning up after shim disconnected" id=5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c namespace=k8s.io
May 7 23:46:24.192914 containerd[1959]: time="2025-05-07T23:46:24.192543465Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:46:24.922823 containerd[1959]: time="2025-05-07T23:46:24.922560220Z" level=info msg="CreateContainer within sandbox \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 7 23:46:24.962543 containerd[1959]: time="2025-05-07T23:46:24.958640691Z" level=info msg="CreateContainer within sandbox \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435\""
May 7 23:46:24.962543 containerd[1959]: time="2025-05-07T23:46:24.959750964Z" level=info msg="StartContainer for \"6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435\""
May 7 23:46:25.033592 systemd[1]: Started cri-containerd-6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435.scope - libcontainer container 6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435.
May 7 23:46:25.103702 systemd[1]: cri-containerd-6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435.scope: Deactivated successfully.
May 7 23:46:25.113526 containerd[1959]: time="2025-05-07T23:46:25.112846398Z" level=info msg="StartContainer for \"6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435\" returns successfully"
May 7 23:46:25.252448 containerd[1959]: time="2025-05-07T23:46:25.252059066Z" level=info msg="shim disconnected" id=6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435 namespace=k8s.io
May 7 23:46:25.252448 containerd[1959]: time="2025-05-07T23:46:25.252132925Z" level=warning msg="cleaning up after shim disconnected" id=6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435 namespace=k8s.io
May 7 23:46:25.252448 containerd[1959]: time="2025-05-07T23:46:25.252155953Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 7 23:46:25.278514 containerd[1959]: time="2025-05-07T23:46:25.278431413Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:46:25.282914 containerd[1959]: time="2025-05-07T23:46:25.282844113Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 7 23:46:25.283214 containerd[1959]: time="2025-05-07T23:46:25.283179933Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 7 23:46:25.286157 containerd[1959]: time="2025-05-07T23:46:25.285931585Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.865884574s"
May 7 23:46:25.286157 containerd[1959]: time="2025-05-07T23:46:25.286006379Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 7 23:46:25.291600 containerd[1959]: time="2025-05-07T23:46:25.291548807Z" level=info msg="CreateContainer within sandbox \"49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 7 23:46:25.317928 containerd[1959]: time="2025-05-07T23:46:25.317829304Z" level=info msg="CreateContainer within sandbox \"49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0\""
May 7 23:46:25.320150 containerd[1959]: time="2025-05-07T23:46:25.319725711Z" level=info msg="StartContainer for \"415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0\""
May 7 23:46:25.366586 systemd[1]: Started cri-containerd-415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0.scope - libcontainer container 415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0.
May 7 23:46:25.413491 containerd[1959]: time="2025-05-07T23:46:25.413362558Z" level=info msg="StartContainer for \"415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0\" returns successfully"
May 7 23:46:25.440296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435-rootfs.mount: Deactivated successfully.
May 7 23:46:25.929288 containerd[1959]: time="2025-05-07T23:46:25.928953243Z" level=info msg="CreateContainer within sandbox \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 7 23:46:25.969015 containerd[1959]: time="2025-05-07T23:46:25.968200802Z" level=info msg="CreateContainer within sandbox \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5\""
May 7 23:46:25.969511 containerd[1959]: time="2025-05-07T23:46:25.969226864Z" level=info msg="StartContainer for \"b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5\""
May 7 23:46:26.057684 systemd[1]: Started cri-containerd-b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5.scope - libcontainer container b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5.
May 7 23:46:26.182961 kubelet[3497]: I0507 23:46:26.180361 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-ncb74" podStartSLOduration=2.354088199 podStartE2EDuration="15.180335382s" podCreationTimestamp="2025-05-07 23:46:11 +0000 UTC" firstStartedPulling="2025-05-07 23:46:12.461751472 +0000 UTC m=+15.951600714" lastFinishedPulling="2025-05-07 23:46:25.287998655 +0000 UTC m=+28.777847897" observedRunningTime="2025-05-07 23:46:26.061199819 +0000 UTC m=+29.551049073" watchObservedRunningTime="2025-05-07 23:46:26.180335382 +0000 UTC m=+29.670184660"
May 7 23:46:26.219472 containerd[1959]: time="2025-05-07T23:46:26.217808908Z" level=info msg="StartContainer for \"b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5\" returns successfully"
May 7 23:46:26.440545 systemd[1]: run-containerd-runc-k8s.io-b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5-runc.g6ufIg.mount: Deactivated successfully.
May 7 23:46:26.608502 kubelet[3497]: I0507 23:46:26.605037 3497 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 7 23:46:26.766294 kubelet[3497]: I0507 23:46:26.765439 3497 topology_manager.go:215] "Topology Admit Handler" podUID="e775ff23-fa7c-46b5-99d9-684c30556e79" podNamespace="kube-system" podName="coredns-7db6d8ff4d-s4psr"
May 7 23:46:26.782735 systemd[1]: Created slice kubepods-burstable-pode775ff23_fa7c_46b5_99d9_684c30556e79.slice - libcontainer container kubepods-burstable-pode775ff23_fa7c_46b5_99d9_684c30556e79.slice.
May 7 23:46:26.788330 kubelet[3497]: I0507 23:46:26.786578 3497 topology_manager.go:215] "Topology Admit Handler" podUID="1dfe130c-0a83-446b-b9f0-8fc1d560a8f8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gmpql"
May 7 23:46:26.804773 systemd[1]: Created slice kubepods-burstable-pod1dfe130c_0a83_446b_b9f0_8fc1d560a8f8.slice - libcontainer container kubepods-burstable-pod1dfe130c_0a83_446b_b9f0_8fc1d560a8f8.slice.
May 7 23:46:26.806576 kubelet[3497]: W0507 23:46:26.806518 3497 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object
May 7 23:46:26.806576 kubelet[3497]: E0507 23:46:26.806579 3497 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object
May 7 23:46:26.928906 kubelet[3497]: I0507 23:46:26.928188 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e775ff23-fa7c-46b5-99d9-684c30556e79-config-volume\") pod \"coredns-7db6d8ff4d-s4psr\" (UID: \"e775ff23-fa7c-46b5-99d9-684c30556e79\") " pod="kube-system/coredns-7db6d8ff4d-s4psr"
May 7 23:46:26.928906 kubelet[3497]: I0507 23:46:26.928434 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp4xb\" (UniqueName: \"kubernetes.io/projected/1dfe130c-0a83-446b-b9f0-8fc1d560a8f8-kube-api-access-qp4xb\") pod \"coredns-7db6d8ff4d-gmpql\" (UID: \"1dfe130c-0a83-446b-b9f0-8fc1d560a8f8\") " pod="kube-system/coredns-7db6d8ff4d-gmpql"
May 7 23:46:26.928906 kubelet[3497]: I0507 23:46:26.928494 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vchlm\" (UniqueName: \"kubernetes.io/projected/e775ff23-fa7c-46b5-99d9-684c30556e79-kube-api-access-vchlm\") pod \"coredns-7db6d8ff4d-s4psr\" (UID: \"e775ff23-fa7c-46b5-99d9-684c30556e79\") " pod="kube-system/coredns-7db6d8ff4d-s4psr"
May 7 23:46:26.928906 kubelet[3497]: I0507 23:46:26.928757 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dfe130c-0a83-446b-b9f0-8fc1d560a8f8-config-volume\") pod \"coredns-7db6d8ff4d-gmpql\" (UID: \"1dfe130c-0a83-446b-b9f0-8fc1d560a8f8\") " pod="kube-system/coredns-7db6d8ff4d-gmpql"
May 7 23:46:27.993486 containerd[1959]: time="2025-05-07T23:46:27.993143474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s4psr,Uid:e775ff23-fa7c-46b5-99d9-684c30556e79,Namespace:kube-system,Attempt:0,}"
May 7 23:46:28.014079 containerd[1959]: time="2025-05-07T23:46:28.013996950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gmpql,Uid:1dfe130c-0a83-446b-b9f0-8fc1d560a8f8,Namespace:kube-system,Attempt:0,}"
May 7 23:46:29.835611 (udev-worker)[4328]: Network interface NamePolicy= disabled on kernel command line.
May 7 23:46:29.836094 systemd-networkd[1874]: cilium_host: Link UP
May 7 23:46:29.837233 systemd-networkd[1874]: cilium_net: Link UP
May 7 23:46:29.839654 (udev-worker)[4268]: Network interface NamePolicy= disabled on kernel command line.
May 7 23:46:29.841440 systemd-networkd[1874]: cilium_net: Gained carrier
May 7 23:46:29.841934 systemd-networkd[1874]: cilium_host: Gained carrier
May 7 23:46:30.013040 (udev-worker)[4333]: Network interface NamePolicy= disabled on kernel command line.
May 7 23:46:30.029579 systemd-networkd[1874]: cilium_vxlan: Link UP
May 7 23:46:30.029596 systemd-networkd[1874]: cilium_vxlan: Gained carrier
May 7 23:46:30.060040 systemd-networkd[1874]: cilium_host: Gained IPv6LL
May 7 23:46:30.525314 kernel: NET: Registered PF_ALG protocol family
May 7 23:46:30.628828 systemd-networkd[1874]: cilium_net: Gained IPv6LL
May 7 23:46:31.525367 systemd-networkd[1874]: cilium_vxlan: Gained IPv6LL
May 7 23:46:32.017755 (udev-worker)[4336]: Network interface NamePolicy= disabled on kernel command line.
May 7 23:46:32.018310 systemd-networkd[1874]: lxc_health: Link UP
May 7 23:46:32.040078 systemd-networkd[1874]: lxc_health: Gained carrier
May 7 23:46:32.332618 systemd[1]: Started sshd@9-172.31.25.188:22-147.75.109.163:52626.service - OpenSSH per-connection server daemon (147.75.109.163:52626).
May 7 23:46:32.531595 sshd[4669]: Accepted publickey for core from 147.75.109.163 port 52626 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:46:32.534670 sshd-session[4669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:46:32.544749 systemd-logind[1941]: New session 10 of user core.
May 7 23:46:32.554782 systemd[1]: Started session-10.scope - Session 10 of User core.
May 7 23:46:32.597323 kernel: eth0: renamed from tmp7fb80
May 7 23:46:32.622997 systemd-networkd[1874]: lxc68c27d30958c: Link UP
May 7 23:46:32.624068 systemd-networkd[1874]: lxc68c27d30958c: Gained carrier
May 7 23:46:32.672092 kernel: eth0: renamed from tmp968d7
May 7 23:46:32.670008 systemd-networkd[1874]: lxca9fce38e6b44: Link UP
May 7 23:46:32.682481 systemd-networkd[1874]: lxca9fce38e6b44: Gained carrier
May 7 23:46:32.921772 sshd[4671]: Connection closed by 147.75.109.163 port 52626
May 7 23:46:32.925847 sshd-session[4669]: pam_unix(sshd:session): session closed for user core
May 7 23:46:32.934010 systemd[1]: sshd@9-172.31.25.188:22-147.75.109.163:52626.service: Deactivated successfully.
May 7 23:46:32.958980 systemd[1]: session-10.scope: Deactivated successfully.
May 7 23:46:32.963729 systemd-logind[1941]: Session 10 logged out. Waiting for processes to exit.
May 7 23:46:32.969859 systemd-logind[1941]: Removed session 10.
May 7 23:46:33.891640 systemd-networkd[1874]: lxc_health: Gained IPv6LL
May 7 23:46:33.987630 kubelet[3497]: I0507 23:46:33.987503 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wdhsz" podStartSLOduration=13.806245874 podStartE2EDuration="22.987481471s" podCreationTimestamp="2025-05-07 23:46:11 +0000 UTC" firstStartedPulling="2025-05-07 23:46:12.23734342 +0000 UTC m=+15.727192662" lastFinishedPulling="2025-05-07 23:46:21.418579029 +0000 UTC m=+24.908428259" observedRunningTime="2025-05-07 23:46:27.42132957 +0000 UTC m=+30.911178824" watchObservedRunningTime="2025-05-07 23:46:33.987481471 +0000 UTC m=+37.477330713"
May 7 23:46:34.147521 systemd-networkd[1874]: lxc68c27d30958c: Gained IPv6LL
May 7 23:46:34.275593 systemd-networkd[1874]: lxca9fce38e6b44: Gained IPv6LL
May 7 23:46:36.475378 ntpd[1935]: Listen normally on 8 cilium_host 192.168.0.38:123
May 7 23:46:36.476487 ntpd[1935]: 7 May 23:46:36 ntpd[1935]: Listen normally on 8 cilium_host 192.168.0.38:123
May 7 23:46:36.476487 ntpd[1935]: 7 May 23:46:36 ntpd[1935]: Listen normally on 9 cilium_net [fe80::e847:1bff:fe9d:7694%4]:123
May 7 23:46:36.476487 ntpd[1935]: 7 May 23:46:36 ntpd[1935]: Listen normally on 10 cilium_host [fe80::c85:6fff:fe93:515e%5]:123
May 7 23:46:36.476487 ntpd[1935]: 7 May 23:46:36 ntpd[1935]: Listen normally on 11 cilium_vxlan [fe80::e873:9bff:fe95:601f%6]:123
May 7 23:46:36.476487 ntpd[1935]: 7 May 23:46:36 ntpd[1935]: Listen normally on 12 lxc_health [fe80::18a9:4cff:fead:b9d4%8]:123
May 7 23:46:36.476487 ntpd[1935]: 7 May 23:46:36 ntpd[1935]: Listen normally on 13 lxc68c27d30958c [fe80::8cd0:d0ff:fe6f:3655%10]:123
May 7 23:46:36.476487 ntpd[1935]: 7 May 23:46:36 ntpd[1935]: Listen normally on 14 lxca9fce38e6b44 [fe80::24a7:34ff:fe61:d2f7%12]:123
May 7 23:46:36.475508 ntpd[1935]: Listen normally on 9 cilium_net [fe80::e847:1bff:fe9d:7694%4]:123
May 7 23:46:36.475590 ntpd[1935]: Listen normally on 10 cilium_host [fe80::c85:6fff:fe93:515e%5]:123
May 7 23:46:36.475656 ntpd[1935]: Listen normally on 11 cilium_vxlan [fe80::e873:9bff:fe95:601f%6]:123
May 7 23:46:36.475722 ntpd[1935]: Listen normally on 12 lxc_health [fe80::18a9:4cff:fead:b9d4%8]:123
May 7 23:46:36.475787 ntpd[1935]: Listen normally on 13 lxc68c27d30958c [fe80::8cd0:d0ff:fe6f:3655%10]:123
May 7 23:46:36.475852 ntpd[1935]: Listen normally on 14 lxca9fce38e6b44 [fe80::24a7:34ff:fe61:d2f7%12]:123
May 7 23:46:37.965873 systemd[1]: Started sshd@10-172.31.25.188:22-147.75.109.163:32832.service - OpenSSH per-connection server daemon (147.75.109.163:32832).
May 7 23:46:38.163045 sshd[4713]: Accepted publickey for core from 147.75.109.163 port 32832 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:46:38.166015 sshd-session[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:46:38.180407 systemd-logind[1941]: New session 11 of user core.
May 7 23:46:38.185804 systemd[1]: Started session-11.scope - Session 11 of User core.
May 7 23:46:38.465554 sshd[4715]: Connection closed by 147.75.109.163 port 32832
May 7 23:46:38.468637 sshd-session[4713]: pam_unix(sshd:session): session closed for user core
May 7 23:46:38.476597 systemd-logind[1941]: Session 11 logged out. Waiting for processes to exit.
May 7 23:46:38.480153 systemd[1]: sshd@10-172.31.25.188:22-147.75.109.163:32832.service: Deactivated successfully.
May 7 23:46:38.488625 systemd[1]: session-11.scope: Deactivated successfully.
May 7 23:46:38.495371 systemd-logind[1941]: Removed session 11.
May 7 23:46:41.399110 containerd[1959]: time="2025-05-07T23:46:41.398636966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 7 23:46:41.399110 containerd[1959]: time="2025-05-07T23:46:41.398730483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 7 23:46:41.399110 containerd[1959]: time="2025-05-07T23:46:41.398767844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:46:41.402788 containerd[1959]: time="2025-05-07T23:46:41.401756749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:46:41.466063 containerd[1959]: time="2025-05-07T23:46:41.465925921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 7 23:46:41.467710 containerd[1959]: time="2025-05-07T23:46:41.466359420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 7 23:46:41.466653 systemd[1]: Started cri-containerd-968d79a2e3c4e872c5e23b2ae5fbb98c2169ccf66432998c9824b0be4163d142.scope - libcontainer container 968d79a2e3c4e872c5e23b2ae5fbb98c2169ccf66432998c9824b0be4163d142.
May 7 23:46:41.471709 containerd[1959]: time="2025-05-07T23:46:41.469704654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:46:41.471709 containerd[1959]: time="2025-05-07T23:46:41.469919851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:46:41.535685 systemd[1]: Started cri-containerd-7fb80728466d0eb1c1702e8145a5a99c01e1656a59d39bd2513555fd3b725210.scope - libcontainer container 7fb80728466d0eb1c1702e8145a5a99c01e1656a59d39bd2513555fd3b725210.
May 7 23:46:41.621490 containerd[1959]: time="2025-05-07T23:46:41.621399550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gmpql,Uid:1dfe130c-0a83-446b-b9f0-8fc1d560a8f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"968d79a2e3c4e872c5e23b2ae5fbb98c2169ccf66432998c9824b0be4163d142\""
May 7 23:46:41.632602 containerd[1959]: time="2025-05-07T23:46:41.632487259Z" level=info msg="CreateContainer within sandbox \"968d79a2e3c4e872c5e23b2ae5fbb98c2169ccf66432998c9824b0be4163d142\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 7 23:46:41.658087 containerd[1959]: time="2025-05-07T23:46:41.657203295Z" level=info msg="CreateContainer within sandbox \"968d79a2e3c4e872c5e23b2ae5fbb98c2169ccf66432998c9824b0be4163d142\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"993617589ab46dcba6b1e2d0cdeb28fb0a985fe84caf4f650cf6966d205daf46\""
May 7 23:46:41.658535 containerd[1959]: time="2025-05-07T23:46:41.658481280Z" level=info msg="StartContainer for \"993617589ab46dcba6b1e2d0cdeb28fb0a985fe84caf4f650cf6966d205daf46\""
May 7 23:46:41.750579 systemd[1]: Started cri-containerd-993617589ab46dcba6b1e2d0cdeb28fb0a985fe84caf4f650cf6966d205daf46.scope - libcontainer container 993617589ab46dcba6b1e2d0cdeb28fb0a985fe84caf4f650cf6966d205daf46.
May 7 23:46:41.765934 containerd[1959]: time="2025-05-07T23:46:41.765838374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s4psr,Uid:e775ff23-fa7c-46b5-99d9-684c30556e79,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fb80728466d0eb1c1702e8145a5a99c01e1656a59d39bd2513555fd3b725210\""
May 7 23:46:41.776735 containerd[1959]: time="2025-05-07T23:46:41.776669616Z" level=info msg="CreateContainer within sandbox \"7fb80728466d0eb1c1702e8145a5a99c01e1656a59d39bd2513555fd3b725210\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 7 23:46:41.806187 containerd[1959]: time="2025-05-07T23:46:41.806116637Z" level=info msg="CreateContainer within sandbox \"7fb80728466d0eb1c1702e8145a5a99c01e1656a59d39bd2513555fd3b725210\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5898d4618e33cff57c9b08313f597c20d887e08ea662274408fa9529ef4bd61\""
May 7 23:46:41.808858 containerd[1959]: time="2025-05-07T23:46:41.808383132Z" level=info msg="StartContainer for \"b5898d4618e33cff57c9b08313f597c20d887e08ea662274408fa9529ef4bd61\""
May 7 23:46:41.863158 containerd[1959]: time="2025-05-07T23:46:41.863087874Z" level=info msg="StartContainer for \"993617589ab46dcba6b1e2d0cdeb28fb0a985fe84caf4f650cf6966d205daf46\" returns successfully"
May 7 23:46:41.894943 systemd[1]: Started cri-containerd-b5898d4618e33cff57c9b08313f597c20d887e08ea662274408fa9529ef4bd61.scope - libcontainer container b5898d4618e33cff57c9b08313f597c20d887e08ea662274408fa9529ef4bd61.
May 7 23:46:42.003316 containerd[1959]: time="2025-05-07T23:46:42.003155005Z" level=info msg="StartContainer for \"b5898d4618e33cff57c9b08313f597c20d887e08ea662274408fa9529ef4bd61\" returns successfully"
May 7 23:46:42.104635 kubelet[3497]: I0507 23:46:42.104172 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gmpql" podStartSLOduration=31.104129637 podStartE2EDuration="31.104129637s" podCreationTimestamp="2025-05-07 23:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:46:42.068868524 +0000 UTC m=+45.558717766" watchObservedRunningTime="2025-05-07 23:46:42.104129637 +0000 UTC m=+45.593978867"
May 7 23:46:43.042827 kubelet[3497]: I0507 23:46:43.041637 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-s4psr" podStartSLOduration=32.041616254 podStartE2EDuration="32.041616254s" podCreationTimestamp="2025-05-07 23:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:46:42.106827496 +0000 UTC m=+45.596676762" watchObservedRunningTime="2025-05-07 23:46:43.041616254 +0000 UTC m=+46.531465508"
May 7 23:46:43.506838 systemd[1]: Started sshd@11-172.31.25.188:22-147.75.109.163:32844.service - OpenSSH per-connection server daemon (147.75.109.163:32844).
May 7 23:46:43.700485 sshd[4907]: Accepted publickey for core from 147.75.109.163 port 32844 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:46:43.703110 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:46:43.711521 systemd-logind[1941]: New session 12 of user core.
May 7 23:46:43.721558 systemd[1]: Started session-12.scope - Session 12 of User core.
May 7 23:46:43.965351 sshd[4909]: Connection closed by 147.75.109.163 port 32844
May 7 23:46:43.965102 sshd-session[4907]: pam_unix(sshd:session): session closed for user core
May 7 23:46:43.970750 systemd[1]: sshd@11-172.31.25.188:22-147.75.109.163:32844.service: Deactivated successfully.
May 7 23:46:43.974726 systemd[1]: session-12.scope: Deactivated successfully.
May 7 23:46:43.978695 systemd-logind[1941]: Session 12 logged out. Waiting for processes to exit.
May 7 23:46:43.981043 systemd-logind[1941]: Removed session 12.
May 7 23:46:49.010789 systemd[1]: Started sshd@12-172.31.25.188:22-147.75.109.163:34370.service - OpenSSH per-connection server daemon (147.75.109.163:34370).
May 7 23:46:49.196922 sshd[4924]: Accepted publickey for core from 147.75.109.163 port 34370 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:46:49.199670 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:46:49.208914 systemd-logind[1941]: New session 13 of user core.
May 7 23:46:49.216597 systemd[1]: Started session-13.scope - Session 13 of User core.
May 7 23:46:49.458570 sshd[4926]: Connection closed by 147.75.109.163 port 34370
May 7 23:46:49.459605 sshd-session[4924]: pam_unix(sshd:session): session closed for user core
May 7 23:46:49.466766 systemd[1]: sshd@12-172.31.25.188:22-147.75.109.163:34370.service: Deactivated successfully.
May 7 23:46:49.471143 systemd[1]: session-13.scope: Deactivated successfully.
May 7 23:46:49.473003 systemd-logind[1941]: Session 13 logged out. Waiting for processes to exit.
May 7 23:46:49.475743 systemd-logind[1941]: Removed session 13.
May 7 23:46:54.500865 systemd[1]: Started sshd@13-172.31.25.188:22-147.75.109.163:34380.service - OpenSSH per-connection server daemon (147.75.109.163:34380).
May 7 23:46:54.691069 sshd[4939]: Accepted publickey for core from 147.75.109.163 port 34380 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:46:54.693680 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:46:54.702742 systemd-logind[1941]: New session 14 of user core.
May 7 23:46:54.712596 systemd[1]: Started session-14.scope - Session 14 of User core.
May 7 23:46:54.959157 sshd[4941]: Connection closed by 147.75.109.163 port 34380
May 7 23:46:54.960379 sshd-session[4939]: pam_unix(sshd:session): session closed for user core
May 7 23:46:54.966125 systemd-logind[1941]: Session 14 logged out. Waiting for processes to exit.
May 7 23:46:54.966850 systemd[1]: sshd@13-172.31.25.188:22-147.75.109.163:34380.service: Deactivated successfully.
May 7 23:46:54.970973 systemd[1]: session-14.scope: Deactivated successfully.
May 7 23:46:54.976229 systemd-logind[1941]: Removed session 14.
May 7 23:46:55.002789 systemd[1]: Started sshd@14-172.31.25.188:22-147.75.109.163:34390.service - OpenSSH per-connection server daemon (147.75.109.163:34390).
May 7 23:46:55.194709 sshd[4954]: Accepted publickey for core from 147.75.109.163 port 34390 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:46:55.197392 sshd-session[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:46:55.205588 systemd-logind[1941]: New session 15 of user core.
May 7 23:46:55.213541 systemd[1]: Started session-15.scope - Session 15 of User core.
May 7 23:46:55.532995 sshd[4956]: Connection closed by 147.75.109.163 port 34390
May 7 23:46:55.534448 sshd-session[4954]: pam_unix(sshd:session): session closed for user core
May 7 23:46:55.547746 systemd-logind[1941]: Session 15 logged out. Waiting for processes to exit.
May 7 23:46:55.549399 systemd[1]: sshd@14-172.31.25.188:22-147.75.109.163:34390.service: Deactivated successfully.
May 7 23:46:55.558526 systemd[1]: session-15.scope: Deactivated successfully.
May 7 23:46:55.578818 systemd-logind[1941]: Removed session 15.
May 7 23:46:55.585118 systemd[1]: Started sshd@15-172.31.25.188:22-147.75.109.163:34400.service - OpenSSH per-connection server daemon (147.75.109.163:34400).
May 7 23:46:55.774552 sshd[4965]: Accepted publickey for core from 147.75.109.163 port 34400 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:46:55.777757 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:46:55.787053 systemd-logind[1941]: New session 16 of user core.
May 7 23:46:55.797829 systemd[1]: Started session-16.scope - Session 16 of User core.
May 7 23:46:56.061437 sshd[4968]: Connection closed by 147.75.109.163 port 34400
May 7 23:46:56.062496 sshd-session[4965]: pam_unix(sshd:session): session closed for user core
May 7 23:46:56.070873 systemd[1]: sshd@15-172.31.25.188:22-147.75.109.163:34400.service: Deactivated successfully.
May 7 23:46:56.078239 systemd[1]: session-16.scope: Deactivated successfully.
May 7 23:46:56.080824 systemd-logind[1941]: Session 16 logged out. Waiting for processes to exit.
May 7 23:46:56.084407 systemd-logind[1941]: Removed session 16.
May 7 23:47:01.109721 systemd[1]: Started sshd@16-172.31.25.188:22-147.75.109.163:33686.service - OpenSSH per-connection server daemon (147.75.109.163:33686).
May 7 23:47:01.293487 sshd[4983]: Accepted publickey for core from 147.75.109.163 port 33686 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:47:01.296011 sshd-session[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:01.304381 systemd-logind[1941]: New session 17 of user core.
May 7 23:47:01.311540 systemd[1]: Started session-17.scope - Session 17 of User core.
May 7 23:47:01.557793 sshd[4985]: Connection closed by 147.75.109.163 port 33686
May 7 23:47:01.558686 sshd-session[4983]: pam_unix(sshd:session): session closed for user core
May 7 23:47:01.565242 systemd[1]: sshd@16-172.31.25.188:22-147.75.109.163:33686.service: Deactivated successfully.
May 7 23:47:01.569407 systemd[1]: session-17.scope: Deactivated successfully.
May 7 23:47:01.572815 systemd-logind[1941]: Session 17 logged out. Waiting for processes to exit.
May 7 23:47:01.575661 systemd-logind[1941]: Removed session 17.
May 7 23:47:06.598787 systemd[1]: Started sshd@17-172.31.25.188:22-147.75.109.163:33698.service - OpenSSH per-connection server daemon (147.75.109.163:33698).
May 7 23:47:06.796040 sshd[4997]: Accepted publickey for core from 147.75.109.163 port 33698 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:47:06.798687 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:06.808810 systemd-logind[1941]: New session 18 of user core.
May 7 23:47:06.816566 systemd[1]: Started session-18.scope - Session 18 of User core.
May 7 23:47:07.076898 sshd[4999]: Connection closed by 147.75.109.163 port 33698
May 7 23:47:07.077547 sshd-session[4997]: pam_unix(sshd:session): session closed for user core
May 7 23:47:07.084476 systemd[1]: sshd@17-172.31.25.188:22-147.75.109.163:33698.service: Deactivated successfully.
May 7 23:47:07.090798 systemd[1]: session-18.scope: Deactivated successfully.
May 7 23:47:07.094069 systemd-logind[1941]: Session 18 logged out. Waiting for processes to exit.
May 7 23:47:07.095837 systemd-logind[1941]: Removed session 18.
May 7 23:47:12.119743 systemd[1]: Started sshd@18-172.31.25.188:22-147.75.109.163:47678.service - OpenSSH per-connection server daemon (147.75.109.163:47678).
May 7 23:47:12.306736 sshd[5013]: Accepted publickey for core from 147.75.109.163 port 47678 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:47:12.309407 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:12.317680 systemd-logind[1941]: New session 19 of user core.
May 7 23:47:12.330609 systemd[1]: Started session-19.scope - Session 19 of User core.
May 7 23:47:12.576806 sshd[5015]: Connection closed by 147.75.109.163 port 47678
May 7 23:47:12.577784 sshd-session[5013]: pam_unix(sshd:session): session closed for user core
May 7 23:47:12.583243 systemd[1]: sshd@18-172.31.25.188:22-147.75.109.163:47678.service: Deactivated successfully.
May 7 23:47:12.586732 systemd[1]: session-19.scope: Deactivated successfully.
May 7 23:47:12.592680 systemd-logind[1941]: Session 19 logged out. Waiting for processes to exit.
May 7 23:47:12.594869 systemd-logind[1941]: Removed session 19.
May 7 23:47:12.616790 systemd[1]: Started sshd@19-172.31.25.188:22-147.75.109.163:47686.service - OpenSSH per-connection server daemon (147.75.109.163:47686).
May 7 23:47:12.812706 sshd[5027]: Accepted publickey for core from 147.75.109.163 port 47686 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:47:12.815218 sshd-session[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:12.824225 systemd-logind[1941]: New session 20 of user core.
May 7 23:47:12.830558 systemd[1]: Started session-20.scope - Session 20 of User core.
May 7 23:47:13.142449 sshd[5029]: Connection closed by 147.75.109.163 port 47686
May 7 23:47:13.144456 sshd-session[5027]: pam_unix(sshd:session): session closed for user core
May 7 23:47:13.151553 systemd[1]: sshd@19-172.31.25.188:22-147.75.109.163:47686.service: Deactivated successfully.
May 7 23:47:13.155491 systemd[1]: session-20.scope: Deactivated successfully.
May 7 23:47:13.156943 systemd-logind[1941]: Session 20 logged out. Waiting for processes to exit.
May 7 23:47:13.159979 systemd-logind[1941]: Removed session 20.
May 7 23:47:13.182815 systemd[1]: Started sshd@20-172.31.25.188:22-147.75.109.163:47688.service - OpenSSH per-connection server daemon (147.75.109.163:47688).
May 7 23:47:13.377967 sshd[5039]: Accepted publickey for core from 147.75.109.163 port 47688 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:47:13.380670 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:13.389350 systemd-logind[1941]: New session 21 of user core.
May 7 23:47:13.398572 systemd[1]: Started session-21.scope - Session 21 of User core.
May 7 23:47:16.088089 sshd[5041]: Connection closed by 147.75.109.163 port 47688
May 7 23:47:16.087955 sshd-session[5039]: pam_unix(sshd:session): session closed for user core
May 7 23:47:16.098455 systemd[1]: sshd@20-172.31.25.188:22-147.75.109.163:47688.service: Deactivated successfully.
May 7 23:47:16.107847 systemd[1]: session-21.scope: Deactivated successfully.
May 7 23:47:16.115213 systemd-logind[1941]: Session 21 logged out. Waiting for processes to exit.
May 7 23:47:16.144533 systemd[1]: Started sshd@21-172.31.25.188:22-147.75.109.163:47696.service - OpenSSH per-connection server daemon (147.75.109.163:47696).
May 7 23:47:16.146366 systemd-logind[1941]: Removed session 21.
May 7 23:47:16.345342 sshd[5059]: Accepted publickey for core from 147.75.109.163 port 47696 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:47:16.347926 sshd-session[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:16.357823 systemd-logind[1941]: New session 22 of user core.
May 7 23:47:16.364552 systemd[1]: Started session-22.scope - Session 22 of User core.
May 7 23:47:16.872331 sshd[5062]: Connection closed by 147.75.109.163 port 47696
May 7 23:47:16.871485 sshd-session[5059]: pam_unix(sshd:session): session closed for user core
May 7 23:47:16.879133 systemd[1]: sshd@21-172.31.25.188:22-147.75.109.163:47696.service: Deactivated successfully.
May 7 23:47:16.882651 systemd[1]: session-22.scope: Deactivated successfully.
May 7 23:47:16.884006 systemd-logind[1941]: Session 22 logged out. Waiting for processes to exit.
May 7 23:47:16.887127 systemd-logind[1941]: Removed session 22.
May 7 23:47:16.914803 systemd[1]: Started sshd@22-172.31.25.188:22-147.75.109.163:57916.service - OpenSSH per-connection server daemon (147.75.109.163:57916).
May 7 23:47:17.101924 sshd[5072]: Accepted publickey for core from 147.75.109.163 port 57916 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:47:17.104530 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:17.114570 systemd-logind[1941]: New session 23 of user core.
May 7 23:47:17.122507 systemd[1]: Started session-23.scope - Session 23 of User core.
May 7 23:47:17.375950 sshd[5074]: Connection closed by 147.75.109.163 port 57916
May 7 23:47:17.376446 sshd-session[5072]: pam_unix(sshd:session): session closed for user core
May 7 23:47:17.383936 systemd[1]: sshd@22-172.31.25.188:22-147.75.109.163:57916.service: Deactivated successfully.
May 7 23:47:17.388289 systemd[1]: session-23.scope: Deactivated successfully.
May 7 23:47:17.390361 systemd-logind[1941]: Session 23 logged out. Waiting for processes to exit.
May 7 23:47:17.392610 systemd-logind[1941]: Removed session 23.
May 7 23:47:22.419776 systemd[1]: Started sshd@23-172.31.25.188:22-147.75.109.163:57924.service - OpenSSH per-connection server daemon (147.75.109.163:57924).
May 7 23:47:22.608850 sshd[5087]: Accepted publickey for core from 147.75.109.163 port 57924 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:47:22.611695 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:22.620386 systemd-logind[1941]: New session 24 of user core.
May 7 23:47:22.628555 systemd[1]: Started session-24.scope - Session 24 of User core.
May 7 23:47:22.875538 sshd[5089]: Connection closed by 147.75.109.163 port 57924
May 7 23:47:22.876665 sshd-session[5087]: pam_unix(sshd:session): session closed for user core
May 7 23:47:22.883444 systemd[1]: sshd@23-172.31.25.188:22-147.75.109.163:57924.service: Deactivated successfully.
May 7 23:47:22.887698 systemd[1]: session-24.scope: Deactivated successfully.
May 7 23:47:22.889757 systemd-logind[1941]: Session 24 logged out. Waiting for processes to exit.
May 7 23:47:22.892438 systemd-logind[1941]: Removed session 24.
May 7 23:47:27.914865 systemd[1]: Started sshd@24-172.31.25.188:22-147.75.109.163:57946.service - OpenSSH per-connection server daemon (147.75.109.163:57946).
May 7 23:47:28.109873 sshd[5104]: Accepted publickey for core from 147.75.109.163 port 57946 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:47:28.112534 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:28.121005 systemd-logind[1941]: New session 25 of user core.
May 7 23:47:28.130644 systemd[1]: Started session-25.scope - Session 25 of User core.
May 7 23:47:28.375304 sshd[5106]: Connection closed by 147.75.109.163 port 57946
May 7 23:47:28.376294 sshd-session[5104]: pam_unix(sshd:session): session closed for user core
May 7 23:47:28.383226 systemd-logind[1941]: Session 25 logged out. Waiting for processes to exit.
May 7 23:47:28.385242 systemd[1]: sshd@24-172.31.25.188:22-147.75.109.163:57946.service: Deactivated successfully.
May 7 23:47:28.390596 systemd[1]: session-25.scope: Deactivated successfully.
May 7 23:47:28.393533 systemd-logind[1941]: Removed session 25.
May 7 23:47:33.411835 systemd[1]: Started sshd@25-172.31.25.188:22-147.75.109.163:57950.service - OpenSSH per-connection server daemon (147.75.109.163:57950).
May 7 23:47:33.605536 sshd[5117]: Accepted publickey for core from 147.75.109.163 port 57950 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:47:33.608174 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:33.618166 systemd-logind[1941]: New session 26 of user core.
May 7 23:47:33.629610 systemd[1]: Started session-26.scope - Session 26 of User core.
May 7 23:47:33.880274 sshd[5119]: Connection closed by 147.75.109.163 port 57950
May 7 23:47:33.881106 sshd-session[5117]: pam_unix(sshd:session): session closed for user core
May 7 23:47:33.888129 systemd[1]: sshd@25-172.31.25.188:22-147.75.109.163:57950.service: Deactivated successfully.
May 7 23:47:33.892104 systemd[1]: session-26.scope: Deactivated successfully.
May 7 23:47:33.894976 systemd-logind[1941]: Session 26 logged out. Waiting for processes to exit.
May 7 23:47:33.897225 systemd-logind[1941]: Removed session 26.
May 7 23:47:38.923759 systemd[1]: Started sshd@26-172.31.25.188:22-147.75.109.163:54940.service - OpenSSH per-connection server daemon (147.75.109.163:54940).
May 7 23:47:39.112871 sshd[5131]: Accepted publickey for core from 147.75.109.163 port 54940 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:47:39.115950 sshd-session[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:39.125675 systemd-logind[1941]: New session 27 of user core.
May 7 23:47:39.133587 systemd[1]: Started session-27.scope - Session 27 of User core.
May 7 23:47:39.379925 sshd[5133]: Connection closed by 147.75.109.163 port 54940
May 7 23:47:39.380888 sshd-session[5131]: pam_unix(sshd:session): session closed for user core
May 7 23:47:39.387831 systemd[1]: sshd@26-172.31.25.188:22-147.75.109.163:54940.service: Deactivated successfully.
May 7 23:47:39.394618 systemd[1]: session-27.scope: Deactivated successfully.
May 7 23:47:39.396679 systemd-logind[1941]: Session 27 logged out. Waiting for processes to exit.
May 7 23:47:39.398726 systemd-logind[1941]: Removed session 27.
May 7 23:47:39.421821 systemd[1]: Started sshd@27-172.31.25.188:22-147.75.109.163:54950.service - OpenSSH per-connection server daemon (147.75.109.163:54950).
May 7 23:47:39.610651 sshd[5144]: Accepted publickey for core from 147.75.109.163 port 54950 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:47:39.613199 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:39.623500 systemd-logind[1941]: New session 28 of user core.
May 7 23:47:39.629538 systemd[1]: Started session-28.scope - Session 28 of User core.
May 7 23:47:43.472750 containerd[1959]: time="2025-05-07T23:47:43.472528283Z" level=info msg="StopContainer for \"415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0\" with timeout 30 (s)"
May 7 23:47:43.475376 containerd[1959]: time="2025-05-07T23:47:43.475039886Z" level=info msg="Stop container \"415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0\" with signal terminated"
May 7 23:47:43.507789 systemd[1]: cri-containerd-415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0.scope: Deactivated successfully.
May 7 23:47:43.526093 containerd[1959]: time="2025-05-07T23:47:43.525857277Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 7 23:47:43.540219 containerd[1959]: time="2025-05-07T23:47:43.540146425Z" level=info msg="StopContainer for \"b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5\" with timeout 2 (s)"
May 7 23:47:43.540936 containerd[1959]: time="2025-05-07T23:47:43.540883996Z" level=info msg="Stop container \"b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5\" with signal terminated"
May 7 23:47:43.557103 systemd-networkd[1874]: lxc_health: Link DOWN
May 7 23:47:43.557128 systemd-networkd[1874]: lxc_health: Lost carrier
May 7 23:47:43.588204 systemd[1]: cri-containerd-b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5.scope: Deactivated successfully.
May 7 23:47:43.588832 systemd[1]: cri-containerd-b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5.scope: Consumed 14.746s CPU time, 125.2M memory peak, 136K read from disk, 12.9M written to disk.
May 7 23:47:43.604190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0-rootfs.mount: Deactivated successfully.
May 7 23:47:43.629440 containerd[1959]: time="2025-05-07T23:47:43.629210235Z" level=info msg="shim disconnected" id=415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0 namespace=k8s.io May 7 23:47:43.629440 containerd[1959]: time="2025-05-07T23:47:43.629333066Z" level=warning msg="cleaning up after shim disconnected" id=415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0 namespace=k8s.io May 7 23:47:43.629758 containerd[1959]: time="2025-05-07T23:47:43.629403063Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:47:43.644454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5-rootfs.mount: Deactivated successfully. May 7 23:47:43.649574 containerd[1959]: time="2025-05-07T23:47:43.649247352Z" level=info msg="shim disconnected" id=b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5 namespace=k8s.io May 7 23:47:43.649574 containerd[1959]: time="2025-05-07T23:47:43.649408383Z" level=warning msg="cleaning up after shim disconnected" id=b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5 namespace=k8s.io May 7 23:47:43.649574 containerd[1959]: time="2025-05-07T23:47:43.649430056Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:47:43.678100 containerd[1959]: time="2025-05-07T23:47:43.678001036Z" level=info msg="StopContainer for \"415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0\" returns successfully" May 7 23:47:43.680458 containerd[1959]: time="2025-05-07T23:47:43.679000100Z" level=info msg="StopPodSandbox for \"49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3\"" May 7 23:47:43.680458 containerd[1959]: time="2025-05-07T23:47:43.679077977Z" level=info msg="Container to stop \"415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 7 23:47:43.685295 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3-shm.mount: Deactivated successfully. May 7 23:47:43.692647 containerd[1959]: time="2025-05-07T23:47:43.692543413Z" level=warning msg="cleanup warnings time=\"2025-05-07T23:47:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 7 23:47:43.699633 containerd[1959]: time="2025-05-07T23:47:43.699562458Z" level=info msg="StopContainer for \"b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5\" returns successfully" May 7 23:47:43.700738 containerd[1959]: time="2025-05-07T23:47:43.700562577Z" level=info msg="StopPodSandbox for \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\"" May 7 23:47:43.700738 containerd[1959]: time="2025-05-07T23:47:43.700624766Z" level=info msg="Container to stop \"1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 7 23:47:43.700738 containerd[1959]: time="2025-05-07T23:47:43.700668017Z" level=info msg="Container to stop \"9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 7 23:47:43.700738 containerd[1959]: time="2025-05-07T23:47:43.700690697Z" level=info msg="Container to stop \"5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 7 23:47:43.700738 containerd[1959]: time="2025-05-07T23:47:43.700714685Z" level=info msg="Container to stop \"6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 7 23:47:43.700738 containerd[1959]: time="2025-05-07T23:47:43.700736670Z" level=info msg="Container to stop 
\"b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 7 23:47:43.702322 systemd[1]: cri-containerd-49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3.scope: Deactivated successfully. May 7 23:47:43.720552 systemd[1]: cri-containerd-167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f.scope: Deactivated successfully. May 7 23:47:43.773144 containerd[1959]: time="2025-05-07T23:47:43.772940874Z" level=info msg="shim disconnected" id=49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3 namespace=k8s.io May 7 23:47:43.773144 containerd[1959]: time="2025-05-07T23:47:43.773021426Z" level=warning msg="cleaning up after shim disconnected" id=49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3 namespace=k8s.io May 7 23:47:43.773144 containerd[1959]: time="2025-05-07T23:47:43.773042439Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:47:43.776800 containerd[1959]: time="2025-05-07T23:47:43.776161971Z" level=info msg="shim disconnected" id=167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f namespace=k8s.io May 7 23:47:43.777078 containerd[1959]: time="2025-05-07T23:47:43.776480220Z" level=warning msg="cleaning up after shim disconnected" id=167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f namespace=k8s.io May 7 23:47:43.777598 containerd[1959]: time="2025-05-07T23:47:43.776902852Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:47:43.813771 containerd[1959]: time="2025-05-07T23:47:43.813685788Z" level=info msg="TearDown network for sandbox \"49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3\" successfully" May 7 23:47:43.813771 containerd[1959]: time="2025-05-07T23:47:43.813739473Z" level=info msg="StopPodSandbox for \"49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3\" returns successfully" May 7 23:47:43.819528 containerd[1959]: 
time="2025-05-07T23:47:43.819461498Z" level=info msg="TearDown network for sandbox \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" successfully" May 7 23:47:43.819880 containerd[1959]: time="2025-05-07T23:47:43.819692335Z" level=info msg="StopPodSandbox for \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" returns successfully" May 7 23:47:44.030413 kubelet[3497]: I0507 23:47:44.028161 3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9652676-af76-490e-bdad-d776ffd569a0-hubble-tls\") pod \"b9652676-af76-490e-bdad-d776ffd569a0\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " May 7 23:47:44.030413 kubelet[3497]: I0507 23:47:44.028228 3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-host-proc-sys-net\") pod \"b9652676-af76-490e-bdad-d776ffd569a0\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " May 7 23:47:44.032215 kubelet[3497]: I0507 23:47:44.031156 3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-hostproc\") pod \"b9652676-af76-490e-bdad-d776ffd569a0\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " May 7 23:47:44.032215 kubelet[3497]: I0507 23:47:44.031221 3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-etc-cni-netd\") pod \"b9652676-af76-490e-bdad-d776ffd569a0\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " May 7 23:47:44.032215 kubelet[3497]: I0507 23:47:44.031289 3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv78g\" (UniqueName: 
\"kubernetes.io/projected/b9652676-af76-490e-bdad-d776ffd569a0-kube-api-access-sv78g\") pod \"b9652676-af76-490e-bdad-d776ffd569a0\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " May 7 23:47:44.032215 kubelet[3497]: I0507 23:47:44.031344 3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-bpf-maps\") pod \"b9652676-af76-490e-bdad-d776ffd569a0\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " May 7 23:47:44.032215 kubelet[3497]: I0507 23:47:44.031385 3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9652676-af76-490e-bdad-d776ffd569a0-cilium-config-path\") pod \"b9652676-af76-490e-bdad-d776ffd569a0\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " May 7 23:47:44.032215 kubelet[3497]: I0507 23:47:44.031417 3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-cni-path\") pod \"b9652676-af76-490e-bdad-d776ffd569a0\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " May 7 23:47:44.032634 kubelet[3497]: I0507 23:47:44.031449 3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-host-proc-sys-kernel\") pod \"b9652676-af76-490e-bdad-d776ffd569a0\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " May 7 23:47:44.032634 kubelet[3497]: I0507 23:47:44.031487 3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9be17240-3ab4-46d4-8a63-b1eeffa423dd-cilium-config-path\") pod \"9be17240-3ab4-46d4-8a63-b1eeffa423dd\" (UID: \"9be17240-3ab4-46d4-8a63-b1eeffa423dd\") " May 7 23:47:44.032634 kubelet[3497]: I0507 23:47:44.031524 
3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-lib-modules\") pod \"b9652676-af76-490e-bdad-d776ffd569a0\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " May 7 23:47:44.032634 kubelet[3497]: I0507 23:47:44.031556 3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-cilium-run\") pod \"b9652676-af76-490e-bdad-d776ffd569a0\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " May 7 23:47:44.032634 kubelet[3497]: I0507 23:47:44.031590 3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-xtables-lock\") pod \"b9652676-af76-490e-bdad-d776ffd569a0\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " May 7 23:47:44.032634 kubelet[3497]: I0507 23:47:44.031622 3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-cilium-cgroup\") pod \"b9652676-af76-490e-bdad-d776ffd569a0\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " May 7 23:47:44.032944 kubelet[3497]: I0507 23:47:44.031658 3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9652676-af76-490e-bdad-d776ffd569a0-clustermesh-secrets\") pod \"b9652676-af76-490e-bdad-d776ffd569a0\" (UID: \"b9652676-af76-490e-bdad-d776ffd569a0\") " May 7 23:47:44.032944 kubelet[3497]: I0507 23:47:44.031698 3497 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5gqh\" (UniqueName: \"kubernetes.io/projected/9be17240-3ab4-46d4-8a63-b1eeffa423dd-kube-api-access-f5gqh\") pod \"9be17240-3ab4-46d4-8a63-b1eeffa423dd\" (UID: 
\"9be17240-3ab4-46d4-8a63-b1eeffa423dd\") " May 7 23:47:44.036801 kubelet[3497]: I0507 23:47:44.036640 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b9652676-af76-490e-bdad-d776ffd569a0" (UID: "b9652676-af76-490e-bdad-d776ffd569a0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:47:44.036801 kubelet[3497]: I0507 23:47:44.036753 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-hostproc" (OuterVolumeSpecName: "hostproc") pod "b9652676-af76-490e-bdad-d776ffd569a0" (UID: "b9652676-af76-490e-bdad-d776ffd569a0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:47:44.037563 kubelet[3497]: I0507 23:47:44.037085 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b9652676-af76-490e-bdad-d776ffd569a0" (UID: "b9652676-af76-490e-bdad-d776ffd569a0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:47:44.044775 kubelet[3497]: I0507 23:47:44.044623 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b9652676-af76-490e-bdad-d776ffd569a0" (UID: "b9652676-af76-490e-bdad-d776ffd569a0"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:47:44.046706 kubelet[3497]: I0507 23:47:44.046415 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b9652676-af76-490e-bdad-d776ffd569a0" (UID: "b9652676-af76-490e-bdad-d776ffd569a0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:47:44.046706 kubelet[3497]: I0507 23:47:44.046503 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b9652676-af76-490e-bdad-d776ffd569a0" (UID: "b9652676-af76-490e-bdad-d776ffd569a0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:47:44.046706 kubelet[3497]: I0507 23:47:44.046544 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b9652676-af76-490e-bdad-d776ffd569a0" (UID: "b9652676-af76-490e-bdad-d776ffd569a0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:47:44.046706 kubelet[3497]: I0507 23:47:44.046585 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b9652676-af76-490e-bdad-d776ffd569a0" (UID: "b9652676-af76-490e-bdad-d776ffd569a0"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:47:44.048284 kubelet[3497]: I0507 23:47:44.047455 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-cni-path" (OuterVolumeSpecName: "cni-path") pod "b9652676-af76-490e-bdad-d776ffd569a0" (UID: "b9652676-af76-490e-bdad-d776ffd569a0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:47:44.052301 kubelet[3497]: I0507 23:47:44.051294 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b9652676-af76-490e-bdad-d776ffd569a0" (UID: "b9652676-af76-490e-bdad-d776ffd569a0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 7 23:47:44.052301 kubelet[3497]: I0507 23:47:44.051514 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9652676-af76-490e-bdad-d776ffd569a0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b9652676-af76-490e-bdad-d776ffd569a0" (UID: "b9652676-af76-490e-bdad-d776ffd569a0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 7 23:47:44.065191 kubelet[3497]: I0507 23:47:44.065102 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be17240-3ab4-46d4-8a63-b1eeffa423dd-kube-api-access-f5gqh" (OuterVolumeSpecName: "kube-api-access-f5gqh") pod "9be17240-3ab4-46d4-8a63-b1eeffa423dd" (UID: "9be17240-3ab4-46d4-8a63-b1eeffa423dd"). InnerVolumeSpecName "kube-api-access-f5gqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 7 23:47:44.066553 kubelet[3497]: I0507 23:47:44.066468 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9652676-af76-490e-bdad-d776ffd569a0-kube-api-access-sv78g" (OuterVolumeSpecName: "kube-api-access-sv78g") pod "b9652676-af76-490e-bdad-d776ffd569a0" (UID: "b9652676-af76-490e-bdad-d776ffd569a0"). InnerVolumeSpecName "kube-api-access-sv78g". PluginName "kubernetes.io/projected", VolumeGidValue "" May 7 23:47:44.070526 kubelet[3497]: I0507 23:47:44.070454 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9652676-af76-490e-bdad-d776ffd569a0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b9652676-af76-490e-bdad-d776ffd569a0" (UID: "b9652676-af76-490e-bdad-d776ffd569a0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 7 23:47:44.072613 kubelet[3497]: I0507 23:47:44.072533 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9be17240-3ab4-46d4-8a63-b1eeffa423dd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9be17240-3ab4-46d4-8a63-b1eeffa423dd" (UID: "9be17240-3ab4-46d4-8a63-b1eeffa423dd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 7 23:47:44.074462 kubelet[3497]: I0507 23:47:44.074394 3497 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9652676-af76-490e-bdad-d776ffd569a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b9652676-af76-490e-bdad-d776ffd569a0" (UID: "b9652676-af76-490e-bdad-d776ffd569a0"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 7 23:47:44.132876 kubelet[3497]: I0507 23:47:44.132828 3497 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-host-proc-sys-net\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.133282 kubelet[3497]: I0507 23:47:44.133082 3497 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-hostproc\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.133282 kubelet[3497]: I0507 23:47:44.133110 3497 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-etc-cni-netd\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.133282 kubelet[3497]: I0507 23:47:44.133156 3497 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-sv78g\" (UniqueName: \"kubernetes.io/projected/b9652676-af76-490e-bdad-d776ffd569a0-kube-api-access-sv78g\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.133282 kubelet[3497]: I0507 23:47:44.133180 3497 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-bpf-maps\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.133282 kubelet[3497]: I0507 23:47:44.133205 3497 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9652676-af76-490e-bdad-d776ffd569a0-cilium-config-path\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.133841 kubelet[3497]: I0507 23:47:44.133594 3497 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-cni-path\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.133841 kubelet[3497]: I0507 23:47:44.133623 3497 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-host-proc-sys-kernel\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.133841 kubelet[3497]: I0507 23:47:44.133643 3497 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9be17240-3ab4-46d4-8a63-b1eeffa423dd-cilium-config-path\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.133841 kubelet[3497]: I0507 23:47:44.133689 3497 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-lib-modules\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.133841 kubelet[3497]: I0507 23:47:44.133709 3497 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-cilium-run\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.133841 kubelet[3497]: I0507 23:47:44.133727 3497 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-xtables-lock\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.133841 kubelet[3497]: I0507 23:47:44.133769 3497 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9652676-af76-490e-bdad-d776ffd569a0-cilium-cgroup\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.133841 kubelet[3497]: I0507 23:47:44.133796 3497 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9652676-af76-490e-bdad-d776ffd569a0-clustermesh-secrets\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.134369 kubelet[3497]: I0507 23:47:44.133818 3497 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-f5gqh\" (UniqueName: \"kubernetes.io/projected/9be17240-3ab4-46d4-8a63-b1eeffa423dd-kube-api-access-f5gqh\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.134369 kubelet[3497]: I0507 23:47:44.134297 3497 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9652676-af76-490e-bdad-d776ffd569a0-hubble-tls\") on node \"ip-172-31-25-188\" DevicePath \"\""
May 7 23:47:44.186551 kubelet[3497]: I0507 23:47:44.186398 3497 scope.go:117] "RemoveContainer" containerID="b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5"
May 7 23:47:44.193963 containerd[1959]: time="2025-05-07T23:47:44.193476459Z" level=info msg="RemoveContainer for \"b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5\""
May 7 23:47:44.206041 systemd[1]: Removed slice kubepods-burstable-podb9652676_af76_490e_bdad_d776ffd569a0.slice - libcontainer container kubepods-burstable-podb9652676_af76_490e_bdad_d776ffd569a0.slice.
May 7 23:47:44.207412 systemd[1]: kubepods-burstable-podb9652676_af76_490e_bdad_d776ffd569a0.slice: Consumed 14.911s CPU time, 125.7M memory peak, 136K read from disk, 12.9M written to disk.
May 7 23:47:44.214164 containerd[1959]: time="2025-05-07T23:47:44.213816340Z" level=info msg="RemoveContainer for \"b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5\" returns successfully"
May 7 23:47:44.214363 systemd[1]: Removed slice kubepods-besteffort-pod9be17240_3ab4_46d4_8a63_b1eeffa423dd.slice - libcontainer container kubepods-besteffort-pod9be17240_3ab4_46d4_8a63_b1eeffa423dd.slice.
May 7 23:47:44.215998 kubelet[3497]: I0507 23:47:44.214542 3497 scope.go:117] "RemoveContainer" containerID="6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435"
May 7 23:47:44.220008 containerd[1959]: time="2025-05-07T23:47:44.219940116Z" level=info msg="RemoveContainer for \"6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435\""
May 7 23:47:44.227881 containerd[1959]: time="2025-05-07T23:47:44.227808108Z" level=info msg="RemoveContainer for \"6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435\" returns successfully"
May 7 23:47:44.228351 kubelet[3497]: I0507 23:47:44.228306 3497 scope.go:117] "RemoveContainer" containerID="5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c"
May 7 23:47:44.232091 containerd[1959]: time="2025-05-07T23:47:44.231482122Z" level=info msg="RemoveContainer for \"5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c\""
May 7 23:47:44.238594 containerd[1959]: time="2025-05-07T23:47:44.238498049Z" level=info msg="RemoveContainer for \"5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c\" returns successfully"
May 7 23:47:44.239606 kubelet[3497]: I0507 23:47:44.239431 3497 scope.go:117] "RemoveContainer" containerID="9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4"
May 7 23:47:44.245065 containerd[1959]: time="2025-05-07T23:47:44.244840128Z" level=info msg="RemoveContainer for \"9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4\""
May 7 23:47:44.251967 containerd[1959]: time="2025-05-07T23:47:44.251863095Z" level=info msg="RemoveContainer for \"9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4\" returns successfully"
May 7 23:47:44.253460 kubelet[3497]: I0507 23:47:44.253380 3497 scope.go:117] "RemoveContainer" containerID="1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218"
May 7 23:47:44.256165 containerd[1959]: time="2025-05-07T23:47:44.255986536Z" level=info msg="RemoveContainer for \"1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218\""
May 7 23:47:44.266004 containerd[1959]: time="2025-05-07T23:47:44.265928051Z" level=info msg="RemoveContainer for \"1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218\" returns successfully"
May 7 23:47:44.266822 kubelet[3497]: I0507 23:47:44.266672 3497 scope.go:117] "RemoveContainer" containerID="b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5"
May 7 23:47:44.268321 containerd[1959]: time="2025-05-07T23:47:44.267491829Z" level=error msg="ContainerStatus for \"b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5\": not found"
May 7 23:47:44.268321 containerd[1959]: time="2025-05-07T23:47:44.268272146Z" level=error msg="ContainerStatus for \"6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435\": not found"
May 7 23:47:44.268548 kubelet[3497]: E0507 23:47:44.267735 3497 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5\": not found" containerID="b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5"
May 7 23:47:44.268548 kubelet[3497]: I0507 23:47:44.267780 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5"} err="failed to get container status \"b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"b557357352c8f2913b064de7ac0287ed5f7aa7f7ae6836543026b1244ad114d5\": not found"
May 7 23:47:44.268548 kubelet[3497]: I0507 23:47:44.267944 3497 scope.go:117] "RemoveContainer" containerID="6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435"
May 7 23:47:44.268548 kubelet[3497]: E0507 23:47:44.268497 3497 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435\": not found" containerID="6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435"
May 7 23:47:44.268548 kubelet[3497]: I0507 23:47:44.268539 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435"} err="failed to get container status \"6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ecff04b779ec7d8768cea5686a597ffd430098c6a6b69bddd883f175d1f3435\": not found"
May 7 23:47:44.268858 kubelet[3497]: I0507 23:47:44.268573 3497 scope.go:117] "RemoveContainer" containerID="5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c"
May 7 23:47:44.269126 containerd[1959]: time="2025-05-07T23:47:44.269072194Z" level=error msg="ContainerStatus for \"5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c\": not found"
May 7 23:47:44.269375 kubelet[3497]: E0507 23:47:44.269324 3497 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c\": not found" containerID="5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c"
May 7 23:47:44.269448 kubelet[3497]: I0507 23:47:44.269374 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c"} err="failed to get container status \"5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c\": rpc error: code = NotFound desc = an error occurred when try to find container \"5931c3f79eb86c269a3ebf3e141679d0a9e4c9b67ae78da305a724d7b2928f9c\": not found"
May 7 23:47:44.269448 kubelet[3497]: I0507 23:47:44.269413 3497 scope.go:117] "RemoveContainer" containerID="9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4"
May 7 23:47:44.270110 containerd[1959]: time="2025-05-07T23:47:44.269871582Z" level=error msg="ContainerStatus for \"9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4\": not found"
May 7 23:47:44.270246 kubelet[3497]: E0507 23:47:44.270192 3497 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4\": not found" containerID="9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4"
May 7 23:47:44.270246 kubelet[3497]: I0507 23:47:44.270238 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4"} err="failed to get container status \"9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d3fe43585f64b118b0cdad5f89a810d7c79551b5bdde476a69aeab02933add4\": not found"
May 7 23:47:44.270483 kubelet[3497]: I0507 23:47:44.270297 3497 scope.go:117] "RemoveContainer" containerID="1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218"
May 7 23:47:44.270662 containerd[1959]: time="2025-05-07T23:47:44.270604404Z" level=error msg="ContainerStatus for \"1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218\": not found"
May 7 23:47:44.271109 kubelet[3497]: E0507 23:47:44.271049 3497 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218\": not found" containerID="1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218"
May 7 23:47:44.271221 kubelet[3497]: I0507 23:47:44.271126 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218"} err="failed to get container status \"1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218\": rpc error: code = NotFound desc = an error occurred when try to find container \"1390b03c2b308110ffaf4302a495b335e273f728ce230872ecfab7c0a42a3218\": not found"
May 7 23:47:44.271221 kubelet[3497]: I0507 23:47:44.271160 3497 scope.go:117] "RemoveContainer" containerID="415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0"
May 7 23:47:44.273729 containerd[1959]: time="2025-05-07T23:47:44.273665692Z" level=info msg="RemoveContainer for \"415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0\""
May 7 23:47:44.280276 containerd[1959]: time="2025-05-07T23:47:44.280179082Z" level=info msg="RemoveContainer for \"415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0\" returns successfully"
May 7 23:47:44.281063 kubelet[3497]: I0507 23:47:44.280528 3497 scope.go:117] "RemoveContainer" containerID="415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0"
May 7 23:47:44.281165 containerd[1959]: time="2025-05-07T23:47:44.280892077Z" level=error msg="ContainerStatus for \"415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0\": not found"
May 7 23:47:44.281224 kubelet[3497]: E0507 23:47:44.281172 3497 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0\": not found" containerID="415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0"
May 7 23:47:44.281524 kubelet[3497]: I0507 23:47:44.281215 3497 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0"} err="failed to get container status \"415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0\": rpc error: code = NotFound desc = an error occurred when try to find container \"415d30f50dea83d62aeeeb6a8f517a547994d3b42873cd278208622608b26be0\": not found"
May 7 23:47:44.489662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3-rootfs.mount: Deactivated successfully.
May 7 23:47:44.489880 systemd[1]: var-lib-kubelet-pods-9be17240\x2d3ab4\x2d46d4\x2d8a63\x2db1eeffa423dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df5gqh.mount: Deactivated successfully.
May 7 23:47:44.490036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f-rootfs.mount: Deactivated successfully.
May 7 23:47:44.490204 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f-shm.mount: Deactivated successfully.
May 7 23:47:44.490424 systemd[1]: var-lib-kubelet-pods-b9652676\x2daf76\x2d490e\x2dbdad\x2dd776ffd569a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsv78g.mount: Deactivated successfully.
May 7 23:47:44.490598 systemd[1]: var-lib-kubelet-pods-b9652676\x2daf76\x2d490e\x2dbdad\x2dd776ffd569a0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 7 23:47:44.490774 systemd[1]: var-lib-kubelet-pods-b9652676\x2daf76\x2d490e\x2dbdad\x2dd776ffd569a0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 7 23:47:44.733885 kubelet[3497]: I0507 23:47:44.733762 3497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be17240-3ab4-46d4-8a63-b1eeffa423dd" path="/var/lib/kubelet/pods/9be17240-3ab4-46d4-8a63-b1eeffa423dd/volumes"
May 7 23:47:44.734914 kubelet[3497]: I0507 23:47:44.734869 3497 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9652676-af76-490e-bdad-d776ffd569a0" path="/var/lib/kubelet/pods/b9652676-af76-490e-bdad-d776ffd569a0/volumes"
May 7 23:47:45.391612 sshd[5146]: Connection closed by 147.75.109.163 port 54950
May 7 23:47:45.392198 sshd-session[5144]: pam_unix(sshd:session): session closed for user core
May 7 23:47:45.399525 systemd[1]: sshd@27-172.31.25.188:22-147.75.109.163:54950.service: Deactivated successfully.
May 7 23:47:45.404851 systemd[1]: session-28.scope: Deactivated successfully.
May 7 23:47:45.405694 systemd[1]: session-28.scope: Consumed 3.061s CPU time, 25.7M memory peak.
May 7 23:47:45.407240 systemd-logind[1941]: Session 28 logged out. Waiting for processes to exit.
May 7 23:47:45.409971 systemd-logind[1941]: Removed session 28.
May 7 23:47:45.436767 systemd[1]: Started sshd@28-172.31.25.188:22-147.75.109.163:54960.service - OpenSSH per-connection server daemon (147.75.109.163:54960).
May 7 23:47:45.622970 sshd[5310]: Accepted publickey for core from 147.75.109.163 port 54960 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:47:45.625603 sshd-session[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:45.634171 systemd-logind[1941]: New session 29 of user core.
May 7 23:47:45.642602 systemd[1]: Started session-29.scope - Session 29 of User core.
May 7 23:47:46.475232 ntpd[1935]: Deleting interface #12 lxc_health, fe80::18a9:4cff:fead:b9d4%8#123, interface stats: received=0, sent=0, dropped=0, active_time=70 secs
May 7 23:47:46.475920 ntpd[1935]: 7 May 23:47:46 ntpd[1935]: Deleting interface #12 lxc_health, fe80::18a9:4cff:fead:b9d4%8#123, interface stats: received=0, sent=0, dropped=0, active_time=70 secs
May 7 23:47:46.935960 kubelet[3497]: E0507 23:47:46.935816 3497 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 7 23:47:47.097425 sshd[5312]: Connection closed by 147.75.109.163 port 54960
May 7 23:47:47.100217 sshd-session[5310]: pam_unix(sshd:session): session closed for user core
May 7 23:47:47.108358 systemd[1]: sshd@28-172.31.25.188:22-147.75.109.163:54960.service: Deactivated successfully.
May 7 23:47:47.114852 systemd[1]: session-29.scope: Deactivated successfully.
May 7 23:47:47.118724 systemd[1]: session-29.scope: Consumed 1.262s CPU time, 23.5M memory peak.
May 7 23:47:47.122855 kubelet[3497]: I0507 23:47:47.122786 3497 topology_manager.go:215] "Topology Admit Handler" podUID="9dbe0b97-951d-4a05-b5fa-b8d2c2054239" podNamespace="kube-system" podName="cilium-c75qn"
May 7 23:47:47.123042 kubelet[3497]: E0507 23:47:47.122885 3497 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b9652676-af76-490e-bdad-d776ffd569a0" containerName="mount-cgroup"
May 7 23:47:47.123042 kubelet[3497]: E0507 23:47:47.122905 3497 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b9652676-af76-490e-bdad-d776ffd569a0" containerName="apply-sysctl-overwrites"
May 7 23:47:47.123042 kubelet[3497]: E0507 23:47:47.122922 3497 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b9652676-af76-490e-bdad-d776ffd569a0" containerName="clean-cilium-state"
May 7 23:47:47.123042 kubelet[3497]: E0507 23:47:47.122937 3497 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9be17240-3ab4-46d4-8a63-b1eeffa423dd" containerName="cilium-operator"
May 7 23:47:47.123042 kubelet[3497]: E0507 23:47:47.122956 3497 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b9652676-af76-490e-bdad-d776ffd569a0" containerName="mount-bpf-fs"
May 7 23:47:47.123042 kubelet[3497]: E0507 23:47:47.122971 3497 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b9652676-af76-490e-bdad-d776ffd569a0" containerName="cilium-agent"
May 7 23:47:47.123042 kubelet[3497]: I0507 23:47:47.123013 3497 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be17240-3ab4-46d4-8a63-b1eeffa423dd" containerName="cilium-operator"
May 7 23:47:47.123042 kubelet[3497]: I0507 23:47:47.123028 3497 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9652676-af76-490e-bdad-d776ffd569a0" containerName="cilium-agent"
May 7 23:47:47.130104 systemd-logind[1941]: Session 29 logged out. Waiting for processes to exit.
May 7 23:47:47.138303 kubelet[3497]: W0507 23:47:47.138141 3497 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object
May 7 23:47:47.138303 kubelet[3497]: E0507 23:47:47.138207 3497 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object
May 7 23:47:47.140612 kubelet[3497]: W0507 23:47:47.140331 3497 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object
May 7 23:47:47.140612 kubelet[3497]: W0507 23:47:47.140433 3497 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object
May 7 23:47:47.140612 kubelet[3497]: E0507 23:47:47.140467 3497 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object
May 7 23:47:47.140612 kubelet[3497]: E0507 23:47:47.140507 3497 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object
May 7 23:47:47.140612 kubelet[3497]: W0507 23:47:47.140372 3497 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object
May 7 23:47:47.141068 kubelet[3497]: E0507 23:47:47.140559 3497 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-25-188" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-25-188' and this object
May 7 23:47:47.158013 kubelet[3497]: I0507 23:47:47.157954 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-hubble-tls\") pod \"cilium-c75qn\" (UID: \"9dbe0b97-951d-4a05-b5fa-b8d2c2054239\") " pod="kube-system/cilium-c75qn"
May 7 23:47:47.158013 kubelet[3497]: I0507 23:47:47.158028 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-bpf-maps\") pod \"cilium-c75qn\" (UID: \"9dbe0b97-951d-4a05-b5fa-b8d2c2054239\") " pod="kube-system/cilium-c75qn"
May 7 23:47:47.158224 kubelet[3497]: I0507 23:47:47.158068 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-etc-cni-netd\") pod \"cilium-c75qn\" (UID: \"9dbe0b97-951d-4a05-b5fa-b8d2c2054239\") " pod="kube-system/cilium-c75qn"
May 7 23:47:47.158224 kubelet[3497]: I0507 23:47:47.158103 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-xtables-lock\") pod \"cilium-c75qn\" (UID: \"9dbe0b97-951d-4a05-b5fa-b8d2c2054239\") " pod="kube-system/cilium-c75qn"
May 7 23:47:47.158224 kubelet[3497]: I0507 23:47:47.158139 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-cilium-ipsec-secrets\") pod \"cilium-c75qn\" (UID: \"9dbe0b97-951d-4a05-b5fa-b8d2c2054239\") " pod="kube-system/cilium-c75qn"
May 7 23:47:47.158224 kubelet[3497]: I0507 23:47:47.158179 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-cni-path\") pod \"cilium-c75qn\" (UID: \"9dbe0b97-951d-4a05-b5fa-b8d2c2054239\") " pod="kube-system/cilium-c75qn"
May 7 23:47:47.158224 kubelet[3497]: I0507 23:47:47.158215 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-host-proc-sys-kernel\") pod \"cilium-c75qn\" (UID: \"9dbe0b97-951d-4a05-b5fa-b8d2c2054239\") " pod="kube-system/cilium-c75qn"
May 7 23:47:47.159509 kubelet[3497]: I0507 23:47:47.159454 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-lib-modules\") pod \"cilium-c75qn\" (UID: \"9dbe0b97-951d-4a05-b5fa-b8d2c2054239\") " pod="kube-system/cilium-c75qn"
May 7 23:47:47.159628 kubelet[3497]: I0507 23:47:47.159534 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-cilium-run\") pod \"cilium-c75qn\" (UID: \"9dbe0b97-951d-4a05-b5fa-b8d2c2054239\") " pod="kube-system/cilium-c75qn"
May 7 23:47:47.159628 kubelet[3497]: I0507 23:47:47.159571 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-cilium-cgroup\") pod \"cilium-c75qn\" (UID: \"9dbe0b97-951d-4a05-b5fa-b8d2c2054239\") " pod="kube-system/cilium-c75qn"
May 7 23:47:47.159628 kubelet[3497]: I0507 23:47:47.159614 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-clustermesh-secrets\") pod \"cilium-c75qn\" (UID: \"9dbe0b97-951d-4a05-b5fa-b8d2c2054239\") " pod="kube-system/cilium-c75qn"
May 7 23:47:47.159817 kubelet[3497]: I0507 23:47:47.159653 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-hostproc\") pod \"cilium-c75qn\" (UID: \"9dbe0b97-951d-4a05-b5fa-b8d2c2054239\") " pod="kube-system/cilium-c75qn"
May 7 23:47:47.159817 kubelet[3497]: I0507 23:47:47.159687 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-cilium-config-path\") pod \"cilium-c75qn\" (UID: \"9dbe0b97-951d-4a05-b5fa-b8d2c2054239\") " pod="kube-system/cilium-c75qn"
May 7 23:47:47.159817 kubelet[3497]: I0507 23:47:47.159725 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-host-proc-sys-net\") pod \"cilium-c75qn\" (UID: \"9dbe0b97-951d-4a05-b5fa-b8d2c2054239\") " pod="kube-system/cilium-c75qn"
May 7 23:47:47.159817 kubelet[3497]: I0507 23:47:47.159759 3497 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgvkv\" (UniqueName: \"kubernetes.io/projected/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-kube-api-access-vgvkv\") pod \"cilium-c75qn\" (UID: \"9dbe0b97-951d-4a05-b5fa-b8d2c2054239\") " pod="kube-system/cilium-c75qn"
May 7 23:47:47.167812 systemd[1]: Started sshd@29-172.31.25.188:22-147.75.109.163:44484.service - OpenSSH per-connection server daemon (147.75.109.163:44484).
May 7 23:47:47.173400 systemd-logind[1941]: Removed session 29.
May 7 23:47:47.194420 systemd[1]: Created slice kubepods-burstable-pod9dbe0b97_951d_4a05_b5fa_b8d2c2054239.slice - libcontainer container kubepods-burstable-pod9dbe0b97_951d_4a05_b5fa_b8d2c2054239.slice.
May 7 23:47:47.423039 sshd[5322]: Accepted publickey for core from 147.75.109.163 port 44484 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:47:47.425475 sshd-session[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:47.434044 systemd-logind[1941]: New session 30 of user core.
May 7 23:47:47.444580 systemd[1]: Started session-30.scope - Session 30 of User core.
May 7 23:47:47.564035 sshd[5327]: Connection closed by 147.75.109.163 port 44484
May 7 23:47:47.564932 sshd-session[5322]: pam_unix(sshd:session): session closed for user core
May 7 23:47:47.571881 systemd[1]: sshd@29-172.31.25.188:22-147.75.109.163:44484.service: Deactivated successfully.
May 7 23:47:47.575642 systemd[1]: session-30.scope: Deactivated successfully.
May 7 23:47:47.577775 systemd-logind[1941]: Session 30 logged out. Waiting for processes to exit.
May 7 23:47:47.580112 systemd-logind[1941]: Removed session 30.
May 7 23:47:47.603843 systemd[1]: Started sshd@30-172.31.25.188:22-147.75.109.163:44492.service - OpenSSH per-connection server daemon (147.75.109.163:44492).
May 7 23:47:47.794462 sshd[5334]: Accepted publickey for core from 147.75.109.163 port 44492 ssh2: RSA SHA256:kQP1JwyMe/WwD6o95f0kuF0WNYd/0mECzU0K15pTcJg
May 7 23:47:47.797702 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 7 23:47:47.808711 systemd-logind[1941]: New session 31 of user core.
May 7 23:47:47.813554 systemd[1]: Started session-31.scope - Session 31 of User core.
May 7 23:47:48.261620 kubelet[3497]: E0507 23:47:48.261556 3497 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
May 7 23:47:48.262218 kubelet[3497]: E0507 23:47:48.261692 3497 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-clustermesh-secrets podName:9dbe0b97-951d-4a05-b5fa-b8d2c2054239 nodeName:}" failed. No retries permitted until 2025-05-07 23:47:48.761664611 +0000 UTC m=+112.251513853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-clustermesh-secrets") pod "cilium-c75qn" (UID: "9dbe0b97-951d-4a05-b5fa-b8d2c2054239") : failed to sync secret cache: timed out waiting for the condition
May 7 23:47:48.262218 kubelet[3497]: E0507 23:47:48.262010 3497 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
May 7 23:47:48.262218 kubelet[3497]: E0507 23:47:48.262099 3497 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-cilium-ipsec-secrets podName:9dbe0b97-951d-4a05-b5fa-b8d2c2054239 nodeName:}" failed. No retries permitted until 2025-05-07 23:47:48.762080743 +0000 UTC m=+112.251929985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-cilium-ipsec-secrets") pod "cilium-c75qn" (UID: "9dbe0b97-951d-4a05-b5fa-b8d2c2054239") : failed to sync secret cache: timed out waiting for the condition
May 7 23:47:48.262218 kubelet[3497]: E0507 23:47:48.261551 3497 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
May 7 23:47:48.262761 kubelet[3497]: E0507 23:47:48.262159 3497 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-cilium-config-path podName:9dbe0b97-951d-4a05-b5fa-b8d2c2054239 nodeName:}" failed. No retries permitted until 2025-05-07 23:47:48.762144287 +0000 UTC m=+112.251993517 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/9dbe0b97-951d-4a05-b5fa-b8d2c2054239-cilium-config-path") pod "cilium-c75qn" (UID: "9dbe0b97-951d-4a05-b5fa-b8d2c2054239") : failed to sync configmap cache: timed out waiting for the condition
May 7 23:47:49.039683 containerd[1959]: time="2025-05-07T23:47:49.039606932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c75qn,Uid:9dbe0b97-951d-4a05-b5fa-b8d2c2054239,Namespace:kube-system,Attempt:0,}"
May 7 23:47:49.083916 containerd[1959]: time="2025-05-07T23:47:49.083722519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 7 23:47:49.084338 containerd[1959]: time="2025-05-07T23:47:49.084010231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 7 23:47:49.084338 containerd[1959]: time="2025-05-07T23:47:49.084088732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:47:49.084549 containerd[1959]: time="2025-05-07T23:47:49.084317277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 7 23:47:49.126583 systemd[1]: Started cri-containerd-4a9c4b0ef40c0b8d74b169ca89d6a31d12c6c972b48f3bf0e3c3e38633b17549.scope - libcontainer container 4a9c4b0ef40c0b8d74b169ca89d6a31d12c6c972b48f3bf0e3c3e38633b17549.
May 7 23:47:49.172179 containerd[1959]: time="2025-05-07T23:47:49.172121142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c75qn,Uid:9dbe0b97-951d-4a05-b5fa-b8d2c2054239,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a9c4b0ef40c0b8d74b169ca89d6a31d12c6c972b48f3bf0e3c3e38633b17549\"" May 7 23:47:49.179149 containerd[1959]: time="2025-05-07T23:47:49.179071017Z" level=info msg="CreateContainer within sandbox \"4a9c4b0ef40c0b8d74b169ca89d6a31d12c6c972b48f3bf0e3c3e38633b17549\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 7 23:47:49.203080 containerd[1959]: time="2025-05-07T23:47:49.202948061Z" level=info msg="CreateContainer within sandbox \"4a9c4b0ef40c0b8d74b169ca89d6a31d12c6c972b48f3bf0e3c3e38633b17549\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"36cfb5688209ca9f00d0a7f8f07fd7b316656f58c26fcf5c69b4607ff37f5ea0\"" May 7 23:47:49.205536 containerd[1959]: time="2025-05-07T23:47:49.204044144Z" level=info msg="StartContainer for \"36cfb5688209ca9f00d0a7f8f07fd7b316656f58c26fcf5c69b4607ff37f5ea0\"" May 7 23:47:49.249557 systemd[1]: Started cri-containerd-36cfb5688209ca9f00d0a7f8f07fd7b316656f58c26fcf5c69b4607ff37f5ea0.scope - libcontainer container 36cfb5688209ca9f00d0a7f8f07fd7b316656f58c26fcf5c69b4607ff37f5ea0. May 7 23:47:49.302845 containerd[1959]: time="2025-05-07T23:47:49.302585589Z" level=info msg="StartContainer for \"36cfb5688209ca9f00d0a7f8f07fd7b316656f58c26fcf5c69b4607ff37f5ea0\" returns successfully" May 7 23:47:49.318792 systemd[1]: cri-containerd-36cfb5688209ca9f00d0a7f8f07fd7b316656f58c26fcf5c69b4607ff37f5ea0.scope: Deactivated successfully. 
May 7 23:47:49.377314 containerd[1959]: time="2025-05-07T23:47:49.376910237Z" level=info msg="shim disconnected" id=36cfb5688209ca9f00d0a7f8f07fd7b316656f58c26fcf5c69b4607ff37f5ea0 namespace=k8s.io May 7 23:47:49.377314 containerd[1959]: time="2025-05-07T23:47:49.376989721Z" level=warning msg="cleaning up after shim disconnected" id=36cfb5688209ca9f00d0a7f8f07fd7b316656f58c26fcf5c69b4607ff37f5ea0 namespace=k8s.io May 7 23:47:49.377314 containerd[1959]: time="2025-05-07T23:47:49.377012557Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:47:49.786633 kubelet[3497]: I0507 23:47:49.786280 3497 setters.go:580] "Node became not ready" node="ip-172-31-25-188" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-07T23:47:49Z","lastTransitionTime":"2025-05-07T23:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 7 23:47:50.251140 containerd[1959]: time="2025-05-07T23:47:50.249997169Z" level=info msg="CreateContainer within sandbox \"4a9c4b0ef40c0b8d74b169ca89d6a31d12c6c972b48f3bf0e3c3e38633b17549\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 7 23:47:50.282375 containerd[1959]: time="2025-05-07T23:47:50.282223704Z" level=info msg="CreateContainer within sandbox \"4a9c4b0ef40c0b8d74b169ca89d6a31d12c6c972b48f3bf0e3c3e38633b17549\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4211dbe93553fe65d79e149e07301ed87fd21c6104dfdba2d70e2d5efab7d75f\"" May 7 23:47:50.283460 containerd[1959]: time="2025-05-07T23:47:50.283203626Z" level=info msg="StartContainer for \"4211dbe93553fe65d79e149e07301ed87fd21c6104dfdba2d70e2d5efab7d75f\"" May 7 23:47:50.343569 systemd[1]: Started cri-containerd-4211dbe93553fe65d79e149e07301ed87fd21c6104dfdba2d70e2d5efab7d75f.scope - libcontainer container 
4211dbe93553fe65d79e149e07301ed87fd21c6104dfdba2d70e2d5efab7d75f. May 7 23:47:50.395113 containerd[1959]: time="2025-05-07T23:47:50.394653973Z" level=info msg="StartContainer for \"4211dbe93553fe65d79e149e07301ed87fd21c6104dfdba2d70e2d5efab7d75f\" returns successfully" May 7 23:47:50.407839 systemd[1]: cri-containerd-4211dbe93553fe65d79e149e07301ed87fd21c6104dfdba2d70e2d5efab7d75f.scope: Deactivated successfully. May 7 23:47:50.455841 containerd[1959]: time="2025-05-07T23:47:50.455707883Z" level=info msg="shim disconnected" id=4211dbe93553fe65d79e149e07301ed87fd21c6104dfdba2d70e2d5efab7d75f namespace=k8s.io May 7 23:47:50.456129 containerd[1959]: time="2025-05-07T23:47:50.455851307Z" level=warning msg="cleaning up after shim disconnected" id=4211dbe93553fe65d79e149e07301ed87fd21c6104dfdba2d70e2d5efab7d75f namespace=k8s.io May 7 23:47:50.456129 containerd[1959]: time="2025-05-07T23:47:50.455873844Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:47:50.779750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4211dbe93553fe65d79e149e07301ed87fd21c6104dfdba2d70e2d5efab7d75f-rootfs.mount: Deactivated successfully. 
May 7 23:47:51.262338 containerd[1959]: time="2025-05-07T23:47:51.262235264Z" level=info msg="CreateContainer within sandbox \"4a9c4b0ef40c0b8d74b169ca89d6a31d12c6c972b48f3bf0e3c3e38633b17549\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 7 23:47:51.304703 containerd[1959]: time="2025-05-07T23:47:51.304623440Z" level=info msg="CreateContainer within sandbox \"4a9c4b0ef40c0b8d74b169ca89d6a31d12c6c972b48f3bf0e3c3e38633b17549\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d67e8ba6837371eab6173eccd944cc07d04a502f55aa28fbfa2e50899744e05a\"" May 7 23:47:51.305629 containerd[1959]: time="2025-05-07T23:47:51.305388969Z" level=info msg="StartContainer for \"d67e8ba6837371eab6173eccd944cc07d04a502f55aa28fbfa2e50899744e05a\"" May 7 23:47:51.369594 systemd[1]: Started cri-containerd-d67e8ba6837371eab6173eccd944cc07d04a502f55aa28fbfa2e50899744e05a.scope - libcontainer container d67e8ba6837371eab6173eccd944cc07d04a502f55aa28fbfa2e50899744e05a. May 7 23:47:51.427787 systemd[1]: cri-containerd-d67e8ba6837371eab6173eccd944cc07d04a502f55aa28fbfa2e50899744e05a.scope: Deactivated successfully. 
May 7 23:47:51.428517 containerd[1959]: time="2025-05-07T23:47:51.428088225Z" level=info msg="StartContainer for \"d67e8ba6837371eab6173eccd944cc07d04a502f55aa28fbfa2e50899744e05a\" returns successfully" May 7 23:47:51.477301 containerd[1959]: time="2025-05-07T23:47:51.477186767Z" level=info msg="shim disconnected" id=d67e8ba6837371eab6173eccd944cc07d04a502f55aa28fbfa2e50899744e05a namespace=k8s.io May 7 23:47:51.477301 containerd[1959]: time="2025-05-07T23:47:51.477292830Z" level=warning msg="cleaning up after shim disconnected" id=d67e8ba6837371eab6173eccd944cc07d04a502f55aa28fbfa2e50899744e05a namespace=k8s.io May 7 23:47:51.477782 containerd[1959]: time="2025-05-07T23:47:51.477314479Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:47:51.779817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d67e8ba6837371eab6173eccd944cc07d04a502f55aa28fbfa2e50899744e05a-rootfs.mount: Deactivated successfully. May 7 23:47:51.937647 kubelet[3497]: E0507 23:47:51.937527 3497 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 7 23:47:52.273185 containerd[1959]: time="2025-05-07T23:47:52.272101353Z" level=info msg="CreateContainer within sandbox \"4a9c4b0ef40c0b8d74b169ca89d6a31d12c6c972b48f3bf0e3c3e38633b17549\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 7 23:47:52.317343 containerd[1959]: time="2025-05-07T23:47:52.317130872Z" level=info msg="CreateContainer within sandbox \"4a9c4b0ef40c0b8d74b169ca89d6a31d12c6c972b48f3bf0e3c3e38633b17549\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"041c6dc0ccc65da96bebfde058366dfd817cc0a3c7be06c795ba4ee232b887f2\"" May 7 23:47:52.319386 containerd[1959]: time="2025-05-07T23:47:52.318020994Z" level=info msg="StartContainer for \"041c6dc0ccc65da96bebfde058366dfd817cc0a3c7be06c795ba4ee232b887f2\"" May 7 
23:47:52.372580 systemd[1]: Started cri-containerd-041c6dc0ccc65da96bebfde058366dfd817cc0a3c7be06c795ba4ee232b887f2.scope - libcontainer container 041c6dc0ccc65da96bebfde058366dfd817cc0a3c7be06c795ba4ee232b887f2. May 7 23:47:52.418479 systemd[1]: cri-containerd-041c6dc0ccc65da96bebfde058366dfd817cc0a3c7be06c795ba4ee232b887f2.scope: Deactivated successfully. May 7 23:47:52.424170 containerd[1959]: time="2025-05-07T23:47:52.424013124Z" level=info msg="StartContainer for \"041c6dc0ccc65da96bebfde058366dfd817cc0a3c7be06c795ba4ee232b887f2\" returns successfully" May 7 23:47:52.466750 containerd[1959]: time="2025-05-07T23:47:52.466669305Z" level=info msg="shim disconnected" id=041c6dc0ccc65da96bebfde058366dfd817cc0a3c7be06c795ba4ee232b887f2 namespace=k8s.io May 7 23:47:52.466750 containerd[1959]: time="2025-05-07T23:47:52.466747542Z" level=warning msg="cleaning up after shim disconnected" id=041c6dc0ccc65da96bebfde058366dfd817cc0a3c7be06c795ba4ee232b887f2 namespace=k8s.io May 7 23:47:52.467296 containerd[1959]: time="2025-05-07T23:47:52.466768508Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:47:52.781126 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-041c6dc0ccc65da96bebfde058366dfd817cc0a3c7be06c795ba4ee232b887f2-rootfs.mount: Deactivated successfully. 
May 7 23:47:53.274311 containerd[1959]: time="2025-05-07T23:47:53.274113172Z" level=info msg="CreateContainer within sandbox \"4a9c4b0ef40c0b8d74b169ca89d6a31d12c6c972b48f3bf0e3c3e38633b17549\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 7 23:47:53.313880 containerd[1959]: time="2025-05-07T23:47:53.313675933Z" level=info msg="CreateContainer within sandbox \"4a9c4b0ef40c0b8d74b169ca89d6a31d12c6c972b48f3bf0e3c3e38633b17549\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"577c2a44d60918af6af38dcd41d4e5f9e8f6253408a5a263eccdf4a774d5b83a\"" May 7 23:47:53.315538 containerd[1959]: time="2025-05-07T23:47:53.315286511Z" level=info msg="StartContainer for \"577c2a44d60918af6af38dcd41d4e5f9e8f6253408a5a263eccdf4a774d5b83a\"" May 7 23:47:53.377562 systemd[1]: Started cri-containerd-577c2a44d60918af6af38dcd41d4e5f9e8f6253408a5a263eccdf4a774d5b83a.scope - libcontainer container 577c2a44d60918af6af38dcd41d4e5f9e8f6253408a5a263eccdf4a774d5b83a. May 7 23:47:53.438894 containerd[1959]: time="2025-05-07T23:47:53.438805737Z" level=info msg="StartContainer for \"577c2a44d60918af6af38dcd41d4e5f9e8f6253408a5a263eccdf4a774d5b83a\" returns successfully" May 7 23:47:54.270678 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 7 23:47:54.729950 kubelet[3497]: E0507 23:47:54.729203 3497 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-gmpql" podUID="1dfe130c-0a83-446b-b9f0-8fc1d560a8f8" May 7 23:47:56.731298 kubelet[3497]: E0507 23:47:56.729092 3497 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="kube-system/coredns-7db6d8ff4d-gmpql" podUID="1dfe130c-0a83-446b-b9f0-8fc1d560a8f8" May 7 23:47:56.789461 containerd[1959]: time="2025-05-07T23:47:56.789400855Z" level=info msg="StopPodSandbox for \"49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3\"" May 7 23:47:56.790041 containerd[1959]: time="2025-05-07T23:47:56.789547722Z" level=info msg="TearDown network for sandbox \"49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3\" successfully" May 7 23:47:56.790041 containerd[1959]: time="2025-05-07T23:47:56.789572597Z" level=info msg="StopPodSandbox for \"49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3\" returns successfully" May 7 23:47:56.791097 containerd[1959]: time="2025-05-07T23:47:56.790992867Z" level=info msg="RemovePodSandbox for \"49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3\"" May 7 23:47:56.791271 containerd[1959]: time="2025-05-07T23:47:56.791067254Z" level=info msg="Forcibly stopping sandbox \"49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3\"" May 7 23:47:56.791476 containerd[1959]: time="2025-05-07T23:47:56.791429436Z" level=info msg="TearDown network for sandbox \"49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3\" successfully" May 7 23:47:56.800008 containerd[1959]: time="2025-05-07T23:47:56.799850952Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 7 23:47:56.800224 containerd[1959]: time="2025-05-07T23:47:56.800064877Z" level=info msg="RemovePodSandbox \"49bbc4936a3eae76326820ae18944db32563f76ab093b1b1da3651f30640c6c3\" returns successfully" May 7 23:47:56.801762 containerd[1959]: time="2025-05-07T23:47:56.801480672Z" level=info msg="StopPodSandbox for \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\"" May 7 23:47:56.801762 containerd[1959]: time="2025-05-07T23:47:56.801624828Z" level=info msg="TearDown network for sandbox \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" successfully" May 7 23:47:56.801762 containerd[1959]: time="2025-05-07T23:47:56.801646645Z" level=info msg="StopPodSandbox for \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" returns successfully" May 7 23:47:56.803449 containerd[1959]: time="2025-05-07T23:47:56.803377451Z" level=info msg="RemovePodSandbox for \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\"" May 7 23:47:56.803583 containerd[1959]: time="2025-05-07T23:47:56.803469985Z" level=info msg="Forcibly stopping sandbox \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\"" May 7 23:47:56.803996 containerd[1959]: time="2025-05-07T23:47:56.803582345Z" level=info msg="TearDown network for sandbox \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" successfully" May 7 23:47:56.821064 containerd[1959]: time="2025-05-07T23:47:56.820963090Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 7 23:47:56.821283 containerd[1959]: time="2025-05-07T23:47:56.821122071Z" level=info msg="RemovePodSandbox \"167a827ed7c0e1f10ada3bd9055f524084c6d9d80f254b37ede7cf93cd34a39f\" returns successfully" May 7 23:47:58.632118 systemd[1]: run-containerd-runc-k8s.io-577c2a44d60918af6af38dcd41d4e5f9e8f6253408a5a263eccdf4a774d5b83a-runc.ZsYMid.mount: Deactivated successfully. May 7 23:47:58.753679 systemd-networkd[1874]: lxc_health: Link UP May 7 23:47:58.761453 systemd-networkd[1874]: lxc_health: Gained carrier May 7 23:47:58.771059 (udev-worker)[6177]: Network interface NamePolicy= disabled on kernel command line. May 7 23:47:59.083500 kubelet[3497]: I0507 23:47:59.082806 3497 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c75qn" podStartSLOduration=12.082785332 podStartE2EDuration="12.082785332s" podCreationTimestamp="2025-05-07 23:47:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-07 23:47:54.327669529 +0000 UTC m=+117.817518771" watchObservedRunningTime="2025-05-07 23:47:59.082785332 +0000 UTC m=+122.572634586" May 7 23:48:00.356009 systemd-networkd[1874]: lxc_health: Gained IPv6LL May 7 23:48:01.194554 kubelet[3497]: E0507 23:48:01.194498 3497 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40714->127.0.0.1:35969: write tcp 127.0.0.1:40714->127.0.0.1:35969: write: broken pipe May 7 23:48:02.475331 ntpd[1935]: Listen normally on 15 lxc_health [fe80::887c:6fff:feec:d552%14]:123 May 7 23:48:02.475998 ntpd[1935]: 7 May 23:48:02 ntpd[1935]: Listen normally on 15 lxc_health [fe80::887c:6fff:feec:d552%14]:123 May 7 23:48:03.379094 systemd[1]: run-containerd-runc-k8s.io-577c2a44d60918af6af38dcd41d4e5f9e8f6253408a5a263eccdf4a774d5b83a-runc.9X7lO8.mount: Deactivated successfully. 
May 7 23:48:05.661220 systemd[1]: run-containerd-runc-k8s.io-577c2a44d60918af6af38dcd41d4e5f9e8f6253408a5a263eccdf4a774d5b83a-runc.708Hhc.mount: Deactivated successfully. May 7 23:48:05.789846 sshd[5336]: Connection closed by 147.75.109.163 port 44492 May 7 23:48:05.791232 sshd-session[5334]: pam_unix(sshd:session): session closed for user core May 7 23:48:05.800940 systemd[1]: sshd@30-172.31.25.188:22-147.75.109.163:44492.service: Deactivated successfully. May 7 23:48:05.807743 systemd[1]: session-31.scope: Deactivated successfully. May 7 23:48:05.813670 systemd-logind[1941]: Session 31 logged out. Waiting for processes to exit. May 7 23:48:05.816792 systemd-logind[1941]: Removed session 31. May 7 23:48:19.933950 kubelet[3497]: E0507 23:48:19.933872 3497 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-188?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" May 7 23:48:20.121849 systemd[1]: cri-containerd-06609b840c9f561a8c3781259e0fa65779cba1d8877d610d11be9a7a4bba107d.scope: Deactivated successfully. May 7 23:48:20.123166 systemd[1]: cri-containerd-06609b840c9f561a8c3781259e0fa65779cba1d8877d610d11be9a7a4bba107d.scope: Consumed 5.099s CPU time, 57.3M memory peak. May 7 23:48:20.163110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06609b840c9f561a8c3781259e0fa65779cba1d8877d610d11be9a7a4bba107d-rootfs.mount: Deactivated successfully. 
May 7 23:48:20.175590 containerd[1959]: time="2025-05-07T23:48:20.175491774Z" level=info msg="shim disconnected" id=06609b840c9f561a8c3781259e0fa65779cba1d8877d610d11be9a7a4bba107d namespace=k8s.io May 7 23:48:20.176404 containerd[1959]: time="2025-05-07T23:48:20.175593003Z" level=warning msg="cleaning up after shim disconnected" id=06609b840c9f561a8c3781259e0fa65779cba1d8877d610d11be9a7a4bba107d namespace=k8s.io May 7 23:48:20.176404 containerd[1959]: time="2025-05-07T23:48:20.175616056Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:48:20.359458 kubelet[3497]: I0507 23:48:20.359395 3497 scope.go:117] "RemoveContainer" containerID="06609b840c9f561a8c3781259e0fa65779cba1d8877d610d11be9a7a4bba107d" May 7 23:48:20.363524 containerd[1959]: time="2025-05-07T23:48:20.363327877Z" level=info msg="CreateContainer within sandbox \"bfa778508d0c3c054cdb46b5ae41b31c638e7443e9cafd614d7630d0d4e6c210\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" May 7 23:48:20.390050 containerd[1959]: time="2025-05-07T23:48:20.389901172Z" level=info msg="CreateContainer within sandbox \"bfa778508d0c3c054cdb46b5ae41b31c638e7443e9cafd614d7630d0d4e6c210\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b77ca843c47fe5e8f852eade189f8b30a711576dc91ae1ad94416f6b7af30cfc\"" May 7 23:48:20.391306 containerd[1959]: time="2025-05-07T23:48:20.391145010Z" level=info msg="StartContainer for \"b77ca843c47fe5e8f852eade189f8b30a711576dc91ae1ad94416f6b7af30cfc\"" May 7 23:48:20.448589 systemd[1]: Started cri-containerd-b77ca843c47fe5e8f852eade189f8b30a711576dc91ae1ad94416f6b7af30cfc.scope - libcontainer container b77ca843c47fe5e8f852eade189f8b30a711576dc91ae1ad94416f6b7af30cfc. 
May 7 23:48:20.516644 containerd[1959]: time="2025-05-07T23:48:20.516571870Z" level=info msg="StartContainer for \"b77ca843c47fe5e8f852eade189f8b30a711576dc91ae1ad94416f6b7af30cfc\" returns successfully" May 7 23:48:24.765084 systemd[1]: cri-containerd-7a66a89709b9c37e8c7378566eecdbd112306441df990d11f4b93c029376b771.scope: Deactivated successfully. May 7 23:48:24.766389 systemd[1]: cri-containerd-7a66a89709b9c37e8c7378566eecdbd112306441df990d11f4b93c029376b771.scope: Consumed 3.339s CPU time, 21.2M memory peak. May 7 23:48:24.808559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a66a89709b9c37e8c7378566eecdbd112306441df990d11f4b93c029376b771-rootfs.mount: Deactivated successfully. May 7 23:48:24.821832 containerd[1959]: time="2025-05-07T23:48:24.821736995Z" level=info msg="shim disconnected" id=7a66a89709b9c37e8c7378566eecdbd112306441df990d11f4b93c029376b771 namespace=k8s.io May 7 23:48:24.822593 containerd[1959]: time="2025-05-07T23:48:24.821826434Z" level=warning msg="cleaning up after shim disconnected" id=7a66a89709b9c37e8c7378566eecdbd112306441df990d11f4b93c029376b771 namespace=k8s.io May 7 23:48:24.822593 containerd[1959]: time="2025-05-07T23:48:24.821854296Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 7 23:48:25.385513 kubelet[3497]: I0507 23:48:25.385033 3497 scope.go:117] "RemoveContainer" containerID="7a66a89709b9c37e8c7378566eecdbd112306441df990d11f4b93c029376b771" May 7 23:48:25.390538 containerd[1959]: time="2025-05-07T23:48:25.389943763Z" level=info msg="CreateContainer within sandbox \"56f1d7efaff03117e9d6450d52303bb155d89984275e143cd56e0cc38994294c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" May 7 23:48:25.420997 containerd[1959]: time="2025-05-07T23:48:25.420918652Z" level=info msg="CreateContainer within sandbox \"56f1d7efaff03117e9d6450d52303bb155d89984275e143cd56e0cc38994294c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id 
\"d1f7eebe4f17765f0a841157fe32f45d3c5ee17cb1b2068b8c2b3172691574d1\"" May 7 23:48:25.421960 containerd[1959]: time="2025-05-07T23:48:25.421909368Z" level=info msg="StartContainer for \"d1f7eebe4f17765f0a841157fe32f45d3c5ee17cb1b2068b8c2b3172691574d1\"" May 7 23:48:25.476641 systemd[1]: Started cri-containerd-d1f7eebe4f17765f0a841157fe32f45d3c5ee17cb1b2068b8c2b3172691574d1.scope - libcontainer container d1f7eebe4f17765f0a841157fe32f45d3c5ee17cb1b2068b8c2b3172691574d1. May 7 23:48:25.544732 containerd[1959]: time="2025-05-07T23:48:25.544495977Z" level=info msg="StartContainer for \"d1f7eebe4f17765f0a841157fe32f45d3c5ee17cb1b2068b8c2b3172691574d1\" returns successfully" May 7 23:48:29.934817 kubelet[3497]: E0507 23:48:29.934319 3497 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-188?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" May 7 23:48:39.935604 kubelet[3497]: E0507 23:48:39.935046 3497 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-188?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"