May 27 02:46:48.103609 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
May 27 02:46:48.103652 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 27 01:20:04 -00 2025
May 27 02:46:48.103676 kernel: KASLR disabled due to lack of seed
May 27 02:46:48.103692 kernel: efi: EFI v2.7 by EDK II
May 27 02:46:48.103707 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a733a98 MEMRESERVE=0x78551598
May 27 02:46:48.103722 kernel: secureboot: Secure boot disabled
May 27 02:46:48.103738 kernel: ACPI: Early table checksum verification disabled
May 27 02:46:48.103753 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
May 27 02:46:48.103768 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
May 27 02:46:48.103783 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 27 02:46:48.103802 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
May 27 02:46:48.103817 kernel: ACPI: FACS 0x0000000078630000 000040
May 27 02:46:48.103832 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 27 02:46:48.103847 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
May 27 02:46:48.103864 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
May 27 02:46:48.103879 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
May 27 02:46:48.103899 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 27 02:46:48.103915 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
May 27 02:46:48.103930 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
May 27 02:46:48.104015 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
May 27 02:46:48.104035 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
May 27 02:46:48.104051 kernel: printk: legacy bootconsole [uart0] enabled
May 27 02:46:48.104067 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 27 02:46:48.104083 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
May 27 02:46:48.104099 kernel: NODE_DATA(0) allocated [mem 0x4b584cdc0-0x4b5853fff]
May 27 02:46:48.104114 kernel: Zone ranges:
May 27 02:46:48.104136 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
May 27 02:46:48.104151 kernel:   DMA32    empty
May 27 02:46:48.104167 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
May 27 02:46:48.104182 kernel:   Device   empty
May 27 02:46:48.104197 kernel: Movable zone start for each node
May 27 02:46:48.104212 kernel: Early memory node ranges
May 27 02:46:48.104227 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
May 27 02:46:48.104243 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
May 27 02:46:48.104259 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
May 27 02:46:48.104274 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
May 27 02:46:48.104289 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
May 27 02:46:48.104304 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
May 27 02:46:48.104323 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
May 27 02:46:48.104339 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
May 27 02:46:48.104361 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
May 27 02:46:48.104378 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
May 27 02:46:48.104394 kernel: psci: probing for conduit method from ACPI.
May 27 02:46:48.104414 kernel: psci: PSCIv1.0 detected in firmware.
May 27 02:46:48.104431 kernel: psci: Using standard PSCI v0.2 function IDs
May 27 02:46:48.104447 kernel: psci: Trusted OS migration not required
May 27 02:46:48.104463 kernel: psci: SMC Calling Convention v1.1
May 27 02:46:48.104479 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 27 02:46:48.104495 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 27 02:46:48.104513 kernel: pcpu-alloc: [0] 0 [0] 1
May 27 02:46:48.104530 kernel: Detected PIPT I-cache on CPU0
May 27 02:46:48.104547 kernel: CPU features: detected: GIC system register CPU interface
May 27 02:46:48.104563 kernel: CPU features: detected: Spectre-v2
May 27 02:46:48.104579 kernel: CPU features: detected: Spectre-v3a
May 27 02:46:48.104595 kernel: CPU features: detected: Spectre-BHB
May 27 02:46:48.104615 kernel: CPU features: detected: ARM erratum 1742098
May 27 02:46:48.104631 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
May 27 02:46:48.104648 kernel: alternatives: applying boot alternatives
May 27 02:46:48.104666 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4c3f98aae7a61b3dcbab6391ba922461adab29dbcb79fd6e18169f93c5a4ab5a
May 27 02:46:48.104684 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 02:46:48.104700 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 02:46:48.104717 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 02:46:48.104733 kernel: Fallback order for Node 0: 0
May 27 02:46:48.104750 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1007616
May 27 02:46:48.104767 kernel: Policy zone: Normal
May 27 02:46:48.104787 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 02:46:48.104804 kernel: software IO TLB: area num 2.
May 27 02:46:48.104820 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
May 27 02:46:48.104837 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 27 02:46:48.104853 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 02:46:48.104870 kernel: rcu: RCU event tracing is enabled.
May 27 02:46:48.104887 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 27 02:46:48.104904 kernel: Trampoline variant of Tasks RCU enabled.
May 27 02:46:48.104921 kernel: Tracing variant of Tasks RCU enabled.
May 27 02:46:48.104938 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 02:46:48.105057 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 27 02:46:48.105075 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 02:46:48.105099 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 27 02:46:48.105115 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 27 02:46:48.105132 kernel: GICv3: 96 SPIs implemented
May 27 02:46:48.105148 kernel: GICv3: 0 Extended SPIs implemented
May 27 02:46:48.105164 kernel: Root IRQ handler: gic_handle_irq
May 27 02:46:48.105180 kernel: GICv3: GICv3 features: 16 PPIs
May 27 02:46:48.105196 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 27 02:46:48.105213 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
May 27 02:46:48.105229 kernel: ITS [mem 0x10080000-0x1009ffff]
May 27 02:46:48.105245 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
May 27 02:46:48.105262 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
May 27 02:46:48.105282 kernel: GICv3: using LPI property table @0x00000004000e0000
May 27 02:46:48.105299 kernel: ITS: Using hypervisor restricted LPI range [128]
May 27 02:46:48.105315 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
May 27 02:46:48.105331 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 02:46:48.105348 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
May 27 02:46:48.105364 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
May 27 02:46:48.105381 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
May 27 02:46:48.105397 kernel: Console: colour dummy device 80x25
May 27 02:46:48.105415 kernel: printk: legacy console [tty1] enabled
May 27 02:46:48.105431 kernel: ACPI: Core revision 20240827
May 27 02:46:48.105448 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
May 27 02:46:48.105470 kernel: pid_max: default: 32768 minimum: 301
May 27 02:46:48.105486 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 02:46:48.105503 kernel: landlock: Up and running.
May 27 02:46:48.105520 kernel: SELinux: Initializing.
May 27 02:46:48.105536 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 02:46:48.105553 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 02:46:48.105569 kernel: rcu: Hierarchical SRCU implementation.
May 27 02:46:48.105587 kernel: rcu: Max phase no-delay instances is 400.
May 27 02:46:48.105604 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 02:46:48.105625 kernel: Remapping and enabling EFI services.
May 27 02:46:48.105641 kernel: smp: Bringing up secondary CPUs ...
May 27 02:46:48.105658 kernel: Detected PIPT I-cache on CPU1
May 27 02:46:48.105675 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
May 27 02:46:48.105691 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
May 27 02:46:48.105708 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
May 27 02:46:48.105725 kernel: smp: Brought up 1 node, 2 CPUs
May 27 02:46:48.105741 kernel: SMP: Total of 2 processors activated.
May 27 02:46:48.105758 kernel: CPU: All CPU(s) started at EL1
May 27 02:46:48.105778 kernel: CPU features: detected: 32-bit EL0 Support
May 27 02:46:48.105806 kernel: CPU features: detected: 32-bit EL1 Support
May 27 02:46:48.105824 kernel: CPU features: detected: CRC32 instructions
May 27 02:46:48.105845 kernel: alternatives: applying system-wide alternatives
May 27 02:46:48.105863 kernel: Memory: 3813536K/4030464K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 212156K reserved, 0K cma-reserved)
May 27 02:46:48.105881 kernel: devtmpfs: initialized
May 27 02:46:48.105898 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 02:46:48.105916 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 27 02:46:48.105937 kernel: 17024 pages in range for non-PLT usage
May 27 02:46:48.108170 kernel: 508544 pages in range for PLT usage
May 27 02:46:48.108192 kernel: pinctrl core: initialized pinctrl subsystem
May 27 02:46:48.108211 kernel: SMBIOS 3.0.0 present.
May 27 02:46:48.108229 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
May 27 02:46:48.108247 kernel: DMI: Memory slots populated: 0/0
May 27 02:46:48.108265 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 02:46:48.108283 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 27 02:46:48.108301 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 27 02:46:48.108330 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 27 02:46:48.108348 kernel: audit: initializing netlink subsys (disabled)
May 27 02:46:48.108367 kernel: audit: type=2000 audit(0.259:1): state=initialized audit_enabled=0 res=1
May 27 02:46:48.108384 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 02:46:48.108402 kernel: cpuidle: using governor menu
May 27 02:46:48.108420 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 27 02:46:48.108438 kernel: ASID allocator initialised with 65536 entries
May 27 02:46:48.108456 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 02:46:48.108478 kernel: Serial: AMBA PL011 UART driver
May 27 02:46:48.108496 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 02:46:48.108513 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 27 02:46:48.108531 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 27 02:46:48.108549 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 27 02:46:48.108566 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 02:46:48.108583 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 27 02:46:48.108601 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 27 02:46:48.108618 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 27 02:46:48.108640 kernel: ACPI: Added _OSI(Module Device)
May 27 02:46:48.108657 kernel: ACPI: Added _OSI(Processor Device)
May 27 02:46:48.108675 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 02:46:48.108692 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 02:46:48.108710 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 02:46:48.108727 kernel: ACPI: Interpreter enabled
May 27 02:46:48.108745 kernel: ACPI: Using GIC for interrupt routing
May 27 02:46:48.108763 kernel: ACPI: MCFG table detected, 1 entries
May 27 02:46:48.108781 kernel: ACPI: CPU0 has been hot-added
May 27 02:46:48.108798 kernel: ACPI: CPU1 has been hot-added
May 27 02:46:48.108820 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
May 27 02:46:48.109141 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 02:46:48.109348 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 27 02:46:48.109558 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 27 02:46:48.109763 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
May 27 02:46:48.112039 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
May 27 02:46:48.112085 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
May 27 02:46:48.112114 kernel: acpiphp: Slot [1] registered
May 27 02:46:48.112133 kernel: acpiphp: Slot [2] registered
May 27 02:46:48.112150 kernel: acpiphp: Slot [3] registered
May 27 02:46:48.112168 kernel: acpiphp: Slot [4] registered
May 27 02:46:48.112185 kernel: acpiphp: Slot [5] registered
May 27 02:46:48.112203 kernel: acpiphp: Slot [6] registered
May 27 02:46:48.112220 kernel: acpiphp: Slot [7] registered
May 27 02:46:48.112238 kernel: acpiphp: Slot [8] registered
May 27 02:46:48.112255 kernel: acpiphp: Slot [9] registered
May 27 02:46:48.112277 kernel: acpiphp: Slot [10] registered
May 27 02:46:48.112295 kernel: acpiphp: Slot [11] registered
May 27 02:46:48.112312 kernel: acpiphp: Slot [12] registered
May 27 02:46:48.112329 kernel: acpiphp: Slot [13] registered
May 27 02:46:48.112347 kernel: acpiphp: Slot [14] registered
May 27 02:46:48.112364 kernel: acpiphp: Slot [15] registered
May 27 02:46:48.112381 kernel: acpiphp: Slot [16] registered
May 27 02:46:48.112399 kernel: acpiphp: Slot [17] registered
May 27 02:46:48.112416 kernel: acpiphp: Slot [18] registered
May 27 02:46:48.112433 kernel: acpiphp: Slot [19] registered
May 27 02:46:48.112455 kernel: acpiphp: Slot [20] registered
May 27 02:46:48.112472 kernel: acpiphp: Slot [21] registered
May 27 02:46:48.112489 kernel: acpiphp: Slot [22] registered
May 27 02:46:48.112506 kernel: acpiphp: Slot [23] registered
May 27 02:46:48.112524 kernel: acpiphp: Slot [24] registered
May 27 02:46:48.112541 kernel: acpiphp: Slot [25] registered
May 27 02:46:48.112558 kernel: acpiphp: Slot [26] registered
May 27 02:46:48.112576 kernel: acpiphp: Slot [27] registered
May 27 02:46:48.112593 kernel: acpiphp: Slot [28] registered
May 27 02:46:48.112614 kernel: acpiphp: Slot [29] registered
May 27 02:46:48.112631 kernel: acpiphp: Slot [30] registered
May 27 02:46:48.112649 kernel: acpiphp: Slot [31] registered
May 27 02:46:48.112666 kernel: PCI host bridge to bus 0000:00
May 27 02:46:48.112894 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
May 27 02:46:48.113110 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 27 02:46:48.113292 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
May 27 02:46:48.113472 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
May 27 02:46:48.113707 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
May 27 02:46:48.113931 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
May 27 02:46:48.117374 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
May 27 02:46:48.117608 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
May 27 02:46:48.117816 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
May 27 02:46:48.119891 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
May 27 02:46:48.120201 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
May 27 02:46:48.120404 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
May 27 02:46:48.120602 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
May 27 02:46:48.120812 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
May 27 02:46:48.121075 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
May 27 02:46:48.121289 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
May 27 02:46:48.121492 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
May 27 02:46:48.121702 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
May 27 02:46:48.121897 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
May 27 02:46:48.122197 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
May 27 02:46:48.122388 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
May 27 02:46:48.122575 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 27 02:46:48.122753 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
May 27 02:46:48.122779 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 27 02:46:48.122807 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 27 02:46:48.122826 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 27 02:46:48.122844 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 27 02:46:48.122862 kernel: iommu: Default domain type: Translated
May 27 02:46:48.122880 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 27 02:46:48.122897 kernel: efivars: Registered efivars operations
May 27 02:46:48.122915 kernel: vgaarb: loaded
May 27 02:46:48.122933 kernel: clocksource: Switched to clocksource arch_sys_counter
May 27 02:46:48.122994 kernel: VFS: Disk quotas dquot_6.6.0
May 27 02:46:48.123021 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 02:46:48.123040 kernel: pnp: PnP ACPI init
May 27 02:46:48.123298 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
May 27 02:46:48.123329 kernel: pnp: PnP ACPI: found 1 devices
May 27 02:46:48.123347 kernel: NET: Registered PF_INET protocol family
May 27 02:46:48.123366 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 02:46:48.123385 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 02:46:48.123403 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 02:46:48.123430 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 02:46:48.123449 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 02:46:48.123467 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 02:46:48.123484 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 02:46:48.123502 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 02:46:48.123519 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 02:46:48.123537 kernel: PCI: CLS 0 bytes, default 64
May 27 02:46:48.123554 kernel: kvm [1]: HYP mode not available
May 27 02:46:48.123571 kernel: Initialise system trusted keyrings
May 27 02:46:48.123592 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 02:46:48.123610 kernel: Key type asymmetric registered
May 27 02:46:48.123627 kernel: Asymmetric key parser 'x509' registered
May 27 02:46:48.123644 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 27 02:46:48.123662 kernel: io scheduler mq-deadline registered
May 27 02:46:48.123679 kernel: io scheduler kyber registered
May 27 02:46:48.123696 kernel: io scheduler bfq registered
May 27 02:46:48.123909 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
May 27 02:46:48.123936 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 27 02:46:48.126044 kernel: ACPI: button: Power Button [PWRB]
May 27 02:46:48.126066 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
May 27 02:46:48.126084 kernel: ACPI: button: Sleep Button [SLPB]
May 27 02:46:48.126102 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 02:46:48.126121 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 27 02:46:48.126375 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
May 27 02:46:48.126403 kernel: printk: legacy console [ttyS0] disabled
May 27 02:46:48.126422 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
May 27 02:46:48.126450 kernel: printk: legacy console [ttyS0] enabled
May 27 02:46:48.126469 kernel: printk: legacy bootconsole [uart0] disabled
May 27 02:46:48.126487 kernel: thunder_xcv, ver 1.0
May 27 02:46:48.126585 kernel: thunder_bgx, ver 1.0
May 27 02:46:48.126697 kernel: nicpf, ver 1.0
May 27 02:46:48.126718 kernel: nicvf, ver 1.0
May 27 02:46:48.127523 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 27 02:46:48.128861 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-27T02:46:47 UTC (1748314007)
May 27 02:46:48.128896 kernel: hid: raw HID events driver (C) Jiri Kosina
May 27 02:46:48.128925 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
May 27 02:46:48.128961 kernel: NET: Registered PF_INET6 protocol family
May 27 02:46:48.128985 kernel: watchdog: NMI not fully supported
May 27 02:46:48.129004 kernel: watchdog: Hard watchdog permanently disabled
May 27 02:46:48.129022 kernel: Segment Routing with IPv6
May 27 02:46:48.129041 kernel: In-situ OAM (IOAM) with IPv6
May 27 02:46:48.129059 kernel: NET: Registered PF_PACKET protocol family
May 27 02:46:48.129077 kernel: Key type dns_resolver registered
May 27 02:46:48.129095 kernel: registered taskstats version 1
May 27 02:46:48.129120 kernel: Loading compiled-in X.509 certificates
May 27 02:46:48.129138 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 6bbf5412ef1f8a32378a640b6d048f74e6d74df0'
May 27 02:46:48.129155 kernel: Demotion targets for Node 0: null
May 27 02:46:48.129173 kernel: Key type .fscrypt registered
May 27 02:46:48.129190 kernel: Key type fscrypt-provisioning registered
May 27 02:46:48.129207 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 02:46:48.129224 kernel: ima: Allocated hash algorithm: sha1
May 27 02:46:48.129242 kernel: ima: No architecture policies found
May 27 02:46:48.129259 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 27 02:46:48.129281 kernel: clk: Disabling unused clocks
May 27 02:46:48.129299 kernel: PM: genpd: Disabling unused power domains
May 27 02:46:48.129316 kernel: Warning: unable to open an initial console.
May 27 02:46:48.129334 kernel: Freeing unused kernel memory: 39424K
May 27 02:46:48.129351 kernel: Run /init as init process
May 27 02:46:48.129369 kernel:   with arguments:
May 27 02:46:48.129386 kernel:     /init
May 27 02:46:48.129404 kernel:   with environment:
May 27 02:46:48.129421 kernel:     HOME=/
May 27 02:46:48.129443 kernel:     TERM=linux
May 27 02:46:48.129460 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 02:46:48.129480 systemd[1]: Successfully made /usr/ read-only.
May 27 02:46:48.129505 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 02:46:48.129526 systemd[1]: Detected virtualization amazon.
May 27 02:46:48.129545 systemd[1]: Detected architecture arm64.
May 27 02:46:48.129563 systemd[1]: Running in initrd.
May 27 02:46:48.129587 systemd[1]: No hostname configured, using default hostname.
May 27 02:46:48.129607 systemd[1]: Hostname set to .
May 27 02:46:48.129625 systemd[1]: Initializing machine ID from VM UUID.
May 27 02:46:48.129644 systemd[1]: Queued start job for default target initrd.target.
May 27 02:46:48.129663 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 02:46:48.129683 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 02:46:48.129703 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 02:46:48.129723 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 02:46:48.129747 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 02:46:48.129768 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 02:46:48.129790 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 02:46:48.129809 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 02:46:48.129829 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 02:46:48.129848 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 02:46:48.129868 systemd[1]: Reached target paths.target - Path Units.
May 27 02:46:48.129891 systemd[1]: Reached target slices.target - Slice Units.
May 27 02:46:48.129911 systemd[1]: Reached target swap.target - Swaps.
May 27 02:46:48.129930 systemd[1]: Reached target timers.target - Timer Units.
May 27 02:46:48.131066 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 02:46:48.131096 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 02:46:48.131117 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 02:46:48.131156 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 02:46:48.131177 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 02:46:48.131197 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 02:46:48.131226 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 02:46:48.131246 systemd[1]: Reached target sockets.target - Socket Units.
May 27 02:46:48.131265 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 02:46:48.131285 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 02:46:48.131304 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 02:46:48.131324 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 02:46:48.131344 systemd[1]: Starting systemd-fsck-usr.service...
May 27 02:46:48.131363 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 02:46:48.131386 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 02:46:48.131406 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 02:46:48.131425 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 02:46:48.131445 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 02:46:48.131465 systemd[1]: Finished systemd-fsck-usr.service.
May 27 02:46:48.131489 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 02:46:48.131509 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 02:46:48.131528 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 02:46:48.131593 systemd-journald[258]: Collecting audit messages is disabled.
May 27 02:46:48.131639 kernel: Bridge firewalling registered
May 27 02:46:48.131678 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 02:46:48.131699 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 02:46:48.131719 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 02:46:48.131739 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 02:46:48.131763 systemd-journald[258]: Journal started
May 27 02:46:48.131800 systemd-journald[258]: Runtime Journal (/run/log/journal/ec2817ce2e4c2fd31ffa10e5567a1615) is 8M, max 75.3M, 67.3M free.
May 27 02:46:48.051071 systemd-modules-load[259]: Inserted module 'overlay'
May 27 02:46:48.093191 systemd-modules-load[259]: Inserted module 'br_netfilter'
May 27 02:46:48.142782 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 02:46:48.149025 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 02:46:48.155854 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 02:46:48.171244 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 02:46:48.181280 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 02:46:48.188418 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 02:46:48.196033 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 02:46:48.207857 systemd-tmpfiles[284]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 02:46:48.226314 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 02:46:48.236473 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 02:46:48.251912 dracut-cmdline[295]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4c3f98aae7a61b3dcbab6391ba922461adab29dbcb79fd6e18169f93c5a4ab5a May 27 02:46:48.333560 systemd-resolved[306]: Positive Trust Anchors: May 27 02:46:48.335228 systemd-resolved[306]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 02:46:48.335297 systemd-resolved[306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 02:46:48.425986 kernel: SCSI subsystem initialized
May 27 02:46:48.433990 kernel: Loading iSCSI transport class v2.0-870.
May 27 02:46:48.447995 kernel: iscsi: registered transport (tcp)
May 27 02:46:48.469685 kernel: iscsi: registered transport (qla4xxx)
May 27 02:46:48.469760 kernel: QLogic iSCSI HBA Driver
May 27 02:46:48.503114 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 02:46:48.530171 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 02:46:48.541006 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 02:46:48.612976 kernel: random: crng init done
May 27 02:46:48.613338 systemd-resolved[306]: Defaulting to hostname 'linux'.
May 27 02:46:48.616719 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 02:46:48.619602 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 02:46:48.646987 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 02:46:48.653904 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 02:46:48.742017 kernel: raid6: neonx8 gen() 6552 MB/s
May 27 02:46:48.758995 kernel: raid6: neonx4 gen() 6552 MB/s
May 27 02:46:48.775991 kernel: raid6: neonx2 gen() 5455 MB/s
May 27 02:46:48.792996 kernel: raid6: neonx1 gen() 3953 MB/s
May 27 02:46:48.809990 kernel: raid6: int64x8 gen() 3663 MB/s
May 27 02:46:48.826996 kernel: raid6: int64x4 gen() 3710 MB/s
May 27 02:46:48.843988 kernel: raid6: int64x2 gen() 3600 MB/s
May 27 02:46:48.862142 kernel: raid6: int64x1 gen() 2767 MB/s
May 27 02:46:48.862191 kernel: raid6: using algorithm neonx8 gen() 6552 MB/s
May 27 02:46:48.880849 kernel: raid6: .... xor() 4730 MB/s, rmw enabled
May 27 02:46:48.880919 kernel: raid6: using neon recovery algorithm
May 27 02:46:48.889246 kernel: xor: measuring software checksum speed
May 27 02:46:48.889314 kernel: 8regs : 12927 MB/sec
May 27 02:46:48.890365 kernel: 32regs : 12676 MB/sec
May 27 02:46:48.891599 kernel: arm64_neon : 9034 MB/sec
May 27 02:46:48.891631 kernel: xor: using function: 8regs (12927 MB/sec)
May 27 02:46:48.984992 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 02:46:48.996264 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 02:46:49.002323 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 02:46:49.050394 systemd-udevd[508]: Using default interface naming scheme 'v255'.
May 27 02:46:49.062259 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 02:46:49.068725 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 02:46:49.112084 dracut-pre-trigger[513]: rd.md=0: removing MD RAID activation
May 27 02:46:49.158045 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 02:46:49.164070 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 02:46:49.294011 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 02:46:49.300727 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 02:46:49.468147 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 27 02:46:49.468228 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
May 27 02:46:49.477968 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
May 27 02:46:49.478034 kernel: nvme nvme0: pci function 0000:00:04.0
May 27 02:46:49.482840 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 27 02:46:49.483272 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 27 02:46:49.487340 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 27 02:46:49.490323 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 02:46:49.493923 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:36:dc:b3:4c:a9
May 27 02:46:49.490773 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 02:46:49.499330 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 02:46:49.506207 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 02:46:49.506276 kernel: GPT:9289727 != 16777215
May 27 02:46:49.506301 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 02:46:49.506562 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 02:46:49.514780 kernel: GPT:9289727 != 16777215
May 27 02:46:49.514820 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 02:46:49.514844 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 02:46:49.515352 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 02:46:49.519896 (udev-worker)[561]: Network interface NamePolicy= disabled on kernel command line.
May 27 02:46:49.560995 kernel: nvme nvme0: using unchecked data buffer
May 27 02:46:49.563660 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 02:46:49.662710 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
May 27 02:46:49.731778 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
May 27 02:46:49.756002 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 02:46:49.778628 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
May 27 02:46:49.784321 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
May 27 02:46:49.824982 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 27 02:46:49.829926 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 02:46:49.832425 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 02:46:49.839011 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 02:46:49.847496 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 02:46:49.855266 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 02:46:49.879369 disk-uuid[688]: Primary Header is updated.
May 27 02:46:49.879369 disk-uuid[688]: Secondary Entries is updated.
May 27 02:46:49.879369 disk-uuid[688]: Secondary Header is updated.
May 27 02:46:49.892476 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 02:46:49.900624 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 02:46:50.909042 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 27 02:46:50.910216 disk-uuid[689]: The operation has completed successfully.
May 27 02:46:51.097322 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 02:46:51.098445 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 02:46:51.182249 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 02:46:51.205871 sh[956]: Success
May 27 02:46:51.233630 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 02:46:51.233705 kernel: device-mapper: uevent: version 1.0.3
May 27 02:46:51.235580 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 02:46:51.249006 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 27 02:46:51.358032 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 02:46:51.366130 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 02:46:51.389224 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 02:46:51.409985 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 02:46:51.410063 kernel: BTRFS: device fsid 5c6341ea-4eb5-44b6-ac57-c4d29847e384 devid 1 transid 41 /dev/mapper/usr (254:0) scanned by mount (980)
May 27 02:46:51.416436 kernel: BTRFS info (device dm-0): first mount of filesystem 5c6341ea-4eb5-44b6-ac57-c4d29847e384
May 27 02:46:51.416519 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 27 02:46:51.416546 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 02:46:51.445179 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 02:46:51.449039 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 02:46:51.453527 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 02:46:51.458374 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 02:46:51.464229 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 02:46:51.512006 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1011)
May 27 02:46:51.516598 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9
May 27 02:46:51.516672 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 27 02:46:51.517899 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
May 27 02:46:51.534002 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9
May 27 02:46:51.536543 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 02:46:51.544052 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 02:46:51.647821 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 02:46:51.663254 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 02:46:51.757890 systemd-networkd[1152]: lo: Link UP
May 27 02:46:51.759448 systemd-networkd[1152]: lo: Gained carrier
May 27 02:46:51.764306 systemd-networkd[1152]: Enumeration completed
May 27 02:46:51.765075 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 02:46:51.765082 systemd-networkd[1152]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 02:46:51.766385 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 02:46:51.779160 systemd[1]: Reached target network.target - Network.
May 27 02:46:51.784076 systemd-networkd[1152]: eth0: Link UP
May 27 02:46:51.784627 systemd-networkd[1152]: eth0: Gained carrier
May 27 02:46:51.785308 systemd-networkd[1152]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 02:46:51.796349 ignition[1072]: Ignition 2.21.0
May 27 02:46:51.796380 ignition[1072]: Stage: fetch-offline
May 27 02:46:51.796763 ignition[1072]: no configs at "/usr/lib/ignition/base.d"
May 27 02:46:51.796786 ignition[1072]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 02:46:51.801338 systemd-networkd[1152]: eth0: DHCPv4 address 172.31.29.92/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 27 02:46:51.799424 ignition[1072]: Ignition finished successfully
May 27 02:46:51.810190 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 02:46:51.819325 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 27 02:46:51.864611 ignition[1163]: Ignition 2.21.0
May 27 02:46:51.864646 ignition[1163]: Stage: fetch
May 27 02:46:51.865382 ignition[1163]: no configs at "/usr/lib/ignition/base.d"
May 27 02:46:51.865408 ignition[1163]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 02:46:51.865584 ignition[1163]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 02:46:51.892873 ignition[1163]: PUT result: OK
May 27 02:46:51.896678 ignition[1163]: parsed url from cmdline: ""
May 27 02:46:51.896705 ignition[1163]: no config URL provided
May 27 02:46:51.896723 ignition[1163]: reading system config file "/usr/lib/ignition/user.ign"
May 27 02:46:51.896749 ignition[1163]: no config at "/usr/lib/ignition/user.ign"
May 27 02:46:51.896782 ignition[1163]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 02:46:51.908225 ignition[1163]: PUT result: OK
May 27 02:46:51.908429 ignition[1163]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 27 02:46:51.912972 ignition[1163]: GET result: OK
May 27 02:46:51.913522 ignition[1163]: parsing config with SHA512: 58b1f6cf6181b329b010223b3d916d459f856105b0901011b93bd2e136c6665e6476715243cc0fa2f12ce92fb52a277dae451c97b8d403607d35d1365cdb731e
May 27 02:46:51.921273 unknown[1163]: fetched base config from "system"
May 27
02:46:51.921301 unknown[1163]: fetched base config from "system"
May 27 02:46:51.923073 ignition[1163]: fetch: fetch complete
May 27 02:46:51.921314 unknown[1163]: fetched user config from "aws"
May 27 02:46:51.923091 ignition[1163]: fetch: fetch passed
May 27 02:46:51.923217 ignition[1163]: Ignition finished successfully
May 27 02:46:51.932026 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 27 02:46:51.938189 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 02:46:51.995602 ignition[1170]: Ignition 2.21.0
May 27 02:46:51.995634 ignition[1170]: Stage: kargs
May 27 02:46:51.996387 ignition[1170]: no configs at "/usr/lib/ignition/base.d"
May 27 02:46:51.996414 ignition[1170]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 02:46:51.996569 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 02:46:51.998644 ignition[1170]: PUT result: OK
May 27 02:46:52.009767 ignition[1170]: kargs: kargs passed
May 27 02:46:52.010171 ignition[1170]: Ignition finished successfully
May 27 02:46:52.015673 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 02:46:52.021699 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 02:46:52.080733 ignition[1177]: Ignition 2.21.0
May 27 02:46:52.081320 ignition[1177]: Stage: disks
May 27 02:46:52.081873 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
May 27 02:46:52.081897 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 02:46:52.082146 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 02:46:52.085065 ignition[1177]: PUT result: OK
May 27 02:46:52.096155 ignition[1177]: disks: disks passed
May 27 02:46:52.096304 ignition[1177]: Ignition finished successfully
May 27 02:46:52.102033 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 02:46:52.105555 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 02:46:52.108704 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 02:46:52.115210 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 02:46:52.117884 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 02:46:52.124507 systemd[1]: Reached target basic.target - Basic System.
May 27 02:46:52.131548 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 02:46:52.204586 systemd-fsck[1185]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 27 02:46:52.212010 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 02:46:52.218700 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 02:46:52.358023 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 5656cec4-efbd-4a2d-be98-2263e6ae16bd r/w with ordered data mode. Quota mode: none.
May 27 02:46:52.358595 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 02:46:52.362285 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 02:46:52.368357 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 02:46:52.371828 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 02:46:52.382180 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 27 02:46:52.385776 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 02:46:52.385839 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 02:46:52.403276 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 02:46:52.413199 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 02:46:52.428983 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1204)
May 27 02:46:52.434048 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9
May 27 02:46:52.434128 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 27 02:46:52.435645 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
May 27 02:46:52.447578 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 02:46:52.531142 initrd-setup-root[1228]: cut: /sysroot/etc/passwd: No such file or directory
May 27 02:46:52.541867 initrd-setup-root[1235]: cut: /sysroot/etc/group: No such file or directory
May 27 02:46:52.551793 initrd-setup-root[1242]: cut: /sysroot/etc/shadow: No such file or directory
May 27 02:46:52.560023 initrd-setup-root[1249]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 02:46:52.743766 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 02:46:52.747016 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 02:46:52.777306 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 02:46:52.792855 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 02:46:52.795172 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9
May 27 02:46:52.847504 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 02:46:52.852744 ignition[1316]: INFO : Ignition 2.21.0
May 27 02:46:52.852744 ignition[1316]: INFO : Stage: mount
May 27 02:46:52.852744 ignition[1316]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 02:46:52.852744 ignition[1316]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 02:46:52.852744 ignition[1316]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 02:46:52.864382 ignition[1316]: INFO : PUT result: OK
May 27 02:46:52.868208 ignition[1316]: INFO : mount: mount passed
May 27 02:46:52.868208 ignition[1316]: INFO : Ignition finished successfully
May 27 02:46:52.874120 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 02:46:52.879790 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 02:46:52.921182 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 02:46:52.967000 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 (259:5) scanned by mount (1329)
May 27 02:46:52.971520 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9
May 27 02:46:52.971599 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 27 02:46:52.971640 kernel: BTRFS info (device nvme0n1p6): using free-space-tree
May 27 02:46:52.981550 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 02:46:53.024051 ignition[1346]: INFO : Ignition 2.21.0
May 27 02:46:53.024051 ignition[1346]: INFO : Stage: files
May 27 02:46:53.027867 ignition[1346]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 02:46:53.027867 ignition[1346]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 02:46:53.027867 ignition[1346]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 02:46:53.035818 ignition[1346]: INFO : PUT result: OK
May 27 02:46:53.039022 ignition[1346]: DEBUG : files: compiled without relabeling support, skipping
May 27 02:46:53.041798 ignition[1346]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 02:46:53.041798 ignition[1346]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 02:46:53.050727 ignition[1346]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 02:46:53.053666 ignition[1346]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 02:46:53.057055 unknown[1346]: wrote ssh authorized keys file for user: core
May 27 02:46:53.059306 ignition[1346]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 02:46:53.064700 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 27 02:46:53.068448 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 27 02:46:53.100306 systemd-networkd[1152]: eth0: Gained IPv6LL
May 27 02:46:53.213769 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 02:46:53.418489 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 27 02:46:53.418489 ignition[1346]: INFO : files: createFilesystemsFiles:
createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 02:46:53.426058 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 27 02:46:53.951080 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 27 02:46:54.398304 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 02:46:54.398304 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 27 02:46:54.405861 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 27 02:46:54.405861 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 02:46:54.405861 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 02:46:54.405861 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 02:46:54.405861 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 02:46:54.405861 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 02:46:54.405861 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 02:46:54.429412 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 02:46:54.429412 ignition[1346]: INFO : files:
createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 02:46:54.429412 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 27 02:46:54.429412 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 27 02:46:54.429412 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 27 02:46:54.429412 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
May 27 02:46:55.132524 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 27 02:46:57.649643 ignition[1346]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 27 02:46:57.649643 ignition[1346]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 27 02:46:57.656639 ignition[1346]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 02:46:57.660517 ignition[1346]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 02:46:57.660517 ignition[1346]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 27 02:46:57.660517 ignition[1346]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 27 02:46:57.660517 ignition[1346]: INFO : files: op(e): [finished] setting preset to enabled for
"prepare-helm.service"
May 27 02:46:57.660517 ignition[1346]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 02:46:57.660517 ignition[1346]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 02:46:57.660517 ignition[1346]: INFO : files: files passed
May 27 02:46:57.660517 ignition[1346]: INFO : Ignition finished successfully
May 27 02:46:57.680604 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 02:46:57.691417 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 02:46:57.695427 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 02:46:57.723111 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 02:46:57.725076 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 02:46:57.737919 initrd-setup-root-after-ignition[1377]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 02:46:57.737919 initrd-setup-root-after-ignition[1377]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 02:46:57.745442 initrd-setup-root-after-ignition[1381]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 02:46:57.763043 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 02:46:57.768659 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 02:46:57.774162 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 02:46:57.851269 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 02:46:57.851675 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 02:46:57.858543 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 02:46:57.861439 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 02:46:57.867635 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 02:46:57.870795 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 02:46:57.916565 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 02:46:57.923732 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 02:46:57.967351 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 02:46:57.972415 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 02:46:57.974510 systemd[1]: Stopped target timers.target - Timer Units.
May 27 02:46:57.974884 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 02:46:57.975193 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 02:46:57.976190 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 02:46:57.976773 systemd[1]: Stopped target basic.target - Basic System.
May 27 02:46:57.977720 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 02:46:57.978650 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 02:46:57.979596 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 02:46:57.980531 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 02:46:57.981156 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 02:46:57.981709 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 02:46:57.982700 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 02:46:57.983616 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 02:46:57.984560 systemd[1]: Stopped target swap.target - Swaps.
May 27 02:46:57.985137 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 02:46:57.985382 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 02:46:57.986394 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 02:46:57.987202 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 02:46:57.987904 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 02:46:58.004047 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 02:46:58.004366 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 02:46:58.004670 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 02:46:58.005343 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 02:46:58.005651 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 02:46:58.009452 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 02:46:58.009676 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 02:46:58.012094 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 02:46:58.029961 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 02:46:58.030283 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 02:46:58.120701 ignition[1401]: INFO : Ignition 2.21.0
May 27 02:46:58.120701 ignition[1401]: INFO : Stage: umount
May 27 02:46:58.120701 ignition[1401]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 02:46:58.120701 ignition[1401]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 27 02:46:58.120701 ignition[1401]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 27 02:46:58.045857 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 02:46:58.134307 ignition[1401]: INFO : PUT result: OK
May 27 02:46:58.055277 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 02:46:58.056524 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 02:46:58.062375 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 02:46:58.062656 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 02:46:58.146178 ignition[1401]: INFO : umount: umount passed
May 27 02:46:58.146178 ignition[1401]: INFO : Ignition finished successfully
May 27 02:46:58.096477 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 02:46:58.102672 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 02:46:58.146809 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 02:46:58.151069 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 02:46:58.159727 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 02:46:58.159923 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 02:46:58.160635 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 02:46:58.160714 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 02:46:58.163556 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 27 02:46:58.163712 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 27 02:46:58.164218 systemd[1]: Stopped target network.target - Network.
May 27 02:46:58.164872 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 02:46:58.165619 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 02:46:58.185125 systemd[1]: Stopped target paths.target - Path Units.
May 27 02:46:58.192573 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 02:46:58.196414 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 02:46:58.199047 systemd[1]: Stopped target slices.target - Slice Units.
May 27 02:46:58.201980 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 02:46:58.206870 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 02:46:58.207316 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 02:46:58.210257 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 02:46:58.210331 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 02:46:58.212598 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 02:46:58.212699 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 02:46:58.221215 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 02:46:58.221321 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 02:46:58.226698 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 02:46:58.230160 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 02:46:58.238915 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 02:46:58.240743 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 02:46:58.240977 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 02:46:58.253674 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 02:46:58.253827 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 02:46:58.259878 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 02:46:58.260276 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 02:46:58.278713 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 02:46:58.279262 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 02:46:58.279932 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 02:46:58.292719 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 02:46:58.295679 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 02:46:58.302826 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 02:46:58.302934 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 02:46:58.315459 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 02:46:58.319481 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 02:46:58.319626 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 02:46:58.327269 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 02:46:58.327381 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 02:46:58.343562 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 02:46:58.343833 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 02:46:58.347935 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 02:46:58.348071 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 02:46:58.354711 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 02:46:58.366416 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 02:46:58.366729 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 02:46:58.373933 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 02:46:58.378109 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 02:46:58.384052 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 02:46:58.384206 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 02:46:58.388377 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 02:46:58.388456 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 02:46:58.400682 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 02:46:58.401029 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 02:46:58.407313 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 02:46:58.407431 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 02:46:58.414425 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 02:46:58.414696 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 02:46:58.429689 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 02:46:58.435118 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 02:46:58.435269 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 02:46:58.445147 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 02:46:58.445272 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 02:46:58.453263 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 02:46:58.453389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 02:46:58.465507 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 02:46:58.465649 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 02:46:58.465737 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 02:46:58.467237 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 02:46:58.488789 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 02:46:58.505215 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 02:46:58.507734 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 02:46:58.512273 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 02:46:58.524155 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 02:46:58.558352 systemd[1]: Switching root.
May 27 02:46:58.596364 systemd-journald[258]: Journal stopped
May 27 02:47:00.688190 systemd-journald[258]: Received SIGTERM from PID 1 (systemd).
May 27 02:47:00.688321 kernel: SELinux: policy capability network_peer_controls=1
May 27 02:47:00.688378 kernel: SELinux: policy capability open_perms=1
May 27 02:47:00.688408 kernel: SELinux: policy capability extended_socket_class=1
May 27 02:47:00.688451 kernel: SELinux: policy capability always_check_network=0
May 27 02:47:00.688481 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 02:47:00.688512 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 02:47:00.688541 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 02:47:00.688571 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 02:47:00.688599 kernel: SELinux: policy capability userspace_initial_context=0
May 27 02:47:00.688625 kernel: audit: type=1403 audit(1748314018.960:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 02:47:00.688658 systemd[1]: Successfully loaded SELinux policy in 61.544ms.
May 27 02:47:00.688704 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 26.647ms.
May 27 02:47:00.688748 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 02:47:00.688779 systemd[1]: Detected virtualization amazon.
May 27 02:47:00.688811 systemd[1]: Detected architecture arm64.
May 27 02:47:00.688841 systemd[1]: Detected first boot.
May 27 02:47:00.688871 systemd[1]: Initializing machine ID from VM UUID.
May 27 02:47:00.688903 zram_generator::config[1445]: No configuration found.
May 27 02:47:00.697001 kernel: NET: Registered PF_VSOCK protocol family
May 27 02:47:00.697064 systemd[1]: Populated /etc with preset unit settings.
May 27 02:47:00.697102 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 02:47:00.697141 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 02:47:00.697172 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 02:47:00.697202 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 02:47:00.697232 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 02:47:00.697264 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 02:47:00.697294 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 02:47:00.697326 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 02:47:00.697356 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 02:47:00.697390 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 02:47:00.697422 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 02:47:00.697452 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 02:47:00.697482 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 02:47:00.697511 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 02:47:00.697538 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 02:47:00.697566 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 02:47:00.697596 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 02:47:00.697626 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 02:47:00.697658 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 27 02:47:00.697689 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 02:47:00.697721 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 02:47:00.697749 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 02:47:00.697776 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 02:47:00.697806 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 02:47:00.697836 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 02:47:00.697867 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 02:47:00.697931 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 02:47:00.699049 systemd[1]: Reached target slices.target - Slice Units.
May 27 02:47:00.699106 systemd[1]: Reached target swap.target - Swaps.
May 27 02:47:00.699138 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 02:47:00.699167 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 02:47:00.699197 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 02:47:00.699229 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 02:47:00.699258 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 02:47:00.699287 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 02:47:00.699325 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 02:47:00.699357 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 02:47:00.699387 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 02:47:00.699418 systemd[1]: Mounting media.mount - External Media Directory...
May 27 02:47:00.699448 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 02:47:00.699488 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 02:47:00.699518 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 02:47:00.699548 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 02:47:00.699583 systemd[1]: Reached target machines.target - Containers.
May 27 02:47:00.699614 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 02:47:00.699642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 02:47:00.699673 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 02:47:00.699701 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 02:47:00.699729 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 02:47:00.699756 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 02:47:00.699784 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 02:47:00.699814 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 02:47:00.699847 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 02:47:00.699876 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 02:47:00.699904 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 02:47:00.699934 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 02:47:00.704251 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 02:47:00.704294 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 02:47:00.704327 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 02:47:00.704356 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 02:47:00.704394 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 02:47:00.704424 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 02:47:00.704454 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 02:47:00.704484 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 02:47:00.704513 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 02:47:00.704547 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 02:47:00.704580 systemd[1]: Stopped verity-setup.service.
May 27 02:47:00.704610 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 02:47:00.704643 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 02:47:00.704679 systemd[1]: Mounted media.mount - External Media Directory.
May 27 02:47:00.704708 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 02:47:00.704741 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 02:47:00.704770 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 02:47:00.704803 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 02:47:00.704831 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 02:47:00.704859 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 02:47:00.704887 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 02:47:00.704916 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 02:47:00.704970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 02:47:00.705012 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 02:47:00.705044 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 02:47:00.705072 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 02:47:00.705101 kernel: loop: module loaded
May 27 02:47:00.705128 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 02:47:00.705157 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 02:47:00.705189 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 02:47:00.705217 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 02:47:00.705247 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 02:47:00.705276 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 02:47:00.705310 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 02:47:00.705338 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 02:47:00.705366 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 02:47:00.705394 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 02:47:00.705422 kernel: fuse: init (API version 7.41)
May 27 02:47:00.705448 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 02:47:00.705477 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 02:47:00.705506 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 02:47:00.705540 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 02:47:00.705570 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 02:47:00.705650 systemd-journald[1525]: Collecting audit messages is disabled.
May 27 02:47:00.705708 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 02:47:00.705737 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 02:47:00.705766 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 02:47:00.705795 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 02:47:00.705823 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 02:47:00.705854 systemd-journald[1525]: Journal started
May 27 02:47:00.705898 systemd-journald[1525]: Runtime Journal (/run/log/journal/ec2817ce2e4c2fd31ffa10e5567a1615) is 8M, max 75.3M, 67.3M free.
May 27 02:47:00.026824 systemd[1]: Queued start job for default target multi-user.target.
May 27 02:47:00.043499 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 27 02:47:00.044534 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 02:47:00.716612 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 02:47:00.771996 kernel: ACPI: bus type drm_connector registered
May 27 02:47:00.773568 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 02:47:00.778354 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 02:47:00.792606 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 02:47:00.810658 kernel: loop0: detected capacity change from 0 to 138376
May 27 02:47:00.812363 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 02:47:00.826693 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 02:47:00.830171 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 02:47:00.836465 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 02:47:00.843046 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 02:47:00.856260 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 02:47:00.892576 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 02:47:00.905142 systemd-journald[1525]: Time spent on flushing to /var/log/journal/ec2817ce2e4c2fd31ffa10e5567a1615 is 108.091ms for 934 entries.
May 27 02:47:00.905142 systemd-journald[1525]: System Journal (/var/log/journal/ec2817ce2e4c2fd31ffa10e5567a1615) is 8M, max 195.6M, 187.6M free.
May 27 02:47:01.036303 systemd-journald[1525]: Received client request to flush runtime journal.
May 27 02:47:01.036395 kernel: loop1: detected capacity change from 0 to 61240
May 27 02:47:01.024609 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 02:47:01.031243 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 02:47:01.040176 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 02:47:01.059884 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 02:47:01.064610 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 02:47:01.068088 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 02:47:01.092480 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 02:47:01.104795 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 02:47:01.113980 kernel: loop2: detected capacity change from 0 to 207008
May 27 02:47:01.164691 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
May 27 02:47:01.166659 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
May 27 02:47:01.179608 kernel: loop3: detected capacity change from 0 to 107312
May 27 02:47:01.184064 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 02:47:01.246003 kernel: loop4: detected capacity change from 0 to 138376
May 27 02:47:01.289995 kernel: loop5: detected capacity change from 0 to 61240
May 27 02:47:01.311030 kernel: loop6: detected capacity change from 0 to 207008
May 27 02:47:01.342008 kernel: loop7: detected capacity change from 0 to 107312
May 27 02:47:01.372492 (sd-merge)[1604]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 27 02:47:01.374077 (sd-merge)[1604]: Merged extensions into '/usr'.
May 27 02:47:01.389543 systemd[1]: Reload requested from client PID 1552 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 02:47:01.389575 systemd[1]: Reloading...
May 27 02:47:01.649993 zram_generator::config[1630]: No configuration found.
May 27 02:47:01.685726 ldconfig[1546]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 02:47:01.911018 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 02:47:02.120388 systemd[1]: Reloading finished in 729 ms.
May 27 02:47:02.137887 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 02:47:02.144091 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 02:47:02.163365 systemd[1]: Starting ensure-sysext.service...
May 27 02:47:02.170021 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 02:47:02.220822 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 02:47:02.225644 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 02:47:02.225731 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 02:47:02.226438 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 02:47:02.227026 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 02:47:02.228535 systemd[1]: Reload requested from client PID 1682 ('systemctl') (unit ensure-sysext.service)...
May 27 02:47:02.228678 systemd[1]: Reloading...
May 27 02:47:02.228915 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 02:47:02.229613 systemd-tmpfiles[1683]: ACLs are not supported, ignoring.
May 27 02:47:02.229780 systemd-tmpfiles[1683]: ACLs are not supported, ignoring.
May 27 02:47:02.237695 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot.
May 27 02:47:02.237727 systemd-tmpfiles[1683]: Skipping /boot
May 27 02:47:02.267264 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot.
May 27 02:47:02.267291 systemd-tmpfiles[1683]: Skipping /boot
May 27 02:47:02.367991 zram_generator::config[1713]: No configuration found.
May 27 02:47:02.583392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 02:47:02.786201 systemd[1]: Reloading finished in 556 ms.
May 27 02:47:02.838087 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 02:47:02.853721 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 02:47:02.859682 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 02:47:02.866432 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 02:47:02.876197 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 02:47:02.885484 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 02:47:02.893159 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 02:47:02.902137 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 02:47:02.905672 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 02:47:02.922027 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 02:47:02.928243 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 02:47:02.931318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 02:47:02.931584 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 02:47:02.941077 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 02:47:02.947227 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 02:47:02.947647 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 02:47:02.947904 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 02:47:02.970132 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 02:47:02.976919 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 02:47:02.979556 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 02:47:02.979863 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 02:47:02.980299 systemd[1]: Reached target time-set.target - System Time Set.
May 27 02:47:02.993040 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 02:47:03.003978 systemd[1]: Finished ensure-sysext.service.
May 27 02:47:03.013073 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 02:47:03.014258 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 02:47:03.018268 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 02:47:03.049249 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 02:47:03.049835 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 02:47:03.075763 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 02:47:03.078067 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 02:47:03.082172 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 02:47:03.082695 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 02:47:03.086256 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 02:47:03.095384 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 02:47:03.106313 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 02:47:03.155111 systemd-udevd[1767]: Using default interface naming scheme 'v255'.
May 27 02:47:03.169260 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 02:47:03.180798 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 02:47:03.184563 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 02:47:03.191725 augenrules[1803]: No rules
May 27 02:47:03.194788 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 02:47:03.195514 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 02:47:03.226795 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 02:47:03.235610 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 02:47:03.246053 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 02:47:03.473428 (udev-worker)[1840]: Network interface NamePolicy= disabled on kernel command line.
May 27 02:47:03.481136 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 27 02:47:03.597140 systemd-networkd[1818]: lo: Link UP
May 27 02:47:03.597168 systemd-networkd[1818]: lo: Gained carrier
May 27 02:47:03.599150 systemd-networkd[1818]: Enumeration completed
May 27 02:47:03.599334 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 02:47:03.604730 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 02:47:03.611055 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 02:47:03.684758 systemd-networkd[1818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 02:47:03.690061 systemd-networkd[1818]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 02:47:03.695254 systemd-networkd[1818]: eth0: Link UP
May 27 02:47:03.695882 systemd-networkd[1818]: eth0: Gained carrier
May 27 02:47:03.697197 systemd-networkd[1818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 02:47:03.712100 systemd-networkd[1818]: eth0: DHCPv4 address 172.31.29.92/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 27 02:47:03.733067 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 02:47:03.804894 systemd-resolved[1766]: Positive Trust Anchors:
May 27 02:47:03.805516 systemd-resolved[1766]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 02:47:03.805593 systemd-resolved[1766]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 02:47:03.816863 systemd-resolved[1766]: Defaulting to hostname 'linux'.
May 27 02:47:03.822766 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 02:47:03.825250 systemd[1]: Reached target network.target - Network.
May 27 02:47:03.827168 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 02:47:03.829668 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 02:47:03.831993 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 02:47:03.834530 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 02:47:03.837580 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 02:47:03.840005 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 02:47:03.842465 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 02:47:03.845038 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 02:47:03.845106 systemd[1]: Reached target paths.target - Path Units.
May 27 02:47:03.847104 systemd[1]: Reached target timers.target - Timer Units.
May 27 02:47:03.850868 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 02:47:03.856849 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 02:47:03.867365 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 02:47:03.870491 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 02:47:03.873133 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 02:47:03.892155 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 02:47:03.895244 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 02:47:03.900114 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 02:47:03.912191 systemd[1]: Reached target sockets.target - Socket Units.
May 27 02:47:03.915875 systemd[1]: Reached target basic.target - Basic System.
May 27 02:47:03.918346 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 02:47:03.918419 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 02:47:03.922373 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 02:47:03.929307 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 27 02:47:03.935848 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 02:47:03.941632 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 02:47:03.948167 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 02:47:03.954441 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 02:47:03.957181 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 02:47:03.961435 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 02:47:03.974345 systemd[1]: Started ntpd.service - Network Time Service.
May 27 02:47:03.981381 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 02:47:03.987050 systemd[1]: Starting setup-oem.service - Setup OEM...
May 27 02:47:03.998310 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 02:47:04.005391 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 02:47:04.032250 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 02:47:04.036276 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 02:47:04.039375 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 02:47:04.043407 systemd[1]: Starting update-engine.service - Update Engine...
May 27 02:47:04.051329 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 02:47:04.070033 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 02:47:04.101584 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 02:47:04.103144 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 02:47:04.142224 jq[1869]: false
May 27 02:47:04.216873 extend-filesystems[1870]: Found loop4
May 27 02:47:04.220122 extend-filesystems[1870]: Found loop5
May 27 02:47:04.220122 extend-filesystems[1870]: Found loop6
May 27 02:47:04.220122 extend-filesystems[1870]: Found loop7
May 27 02:47:04.220122 extend-filesystems[1870]: Found nvme0n1
May 27 02:47:04.220122 extend-filesystems[1870]: Found nvme0n1p1
May 27 02:47:04.229198 extend-filesystems[1870]: Found nvme0n1p2
May 27 02:47:04.229198 extend-filesystems[1870]: Found nvme0n1p3
May 27 02:47:04.229198 extend-filesystems[1870]: Found usr
May 27 02:47:04.229198 extend-filesystems[1870]: Found nvme0n1p4
May 27 02:47:04.229198 extend-filesystems[1870]: Found nvme0n1p6
May 27 02:47:04.229198 extend-filesystems[1870]: Found nvme0n1p7
May 27 02:47:04.229198 extend-filesystems[1870]: Found nvme0n1p9
May 27 02:47:04.267589 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 02:47:04.271296 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 02:47:04.272176 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 02:47:04.272635 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 02:47:04.276540 systemd[1]: motdgen.service: Deactivated successfully.
May 27 02:47:04.278104 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 02:47:04.308225 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 27 02:47:04.321487 jq[1879]: true
May 27 02:47:04.356847 jq[1937]: true
May 27 02:47:04.361680 tar[1881]: linux-arm64/LICENSE
May 27 02:47:04.361680 tar[1881]: linux-arm64/helm
May 27 02:47:04.376760 (ntainerd)[1931]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 02:47:04.418873 systemd[1]: Finished setup-oem.service - Setup OEM.
May 27 02:47:04.453001 update_engine[1878]: I20250527 02:47:04.434847 1878 main.cc:92] Flatcar Update Engine starting
May 27 02:47:04.462038 dbus-daemon[1867]: [system] SELinux support is enabled
May 27 02:47:04.463175 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 02:47:04.469214 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 02:47:04.469296 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 02:47:04.471921 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 02:47:04.471990 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 02:47:04.495933 dbus-daemon[1867]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1818 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 27 02:47:04.497714 update_engine[1878]: I20250527 02:47:04.497288 1878 update_check_scheduler.cc:74] Next update check in 8m4s
May 27 02:47:04.512293 systemd[1]: Started update-engine.service - Update Engine.
May 27 02:47:04.522125 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 02:47:04.526549 dbus-daemon[1867]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 27 02:47:04.528124 bash[1977]: Updated "/home/core/.ssh/authorized_keys"
May 27 02:47:04.538484 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 27 02:47:04.542249 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 02:47:04.551856 systemd[1]: Starting sshkeys.service...
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetch successful
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetch successful
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetch successful
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetch successful
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetch failed with 404: resource not found
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetch successful
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetch successful
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetch successful
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetch successful
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
May 27 02:47:04.578023 coreos-metadata[1866]: May 27 02:47:04.577 INFO Fetch successful
May 27 02:47:04.751010 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 27 02:47:04.762537 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 27 02:47:04.771851 ntpd[1872]: ntpd 4.2.8p17@1.4004-o Tue May 27 00:38:41 UTC 2025 (1): Starting
May 27 02:47:04.772456 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: ntpd 4.2.8p17@1.4004-o Tue May 27 00:38:41 UTC 2025 (1): Starting
May 27 02:47:04.772456 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 27 02:47:04.772456 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: ----------------------------------------------------
May 27 02:47:04.772456 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: ntp-4 is maintained by Network Time Foundation,
May 27 02:47:04.772456 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 27 02:47:04.772456 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: corporation. Support and training for ntp-4 are
May 27 02:47:04.772456 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: available at https://www.nwtime.org/support
May 27 02:47:04.772456 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: ----------------------------------------------------
May 27 02:47:04.771917 ntpd[1872]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 27 02:47:04.777661 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: proto: precision = 0.096 usec (-23)
May 27 02:47:04.777661 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: basedate set to 2025-05-15
May 27 02:47:04.777661 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: gps base set to 2025-05-18 (week 2367)
May 27 02:47:04.771937 ntpd[1872]: ----------------------------------------------------
May 27 02:47:04.771997 ntpd[1872]: ntp-4 is maintained by Network Time Foundation,
May 27 02:47:04.772016 ntpd[1872]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 27 02:47:04.772033 ntpd[1872]: corporation. Support and training for ntp-4 are
May 27 02:47:04.786335 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: Listen and drop on 0 v6wildcard [::]:123
May 27 02:47:04.786335 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 27 02:47:04.786335 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: Listen normally on 2 lo 127.0.0.1:123
May 27 02:47:04.786335 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: Listen normally on 3 eth0 172.31.29.92:123
May 27 02:47:04.786335 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: Listen normally on 4 lo [::1]:123
May 27 02:47:04.786335 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: bind(21) AF_INET6 fe80::436:dcff:feb3:4ca9%2#123 flags 0x11 failed: Cannot assign requested address
May 27 02:47:04.786335 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: unable to create socket on eth0 (5) for fe80::436:dcff:feb3:4ca9%2#123
May 27 02:47:04.786335 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: failed to init interface for address fe80::436:dcff:feb3:4ca9%2
May 27 02:47:04.786335 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: Listening on routing socket on fd #21 for interface updates
May 27 02:47:04.786335 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 02:47:04.786335 ntpd[1872]: 27 May 02:47:04 ntpd[1872]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 02:47:04.772050 ntpd[1872]: available at https://www.nwtime.org/support
May 27 02:47:04.772067 ntpd[1872]: ----------------------------------------------------
May 27 02:47:04.775166 ntpd[1872]: proto: precision = 0.096 usec (-23)
May 27 02:47:04.776248 ntpd[1872]: basedate set to 2025-05-15
May 27 02:47:04.776284 ntpd[1872]: gps base set to 2025-05-18 (week 2367)
May 27 02:47:04.779989 ntpd[1872]: Listen and drop on 0 v6wildcard [::]:123
May 27 02:47:04.780072 ntpd[1872]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 27 02:47:04.780332 ntpd[1872]: Listen normally on 2 lo 127.0.0.1:123
May 27 02:47:04.780406 ntpd[1872]: Listen normally on 3 eth0 172.31.29.92:123
May 27 02:47:04.780477 ntpd[1872]: Listen normally on 4 lo [::1]:123
May 27 02:47:04.780557 ntpd[1872]: bind(21) AF_INET6 fe80::436:dcff:feb3:4ca9%2#123 flags 0x11 failed: Cannot assign requested address
May 27 02:47:04.780597 ntpd[1872]: unable to create socket on eth0 (5) for fe80::436:dcff:feb3:4ca9%2#123
May 27 02:47:04.780623 ntpd[1872]: failed to init interface for address fe80::436:dcff:feb3:4ca9%2
May 27 02:47:04.780679 ntpd[1872]: Listening on routing socket on fd #21 for interface updates
May 27 02:47:04.783019 ntpd[1872]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 02:47:04.783102 ntpd[1872]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 27 02:47:04.989060 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 27 02:47:04.992529 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 02:47:05.012518 systemd-logind[1877]: New seat seat0.
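The coreos-metadata entries above show the standard IMDSv2 flow on EC2: a PUT to /latest/api/token to obtain a session token, then GETs of individual metadata paths with that token, where a missing attribute (here ipv6) comes back as a 404 rather than an error. A minimal Python sketch of that flow (this is an illustration of the protocol visible in the log, not Flatcar's actual agent code):

```python
# Sketch of the IMDSv2 request sequence seen in the coreos-metadata log lines.
# Endpoint and paths are taken from the log; everything else is illustrative.
import urllib.request
import urllib.error

IMDS = "http://169.254.169.254"

def token_request(ttl_seconds=21600):
    """Build the PUT request that obtains an IMDSv2 session token."""
    req = urllib.request.Request(IMDS + "/latest/api/token", method="PUT")
    req.add_header("X-aws-ec2-metadata-token-ttl-seconds", str(ttl_seconds))
    return req

def metadata_request(path, token):
    """Build the GET request for one metadata path, e.g. 'meta-data/instance-id'."""
    req = urllib.request.Request(f"{IMDS}/2021-01-03/{path}")
    req.add_header("X-aws-ec2-metadata-token", token)
    return req

def fetch(path, token):
    """Return the metadata value, or None on 404 (mirroring the ipv6 line above)."""
    try:
        with urllib.request.urlopen(metadata_request(path, token), timeout=2) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as exc:
        if exc.code == 404:
            return None  # "Fetch failed with 404: resource not found"
        raise
```

The 169.254.169.254 endpoint is link-local, so this only resolves from inside an EC2 instance; off-instance the request construction can still be inspected.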
May 27 02:47:05.024277 systemd[1]: Started systemd-logind.service - User Login Management. May 27 02:47:05.163892 containerd[1931]: time="2025-05-27T02:47:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 02:47:05.201580 containerd[1931]: time="2025-05-27T02:47:05.197323019Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 02:47:05.313416 locksmithd[1980]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 02:47:05.322988 containerd[1931]: time="2025-05-27T02:47:05.319849896Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.164µs" May 27 02:47:05.322988 containerd[1931]: time="2025-05-27T02:47:05.319915632Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 02:47:05.331983 containerd[1931]: time="2025-05-27T02:47:05.331863276Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 02:47:05.332390 containerd[1931]: time="2025-05-27T02:47:05.332308416Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 02:47:05.332390 containerd[1931]: time="2025-05-27T02:47:05.332383116Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 02:47:05.332531 containerd[1931]: time="2025-05-27T02:47:05.332459040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 02:47:05.332694 containerd[1931]: time="2025-05-27T02:47:05.332632008Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 May 27 02:47:05.332694 containerd[1931]: time="2025-05-27T02:47:05.332680608Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 02:47:05.336990 containerd[1931]: time="2025-05-27T02:47:05.335373228Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 02:47:05.336990 containerd[1931]: time="2025-05-27T02:47:05.335448456Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 02:47:05.336990 containerd[1931]: time="2025-05-27T02:47:05.335489400Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 02:47:05.336990 containerd[1931]: time="2025-05-27T02:47:05.335514696Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 02:47:05.336990 containerd[1931]: time="2025-05-27T02:47:05.335787468Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 02:47:05.342602 containerd[1931]: time="2025-05-27T02:47:05.342504948Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 02:47:05.342750 containerd[1931]: time="2025-05-27T02:47:05.342626256Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 02:47:05.342750 containerd[1931]: time="2025-05-27T02:47:05.342656808Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange 
type=io.containerd.event.v1 May 27 02:47:05.342750 containerd[1931]: time="2025-05-27T02:47:05.342734136Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 02:47:05.343452 containerd[1931]: time="2025-05-27T02:47:05.343297200Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 02:47:05.343994 containerd[1931]: time="2025-05-27T02:47:05.343523076Z" level=info msg="metadata content store policy set" policy=shared May 27 02:47:05.354399 coreos-metadata[2001]: May 27 02:47:05.354 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 27 02:47:05.357155 coreos-metadata[2001]: May 27 02:47:05.356 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 27 02:47:05.357504 containerd[1931]: time="2025-05-27T02:47:05.356812656Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 02:47:05.357504 containerd[1931]: time="2025-05-27T02:47:05.356938872Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 02:47:05.357504 containerd[1931]: time="2025-05-27T02:47:05.357014544Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 02:47:05.357504 containerd[1931]: time="2025-05-27T02:47:05.357047244Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 02:47:05.357504 containerd[1931]: time="2025-05-27T02:47:05.357078900Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 02:47:05.357504 containerd[1931]: time="2025-05-27T02:47:05.357157308Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 02:47:05.357504 containerd[1931]: time="2025-05-27T02:47:05.357296352Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 02:47:05.357504 containerd[1931]: time="2025-05-27T02:47:05.357380520Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 02:47:05.357504 containerd[1931]: time="2025-05-27T02:47:05.357417060Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 02:47:05.357504 containerd[1931]: time="2025-05-27T02:47:05.357445764Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 02:47:05.360602 containerd[1931]: time="2025-05-27T02:47:05.357532812Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 02:47:05.360602 containerd[1931]: time="2025-05-27T02:47:05.357574284Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 02:47:05.361717 coreos-metadata[2001]: May 27 02:47:05.360 INFO Fetch successful May 27 02:47:05.361717 coreos-metadata[2001]: May 27 02:47:05.361 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 27 02:47:05.362641 containerd[1931]: time="2025-05-27T02:47:05.362189892Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 02:47:05.362641 containerd[1931]: time="2025-05-27T02:47:05.362274912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 02:47:05.362641 containerd[1931]: time="2025-05-27T02:47:05.362376096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 02:47:05.362641 containerd[1931]: time="2025-05-27T02:47:05.362407824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 02:47:05.362641 containerd[1931]: 
time="2025-05-27T02:47:05.362437140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 02:47:05.362641 containerd[1931]: time="2025-05-27T02:47:05.362469528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 02:47:05.362641 containerd[1931]: time="2025-05-27T02:47:05.362499396Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 02:47:05.362641 containerd[1931]: time="2025-05-27T02:47:05.362527068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 02:47:05.362641 containerd[1931]: time="2025-05-27T02:47:05.362556204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 02:47:05.362641 containerd[1931]: time="2025-05-27T02:47:05.362588748Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 02:47:05.362641 containerd[1931]: time="2025-05-27T02:47:05.362617872Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 02:47:05.363301 coreos-metadata[2001]: May 27 02:47:05.362 INFO Fetch successful May 27 02:47:05.363374 containerd[1931]: time="2025-05-27T02:47:05.362838372Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 02:47:05.363374 containerd[1931]: time="2025-05-27T02:47:05.362885100Z" level=info msg="Start snapshots syncer" May 27 02:47:05.367249 containerd[1931]: time="2025-05-27T02:47:05.366256116Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 02:47:05.367249 containerd[1931]: time="2025-05-27T02:47:05.366751968Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 02:47:05.367150 unknown[2001]: wrote ssh authorized keys file for user: core May 27 02:47:05.368095 containerd[1931]: time="2025-05-27T02:47:05.366867528Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 
02:47:05.368095 containerd[1931]: time="2025-05-27T02:47:05.367105680Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 02:47:05.371843 containerd[1931]: time="2025-05-27T02:47:05.371747496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 02:47:05.371843 containerd[1931]: time="2025-05-27T02:47:05.371840868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 02:47:05.372074 containerd[1931]: time="2025-05-27T02:47:05.371874204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 02:47:05.372074 containerd[1931]: time="2025-05-27T02:47:05.371912160Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 02:47:05.372074 containerd[1931]: time="2025-05-27T02:47:05.371990016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 02:47:05.372074 containerd[1931]: time="2025-05-27T02:47:05.372024696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 02:47:05.372074 containerd[1931]: time="2025-05-27T02:47:05.372063312Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 27 02:47:05.372841 containerd[1931]: time="2025-05-27T02:47:05.372125004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 02:47:05.372841 containerd[1931]: time="2025-05-27T02:47:05.372154764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 02:47:05.372841 containerd[1931]: time="2025-05-27T02:47:05.372183120Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 02:47:05.372841 containerd[1931]: time="2025-05-27T02:47:05.372291300Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 02:47:05.372841 containerd[1931]: time="2025-05-27T02:47:05.372336888Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 02:47:05.372841 containerd[1931]: time="2025-05-27T02:47:05.372360660Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 02:47:05.372841 containerd[1931]: time="2025-05-27T02:47:05.372387132Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 02:47:05.372841 containerd[1931]: time="2025-05-27T02:47:05.372410436Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 27 02:47:05.372841 containerd[1931]: time="2025-05-27T02:47:05.372437328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 27 02:47:05.372841 containerd[1931]: time="2025-05-27T02:47:05.372466152Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 27 02:47:05.372841 containerd[1931]: time="2025-05-27T02:47:05.372528228Z" level=info msg="runtime interface created"
May 27 02:47:05.372841 containerd[1931]: time="2025-05-27T02:47:05.372544572Z" level=info msg="created NRI interface"
May 27 02:47:05.372841 containerd[1931]: time="2025-05-27T02:47:05.372569004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 27 02:47:05.372841 containerd[1931]: time="2025-05-27T02:47:05.372600852Z" level=info msg="Connect containerd service"
May 27 02:47:05.372841 containerd[1931]: time="2025-05-27T02:47:05.372698916Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 27 02:47:05.390845 containerd[1931]: time="2025-05-27T02:47:05.390234156Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 02:47:05.452134 systemd-networkd[1818]: eth0: Gained IPv6LL
May 27 02:47:05.527453 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 02:47:05.549910 systemd[1]: Reached target network-online.target - Network is Online.
May 27 02:47:05.556523 update-ssh-keys[2049]: Updated "/home/core/.ssh/authorized_keys"
May 27 02:47:05.556202 systemd-logind[1877]: Watching system buttons on /dev/input/event0 (Power Button)
May 27 02:47:05.559908 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
May 27 02:47:05.571083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 02:47:05.574161 systemd-logind[1877]: Watching system buttons on /dev/input/event1 (Sleep Button)
May 27 02:47:05.581077 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 02:47:05.589069 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 27 02:47:05.601177 systemd[1]: Finished sshkeys.service.
May 27 02:47:05.700676 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 02:47:05.786036 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 27 02:47:05.823334 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 02:47:05.877078 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
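The CNI load error above is expected on first boot: containerd's CRI plugin keeps retrying until a network config appears in /etc/cni/net.d. As a hedged illustration only (the file name, network name, and subnet below are hypothetical, not recovered from this host), a minimal bridge-type .conflist of the kind that directory would hold can be sketched as:

```shell
#!/bin/sh
# Sketch only: writes a minimal CNI .conflist of the shape the CRI plugin
# scans for in /etc/cni/net.d. Written to a temp dir here so it runs
# without root; name, bridge, and subnet are illustrative values.
dir=$(mktemp -d)
cat > "$dir/10-example.conflist" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "10.85.0.0/16"}]]
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
EOF
echo "wrote $dir/10-example.conflist"
```

On a real node the file would go in /etc/cni/net.d and the referenced plugin binaries (bridge, host-local, portmap) must exist in the CNI bin directory; typically a CNI provider DaemonSet installs both once the node joins a cluster.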
May 27 02:47:05.955713 amazon-ssm-agent[2056]: Initializing new seelog logger
May 27 02:47:05.961990 amazon-ssm-agent[2056]: New Seelog Logger Creation Complete
May 27 02:47:05.961990 amazon-ssm-agent[2056]: 2025/05/27 02:47:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:05.961990 amazon-ssm-agent[2056]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:05.961990 amazon-ssm-agent[2056]: 2025/05/27 02:47:05 processing appconfig overrides
May 27 02:47:05.970997 amazon-ssm-agent[2056]: 2025/05/27 02:47:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:05.970997 amazon-ssm-agent[2056]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:05.970997 amazon-ssm-agent[2056]: 2025/05/27 02:47:05 processing appconfig overrides
May 27 02:47:05.970997 amazon-ssm-agent[2056]: 2025/05/27 02:47:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:05.970997 amazon-ssm-agent[2056]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:05.970997 amazon-ssm-agent[2056]: 2025/05/27 02:47:05 processing appconfig overrides
May 27 02:47:05.976999 amazon-ssm-agent[2056]: 2025-05-27 02:47:05.9674 INFO Proxy environment variables:
May 27 02:47:05.983412 amazon-ssm-agent[2056]: 2025/05/27 02:47:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:05.983412 amazon-ssm-agent[2056]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:05.983412 amazon-ssm-agent[2056]: 2025/05/27 02:47:05 processing appconfig overrides
May 27 02:47:05.982044 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 02:47:06.075016 amazon-ssm-agent[2056]: 2025-05-27 02:47:05.9675 INFO https_proxy:
May 27 02:47:06.132256 containerd[1931]: time="2025-05-27T02:47:06.128862936Z" level=info msg="Start subscribing containerd event"
May 27 02:47:06.132256 containerd[1931]: time="2025-05-27T02:47:06.129018408Z" level=info msg="Start recovering state"
May 27 02:47:06.132256 containerd[1931]: time="2025-05-27T02:47:06.129168288Z" level=info msg="Start event monitor"
May 27 02:47:06.132256 containerd[1931]: time="2025-05-27T02:47:06.129199068Z" level=info msg="Start cni network conf syncer for default"
May 27 02:47:06.132256 containerd[1931]: time="2025-05-27T02:47:06.129222948Z" level=info msg="Start streaming server"
May 27 02:47:06.132256 containerd[1931]: time="2025-05-27T02:47:06.129246696Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 02:47:06.132256 containerd[1931]: time="2025-05-27T02:47:06.129268116Z" level=info msg="runtime interface starting up..."
May 27 02:47:06.132256 containerd[1931]: time="2025-05-27T02:47:06.129287772Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 02:47:06.132256 containerd[1931]: time="2025-05-27T02:47:06.129395736Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 02:47:06.132256 containerd[1931]: time="2025-05-27T02:47:06.129286752Z" level=info msg="starting plugins..."
May 27 02:47:06.132256 containerd[1931]: time="2025-05-27T02:47:06.129465588Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 02:47:06.130214 systemd[1]: Started containerd.service - containerd container runtime.
May 27 02:47:06.139356 containerd[1931]: time="2025-05-27T02:47:06.136435680Z" level=info msg="containerd successfully booted in 0.976871s"
May 27 02:47:06.179991 amazon-ssm-agent[2056]: 2025-05-27 02:47:05.9675 INFO http_proxy:
May 27 02:47:06.200357 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 02:47:06.281441 amazon-ssm-agent[2056]: 2025-05-27 02:47:05.9675 INFO no_proxy:
May 27 02:47:06.318201 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
May 27 02:47:06.323880 dbus-daemon[1867]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 27 02:47:06.334216 dbus-daemon[1867]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1982 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 27 02:47:06.343561 systemd[1]: Starting polkit.service - Authorization Manager...
May 27 02:47:06.380496 amazon-ssm-agent[2056]: 2025-05-27 02:47:05.9677 INFO Checking if agent identity type OnPrem can be assumed
May 27 02:47:06.479905 amazon-ssm-agent[2056]: 2025-05-27 02:47:05.9678 INFO Checking if agent identity type EC2 can be assumed
May 27 02:47:06.578415 amazon-ssm-agent[2056]: 2025-05-27 02:47:06.2873 INFO Agent will take identity from EC2
May 27 02:47:06.677981 amazon-ssm-agent[2056]: 2025-05-27 02:47:06.2948 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0
May 27 02:47:06.771158 polkitd[2092]: Started polkitd version 126
May 27 02:47:06.778719 amazon-ssm-agent[2056]: 2025-05-27 02:47:06.2948 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
May 27 02:47:06.799677 polkitd[2092]: Loading rules from directory /etc/polkit-1/rules.d
May 27 02:47:06.803342 polkitd[2092]: Loading rules from directory /run/polkit-1/rules.d
May 27 02:47:06.803457 polkitd[2092]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
May 27 02:47:06.806558 polkitd[2092]: Loading rules from directory /usr/local/share/polkit-1/rules.d
May 27 02:47:06.806661 polkitd[2092]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
May 27 02:47:06.806749 polkitd[2092]: Loading rules from directory /usr/share/polkit-1/rules.d
May 27 02:47:06.810902 polkitd[2092]: Finished loading, compiling and executing 2 rules
May 27 02:47:06.817542 systemd[1]: Started polkit.service - Authorization Manager.
May 27 02:47:06.824236 dbus-daemon[1867]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 27 02:47:06.828069 polkitd[2092]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 27 02:47:06.878983 amazon-ssm-agent[2056]: 2025-05-27 02:47:06.2948 INFO [amazon-ssm-agent] Starting Core Agent
May 27 02:47:06.886549 systemd-hostnamed[1982]: Hostname set to (transient)
May 27 02:47:06.889029 systemd-resolved[1766]: System hostname changed to 'ip-172-31-29-92'.
May 27 02:47:06.979162 amazon-ssm-agent[2056]: 2025-05-27 02:47:06.2949 INFO [amazon-ssm-agent] Registrar detected. Attempting registration
May 27 02:47:07.082781 amazon-ssm-agent[2056]: 2025-05-27 02:47:06.2949 INFO [Registrar] Starting registrar module
May 27 02:47:07.179920 amazon-ssm-agent[2056]: 2025-05-27 02:47:06.3041 INFO [EC2Identity] Checking disk for registration info
May 27 02:47:07.218588 tar[1881]: linux-arm64/README.md
May 27 02:47:07.263265 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 27 02:47:07.280399 amazon-ssm-agent[2056]: 2025-05-27 02:47:06.3042 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration
May 27 02:47:07.382874 amazon-ssm-agent[2056]: 2025-05-27 02:47:06.3042 INFO [EC2Identity] Generating registration keypair
May 27 02:47:07.424308 sshd_keygen[1917]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 02:47:07.496088 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 02:47:07.505542 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 02:47:07.512903 systemd[1]: Started sshd@0-172.31.29.92:22-139.178.68.195:43038.service - OpenSSH per-connection server daemon (139.178.68.195:43038).
May 27 02:47:07.566262 systemd[1]: issuegen.service: Deactivated successfully.
May 27 02:47:07.569120 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 02:47:07.579598 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 02:47:07.641799 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 02:47:07.648454 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 02:47:07.654472 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 27 02:47:07.657171 systemd[1]: Reached target getty.target - Login Prompts.
May 27 02:47:07.772696 ntpd[1872]: Listen normally on 6 eth0 [fe80::436:dcff:feb3:4ca9%2]:123
May 27 02:47:07.774349 ntpd[1872]: 27 May 02:47:07 ntpd[1872]: Listen normally on 6 eth0 [fe80::436:dcff:feb3:4ca9%2]:123
May 27 02:47:07.814143 sshd[2116]: Accepted publickey for core from 139.178.68.195 port 43038 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:07.823112 sshd-session[2116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:07.825009 amazon-ssm-agent[2056]: 2025-05-27 02:47:07.8226 INFO [EC2Identity] Checking write access before registering
May 27 02:47:07.842315 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 02:47:07.846597 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 02:47:07.865817 systemd-logind[1877]: New session 1 of user core.
May 27 02:47:07.880161 amazon-ssm-agent[2056]: 2025/05/27 02:47:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:07.880161 amazon-ssm-agent[2056]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 27 02:47:07.880579 amazon-ssm-agent[2056]: 2025/05/27 02:47:07 processing appconfig overrides
May 27 02:47:07.903912 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 02:47:07.921285 amazon-ssm-agent[2056]: 2025-05-27 02:47:07.8234 INFO [EC2Identity] Registering EC2 instance with Systems Manager
May 27 02:47:07.921756 amazon-ssm-agent[2056]: 2025-05-27 02:47:07.8793 INFO [EC2Identity] EC2 registration was successful.
May 27 02:47:07.921756 amazon-ssm-agent[2056]: 2025-05-27 02:47:07.8793 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup.
May 27 02:47:07.921941 amazon-ssm-agent[2056]: 2025-05-27 02:47:07.8795 INFO [CredentialRefresher] credentialRefresher has started
May 27 02:47:07.921941 amazon-ssm-agent[2056]: 2025-05-27 02:47:07.8795 INFO [CredentialRefresher] Starting credentials refresher loop
May 27 02:47:07.921941 amazon-ssm-agent[2056]: 2025-05-27 02:47:07.9201 INFO EC2RoleProvider Successfully connected with instance profile role credentials
May 27 02:47:07.921941 amazon-ssm-agent[2056]: 2025-05-27 02:47:07.9204 INFO [CredentialRefresher] Credentials ready
May 27 02:47:07.923234 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 02:47:07.925311 amazon-ssm-agent[2056]: 2025-05-27 02:47:07.9229 INFO [CredentialRefresher] Next credential rotation will be in 29.9999537016 minutes
May 27 02:47:07.943615 (systemd)[2129]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 02:47:07.946809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 02:47:07.952407 systemd[1]: Reached target multi-user.target - Multi-User System.
May 27 02:47:07.959536 systemd-logind[1877]: New session c1 of user core.
May 27 02:47:07.964537 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 02:47:08.267109 systemd[2129]: Queued start job for default target default.target.
May 27 02:47:08.277711 systemd[2129]: Created slice app.slice - User Application Slice.
May 27 02:47:08.277780 systemd[2129]: Reached target paths.target - Paths.
May 27 02:47:08.277862 systemd[2129]: Reached target timers.target - Timers.
May 27 02:47:08.283158 systemd[2129]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 02:47:08.309503 systemd[2129]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 02:47:08.309755 systemd[2129]: Reached target sockets.target - Sockets.
May 27 02:47:08.310079 systemd[2129]: Reached target basic.target - Basic System.
May 27 02:47:08.310236 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 02:47:08.312245 systemd[2129]: Reached target default.target - Main User Target.
May 27 02:47:08.312315 systemd[2129]: Startup finished in 338ms.
May 27 02:47:08.320255 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 02:47:08.326138 systemd[1]: Startup finished in 3.774s (kernel) + 11.275s (initrd) + 9.422s (userspace) = 24.473s.
May 27 02:47:08.588810 systemd[1]: Started sshd@1-172.31.29.92:22-139.178.68.195:43432.service - OpenSSH per-connection server daemon (139.178.68.195:43432).
May 27 02:47:08.795763 sshd[2152]: Accepted publickey for core from 139.178.68.195 port 43432 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:08.798682 sshd-session[2152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:08.809468 systemd-logind[1877]: New session 2 of user core.
May 27 02:47:08.815267 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 02:47:08.950401 sshd[2155]: Connection closed by 139.178.68.195 port 43432
May 27 02:47:08.951180 sshd-session[2152]: pam_unix(sshd:session): session closed for user core
May 27 02:47:08.959859 amazon-ssm-agent[2056]: 2025-05-27 02:47:08.9596 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
May 27 02:47:08.963058 systemd-logind[1877]: Session 2 logged out. Waiting for processes to exit.
May 27 02:47:08.963291 systemd[1]: sshd@1-172.31.29.92:22-139.178.68.195:43432.service: Deactivated successfully.
May 27 02:47:08.969653 systemd[1]: session-2.scope: Deactivated successfully.
May 27 02:47:08.997982 kubelet[2133]: E0527 02:47:08.993245 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 02:47:08.994001 systemd[1]: Started sshd@2-172.31.29.92:22-139.178.68.195:43444.service - OpenSSH per-connection server daemon (139.178.68.195:43444).
May 27 02:47:08.996698 systemd-logind[1877]: Removed session 2.
May 27 02:47:09.006740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 02:47:09.009527 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 02:47:09.012149 systemd[1]: kubelet.service: Consumed 1.388s CPU time, 255.4M memory peak.
May 27 02:47:09.060804 amazon-ssm-agent[2056]: 2025-05-27 02:47:08.9678 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2159) started
May 27 02:47:09.161395 amazon-ssm-agent[2056]: 2025-05-27 02:47:08.9683 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
May 27 02:47:09.186984 sshd[2164]: Accepted publickey for core from 139.178.68.195 port 43444 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:09.190652 sshd-session[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:09.207574 systemd-logind[1877]: New session 3 of user core.
May 27 02:47:09.216306 systemd[1]: Started session-3.scope - Session 3 of User core.
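The kubelet failure above is the usual pre-join state: /var/lib/kubelet/config.yaml is normally written by `kubeadm init` or `kubeadm join`, so the unit keeps exiting with status 1 until the node is bootstrapped. As a hedged sketch only (all values below are illustrative defaults, not recovered from this host), such a KubeletConfiguration looks like:

```shell
#!/bin/sh
# Sketch only: the shape of the KubeletConfiguration that kubeadm writes
# to /var/lib/kubelet/config.yaml. Written to a temp dir so it runs
# without root; cluster DNS/domain values are illustrative.
dir=$(mktemp -d)
cat > "$dir/config.yaml" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
authentication:
  anonymous:
    enabled: false
EOF
echo "wrote $dir/config.yaml"
```

Once a real config.yaml exists at that path, the unit's scheduled restarts (visible later in this log) would succeed instead of crash-looping.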
May 27 02:47:09.332831 sshd[2177]: Connection closed by 139.178.68.195 port 43444
May 27 02:47:09.333755 sshd-session[2164]: pam_unix(sshd:session): session closed for user core
May 27 02:47:09.340925 systemd[1]: sshd@2-172.31.29.92:22-139.178.68.195:43444.service: Deactivated successfully.
May 27 02:47:09.344920 systemd[1]: session-3.scope: Deactivated successfully.
May 27 02:47:09.346870 systemd-logind[1877]: Session 3 logged out. Waiting for processes to exit.
May 27 02:47:09.349811 systemd-logind[1877]: Removed session 3.
May 27 02:47:09.371317 systemd[1]: Started sshd@3-172.31.29.92:22-139.178.68.195:43448.service - OpenSSH per-connection server daemon (139.178.68.195:43448).
May 27 02:47:09.583919 sshd[2183]: Accepted publickey for core from 139.178.68.195 port 43448 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:09.587069 sshd-session[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:09.598040 systemd-logind[1877]: New session 4 of user core.
May 27 02:47:09.606242 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 02:47:09.732619 sshd[2185]: Connection closed by 139.178.68.195 port 43448
May 27 02:47:09.733118 sshd-session[2183]: pam_unix(sshd:session): session closed for user core
May 27 02:47:09.740168 systemd-logind[1877]: Session 4 logged out. Waiting for processes to exit.
May 27 02:47:09.741844 systemd[1]: sshd@3-172.31.29.92:22-139.178.68.195:43448.service: Deactivated successfully.
May 27 02:47:09.745486 systemd[1]: session-4.scope: Deactivated successfully.
May 27 02:47:09.748440 systemd-logind[1877]: Removed session 4.
May 27 02:47:09.765191 systemd[1]: Started sshd@4-172.31.29.92:22-139.178.68.195:43464.service - OpenSSH per-connection server daemon (139.178.68.195:43464).
May 27 02:47:09.956771 sshd[2191]: Accepted publickey for core from 139.178.68.195 port 43464 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:09.959429 sshd-session[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:09.969078 systemd-logind[1877]: New session 5 of user core.
May 27 02:47:09.977227 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 02:47:10.095854 sudo[2194]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 02:47:10.097260 sudo[2194]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 02:47:10.114697 sudo[2194]: pam_unix(sudo:session): session closed for user root
May 27 02:47:10.139402 sshd[2193]: Connection closed by 139.178.68.195 port 43464
May 27 02:47:10.139000 sshd-session[2191]: pam_unix(sshd:session): session closed for user core
May 27 02:47:10.147281 systemd[1]: sshd@4-172.31.29.92:22-139.178.68.195:43464.service: Deactivated successfully.
May 27 02:47:10.151293 systemd[1]: session-5.scope: Deactivated successfully.
May 27 02:47:10.154495 systemd-logind[1877]: Session 5 logged out. Waiting for processes to exit.
May 27 02:47:10.158519 systemd-logind[1877]: Removed session 5.
May 27 02:47:10.173637 systemd[1]: Started sshd@5-172.31.29.92:22-139.178.68.195:43468.service - OpenSSH per-connection server daemon (139.178.68.195:43468).
May 27 02:47:10.385487 sshd[2200]: Accepted publickey for core from 139.178.68.195 port 43468 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:10.388560 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:10.399386 systemd-logind[1877]: New session 6 of user core.
May 27 02:47:10.418273 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 02:47:10.522498 sudo[2204]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 02:47:10.523880 sudo[2204]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 02:47:10.537238 sudo[2204]: pam_unix(sudo:session): session closed for user root
May 27 02:47:10.547373 sudo[2203]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 02:47:10.548038 sudo[2203]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 02:47:10.566459 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 02:47:10.627021 augenrules[2226]: No rules
May 27 02:47:10.629226 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 02:47:10.631053 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 02:47:10.633107 sudo[2203]: pam_unix(sudo:session): session closed for user root
May 27 02:47:10.657780 sshd[2202]: Connection closed by 139.178.68.195 port 43468
May 27 02:47:10.656866 sshd-session[2200]: pam_unix(sshd:session): session closed for user core
May 27 02:47:10.664024 systemd[1]: sshd@5-172.31.29.92:22-139.178.68.195:43468.service: Deactivated successfully.
May 27 02:47:10.666690 systemd[1]: session-6.scope: Deactivated successfully.
May 27 02:47:10.668263 systemd-logind[1877]: Session 6 logged out. Waiting for processes to exit.
May 27 02:47:10.670870 systemd-logind[1877]: Removed session 6.
May 27 02:47:10.694817 systemd[1]: Started sshd@6-172.31.29.92:22-139.178.68.195:43470.service - OpenSSH per-connection server daemon (139.178.68.195:43470).
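augenrules reports "No rules" above because the two rules.d files were deleted via sudo just before audit-rules.service was restarted. For context, a hedged sketch of the auditd rules-file format those paths use (the file name, watch paths, and key names below are illustrative, not this host's removed rules):

```shell
#!/bin/sh
# Sketch only: a minimal auditd rules fragment in the format augenrules
# compiles from files under /etc/audit/rules.d/. Written to a temp dir;
# paths and -k key names are illustrative.
dir=$(mktemp -d)
cat > "$dir/10-example.rules" <<'EOF'
## Flush any preloaded rules, then watch audit config changes
-D
-w /etc/audit/ -p wa -k auditconfig
## Record denied file opens by regular users
-a always,exit -F arch=b64 -S openat -F exit=-EACCES -F auid>=1000 -k access
EOF
echo "wrote $dir/10-example.rules"
```

On a real system, augenrules merges every *.rules file in that directory into /etc/audit/audit.rules; with the directory emptied, the service loads nothing, which is exactly the "No rules" result logged here.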
May 27 02:47:10.890038 sshd[2235]: Accepted publickey for core from 139.178.68.195 port 43470 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0
May 27 02:47:10.892873 sshd-session[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:47:10.902033 systemd-logind[1877]: New session 7 of user core.
May 27 02:47:10.911227 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 02:47:11.016190 sudo[2238]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 02:47:11.016794 sudo[2238]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 02:47:11.530234 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 02:47:11.545710 (dockerd)[2256]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 02:47:11.901032 dockerd[2256]: time="2025-05-27T02:47:11.900238221Z" level=info msg="Starting up"
May 27 02:47:11.902603 dockerd[2256]: time="2025-05-27T02:47:11.902522529Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 02:47:12.015421 dockerd[2256]: time="2025-05-27T02:47:12.015342790Z" level=info msg="Loading containers: start."
May 27 02:47:12.032147 kernel: Initializing XFRM netlink socket
May 27 02:47:12.326812 (udev-worker)[2278]: Network interface NamePolicy= disabled on kernel command line.
May 27 02:47:12.406035 systemd-networkd[1818]: docker0: Link UP
May 27 02:47:12.411392 dockerd[2256]: time="2025-05-27T02:47:12.411336883Z" level=info msg="Loading containers: done."
May 27 02:47:12.437053 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck559543002-merged.mount: Deactivated successfully.
May 27 02:47:12.440626 dockerd[2256]: time="2025-05-27T02:47:12.440499169Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 02:47:12.440955 dockerd[2256]: time="2025-05-27T02:47:12.440819405Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 02:47:12.441208 dockerd[2256]: time="2025-05-27T02:47:12.441171841Z" level=info msg="Initializing buildkit"
May 27 02:47:12.489862 dockerd[2256]: time="2025-05-27T02:47:12.489775011Z" level=info msg="Completed buildkit initialization"
May 27 02:47:12.506663 dockerd[2256]: time="2025-05-27T02:47:12.506588597Z" level=info msg="Daemon has completed initialization"
May 27 02:47:12.506800 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 02:47:12.507849 dockerd[2256]: time="2025-05-27T02:47:12.507766158Z" level=info msg="API listen on /run/docker.sock"
May 27 02:47:13.562563 containerd[1931]: time="2025-05-27T02:47:13.562494454Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 27 02:47:14.302557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409560626.mount: Deactivated successfully.
May 27 02:47:16.003318 containerd[1931]: time="2025-05-27T02:47:16.003237829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:16.005081 containerd[1931]: time="2025-05-27T02:47:16.004990525Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=26326311"
May 27 02:47:16.006342 containerd[1931]: time="2025-05-27T02:47:16.006267939Z" level=info msg="ImageCreate event name:\"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:16.010618 containerd[1931]: time="2025-05-27T02:47:16.010571213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:16.012767 containerd[1931]: time="2025-05-27T02:47:16.012534951Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"26323111\" in 2.449956875s"
May 27 02:47:16.012767 containerd[1931]: time="2025-05-27T02:47:16.012590239Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\""
May 27 02:47:16.013613 containerd[1931]: time="2025-05-27T02:47:16.013574743Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 27 02:47:18.364990 containerd[1931]: time="2025-05-27T02:47:18.364285567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:18.366719 containerd[1931]: time="2025-05-27T02:47:18.366674869Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=22530547"
May 27 02:47:18.369129 containerd[1931]: time="2025-05-27T02:47:18.369066644Z" level=info msg="ImageCreate event name:\"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:18.380297 containerd[1931]: time="2025-05-27T02:47:18.380197923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:18.382807 containerd[1931]: time="2025-05-27T02:47:18.382382427Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"24066313\" in 2.368429435s"
May 27 02:47:18.382807 containerd[1931]: time="2025-05-27T02:47:18.382442805Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\""
May 27 02:47:18.383294 containerd[1931]: time="2025-05-27T02:47:18.383243653Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 27 02:47:19.257639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 02:47:19.260203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 02:47:19.624167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 02:47:19.635910 (kubelet)[2529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 02:47:19.725847 kubelet[2529]: E0527 02:47:19.725717 2529 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 02:47:19.735728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 02:47:19.736116 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 02:47:19.737053 systemd[1]: kubelet.service: Consumed 315ms CPU time, 106.9M memory peak.
May 27 02:47:20.170767 containerd[1931]: time="2025-05-27T02:47:20.170712350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:20.171986 containerd[1931]: time="2025-05-27T02:47:20.171614252Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=17484190"
May 27 02:47:20.173224 containerd[1931]: time="2025-05-27T02:47:20.173182068Z" level=info msg="ImageCreate event name:\"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:20.178177 containerd[1931]: time="2025-05-27T02:47:20.178131169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:20.180146 containerd[1931]: time="2025-05-27T02:47:20.180082396Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"19019974\" in 1.796782314s"
May 27 02:47:20.180146 containerd[1931]: time="2025-05-27T02:47:20.180140229Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\""
May 27 02:47:20.180720 containerd[1931]: time="2025-05-27T02:47:20.180672731Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 27 02:47:21.629321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1217224984.mount: Deactivated successfully.
May 27 02:47:22.147720 containerd[1931]: time="2025-05-27T02:47:22.147638272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:22.149523 containerd[1931]: time="2025-05-27T02:47:22.149219571Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=27377375"
May 27 02:47:22.151164 containerd[1931]: time="2025-05-27T02:47:22.151120745Z" level=info msg="ImageCreate event name:\"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:22.157986 containerd[1931]: time="2025-05-27T02:47:22.157881155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:22.161506 containerd[1931]: time="2025-05-27T02:47:22.161439998Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"27376394\" in 1.980711727s"
May 27 02:47:22.161506 containerd[1931]: time="2025-05-27T02:47:22.161503726Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\""
May 27 02:47:22.162680 containerd[1931]: time="2025-05-27T02:47:22.162372900Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 27 02:47:22.704504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233474705.mount: Deactivated successfully.
May 27 02:47:23.952271 containerd[1931]: time="2025-05-27T02:47:23.952215879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:23.954619 containerd[1931]: time="2025-05-27T02:47:23.954574505Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
May 27 02:47:23.956745 containerd[1931]: time="2025-05-27T02:47:23.956663585Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:23.962458 containerd[1931]: time="2025-05-27T02:47:23.962357923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:47:23.964541 containerd[1931]: time="2025-05-27T02:47:23.964278967Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.801853313s" May 27 02:47:23.964541 containerd[1931]: time="2025-05-27T02:47:23.964333270Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 27 02:47:23.965257 containerd[1931]: time="2025-05-27T02:47:23.965217861Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 02:47:24.500231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2513018601.mount: Deactivated successfully. May 27 02:47:24.514011 containerd[1931]: time="2025-05-27T02:47:24.513304503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 02:47:24.515269 containerd[1931]: time="2025-05-27T02:47:24.515203348Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 27 02:47:24.517886 containerd[1931]: time="2025-05-27T02:47:24.517811459Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 02:47:24.522360 containerd[1931]: time="2025-05-27T02:47:24.522263260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 02:47:24.523762 containerd[1931]: time="2025-05-27T02:47:24.523573691Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 558.170674ms" May 27 02:47:24.523762 containerd[1931]: time="2025-05-27T02:47:24.523626637Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 27 02:47:24.524597 containerd[1931]: time="2025-05-27T02:47:24.524177304Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 27 02:47:25.176280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296202781.mount: Deactivated successfully. May 27 02:47:28.197874 containerd[1931]: time="2025-05-27T02:47:28.197793236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:47:28.200724 containerd[1931]: time="2025-05-27T02:47:28.200647891Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" May 27 02:47:28.203558 containerd[1931]: time="2025-05-27T02:47:28.203492844Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:47:28.211726 containerd[1931]: time="2025-05-27T02:47:28.211631736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:47:28.214091 containerd[1931]: time="2025-05-27T02:47:28.213710083Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"67941650\" in 3.68948874s" May 27 02:47:28.214091 containerd[1931]: time="2025-05-27T02:47:28.213760808Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 27 02:47:29.964133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 02:47:29.968278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:47:30.308199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:47:30.320611 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 02:47:30.388433 kubelet[2684]: E0527 02:47:30.388352 2684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 02:47:30.393979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 02:47:30.394284 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 02:47:30.396073 systemd[1]: kubelet.service: Consumed 284ms CPU time, 104.8M memory peak. May 27 02:47:36.899823 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 27 02:47:38.145118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:47:38.145765 systemd[1]: kubelet.service: Consumed 284ms CPU time, 104.8M memory peak. May 27 02:47:38.151083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:47:38.201699 systemd[1]: Reload requested from client PID 2701 ('systemctl') (unit session-7.scope)... May 27 02:47:38.201930 systemd[1]: Reloading... 
May 27 02:47:38.438983 zram_generator::config[2745]: No configuration found. May 27 02:47:38.640761 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 02:47:38.901178 systemd[1]: Reloading finished in 698 ms. May 27 02:47:38.992496 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 27 02:47:38.992677 systemd[1]: kubelet.service: Failed with result 'signal'. May 27 02:47:38.993395 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:47:38.993483 systemd[1]: kubelet.service: Consumed 218ms CPU time, 94.9M memory peak. May 27 02:47:38.996704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:47:39.359730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:47:39.372515 (kubelet)[2808]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 02:47:39.447511 kubelet[2808]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 02:47:39.447511 kubelet[2808]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 02:47:39.447511 kubelet[2808]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 02:47:39.449116 kubelet[2808]: I0527 02:47:39.448148 2808 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 02:47:40.720037 kubelet[2808]: I0527 02:47:40.719978 2808 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 02:47:40.720037 kubelet[2808]: I0527 02:47:40.720028 2808 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 02:47:40.720651 kubelet[2808]: I0527 02:47:40.720501 2808 server.go:954] "Client rotation is on, will bootstrap in background" May 27 02:47:40.774090 kubelet[2808]: E0527 02:47:40.774041 2808 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.29.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.92:6443: connect: connection refused" logger="UnhandledError" May 27 02:47:40.778167 kubelet[2808]: I0527 02:47:40.777792 2808 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 02:47:40.789990 kubelet[2808]: I0527 02:47:40.789917 2808 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 02:47:40.795497 kubelet[2808]: I0527 02:47:40.795461 2808 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 02:47:40.799003 kubelet[2808]: I0527 02:47:40.798683 2808 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 02:47:40.799112 kubelet[2808]: I0527 02:47:40.798743 2808 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 02:47:40.799112 kubelet[2808]: I0527 02:47:40.799095 2808 topology_manager.go:138] "Creating topology manager with none 
policy" May 27 02:47:40.799353 kubelet[2808]: I0527 02:47:40.799115 2808 container_manager_linux.go:304] "Creating device plugin manager" May 27 02:47:40.799353 kubelet[2808]: I0527 02:47:40.799321 2808 state_mem.go:36] "Initialized new in-memory state store" May 27 02:47:40.805058 kubelet[2808]: I0527 02:47:40.804867 2808 kubelet.go:446] "Attempting to sync node with API server" May 27 02:47:40.805058 kubelet[2808]: I0527 02:47:40.804915 2808 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 02:47:40.805058 kubelet[2808]: I0527 02:47:40.804977 2808 kubelet.go:352] "Adding apiserver pod source" May 27 02:47:40.805058 kubelet[2808]: I0527 02:47:40.805000 2808 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 02:47:40.812511 kubelet[2808]: W0527 02:47:40.812425 2808 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-92&limit=500&resourceVersion=0": dial tcp 172.31.29.92:6443: connect: connection refused May 27 02:47:40.813976 kubelet[2808]: E0527 02:47:40.812701 2808 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-92&limit=500&resourceVersion=0\": dial tcp 172.31.29.92:6443: connect: connection refused" logger="UnhandledError" May 27 02:47:40.813976 kubelet[2808]: I0527 02:47:40.812855 2808 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 02:47:40.813976 kubelet[2808]: I0527 02:47:40.813667 2808 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 02:47:40.813976 kubelet[2808]: W0527 02:47:40.813768 2808 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 02:47:40.817243 kubelet[2808]: I0527 02:47:40.817203 2808 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 02:47:40.817493 kubelet[2808]: I0527 02:47:40.817473 2808 server.go:1287] "Started kubelet" May 27 02:47:40.823324 kubelet[2808]: W0527 02:47:40.823241 2808 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.92:6443: connect: connection refused May 27 02:47:40.823473 kubelet[2808]: E0527 02:47:40.823339 2808 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.92:6443: connect: connection refused" logger="UnhandledError" May 27 02:47:40.832864 kubelet[2808]: I0527 02:47:40.832814 2808 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 02:47:40.835266 kubelet[2808]: I0527 02:47:40.835214 2808 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 02:47:40.835483 kubelet[2808]: I0527 02:47:40.835460 2808 server.go:479] "Adding debug handlers to kubelet server" May 27 02:47:40.838831 kubelet[2808]: I0527 02:47:40.838741 2808 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 02:47:40.839363 kubelet[2808]: I0527 02:47:40.839329 2808 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 02:47:40.840031 kubelet[2808]: E0527 02:47:40.839765 2808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.92:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.92:6443: connect: 
connection refused" event="&Event{ObjectMeta:{ip-172-31-29-92.1843426016d7d921 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-92,UID:ip-172-31-29-92,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-92,},FirstTimestamp:2025-05-27 02:47:40.817422625 +0000 UTC m=+1.438637798,LastTimestamp:2025-05-27 02:47:40.817422625 +0000 UTC m=+1.438637798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-92,}" May 27 02:47:40.847184 kubelet[2808]: I0527 02:47:40.847140 2808 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 02:47:40.847989 kubelet[2808]: I0527 02:47:40.847834 2808 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 02:47:40.848298 kubelet[2808]: E0527 02:47:40.848254 2808 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-92\" not found" May 27 02:47:40.851667 kubelet[2808]: E0527 02:47:40.851611 2808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-92?timeout=10s\": dial tcp 172.31.29.92:6443: connect: connection refused" interval="200ms" May 27 02:47:40.853995 kubelet[2808]: I0527 02:47:40.852987 2808 factory.go:221] Registration of the systemd container factory successfully May 27 02:47:40.853995 kubelet[2808]: I0527 02:47:40.853152 2808 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 02:47:40.854242 kubelet[2808]: I0527 02:47:40.854190 2808 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 02:47:40.854363 kubelet[2808]: I0527 02:47:40.854332 2808 reconciler.go:26] "Reconciler: start to sync state" May 27 02:47:40.855263 kubelet[2808]: E0527 02:47:40.855231 2808 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 02:47:40.856141 kubelet[2808]: I0527 02:47:40.856069 2808 factory.go:221] Registration of the containerd container factory successfully May 27 02:47:40.870062 kubelet[2808]: I0527 02:47:40.869768 2808 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 02:47:40.875520 kubelet[2808]: W0527 02:47:40.875451 2808 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.92:6443: connect: connection refused May 27 02:47:40.875988 kubelet[2808]: E0527 02:47:40.875718 2808 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.92:6443: connect: connection refused" logger="UnhandledError" May 27 02:47:40.878642 kubelet[2808]: I0527 02:47:40.878602 2808 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 02:47:40.878894 kubelet[2808]: I0527 02:47:40.878826 2808 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 02:47:40.879086 kubelet[2808]: I0527 02:47:40.879002 2808 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 02:47:40.879086 kubelet[2808]: I0527 02:47:40.879022 2808 kubelet.go:2382] "Starting kubelet main sync loop" May 27 02:47:40.879288 kubelet[2808]: E0527 02:47:40.879251 2808 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 02:47:40.885405 kubelet[2808]: W0527 02:47:40.885307 2808 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.92:6443: connect: connection refused May 27 02:47:40.885405 kubelet[2808]: E0527 02:47:40.885425 2808 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.92:6443: connect: connection refused" logger="UnhandledError" May 27 02:47:40.895590 kubelet[2808]: I0527 02:47:40.895559 2808 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 02:47:40.896018 kubelet[2808]: I0527 02:47:40.895902 2808 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 02:47:40.896018 kubelet[2808]: I0527 02:47:40.896001 2808 state_mem.go:36] "Initialized new in-memory state store" May 27 02:47:40.901546 kubelet[2808]: I0527 02:47:40.901500 2808 policy_none.go:49] "None policy: Start" May 27 02:47:40.901546 kubelet[2808]: I0527 02:47:40.901541 2808 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 02:47:40.901685 kubelet[2808]: I0527 02:47:40.901566 2808 state_mem.go:35] "Initializing new in-memory state store" May 27 02:47:40.915235 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 27 02:47:40.932444 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 02:47:40.940072 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 02:47:40.949286 kubelet[2808]: E0527 02:47:40.949249 2808 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-92\" not found" May 27 02:47:40.951729 kubelet[2808]: I0527 02:47:40.951646 2808 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 02:47:40.952020 kubelet[2808]: I0527 02:47:40.951979 2808 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 02:47:40.952087 kubelet[2808]: I0527 02:47:40.952009 2808 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 02:47:40.953770 kubelet[2808]: I0527 02:47:40.953703 2808 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 02:47:40.956886 kubelet[2808]: E0527 02:47:40.956829 2808 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 02:47:40.957131 kubelet[2808]: E0527 02:47:40.956914 2808 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-92\" not found" May 27 02:47:41.002356 systemd[1]: Created slice kubepods-burstable-pod145c1205d8fc4f94a08d835a42176765.slice - libcontainer container kubepods-burstable-pod145c1205d8fc4f94a08d835a42176765.slice. 
May 27 02:47:41.019793 kubelet[2808]: E0527 02:47:41.019732 2808 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-92\" not found" node="ip-172-31-29-92" May 27 02:47:41.025189 systemd[1]: Created slice kubepods-burstable-podabf47ae66d6642b7c019acbf189bb0c8.slice - libcontainer container kubepods-burstable-podabf47ae66d6642b7c019acbf189bb0c8.slice. May 27 02:47:41.040669 kubelet[2808]: E0527 02:47:41.040611 2808 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-92\" not found" node="ip-172-31-29-92" May 27 02:47:41.047327 systemd[1]: Created slice kubepods-burstable-podc30e8b87a77a85c588c63a7bc53f3e48.slice - libcontainer container kubepods-burstable-podc30e8b87a77a85c588c63a7bc53f3e48.slice. May 27 02:47:41.051536 kubelet[2808]: E0527 02:47:41.051479 2808 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-92\" not found" node="ip-172-31-29-92" May 27 02:47:41.053500 kubelet[2808]: E0527 02:47:41.053393 2808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-92?timeout=10s\": dial tcp 172.31.29.92:6443: connect: connection refused" interval="400ms" May 27 02:47:41.054978 kubelet[2808]: I0527 02:47:41.054628 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abf47ae66d6642b7c019acbf189bb0c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-92\" (UID: \"abf47ae66d6642b7c019acbf189bb0c8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:47:41.054978 kubelet[2808]: I0527 02:47:41.054686 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c30e8b87a77a85c588c63a7bc53f3e48-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-92\" (UID: \"c30e8b87a77a85c588c63a7bc53f3e48\") " pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:47:41.054978 kubelet[2808]: I0527 02:47:41.054727 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/145c1205d8fc4f94a08d835a42176765-ca-certs\") pod \"kube-apiserver-ip-172-31-29-92\" (UID: \"145c1205d8fc4f94a08d835a42176765\") " pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:47:41.054978 kubelet[2808]: I0527 02:47:41.054763 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/145c1205d8fc4f94a08d835a42176765-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-92\" (UID: \"145c1205d8fc4f94a08d835a42176765\") " pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:47:41.054978 kubelet[2808]: I0527 02:47:41.054809 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/145c1205d8fc4f94a08d835a42176765-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-92\" (UID: \"145c1205d8fc4f94a08d835a42176765\") " pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:47:41.055317 kubelet[2808]: I0527 02:47:41.054847 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abf47ae66d6642b7c019acbf189bb0c8-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-92\" (UID: \"abf47ae66d6642b7c019acbf189bb0c8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:47:41.055317 kubelet[2808]: I0527 02:47:41.054884 2808 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/abf47ae66d6642b7c019acbf189bb0c8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-92\" (UID: \"abf47ae66d6642b7c019acbf189bb0c8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:47:41.055317 kubelet[2808]: I0527 02:47:41.054919 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abf47ae66d6642b7c019acbf189bb0c8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-92\" (UID: \"abf47ae66d6642b7c019acbf189bb0c8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:47:41.055317 kubelet[2808]: I0527 02:47:41.055271 2808 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-92" May 27 02:47:41.055800 kubelet[2808]: I0527 02:47:41.055547 2808 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abf47ae66d6642b7c019acbf189bb0c8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-92\" (UID: \"abf47ae66d6642b7c019acbf189bb0c8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:47:41.055800 kubelet[2808]: E0527 02:47:41.055756 2808 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.92:6443/api/v1/nodes\": dial tcp 172.31.29.92:6443: connect: connection refused" node="ip-172-31-29-92" May 27 02:47:41.260604 kubelet[2808]: I0527 02:47:41.259862 2808 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-92" May 27 02:47:41.261983 kubelet[2808]: E0527 02:47:41.261891 2808 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.92:6443/api/v1/nodes\": dial tcp 172.31.29.92:6443: connect: connection refused" node="ip-172-31-29-92" May 27 
02:47:41.321556 containerd[1931]: time="2025-05-27T02:47:41.321492378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-92,Uid:145c1205d8fc4f94a08d835a42176765,Namespace:kube-system,Attempt:0,}" May 27 02:47:41.342586 containerd[1931]: time="2025-05-27T02:47:41.342237113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-92,Uid:abf47ae66d6642b7c019acbf189bb0c8,Namespace:kube-system,Attempt:0,}" May 27 02:47:41.352830 containerd[1931]: time="2025-05-27T02:47:41.352764934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-92,Uid:c30e8b87a77a85c588c63a7bc53f3e48,Namespace:kube-system,Attempt:0,}" May 27 02:47:41.378241 containerd[1931]: time="2025-05-27T02:47:41.378163760Z" level=info msg="connecting to shim 2ba49f3739388cd846f4fb7bc496a0582cb6f41cb0ed34d4c906b2191f60d8fd" address="unix:///run/containerd/s/2ee445fd6d170093acef5e986c4b21e78cf2d402a565209f9e4d90af4671f6df" namespace=k8s.io protocol=ttrpc version=3 May 27 02:47:41.454598 systemd[1]: Started cri-containerd-2ba49f3739388cd846f4fb7bc496a0582cb6f41cb0ed34d4c906b2191f60d8fd.scope - libcontainer container 2ba49f3739388cd846f4fb7bc496a0582cb6f41cb0ed34d4c906b2191f60d8fd. 
May 27 02:47:41.457394 kubelet[2808]: E0527 02:47:41.457246 2808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-92?timeout=10s\": dial tcp 172.31.29.92:6443: connect: connection refused" interval="800ms" May 27 02:47:41.459361 containerd[1931]: time="2025-05-27T02:47:41.459308808Z" level=info msg="connecting to shim b28752aeed8e348a8fa388017c72454e745aacd5389e455890a5a87455c82b83" address="unix:///run/containerd/s/7b911af82feb6e4afac637e29e4f6e4e6f3a212a993f925c998a82314dfacdc7" namespace=k8s.io protocol=ttrpc version=3 May 27 02:47:41.464651 containerd[1931]: time="2025-05-27T02:47:41.464584137Z" level=info msg="connecting to shim 1c8908360615640fb6597535ee7485d79d43dbfaca8053c2ba1d197d08aa1bc1" address="unix:///run/containerd/s/896ec9898fa4ed96b05a4514cac12513b2cb487b9cc8b88130603a587270a062" namespace=k8s.io protocol=ttrpc version=3 May 27 02:47:41.535774 systemd[1]: Started cri-containerd-b28752aeed8e348a8fa388017c72454e745aacd5389e455890a5a87455c82b83.scope - libcontainer container b28752aeed8e348a8fa388017c72454e745aacd5389e455890a5a87455c82b83. May 27 02:47:41.549579 systemd[1]: Started cri-containerd-1c8908360615640fb6597535ee7485d79d43dbfaca8053c2ba1d197d08aa1bc1.scope - libcontainer container 1c8908360615640fb6597535ee7485d79d43dbfaca8053c2ba1d197d08aa1bc1. 
May 27 02:47:41.610301 containerd[1931]: time="2025-05-27T02:47:41.610234759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-92,Uid:145c1205d8fc4f94a08d835a42176765,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ba49f3739388cd846f4fb7bc496a0582cb6f41cb0ed34d4c906b2191f60d8fd\"" May 27 02:47:41.620917 containerd[1931]: time="2025-05-27T02:47:41.620819452Z" level=info msg="CreateContainer within sandbox \"2ba49f3739388cd846f4fb7bc496a0582cb6f41cb0ed34d4c906b2191f60d8fd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 02:47:41.648394 containerd[1931]: time="2025-05-27T02:47:41.648281497Z" level=info msg="Container 35982b373ed31bbf07d2ead72325d957516bebeac1b35b7650e374479f2d91e7: CDI devices from CRI Config.CDIDevices: []" May 27 02:47:41.668270 kubelet[2808]: I0527 02:47:41.667622 2808 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-92" May 27 02:47:41.669146 kubelet[2808]: E0527 02:47:41.669097 2808 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.29.92:6443/api/v1/nodes\": dial tcp 172.31.29.92:6443: connect: connection refused" node="ip-172-31-29-92" May 27 02:47:41.678763 containerd[1931]: time="2025-05-27T02:47:41.678240237Z" level=info msg="CreateContainer within sandbox \"2ba49f3739388cd846f4fb7bc496a0582cb6f41cb0ed34d4c906b2191f60d8fd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"35982b373ed31bbf07d2ead72325d957516bebeac1b35b7650e374479f2d91e7\"" May 27 02:47:41.681495 containerd[1931]: time="2025-05-27T02:47:41.681318707Z" level=info msg="StartContainer for \"35982b373ed31bbf07d2ead72325d957516bebeac1b35b7650e374479f2d91e7\"" May 27 02:47:41.684862 containerd[1931]: time="2025-05-27T02:47:41.684765990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-92,Uid:c30e8b87a77a85c588c63a7bc53f3e48,Namespace:kube-system,Attempt:0,} returns sandbox 
id \"1c8908360615640fb6597535ee7485d79d43dbfaca8053c2ba1d197d08aa1bc1\"" May 27 02:47:41.690145 containerd[1931]: time="2025-05-27T02:47:41.689878949Z" level=info msg="connecting to shim 35982b373ed31bbf07d2ead72325d957516bebeac1b35b7650e374479f2d91e7" address="unix:///run/containerd/s/2ee445fd6d170093acef5e986c4b21e78cf2d402a565209f9e4d90af4671f6df" protocol=ttrpc version=3 May 27 02:47:41.698801 containerd[1931]: time="2025-05-27T02:47:41.697427326Z" level=info msg="CreateContainer within sandbox \"1c8908360615640fb6597535ee7485d79d43dbfaca8053c2ba1d197d08aa1bc1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 02:47:41.714976 containerd[1931]: time="2025-05-27T02:47:41.714735932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-92,Uid:abf47ae66d6642b7c019acbf189bb0c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b28752aeed8e348a8fa388017c72454e745aacd5389e455890a5a87455c82b83\"" May 27 02:47:41.724832 containerd[1931]: time="2025-05-27T02:47:41.724771315Z" level=info msg="CreateContainer within sandbox \"b28752aeed8e348a8fa388017c72454e745aacd5389e455890a5a87455c82b83\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 02:47:41.735876 containerd[1931]: time="2025-05-27T02:47:41.734678306Z" level=info msg="Container 8f17f4f474436da3739246965ab9f1e60fe3fb9f32118251e0c7fea3f0c577d2: CDI devices from CRI Config.CDIDevices: []" May 27 02:47:41.737266 systemd[1]: Started cri-containerd-35982b373ed31bbf07d2ead72325d957516bebeac1b35b7650e374479f2d91e7.scope - libcontainer container 35982b373ed31bbf07d2ead72325d957516bebeac1b35b7650e374479f2d91e7. 
May 27 02:47:41.759211 containerd[1931]: time="2025-05-27T02:47:41.759161122Z" level=info msg="Container 802bd8cfc2c8fb953e08b9ac7e5df076fba9647cc26fa0dac87789d5e57b9cbe: CDI devices from CRI Config.CDIDevices: []" May 27 02:47:41.771351 containerd[1931]: time="2025-05-27T02:47:41.771296426Z" level=info msg="CreateContainer within sandbox \"1c8908360615640fb6597535ee7485d79d43dbfaca8053c2ba1d197d08aa1bc1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8f17f4f474436da3739246965ab9f1e60fe3fb9f32118251e0c7fea3f0c577d2\"" May 27 02:47:41.774270 containerd[1931]: time="2025-05-27T02:47:41.774220956Z" level=info msg="StartContainer for \"8f17f4f474436da3739246965ab9f1e60fe3fb9f32118251e0c7fea3f0c577d2\"" May 27 02:47:41.778789 containerd[1931]: time="2025-05-27T02:47:41.778736449Z" level=info msg="connecting to shim 8f17f4f474436da3739246965ab9f1e60fe3fb9f32118251e0c7fea3f0c577d2" address="unix:///run/containerd/s/896ec9898fa4ed96b05a4514cac12513b2cb487b9cc8b88130603a587270a062" protocol=ttrpc version=3 May 27 02:47:41.782229 containerd[1931]: time="2025-05-27T02:47:41.782116606Z" level=info msg="CreateContainer within sandbox \"b28752aeed8e348a8fa388017c72454e745aacd5389e455890a5a87455c82b83\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"802bd8cfc2c8fb953e08b9ac7e5df076fba9647cc26fa0dac87789d5e57b9cbe\"" May 27 02:47:41.785092 containerd[1931]: time="2025-05-27T02:47:41.785017904Z" level=info msg="StartContainer for \"802bd8cfc2c8fb953e08b9ac7e5df076fba9647cc26fa0dac87789d5e57b9cbe\"" May 27 02:47:41.789554 containerd[1931]: time="2025-05-27T02:47:41.788564704Z" level=info msg="connecting to shim 802bd8cfc2c8fb953e08b9ac7e5df076fba9647cc26fa0dac87789d5e57b9cbe" address="unix:///run/containerd/s/7b911af82feb6e4afac637e29e4f6e4e6f3a212a993f925c998a82314dfacdc7" protocol=ttrpc version=3 May 27 02:47:41.834369 systemd[1]: Started 
cri-containerd-802bd8cfc2c8fb953e08b9ac7e5df076fba9647cc26fa0dac87789d5e57b9cbe.scope - libcontainer container 802bd8cfc2c8fb953e08b9ac7e5df076fba9647cc26fa0dac87789d5e57b9cbe. May 27 02:47:41.865995 systemd[1]: Started cri-containerd-8f17f4f474436da3739246965ab9f1e60fe3fb9f32118251e0c7fea3f0c577d2.scope - libcontainer container 8f17f4f474436da3739246965ab9f1e60fe3fb9f32118251e0c7fea3f0c577d2. May 27 02:47:41.890545 containerd[1931]: time="2025-05-27T02:47:41.890430404Z" level=info msg="StartContainer for \"35982b373ed31bbf07d2ead72325d957516bebeac1b35b7650e374479f2d91e7\" returns successfully" May 27 02:47:41.919638 kubelet[2808]: E0527 02:47:41.919381 2808 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-92\" not found" node="ip-172-31-29-92" May 27 02:47:41.992189 containerd[1931]: time="2025-05-27T02:47:41.992128381Z" level=info msg="StartContainer for \"802bd8cfc2c8fb953e08b9ac7e5df076fba9647cc26fa0dac87789d5e57b9cbe\" returns successfully" May 27 02:47:42.044845 containerd[1931]: time="2025-05-27T02:47:42.044332150Z" level=info msg="StartContainer for \"8f17f4f474436da3739246965ab9f1e60fe3fb9f32118251e0c7fea3f0c577d2\" returns successfully" May 27 02:47:42.472990 kubelet[2808]: I0527 02:47:42.472064 2808 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-92" May 27 02:47:42.925626 kubelet[2808]: E0527 02:47:42.925352 2808 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-92\" not found" node="ip-172-31-29-92" May 27 02:47:42.931941 kubelet[2808]: E0527 02:47:42.931883 2808 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-92\" not found" node="ip-172-31-29-92" May 27 02:47:42.933886 kubelet[2808]: E0527 02:47:42.933589 2808 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the 
cluster" err="node \"ip-172-31-29-92\" not found" node="ip-172-31-29-92" May 27 02:47:43.937970 kubelet[2808]: E0527 02:47:43.935792 2808 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-92\" not found" node="ip-172-31-29-92" May 27 02:47:43.939907 kubelet[2808]: E0527 02:47:43.939260 2808 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-92\" not found" node="ip-172-31-29-92" May 27 02:47:43.940772 kubelet[2808]: E0527 02:47:43.940746 2808 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-92\" not found" node="ip-172-31-29-92" May 27 02:47:44.939030 kubelet[2808]: E0527 02:47:44.938496 2808 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-92\" not found" node="ip-172-31-29-92" May 27 02:47:44.941926 kubelet[2808]: E0527 02:47:44.941842 2808 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-29-92\" not found" node="ip-172-31-29-92" May 27 02:47:46.778712 kubelet[2808]: I0527 02:47:46.778651 2808 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-92" May 27 02:47:46.778712 kubelet[2808]: E0527 02:47:46.778712 2808 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-29-92\": node \"ip-172-31-29-92\" not found" May 27 02:47:46.831659 kubelet[2808]: I0527 02:47:46.831563 2808 apiserver.go:52] "Watching apiserver" May 27 02:47:46.848838 kubelet[2808]: I0527 02:47:46.848776 2808 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:47:46.889088 kubelet[2808]: E0527 02:47:46.886518 2808 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not 
found" event="&Event{ObjectMeta:{ip-172-31-29-92.1843426016d7d921 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-92,UID:ip-172-31-29-92,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-92,},FirstTimestamp:2025-05-27 02:47:40.817422625 +0000 UTC m=+1.438637798,LastTimestamp:2025-05-27 02:47:40.817422625 +0000 UTC m=+1.438637798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-92,}" May 27 02:47:46.917843 kubelet[2808]: E0527 02:47:46.917766 2808 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-29-92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:47:46.917843 kubelet[2808]: I0527 02:47:46.917826 2808 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:47:46.925983 kubelet[2808]: E0527 02:47:46.924210 2808 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-29-92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:47:46.925983 kubelet[2808]: I0527 02:47:46.924282 2808 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:47:46.931188 kubelet[2808]: E0527 02:47:46.931100 2808 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-29-92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:47:46.955295 kubelet[2808]: I0527 02:47:46.955238 2808 desired_state_of_world_populator.go:158] "Finished populating 
initial desired state of world" May 27 02:47:49.371663 systemd[1]: Reload requested from client PID 3081 ('systemctl') (unit session-7.scope)... May 27 02:47:49.371698 systemd[1]: Reloading... May 27 02:47:49.659065 zram_generator::config[3128]: No configuration found. May 27 02:47:49.860117 update_engine[1878]: I20250527 02:47:49.860006 1878 update_attempter.cc:509] Updating boot flags... May 27 02:47:49.873768 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 02:47:50.281473 systemd[1]: Reloading finished in 909 ms. May 27 02:47:50.457983 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:47:50.519829 systemd[1]: kubelet.service: Deactivated successfully. May 27 02:47:50.521205 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:47:50.521298 systemd[1]: kubelet.service: Consumed 2.206s CPU time, 126.1M memory peak. May 27 02:47:50.527396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:47:51.063301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:47:51.077310 (kubelet)[3369]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 02:47:51.190396 kubelet[3369]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 02:47:51.190396 kubelet[3369]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 27 02:47:51.190396 kubelet[3369]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 02:47:51.192094 kubelet[3369]: I0527 02:47:51.190529 3369 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 02:47:51.203161 kubelet[3369]: I0527 02:47:51.203100 3369 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 02:47:51.203161 kubelet[3369]: I0527 02:47:51.203151 3369 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 02:47:51.204035 kubelet[3369]: I0527 02:47:51.203624 3369 server.go:954] "Client rotation is on, will bootstrap in background" May 27 02:47:51.214472 kubelet[3369]: I0527 02:47:51.214190 3369 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 27 02:47:51.224387 kubelet[3369]: I0527 02:47:51.224244 3369 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 02:47:51.231622 sudo[3383]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 27 02:47:51.232361 sudo[3383]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 27 02:47:51.237227 kubelet[3369]: I0527 02:47:51.237179 3369 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 02:47:51.247243 kubelet[3369]: I0527 02:47:51.247064 3369 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 02:47:51.248321 kubelet[3369]: I0527 02:47:51.247785 3369 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 02:47:51.248321 kubelet[3369]: I0527 02:47:51.247837 3369 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 02:47:51.248321 kubelet[3369]: I0527 02:47:51.248177 3369 topology_manager.go:138] "Creating topology manager with none 
policy" May 27 02:47:51.248321 kubelet[3369]: I0527 02:47:51.248197 3369 container_manager_linux.go:304] "Creating device plugin manager" May 27 02:47:51.248674 kubelet[3369]: I0527 02:47:51.248269 3369 state_mem.go:36] "Initialized new in-memory state store" May 27 02:47:51.249002 kubelet[3369]: I0527 02:47:51.248981 3369 kubelet.go:446] "Attempting to sync node with API server" May 27 02:47:51.249721 kubelet[3369]: I0527 02:47:51.249695 3369 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 02:47:51.249939 kubelet[3369]: I0527 02:47:51.249890 3369 kubelet.go:352] "Adding apiserver pod source" May 27 02:47:51.250145 kubelet[3369]: I0527 02:47:51.250036 3369 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 02:47:51.255421 kubelet[3369]: I0527 02:47:51.254544 3369 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 02:47:51.260882 kubelet[3369]: I0527 02:47:51.260397 3369 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 02:47:51.262753 kubelet[3369]: I0527 02:47:51.262366 3369 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 02:47:51.263613 kubelet[3369]: I0527 02:47:51.262991 3369 server.go:1287] "Started kubelet" May 27 02:47:51.273403 kubelet[3369]: I0527 02:47:51.273367 3369 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 02:47:51.278432 kubelet[3369]: I0527 02:47:51.278355 3369 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 02:47:51.290069 kubelet[3369]: I0527 02:47:51.289677 3369 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 02:47:51.291361 kubelet[3369]: I0527 02:47:51.291334 3369 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 02:47:51.300810 kubelet[3369]: I0527 02:47:51.300765 
3369 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 02:47:51.315892 kubelet[3369]: I0527 02:47:51.315764 3369 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 02:47:51.347779 kubelet[3369]: E0527 02:47:51.346072 3369 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-29-92\" not found" May 27 02:47:51.347779 kubelet[3369]: I0527 02:47:51.317685 3369 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 02:47:51.347779 kubelet[3369]: I0527 02:47:51.346738 3369 reconciler.go:26] "Reconciler: start to sync state" May 27 02:47:51.350774 kubelet[3369]: I0527 02:47:51.350732 3369 server.go:479] "Adding debug handlers to kubelet server" May 27 02:47:51.369516 kubelet[3369]: I0527 02:47:51.369279 3369 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 02:47:51.406825 kubelet[3369]: I0527 02:47:51.406218 3369 factory.go:221] Registration of the containerd container factory successfully May 27 02:47:51.419471 kubelet[3369]: I0527 02:47:51.419418 3369 factory.go:221] Registration of the systemd container factory successfully May 27 02:47:51.437260 kubelet[3369]: I0527 02:47:51.436856 3369 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 02:47:51.442507 kubelet[3369]: I0527 02:47:51.442464 3369 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 02:47:51.442693 kubelet[3369]: I0527 02:47:51.442675 3369 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 02:47:51.442835 kubelet[3369]: I0527 02:47:51.442813 3369 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 02:47:51.442969 kubelet[3369]: I0527 02:47:51.442924 3369 kubelet.go:2382] "Starting kubelet main sync loop" May 27 02:47:51.443148 kubelet[3369]: E0527 02:47:51.443118 3369 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 02:47:51.456806 kubelet[3369]: E0527 02:47:51.456453 3369 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 02:47:51.544178 kubelet[3369]: E0527 02:47:51.544098 3369 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 02:47:51.605400 kubelet[3369]: I0527 02:47:51.605255 3369 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 02:47:51.605400 kubelet[3369]: I0527 02:47:51.605316 3369 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 02:47:51.605400 kubelet[3369]: I0527 02:47:51.605356 3369 state_mem.go:36] "Initialized new in-memory state store" May 27 02:47:51.606007 kubelet[3369]: I0527 02:47:51.605961 3369 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 02:47:51.606136 kubelet[3369]: I0527 02:47:51.606003 3369 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 02:47:51.606136 kubelet[3369]: I0527 02:47:51.606044 3369 policy_none.go:49] "None policy: Start" May 27 02:47:51.606136 kubelet[3369]: I0527 02:47:51.606062 3369 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 02:47:51.606136 kubelet[3369]: I0527 02:47:51.606084 3369 state_mem.go:35] "Initializing new in-memory state store" May 27 02:47:51.606338 kubelet[3369]: I0527 02:47:51.606286 3369 state_mem.go:75] "Updated machine memory state" May 27 02:47:51.626425 kubelet[3369]: I0527 02:47:51.624380 3369 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not 
found" May 27 02:47:51.626425 kubelet[3369]: I0527 02:47:51.624851 3369 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 02:47:51.626425 kubelet[3369]: I0527 02:47:51.624875 3369 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 02:47:51.626425 kubelet[3369]: I0527 02:47:51.625432 3369 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 02:47:51.632905 kubelet[3369]: E0527 02:47:51.631859 3369 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 02:47:51.745927 kubelet[3369]: I0527 02:47:51.745868 3369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:47:51.749580 kubelet[3369]: I0527 02:47:51.749236 3369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:47:51.749832 kubelet[3369]: I0527 02:47:51.749600 3369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:47:51.750333 kubelet[3369]: I0527 02:47:51.750228 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/145c1205d8fc4f94a08d835a42176765-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-92\" (UID: \"145c1205d8fc4f94a08d835a42176765\") " pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:47:51.750333 kubelet[3369]: I0527 02:47:51.750289 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/abf47ae66d6642b7c019acbf189bb0c8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-92\" (UID: \"abf47ae66d6642b7c019acbf189bb0c8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 
02:47:51.750996 kubelet[3369]: I0527 02:47:51.750332 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abf47ae66d6642b7c019acbf189bb0c8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-92\" (UID: \"abf47ae66d6642b7c019acbf189bb0c8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:47:51.750996 kubelet[3369]: I0527 02:47:51.750375 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abf47ae66d6642b7c019acbf189bb0c8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-92\" (UID: \"abf47ae66d6642b7c019acbf189bb0c8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:47:51.750996 kubelet[3369]: I0527 02:47:51.750415 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abf47ae66d6642b7c019acbf189bb0c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-92\" (UID: \"abf47ae66d6642b7c019acbf189bb0c8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:47:51.750996 kubelet[3369]: I0527 02:47:51.750451 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c30e8b87a77a85c588c63a7bc53f3e48-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-92\" (UID: \"c30e8b87a77a85c588c63a7bc53f3e48\") " pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:47:51.750996 kubelet[3369]: I0527 02:47:51.750484 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/145c1205d8fc4f94a08d835a42176765-ca-certs\") pod \"kube-apiserver-ip-172-31-29-92\" (UID: \"145c1205d8fc4f94a08d835a42176765\") " 
pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:47:51.753212 kubelet[3369]: I0527 02:47:51.750525 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/145c1205d8fc4f94a08d835a42176765-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-92\" (UID: \"145c1205d8fc4f94a08d835a42176765\") " pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:47:51.753212 kubelet[3369]: I0527 02:47:51.750562 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abf47ae66d6642b7c019acbf189bb0c8-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-92\" (UID: \"abf47ae66d6642b7c019acbf189bb0c8\") " pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:47:51.779782 kubelet[3369]: I0527 02:47:51.779000 3369 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-29-92" May 27 02:47:51.816612 kubelet[3369]: I0527 02:47:51.816464 3369 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-29-92" May 27 02:47:51.817049 kubelet[3369]: I0527 02:47:51.817001 3369 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-29-92" May 27 02:47:52.180704 sudo[3383]: pam_unix(sudo:session): session closed for user root May 27 02:47:52.265263 kubelet[3369]: I0527 02:47:52.265181 3369 apiserver.go:52] "Watching apiserver" May 27 02:47:52.346870 kubelet[3369]: I0527 02:47:52.346819 3369 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 02:47:52.516859 kubelet[3369]: I0527 02:47:52.516813 3369 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:47:52.529086 kubelet[3369]: E0527 02:47:52.527830 3369 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ip-172-31-29-92\" already exists" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:47:52.561970 kubelet[3369]: I0527 02:47:52.561701 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-92" podStartSLOduration=1.561595547 podStartE2EDuration="1.561595547s" podCreationTimestamp="2025-05-27 02:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:47:52.558494241 +0000 UTC m=+1.471460906" watchObservedRunningTime="2025-05-27 02:47:52.561595547 +0000 UTC m=+1.474562176" May 27 02:47:52.615749 kubelet[3369]: I0527 02:47:52.615656 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-92" podStartSLOduration=1.615630027 podStartE2EDuration="1.615630027s" podCreationTimestamp="2025-05-27 02:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:47:52.589503926 +0000 UTC m=+1.502470579" watchObservedRunningTime="2025-05-27 02:47:52.615630027 +0000 UTC m=+1.528596644" May 27 02:47:52.830316 kubelet[3369]: I0527 02:47:52.830152 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-92" podStartSLOduration=1.830106185 podStartE2EDuration="1.830106185s" podCreationTimestamp="2025-05-27 02:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:47:52.616914093 +0000 UTC m=+1.529880746" watchObservedRunningTime="2025-05-27 02:47:52.830106185 +0000 UTC m=+1.743072814" May 27 02:47:53.241596 kubelet[3369]: I0527 02:47:53.241458 3369 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 02:47:53.243502 
containerd[1931]: time="2025-05-27T02:47:53.243370969Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 02:47:53.244712 kubelet[3369]: I0527 02:47:53.244670 3369 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 02:47:53.927981 kubelet[3369]: W0527 02:47:53.927860 3369 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-29-92" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-92' and this object May 27 02:47:53.928936 kubelet[3369]: E0527 02:47:53.927926 3369 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-29-92\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-92' and this object" logger="UnhandledError" May 27 02:47:53.928936 kubelet[3369]: I0527 02:47:53.928623 3369 status_manager.go:890] "Failed to get status for pod" podUID="c798bb2c-0d7c-439d-b08a-4cc2fa1f74f9" pod="kube-system/kube-proxy-8fnb6" err="pods \"kube-proxy-8fnb6\" is forbidden: User \"system:node:ip-172-31-29-92\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-92' and this object" May 27 02:47:53.928936 kubelet[3369]: W0527 02:47:53.928702 3369 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-29-92" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-92' and this object May 27 
02:47:53.928936 kubelet[3369]: E0527 02:47:53.928873 3369 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-29-92\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-92' and this object" logger="UnhandledError" May 27 02:47:53.934785 systemd[1]: Created slice kubepods-besteffort-podc798bb2c_0d7c_439d_b08a_4cc2fa1f74f9.slice - libcontainer container kubepods-besteffort-podc798bb2c_0d7c_439d_b08a_4cc2fa1f74f9.slice. May 27 02:47:53.969067 kubelet[3369]: I0527 02:47:53.968453 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c798bb2c-0d7c-439d-b08a-4cc2fa1f74f9-kube-proxy\") pod \"kube-proxy-8fnb6\" (UID: \"c798bb2c-0d7c-439d-b08a-4cc2fa1f74f9\") " pod="kube-system/kube-proxy-8fnb6" May 27 02:47:53.971079 kubelet[3369]: I0527 02:47:53.969446 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c798bb2c-0d7c-439d-b08a-4cc2fa1f74f9-lib-modules\") pod \"kube-proxy-8fnb6\" (UID: \"c798bb2c-0d7c-439d-b08a-4cc2fa1f74f9\") " pod="kube-system/kube-proxy-8fnb6" May 27 02:47:53.971373 kubelet[3369]: I0527 02:47:53.971297 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbj7c\" (UniqueName: \"kubernetes.io/projected/c798bb2c-0d7c-439d-b08a-4cc2fa1f74f9-kube-api-access-xbj7c\") pod \"kube-proxy-8fnb6\" (UID: \"c798bb2c-0d7c-439d-b08a-4cc2fa1f74f9\") " pod="kube-system/kube-proxy-8fnb6" May 27 02:47:53.971640 kubelet[3369]: I0527 02:47:53.971467 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/c798bb2c-0d7c-439d-b08a-4cc2fa1f74f9-xtables-lock\") pod \"kube-proxy-8fnb6\" (UID: \"c798bb2c-0d7c-439d-b08a-4cc2fa1f74f9\") " pod="kube-system/kube-proxy-8fnb6" May 27 02:47:53.996426 systemd[1]: Created slice kubepods-burstable-pod90477c99_4629_46f4_9970_205b5b5856b4.slice - libcontainer container kubepods-burstable-pod90477c99_4629_46f4_9970_205b5b5856b4.slice. May 27 02:47:54.072311 kubelet[3369]: I0527 02:47:54.071843 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-lib-modules\") pod \"cilium-2rrgq\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " pod="kube-system/cilium-2rrgq" May 27 02:47:54.072311 kubelet[3369]: I0527 02:47:54.071931 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90477c99-4629-46f4-9970-205b5b5856b4-clustermesh-secrets\") pod \"cilium-2rrgq\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " pod="kube-system/cilium-2rrgq" May 27 02:47:54.072311 kubelet[3369]: I0527 02:47:54.071993 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-host-proc-sys-kernel\") pod \"cilium-2rrgq\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " pod="kube-system/cilium-2rrgq" May 27 02:47:54.072311 kubelet[3369]: I0527 02:47:54.072063 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-hostproc\") pod \"cilium-2rrgq\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " pod="kube-system/cilium-2rrgq" May 27 02:47:54.072311 kubelet[3369]: I0527 02:47:54.072115 3369 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-xtables-lock\") pod \"cilium-2rrgq\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " pod="kube-system/cilium-2rrgq" May 27 02:47:54.072311 kubelet[3369]: I0527 02:47:54.072220 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-bpf-maps\") pod \"cilium-2rrgq\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " pod="kube-system/cilium-2rrgq" May 27 02:47:54.072692 kubelet[3369]: I0527 02:47:54.072647 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-cni-path\") pod \"cilium-2rrgq\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " pod="kube-system/cilium-2rrgq" May 27 02:47:54.072745 kubelet[3369]: I0527 02:47:54.072705 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90477c99-4629-46f4-9970-205b5b5856b4-cilium-config-path\") pod \"cilium-2rrgq\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " pod="kube-system/cilium-2rrgq" May 27 02:47:54.072805 kubelet[3369]: I0527 02:47:54.072742 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25xcs\" (UniqueName: \"kubernetes.io/projected/90477c99-4629-46f4-9970-205b5b5856b4-kube-api-access-25xcs\") pod \"cilium-2rrgq\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " pod="kube-system/cilium-2rrgq" May 27 02:47:54.072861 kubelet[3369]: I0527 02:47:54.072828 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-cilium-run\") pod \"cilium-2rrgq\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " pod="kube-system/cilium-2rrgq" May 27 02:47:54.072919 kubelet[3369]: I0527 02:47:54.072872 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90477c99-4629-46f4-9970-205b5b5856b4-hubble-tls\") pod \"cilium-2rrgq\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " pod="kube-system/cilium-2rrgq" May 27 02:47:54.073006 kubelet[3369]: I0527 02:47:54.072989 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-etc-cni-netd\") pod \"cilium-2rrgq\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " pod="kube-system/cilium-2rrgq" May 27 02:47:54.073738 kubelet[3369]: I0527 02:47:54.073115 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-host-proc-sys-net\") pod \"cilium-2rrgq\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " pod="kube-system/cilium-2rrgq" May 27 02:47:54.073738 kubelet[3369]: I0527 02:47:54.073172 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-cilium-cgroup\") pod \"cilium-2rrgq\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " pod="kube-system/cilium-2rrgq" May 27 02:47:54.340981 kubelet[3369]: I0527 02:47:54.337360 3369 status_manager.go:890] "Failed to get status for pod" podUID="facea54c-c136-45df-8b53-16fbd783d7e5" pod="kube-system/cilium-operator-6c4d7847fc-2mvps" err="pods \"cilium-operator-6c4d7847fc-2mvps\" is forbidden: User \"system:node:ip-172-31-29-92\" cannot get 
resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-92' and this object" May 27 02:47:54.338886 systemd[1]: Created slice kubepods-besteffort-podfacea54c_c136_45df_8b53_16fbd783d7e5.slice - libcontainer container kubepods-besteffort-podfacea54c_c136_45df_8b53_16fbd783d7e5.slice. May 27 02:47:54.376734 kubelet[3369]: I0527 02:47:54.376672 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/facea54c-c136-45df-8b53-16fbd783d7e5-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2mvps\" (UID: \"facea54c-c136-45df-8b53-16fbd783d7e5\") " pod="kube-system/cilium-operator-6c4d7847fc-2mvps" May 27 02:47:54.376919 kubelet[3369]: I0527 02:47:54.376752 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5pw2\" (UniqueName: \"kubernetes.io/projected/facea54c-c136-45df-8b53-16fbd783d7e5-kube-api-access-r5pw2\") pod \"cilium-operator-6c4d7847fc-2mvps\" (UID: \"facea54c-c136-45df-8b53-16fbd783d7e5\") " pod="kube-system/cilium-operator-6c4d7847fc-2mvps" May 27 02:47:55.073466 kubelet[3369]: E0527 02:47:55.073341 3369 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition May 27 02:47:55.074153 kubelet[3369]: E0527 02:47:55.073480 3369 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c798bb2c-0d7c-439d-b08a-4cc2fa1f74f9-kube-proxy podName:c798bb2c-0d7c-439d-b08a-4cc2fa1f74f9 nodeName:}" failed. No retries permitted until 2025-05-27 02:47:55.573435032 +0000 UTC m=+4.486401649 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c798bb2c-0d7c-439d-b08a-4cc2fa1f74f9-kube-proxy") pod "kube-proxy-8fnb6" (UID: "c798bb2c-0d7c-439d-b08a-4cc2fa1f74f9") : failed to sync configmap cache: timed out waiting for the condition May 27 02:47:55.210212 containerd[1931]: time="2025-05-27T02:47:55.209423021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2rrgq,Uid:90477c99-4629-46f4-9970-205b5b5856b4,Namespace:kube-system,Attempt:0,}" May 27 02:47:55.250005 containerd[1931]: time="2025-05-27T02:47:55.249839360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2mvps,Uid:facea54c-c136-45df-8b53-16fbd783d7e5,Namespace:kube-system,Attempt:0,}" May 27 02:47:55.258070 containerd[1931]: time="2025-05-27T02:47:55.257336194Z" level=info msg="connecting to shim fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f" address="unix:///run/containerd/s/8c963d748b542655ec13284024d449c91b5842f0d305fe4b4e4364d17c6e6df4" namespace=k8s.io protocol=ttrpc version=3 May 27 02:47:55.316096 containerd[1931]: time="2025-05-27T02:47:55.316025317Z" level=info msg="connecting to shim 1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc" address="unix:///run/containerd/s/04a28cb3ad5e5018fa6cf167b1a80e0bcfbb6d48fe2f6077f3a665de5c3b85ee" namespace=k8s.io protocol=ttrpc version=3 May 27 02:47:55.325276 systemd[1]: Started cri-containerd-fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f.scope - libcontainer container fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f. May 27 02:47:55.389364 systemd[1]: Started cri-containerd-1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc.scope - libcontainer container 1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc. 
May 27 02:47:55.393200 containerd[1931]: time="2025-05-27T02:47:55.393111281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2rrgq,Uid:90477c99-4629-46f4-9970-205b5b5856b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\"" May 27 02:47:55.398854 containerd[1931]: time="2025-05-27T02:47:55.398573639Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 27 02:47:55.483615 containerd[1931]: time="2025-05-27T02:47:55.483506124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2mvps,Uid:facea54c-c136-45df-8b53-16fbd783d7e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\"" May 27 02:47:55.756583 containerd[1931]: time="2025-05-27T02:47:55.756514210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8fnb6,Uid:c798bb2c-0d7c-439d-b08a-4cc2fa1f74f9,Namespace:kube-system,Attempt:0,}" May 27 02:47:55.793304 containerd[1931]: time="2025-05-27T02:47:55.793220166Z" level=info msg="connecting to shim d6cefbb7c146834ac9ab639c280d128d982be1ef7b758c80a63ee9efa6f5201c" address="unix:///run/containerd/s/ae2cb75ccce457b3745cc3530d542c174e340df672ec06d310e7304364c6ccc6" namespace=k8s.io protocol=ttrpc version=3 May 27 02:47:55.836237 systemd[1]: Started cri-containerd-d6cefbb7c146834ac9ab639c280d128d982be1ef7b758c80a63ee9efa6f5201c.scope - libcontainer container d6cefbb7c146834ac9ab639c280d128d982be1ef7b758c80a63ee9efa6f5201c. 
May 27 02:47:55.884907 containerd[1931]: time="2025-05-27T02:47:55.884846513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8fnb6,Uid:c798bb2c-0d7c-439d-b08a-4cc2fa1f74f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6cefbb7c146834ac9ab639c280d128d982be1ef7b758c80a63ee9efa6f5201c\"" May 27 02:47:55.891449 containerd[1931]: time="2025-05-27T02:47:55.891387154Z" level=info msg="CreateContainer within sandbox \"d6cefbb7c146834ac9ab639c280d128d982be1ef7b758c80a63ee9efa6f5201c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 02:47:55.914034 containerd[1931]: time="2025-05-27T02:47:55.913918574Z" level=info msg="Container 2b3e59f79e1274a6196bc1b966046d0b835e2eb12bf96e06d76b453fc33588c3: CDI devices from CRI Config.CDIDevices: []" May 27 02:47:55.930082 containerd[1931]: time="2025-05-27T02:47:55.929815851Z" level=info msg="CreateContainer within sandbox \"d6cefbb7c146834ac9ab639c280d128d982be1ef7b758c80a63ee9efa6f5201c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2b3e59f79e1274a6196bc1b966046d0b835e2eb12bf96e06d76b453fc33588c3\"" May 27 02:47:55.930851 containerd[1931]: time="2025-05-27T02:47:55.930799502Z" level=info msg="StartContainer for \"2b3e59f79e1274a6196bc1b966046d0b835e2eb12bf96e06d76b453fc33588c3\"" May 27 02:47:55.934261 containerd[1931]: time="2025-05-27T02:47:55.934188220Z" level=info msg="connecting to shim 2b3e59f79e1274a6196bc1b966046d0b835e2eb12bf96e06d76b453fc33588c3" address="unix:///run/containerd/s/ae2cb75ccce457b3745cc3530d542c174e340df672ec06d310e7304364c6ccc6" protocol=ttrpc version=3 May 27 02:47:55.967253 systemd[1]: Started cri-containerd-2b3e59f79e1274a6196bc1b966046d0b835e2eb12bf96e06d76b453fc33588c3.scope - libcontainer container 2b3e59f79e1274a6196bc1b966046d0b835e2eb12bf96e06d76b453fc33588c3. 
May 27 02:47:56.049063 containerd[1931]: time="2025-05-27T02:47:56.048898719Z" level=info msg="StartContainer for \"2b3e59f79e1274a6196bc1b966046d0b835e2eb12bf96e06d76b453fc33588c3\" returns successfully" May 27 02:47:56.586680 kubelet[3369]: I0527 02:47:56.586446 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8fnb6" podStartSLOduration=3.586421714 podStartE2EDuration="3.586421714s" podCreationTimestamp="2025-05-27 02:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:47:56.565192081 +0000 UTC m=+5.478158746" watchObservedRunningTime="2025-05-27 02:47:56.586421714 +0000 UTC m=+5.499388343" May 27 02:48:07.150856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55885263.mount: Deactivated successfully. May 27 02:48:09.759678 containerd[1931]: time="2025-05-27T02:48:09.759575875Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:48:09.761653 containerd[1931]: time="2025-05-27T02:48:09.761598106Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 27 02:48:09.764910 containerd[1931]: time="2025-05-27T02:48:09.764856715Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:48:09.767822 containerd[1931]: time="2025-05-27T02:48:09.767774149Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 14.369133889s" May 27 02:48:09.768119 containerd[1931]: time="2025-05-27T02:48:09.767983749Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 27 02:48:09.770367 containerd[1931]: time="2025-05-27T02:48:09.770308699Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 27 02:48:09.772003 containerd[1931]: time="2025-05-27T02:48:09.771748675Z" level=info msg="CreateContainer within sandbox \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 02:48:09.791161 containerd[1931]: time="2025-05-27T02:48:09.791088120Z" level=info msg="Container fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5: CDI devices from CRI Config.CDIDevices: []" May 27 02:48:09.807579 containerd[1931]: time="2025-05-27T02:48:09.807441072Z" level=info msg="CreateContainer within sandbox \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5\"" May 27 02:48:09.808783 containerd[1931]: time="2025-05-27T02:48:09.808727311Z" level=info msg="StartContainer for \"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5\"" May 27 02:48:09.810601 containerd[1931]: time="2025-05-27T02:48:09.810538728Z" level=info msg="connecting to shim fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5" address="unix:///run/containerd/s/8c963d748b542655ec13284024d449c91b5842f0d305fe4b4e4364d17c6e6df4" protocol=ttrpc version=3 May 27 02:48:09.855245 
systemd[1]: Started cri-containerd-fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5.scope - libcontainer container fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5. May 27 02:48:09.910241 containerd[1931]: time="2025-05-27T02:48:09.910182190Z" level=info msg="StartContainer for \"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5\" returns successfully" May 27 02:48:09.935163 systemd[1]: cri-containerd-fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5.scope: Deactivated successfully. May 27 02:48:09.939372 containerd[1931]: time="2025-05-27T02:48:09.939074809Z" level=info msg="received exit event container_id:\"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5\" id:\"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5\" pid:3767 exited_at:{seconds:1748314089 nanos:937473077}" May 27 02:48:09.940253 containerd[1931]: time="2025-05-27T02:48:09.940209532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5\" id:\"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5\" pid:3767 exited_at:{seconds:1748314089 nanos:937473077}" May 27 02:48:09.974457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5-rootfs.mount: Deactivated successfully. May 27 02:48:11.621990 containerd[1931]: time="2025-05-27T02:48:11.621749398Z" level=info msg="CreateContainer within sandbox \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 02:48:11.654283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3021237518.mount: Deactivated successfully. 
May 27 02:48:11.656794 containerd[1931]: time="2025-05-27T02:48:11.656436749Z" level=info msg="Container e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789: CDI devices from CRI Config.CDIDevices: []" May 27 02:48:11.672272 containerd[1931]: time="2025-05-27T02:48:11.672211673Z" level=info msg="CreateContainer within sandbox \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789\"" May 27 02:48:11.673903 containerd[1931]: time="2025-05-27T02:48:11.673720539Z" level=info msg="StartContainer for \"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789\"" May 27 02:48:11.675978 containerd[1931]: time="2025-05-27T02:48:11.675815826Z" level=info msg="connecting to shim e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789" address="unix:///run/containerd/s/8c963d748b542655ec13284024d449c91b5842f0d305fe4b4e4364d17c6e6df4" protocol=ttrpc version=3 May 27 02:48:11.726263 systemd[1]: Started cri-containerd-e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789.scope - libcontainer container e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789. May 27 02:48:11.813287 containerd[1931]: time="2025-05-27T02:48:11.813230872Z" level=info msg="StartContainer for \"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789\" returns successfully" May 27 02:48:11.821267 kubelet[3369]: I0527 02:48:11.821216 3369 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 02:48:11.823241 kubelet[3369]: I0527 02:48:11.821396 3369 container_gc.go:86] "Attempting to delete unused containers" May 27 02:48:11.829767 kubelet[3369]: I0527 02:48:11.829729 3369 image_gc_manager.go:431] "Attempting to delete unused images" May 27 02:48:11.845435 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
May 27 02:48:11.846276 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 02:48:11.847531 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 27 02:48:11.853566 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 02:48:11.860685 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 02:48:11.866397 containerd[1931]: time="2025-05-27T02:48:11.861345290Z" level=info msg="received exit event container_id:\"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789\" id:\"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789\" pid:3813 exited_at:{seconds:1748314091 nanos:860885124}" May 27 02:48:11.866397 containerd[1931]: time="2025-05-27T02:48:11.862714082Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789\" id:\"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789\" pid:3813 exited_at:{seconds:1748314091 nanos:860885124}" May 27 02:48:11.863040 systemd[1]: cri-containerd-e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789.scope: Deactivated successfully. 
May 27 02:48:11.885986 kubelet[3369]: I0527 02:48:11.885090 3369 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 02:48:11.887004 kubelet[3369]: I0527 02:48:11.886667 3369 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-2mvps","kube-system/cilium-2rrgq","kube-system/kube-controller-manager-ip-172-31-29-92","kube-system/kube-proxy-8fnb6","kube-system/kube-apiserver-ip-172-31-29-92","kube-system/kube-scheduler-ip-172-31-29-92"] May 27 02:48:11.887004 kubelet[3369]: E0527 02:48:11.886750 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-2mvps" May 27 02:48:11.887004 kubelet[3369]: E0527 02:48:11.886774 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-2rrgq" May 27 02:48:11.887004 kubelet[3369]: E0527 02:48:11.886798 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:48:11.887004 kubelet[3369]: E0527 02:48:11.886823 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8fnb6" May 27 02:48:11.887004 kubelet[3369]: E0527 02:48:11.886846 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:48:11.887004 kubelet[3369]: E0527 02:48:11.886870 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:48:11.887004 kubelet[3369]: I0527 02:48:11.886891 3369 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 02:48:11.929849 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 27 02:48:12.637066 containerd[1931]: time="2025-05-27T02:48:12.637005987Z" level=info msg="CreateContainer within sandbox \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 02:48:12.649696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789-rootfs.mount: Deactivated successfully. May 27 02:48:12.710542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount356248955.mount: Deactivated successfully. May 27 02:48:12.713988 containerd[1931]: time="2025-05-27T02:48:12.713311826Z" level=info msg="Container 460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac: CDI devices from CRI Config.CDIDevices: []" May 27 02:48:12.737992 containerd[1931]: time="2025-05-27T02:48:12.737893475Z" level=info msg="CreateContainer within sandbox \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac\"" May 27 02:48:12.745823 containerd[1931]: time="2025-05-27T02:48:12.745772028Z" level=info msg="StartContainer for \"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac\"" May 27 02:48:12.753115 containerd[1931]: time="2025-05-27T02:48:12.753052142Z" level=info msg="connecting to shim 460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac" address="unix:///run/containerd/s/8c963d748b542655ec13284024d449c91b5842f0d305fe4b4e4364d17c6e6df4" protocol=ttrpc version=3 May 27 02:48:12.803525 systemd[1]: Started cri-containerd-460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac.scope - libcontainer container 460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac. May 27 02:48:12.923353 systemd[1]: cri-containerd-460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac.scope: Deactivated successfully. 
May 27 02:48:12.930867 containerd[1931]: time="2025-05-27T02:48:12.930388171Z" level=info msg="received exit event container_id:\"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac\" id:\"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac\" pid:3871 exited_at:{seconds:1748314092 nanos:928560594}" May 27 02:48:12.931387 containerd[1931]: time="2025-05-27T02:48:12.931336429Z" level=info msg="StartContainer for \"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac\" returns successfully" May 27 02:48:12.934394 containerd[1931]: time="2025-05-27T02:48:12.934297493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac\" id:\"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac\" pid:3871 exited_at:{seconds:1748314092 nanos:928560594}" May 27 02:48:13.396657 containerd[1931]: time="2025-05-27T02:48:13.396580458Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:48:13.398634 containerd[1931]: time="2025-05-27T02:48:13.398566010Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 27 02:48:13.401021 containerd[1931]: time="2025-05-27T02:48:13.400922404Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:48:13.403872 containerd[1931]: time="2025-05-27T02:48:13.403638053Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag 
\"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.633268063s" May 27 02:48:13.403872 containerd[1931]: time="2025-05-27T02:48:13.403747116Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 27 02:48:13.410355 containerd[1931]: time="2025-05-27T02:48:13.409517319Z" level=info msg="CreateContainer within sandbox \"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 27 02:48:13.426562 containerd[1931]: time="2025-05-27T02:48:13.426492111Z" level=info msg="Container 32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f: CDI devices from CRI Config.CDIDevices: []" May 27 02:48:13.438585 containerd[1931]: time="2025-05-27T02:48:13.438445489Z" level=info msg="CreateContainer within sandbox \"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f\"" May 27 02:48:13.439462 containerd[1931]: time="2025-05-27T02:48:13.439385426Z" level=info msg="StartContainer for \"32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f\"" May 27 02:48:13.442517 containerd[1931]: time="2025-05-27T02:48:13.442440857Z" level=info msg="connecting to shim 32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f" address="unix:///run/containerd/s/04a28cb3ad5e5018fa6cf167b1a80e0bcfbb6d48fe2f6077f3a665de5c3b85ee" protocol=ttrpc version=3 May 27 02:48:13.479236 systemd[1]: Started cri-containerd-32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f.scope - libcontainer container 
32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f. May 27 02:48:13.553406 containerd[1931]: time="2025-05-27T02:48:13.553349188Z" level=info msg="StartContainer for \"32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f\" returns successfully" May 27 02:48:13.656718 containerd[1931]: time="2025-05-27T02:48:13.656529918Z" level=info msg="CreateContainer within sandbox \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 02:48:13.663723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac-rootfs.mount: Deactivated successfully. May 27 02:48:13.692503 containerd[1931]: time="2025-05-27T02:48:13.691433905Z" level=info msg="Container d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1: CDI devices from CRI Config.CDIDevices: []" May 27 02:48:13.713973 containerd[1931]: time="2025-05-27T02:48:13.713593560Z" level=info msg="CreateContainer within sandbox \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1\"" May 27 02:48:13.720630 containerd[1931]: time="2025-05-27T02:48:13.718056802Z" level=info msg="StartContainer for \"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1\"" May 27 02:48:13.724749 containerd[1931]: time="2025-05-27T02:48:13.724537077Z" level=info msg="connecting to shim d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1" address="unix:///run/containerd/s/8c963d748b542655ec13284024d449c91b5842f0d305fe4b4e4364d17c6e6df4" protocol=ttrpc version=3 May 27 02:48:13.782257 systemd[1]: Started cri-containerd-d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1.scope - libcontainer container d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1. 
May 27 02:48:13.868633 containerd[1931]: time="2025-05-27T02:48:13.868128922Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1\" id:\"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1\" pid:3950 exited_at:{seconds:1748314093 nanos:867525308}" May 27 02:48:13.868422 systemd[1]: cri-containerd-d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1.scope: Deactivated successfully. May 27 02:48:13.871048 containerd[1931]: time="2025-05-27T02:48:13.870513374Z" level=info msg="received exit event container_id:\"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1\" id:\"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1\" pid:3950 exited_at:{seconds:1748314093 nanos:867525308}" May 27 02:48:13.903760 containerd[1931]: time="2025-05-27T02:48:13.903440801Z" level=info msg="StartContainer for \"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1\" returns successfully" May 27 02:48:13.958458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1-rootfs.mount: Deactivated successfully. 
May 27 02:48:14.694446 containerd[1931]: time="2025-05-27T02:48:14.694373095Z" level=info msg="CreateContainer within sandbox \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 02:48:14.719317 containerd[1931]: time="2025-05-27T02:48:14.719247378Z" level=info msg="Container d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2: CDI devices from CRI Config.CDIDevices: []" May 27 02:48:14.750308 containerd[1931]: time="2025-05-27T02:48:14.750236894Z" level=info msg="CreateContainer within sandbox \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\"" May 27 02:48:14.756980 containerd[1931]: time="2025-05-27T02:48:14.756189180Z" level=info msg="StartContainer for \"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\"" May 27 02:48:14.760291 containerd[1931]: time="2025-05-27T02:48:14.760226306Z" level=info msg="connecting to shim d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2" address="unix:///run/containerd/s/8c963d748b542655ec13284024d449c91b5842f0d305fe4b4e4364d17c6e6df4" protocol=ttrpc version=3 May 27 02:48:14.824217 systemd[1]: Started cri-containerd-d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2.scope - libcontainer container d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2. 
May 27 02:48:14.860197 kubelet[3369]: I0527 02:48:14.860074 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2mvps" podStartSLOduration=2.941523358 podStartE2EDuration="20.860049941s" podCreationTimestamp="2025-05-27 02:47:54 +0000 UTC" firstStartedPulling="2025-05-27 02:47:55.486559502 +0000 UTC m=+4.399526131" lastFinishedPulling="2025-05-27 02:48:13.405086085 +0000 UTC m=+22.318052714" observedRunningTime="2025-05-27 02:48:13.757453003 +0000 UTC m=+22.670419656" watchObservedRunningTime="2025-05-27 02:48:14.860049941 +0000 UTC m=+23.773016751" May 27 02:48:14.961798 containerd[1931]: time="2025-05-27T02:48:14.960940070Z" level=info msg="StartContainer for \"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" returns successfully" May 27 02:48:15.347753 containerd[1931]: time="2025-05-27T02:48:15.347604108Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" id:\"f149636100c4e674b4d0288ee589de850b4a6f5a8d248bfed90f87587f98dd36\" pid:4015 exited_at:{seconds:1748314095 nanos:345034164}" May 27 02:48:15.436137 kubelet[3369]: I0527 02:48:15.436075 3369 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 02:48:15.732787 kubelet[3369]: I0527 02:48:15.732704 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2rrgq" podStartSLOduration=8.360845569 podStartE2EDuration="22.732657973s" podCreationTimestamp="2025-05-27 02:47:53 +0000 UTC" firstStartedPulling="2025-05-27 02:47:55.39748054 +0000 UTC m=+4.310447169" lastFinishedPulling="2025-05-27 02:48:09.769292956 +0000 UTC m=+18.682259573" observedRunningTime="2025-05-27 02:48:15.729516219 +0000 UTC m=+24.642482860" watchObservedRunningTime="2025-05-27 02:48:15.732657973 +0000 UTC m=+24.645624602" May 27 02:48:17.966356 containerd[1931]: time="2025-05-27T02:48:17.966303105Z" 
level=info msg="TaskExit event in podsandbox handler container_id:\"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" id:\"947d5d07bb175a4213782e2d4f5726c710826f6c0773abea058670519ed26581\" pid:4098 exit_status:1 exited_at:{seconds:1748314097 nanos:965842507}" May 27 02:48:18.195918 systemd-networkd[1818]: cilium_host: Link UP May 27 02:48:18.198457 (udev-worker)[4052]: Network interface NamePolicy= disabled on kernel command line. May 27 02:48:18.198458 (udev-worker)[4051]: Network interface NamePolicy= disabled on kernel command line. May 27 02:48:18.200434 systemd-networkd[1818]: cilium_net: Link UP May 27 02:48:18.200817 systemd-networkd[1818]: cilium_net: Gained carrier May 27 02:48:18.205003 systemd-networkd[1818]: cilium_host: Gained carrier May 27 02:48:18.369298 (udev-worker)[4118]: Network interface NamePolicy= disabled on kernel command line. May 27 02:48:18.378577 systemd-networkd[1818]: cilium_vxlan: Link UP May 27 02:48:18.378595 systemd-networkd[1818]: cilium_vxlan: Gained carrier May 27 02:48:18.380734 systemd-networkd[1818]: cilium_net: Gained IPv6LL May 27 02:48:18.460316 systemd-networkd[1818]: cilium_host: Gained IPv6LL May 27 02:48:18.864098 kernel: NET: Registered PF_ALG protocol family May 27 02:48:19.500355 systemd-networkd[1818]: cilium_vxlan: Gained IPv6LL May 27 02:48:20.206712 systemd-networkd[1818]: lxc_health: Link UP May 27 02:48:20.207655 (udev-worker)[4117]: Network interface NamePolicy= disabled on kernel command line. 
May 27 02:48:20.208975 systemd-networkd[1818]: lxc_health: Gained carrier May 27 02:48:20.302477 containerd[1931]: time="2025-05-27T02:48:20.302391126Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" id:\"689e1d29bbeb5cddcf508aa5ea6a8dd6c01b49d7691234206eaf077eef340ba1\" pid:4435 exit_status:1 exited_at:{seconds:1748314100 nanos:301420093}" May 27 02:48:21.548139 systemd-networkd[1818]: lxc_health: Gained IPv6LL May 27 02:48:21.947289 kubelet[3369]: I0527 02:48:21.947130 3369 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 02:48:21.948874 kubelet[3369]: I0527 02:48:21.947861 3369 container_gc.go:86] "Attempting to delete unused containers" May 27 02:48:21.954591 kubelet[3369]: I0527 02:48:21.954338 3369 image_gc_manager.go:431] "Attempting to delete unused images" May 27 02:48:22.007423 kubelet[3369]: I0527 02:48:22.007358 3369 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 02:48:22.007611 kubelet[3369]: I0527 02:48:22.007572 3369 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-2mvps","kube-system/cilium-2rrgq","kube-system/kube-controller-manager-ip-172-31-29-92","kube-system/kube-proxy-8fnb6","kube-system/kube-apiserver-ip-172-31-29-92","kube-system/kube-scheduler-ip-172-31-29-92"] May 27 02:48:22.007700 kubelet[3369]: E0527 02:48:22.007648 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-2mvps" May 27 02:48:22.007700 kubelet[3369]: E0527 02:48:22.007684 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-2rrgq" May 27 02:48:22.007866 kubelet[3369]: E0527 02:48:22.007709 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:48:22.007866 kubelet[3369]: E0527 02:48:22.007731 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8fnb6" May 27 02:48:22.007866 kubelet[3369]: E0527 02:48:22.007752 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:48:22.007866 kubelet[3369]: E0527 02:48:22.007774 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:48:22.007866 kubelet[3369]: I0527 02:48:22.007795 3369 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 02:48:22.619496 containerd[1931]: time="2025-05-27T02:48:22.619437979Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" id:\"0056b5db363c1cf6dbc90c2cea9205365e17108cbbb92ebcfd4918036ad0117e\" pid:4497 exited_at:{seconds:1748314102 nanos:616596086}" May 27 02:48:23.772700 ntpd[1872]: Listen normally on 7 cilium_host 192.168.0.18:123 May 27 02:48:23.772867 ntpd[1872]: Listen normally on 8 cilium_net [fe80::f08c:9eff:fe3d:b813%4]:123 May 27 02:48:23.773719 ntpd[1872]: Listen normally on 9 cilium_host [fe80::1c09:c6ff:fe6c:1532%5]:123 May 27 02:48:23.773853 ntpd[1872]: Listen normally on 10 cilium_vxlan [fe80::4889:a9ff:fee2:3b9a%6]:123 May 27 02:48:23.773920 ntpd[1872]: Listen normally on 11 lxc_health [fe80::4c0d:1cff:fecd:c7f%8]:123 May 27 02:48:24.855211 containerd[1931]: time="2025-05-27T02:48:24.855082854Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" id:\"81bc829f591627f1b77f1fbb5485156459db6627f2f4986b4eef96489641ba87\" pid:4525 exited_at:{seconds:1748314104 nanos:853468347}" May 27 02:48:27.042306 containerd[1931]: time="2025-05-27T02:48:27.042228712Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" id:\"2b993ea9aa1a017096cc350f5c27057d0cd7021fddf389774af6161197ff47f7\" pid:4553 exited_at:{seconds:1748314107 nanos:41303205}" May 27 02:48:27.868180 sudo[2238]: pam_unix(sudo:session): session closed for user root May 27 02:48:27.892984 sshd[2237]: Connection closed by 139.178.68.195 port 43470 May 27 02:48:27.893784 sshd-session[2235]: pam_unix(sshd:session): session closed for user core May 27 02:48:27.903791 systemd[1]: sshd@6-172.31.29.92:22-139.178.68.195:43470.service: Deactivated successfully. May 27 02:48:27.904029 systemd-logind[1877]: Session 7 logged out. Waiting for processes to exit. May 27 02:48:27.911424 systemd[1]: session-7.scope: Deactivated successfully. May 27 02:48:27.912256 systemd[1]: session-7.scope: Consumed 13.776s CPU time, 270.5M memory peak. May 27 02:48:27.917214 systemd-logind[1877]: Removed session 7.
May 27 02:48:32.040672 kubelet[3369]: I0527 02:48:32.040557 3369 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 02:48:32.042627 kubelet[3369]: I0527 02:48:32.041297 3369 container_gc.go:86] "Attempting to delete unused containers" May 27 02:48:32.047613 kubelet[3369]: I0527 02:48:32.047488 3369 image_gc_manager.go:431] "Attempting to delete unused images" May 27 02:48:32.074117 kubelet[3369]: I0527 02:48:32.074069 3369 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 02:48:32.074317 kubelet[3369]: I0527 02:48:32.074277 3369 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-2mvps","kube-system/cilium-2rrgq","kube-system/kube-controller-manager-ip-172-31-29-92","kube-system/kube-proxy-8fnb6","kube-system/kube-apiserver-ip-172-31-29-92","kube-system/kube-scheduler-ip-172-31-29-92"] May 27 02:48:32.074426 kubelet[3369]: E0527 02:48:32.074340 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-2mvps" May 27 02:48:32.074426 kubelet[3369]: E0527 02:48:32.074370 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-2rrgq" May 27 02:48:32.074426 kubelet[3369]: E0527 02:48:32.074395 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:48:32.074426 kubelet[3369]: E0527 02:48:32.074417 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8fnb6" May 27 02:48:32.074699 kubelet[3369]: E0527 02:48:32.074439 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:48:32.074699 kubelet[3369]: E0527 02:48:32.074461 3369 eviction_manager.go:609] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:48:32.074699 kubelet[3369]: I0527 02:48:32.074481 3369 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 02:48:42.109539 kubelet[3369]: I0527 02:48:42.109445 3369 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 02:48:42.109539 kubelet[3369]: I0527 02:48:42.109507 3369 container_gc.go:86] "Attempting to delete unused containers" May 27 02:48:42.113519 kubelet[3369]: I0527 02:48:42.113429 3369 image_gc_manager.go:431] "Attempting to delete unused images" May 27 02:48:42.143765 kubelet[3369]: I0527 02:48:42.143669 3369 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 02:48:42.144220 kubelet[3369]: I0527 02:48:42.144156 3369 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-2mvps","kube-system/cilium-2rrgq","kube-system/kube-controller-manager-ip-172-31-29-92","kube-system/kube-proxy-8fnb6","kube-system/kube-apiserver-ip-172-31-29-92","kube-system/kube-scheduler-ip-172-31-29-92"] May 27 02:48:42.144369 kubelet[3369]: E0527 02:48:42.144348 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-2mvps" May 27 02:48:42.144607 kubelet[3369]: E0527 02:48:42.144433 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-2rrgq" May 27 02:48:42.144607 kubelet[3369]: E0527 02:48:42.144458 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:48:42.144814 kubelet[3369]: E0527 02:48:42.144739 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8fnb6" May 27 02:48:42.144814 kubelet[3369]: E0527 
02:48:42.144773 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:48:42.145033 kubelet[3369]: E0527 02:48:42.144978 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:48:42.145033 kubelet[3369]: I0527 02:48:42.145010 3369 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 02:48:52.177689 kubelet[3369]: I0527 02:48:52.177616 3369 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 02:48:52.177689 kubelet[3369]: I0527 02:48:52.177685 3369 container_gc.go:86] "Attempting to delete unused containers" May 27 02:48:52.183594 kubelet[3369]: I0527 02:48:52.183541 3369 image_gc_manager.go:431] "Attempting to delete unused images" May 27 02:48:52.213800 kubelet[3369]: I0527 02:48:52.213489 3369 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 02:48:52.213800 kubelet[3369]: I0527 02:48:52.213639 3369 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-2mvps","kube-system/cilium-2rrgq","kube-system/kube-proxy-8fnb6","kube-system/kube-controller-manager-ip-172-31-29-92","kube-system/kube-apiserver-ip-172-31-29-92","kube-system/kube-scheduler-ip-172-31-29-92"] May 27 02:48:52.214165 kubelet[3369]: E0527 02:48:52.213694 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-2mvps" May 27 02:48:52.214423 kubelet[3369]: E0527 02:48:52.214267 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-2rrgq" May 27 02:48:52.214423 kubelet[3369]: E0527 02:48:52.214304 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8fnb6" May 27 
02:48:52.214423 kubelet[3369]: E0527 02:48:52.214330 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:48:52.214423 kubelet[3369]: E0527 02:48:52.214352 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:48:52.214423 kubelet[3369]: E0527 02:48:52.214374 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:48:52.214423 kubelet[3369]: I0527 02:48:52.214398 3369 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 02:49:02.244677 kubelet[3369]: I0527 02:49:02.244619 3369 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 02:49:02.245921 kubelet[3369]: I0527 02:49:02.245360 3369 container_gc.go:86] "Attempting to delete unused containers" May 27 02:49:02.252345 kubelet[3369]: I0527 02:49:02.251609 3369 image_gc_manager.go:431] "Attempting to delete unused images" May 27 02:49:02.277487 kubelet[3369]: I0527 02:49:02.277450 3369 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 02:49:02.278883 kubelet[3369]: I0527 02:49:02.278823 3369 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-2mvps","kube-system/cilium-2rrgq","kube-system/kube-controller-manager-ip-172-31-29-92","kube-system/kube-proxy-8fnb6","kube-system/kube-apiserver-ip-172-31-29-92","kube-system/kube-scheduler-ip-172-31-29-92"] May 27 02:49:02.279173 kubelet[3369]: E0527 02:49:02.279149 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-2mvps" May 27 02:49:02.279353 kubelet[3369]: E0527 02:49:02.279256 3369 eviction_manager.go:609] "Eviction manager: cannot 
evict a critical pod" pod="kube-system/cilium-2rrgq" May 27 02:49:02.279531 kubelet[3369]: E0527 02:49:02.279288 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:49:02.279531 kubelet[3369]: E0527 02:49:02.279453 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8fnb6" May 27 02:49:02.279531 kubelet[3369]: E0527 02:49:02.279481 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:49:02.279531 kubelet[3369]: E0527 02:49:02.279501 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:49:02.279806 kubelet[3369]: I0527 02:49:02.279774 3369 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 02:49:05.571605 systemd[1]: Started sshd@7-172.31.29.92:22-139.178.68.195:38892.service - OpenSSH per-connection server daemon (139.178.68.195:38892). May 27 02:49:05.768722 sshd[4590]: Accepted publickey for core from 139.178.68.195 port 38892 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:05.771013 sshd-session[4590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:05.778665 systemd-logind[1877]: New session 8 of user core. May 27 02:49:05.786211 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 02:49:06.051187 sshd[4592]: Connection closed by 139.178.68.195 port 38892 May 27 02:49:06.049228 sshd-session[4590]: pam_unix(sshd:session): session closed for user core May 27 02:49:06.061355 systemd[1]: sshd@7-172.31.29.92:22-139.178.68.195:38892.service: Deactivated successfully. May 27 02:49:06.068267 systemd[1]: session-8.scope: Deactivated successfully. May 27 02:49:06.071557 systemd-logind[1877]: Session 8 logged out. 
Waiting for processes to exit. May 27 02:49:06.074939 systemd-logind[1877]: Removed session 8. May 27 02:49:11.092151 systemd[1]: Started sshd@8-172.31.29.92:22-139.178.68.195:38906.service - OpenSSH per-connection server daemon (139.178.68.195:38906). May 27 02:49:11.291042 sshd[4606]: Accepted publickey for core from 139.178.68.195 port 38906 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:11.293533 sshd-session[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:11.301466 systemd-logind[1877]: New session 9 of user core. May 27 02:49:11.314260 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 02:49:11.560173 sshd[4608]: Connection closed by 139.178.68.195 port 38906 May 27 02:49:11.561053 sshd-session[4606]: pam_unix(sshd:session): session closed for user core May 27 02:49:11.567915 systemd[1]: sshd@8-172.31.29.92:22-139.178.68.195:38906.service: Deactivated successfully. May 27 02:49:11.573912 systemd[1]: session-9.scope: Deactivated successfully. May 27 02:49:11.576063 systemd-logind[1877]: Session 9 logged out. Waiting for processes to exit. May 27 02:49:11.579325 systemd-logind[1877]: Removed session 9. 
May 27 02:49:12.312386 kubelet[3369]: I0527 02:49:12.312315 3369 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 02:49:12.312386 kubelet[3369]: I0527 02:49:12.312385 3369 container_gc.go:86] "Attempting to delete unused containers" May 27 02:49:12.316716 kubelet[3369]: I0527 02:49:12.315934 3369 image_gc_manager.go:431] "Attempting to delete unused images" May 27 02:49:12.343435 kubelet[3369]: I0527 02:49:12.343386 3369 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 02:49:12.343614 kubelet[3369]: I0527 02:49:12.343574 3369 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-2mvps","kube-system/cilium-2rrgq","kube-system/kube-proxy-8fnb6","kube-system/kube-controller-manager-ip-172-31-29-92","kube-system/kube-apiserver-ip-172-31-29-92","kube-system/kube-scheduler-ip-172-31-29-92"] May 27 02:49:12.343744 kubelet[3369]: E0527 02:49:12.343640 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-2mvps" May 27 02:49:12.343744 kubelet[3369]: E0527 02:49:12.343669 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-2rrgq" May 27 02:49:12.343744 kubelet[3369]: E0527 02:49:12.343695 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8fnb6" May 27 02:49:12.343744 kubelet[3369]: E0527 02:49:12.343720 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:49:12.343744 kubelet[3369]: E0527 02:49:12.343742 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:49:12.343744 kubelet[3369]: E0527 02:49:12.343762 3369 eviction_manager.go:609] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:49:12.344132 kubelet[3369]: I0527 02:49:12.343782 3369 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 02:49:16.600753 systemd[1]: Started sshd@9-172.31.29.92:22-139.178.68.195:52908.service - OpenSSH per-connection server daemon (139.178.68.195:52908). May 27 02:49:16.791853 sshd[4622]: Accepted publickey for core from 139.178.68.195 port 52908 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:16.794454 sshd-session[4622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:16.806119 systemd-logind[1877]: New session 10 of user core. May 27 02:49:16.812275 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 02:49:17.057707 sshd[4624]: Connection closed by 139.178.68.195 port 52908 May 27 02:49:17.058840 sshd-session[4622]: pam_unix(sshd:session): session closed for user core May 27 02:49:17.065052 systemd[1]: sshd@9-172.31.29.92:22-139.178.68.195:52908.service: Deactivated successfully. May 27 02:49:17.068670 systemd[1]: session-10.scope: Deactivated successfully. May 27 02:49:17.073622 systemd-logind[1877]: Session 10 logged out. Waiting for processes to exit. May 27 02:49:17.076099 systemd-logind[1877]: Removed session 10. May 27 02:49:22.097264 systemd[1]: Started sshd@10-172.31.29.92:22-139.178.68.195:52920.service - OpenSSH per-connection server daemon (139.178.68.195:52920). May 27 02:49:22.289908 sshd[4637]: Accepted publickey for core from 139.178.68.195 port 52920 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:22.293075 sshd-session[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:22.301020 systemd-logind[1877]: New session 11 of user core. May 27 02:49:22.313250 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 27 02:49:22.375392 kubelet[3369]: I0527 02:49:22.374830 3369 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 02:49:22.375392 kubelet[3369]: I0527 02:49:22.374897 3369 container_gc.go:86] "Attempting to delete unused containers" May 27 02:49:22.381234 kubelet[3369]: I0527 02:49:22.381087 3369 image_gc_manager.go:431] "Attempting to delete unused images" May 27 02:49:22.406884 kubelet[3369]: I0527 02:49:22.406409 3369 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 02:49:22.406884 kubelet[3369]: I0527 02:49:22.406619 3369 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-2mvps","kube-system/cilium-2rrgq","kube-system/kube-controller-manager-ip-172-31-29-92","kube-system/kube-proxy-8fnb6","kube-system/kube-apiserver-ip-172-31-29-92","kube-system/kube-scheduler-ip-172-31-29-92"] May 27 02:49:22.406884 kubelet[3369]: E0527 02:49:22.406678 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-2mvps" May 27 02:49:22.406884 kubelet[3369]: E0527 02:49:22.406710 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-2rrgq" May 27 02:49:22.406884 kubelet[3369]: E0527 02:49:22.406734 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:49:22.406884 kubelet[3369]: E0527 02:49:22.406761 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8fnb6" May 27 02:49:22.406884 kubelet[3369]: E0527 02:49:22.406783 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:49:22.406884 kubelet[3369]: E0527 02:49:22.406806 3369 eviction_manager.go:609] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:49:22.406884 kubelet[3369]: I0527 02:49:22.406827 3369 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 02:49:22.555879 sshd[4639]: Connection closed by 139.178.68.195 port 52920 May 27 02:49:22.556723 sshd-session[4637]: pam_unix(sshd:session): session closed for user core May 27 02:49:22.563705 systemd[1]: sshd@10-172.31.29.92:22-139.178.68.195:52920.service: Deactivated successfully. May 27 02:49:22.567975 systemd[1]: session-11.scope: Deactivated successfully. May 27 02:49:22.569802 systemd-logind[1877]: Session 11 logged out. Waiting for processes to exit. May 27 02:49:22.572911 systemd-logind[1877]: Removed session 11. May 27 02:49:27.601671 systemd[1]: Started sshd@11-172.31.29.92:22-139.178.68.195:57646.service - OpenSSH per-connection server daemon (139.178.68.195:57646). May 27 02:49:27.800404 sshd[4655]: Accepted publickey for core from 139.178.68.195 port 57646 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:27.802972 sshd-session[4655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:27.811173 systemd-logind[1877]: New session 12 of user core. May 27 02:49:27.820190 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 02:49:28.065644 sshd[4657]: Connection closed by 139.178.68.195 port 57646 May 27 02:49:28.066501 sshd-session[4655]: pam_unix(sshd:session): session closed for user core May 27 02:49:28.073442 systemd-logind[1877]: Session 12 logged out. Waiting for processes to exit. May 27 02:49:28.074485 systemd[1]: sshd@11-172.31.29.92:22-139.178.68.195:57646.service: Deactivated successfully. May 27 02:49:28.078919 systemd[1]: session-12.scope: Deactivated successfully. May 27 02:49:28.083055 systemd-logind[1877]: Removed session 12. 
May 27 02:49:28.099917 systemd[1]: Started sshd@12-172.31.29.92:22-139.178.68.195:57654.service - OpenSSH per-connection server daemon (139.178.68.195:57654). May 27 02:49:28.297830 sshd[4669]: Accepted publickey for core from 139.178.68.195 port 57654 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:28.300353 sshd-session[4669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:28.308401 systemd-logind[1877]: New session 13 of user core. May 27 02:49:28.320185 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 02:49:28.652311 sshd[4671]: Connection closed by 139.178.68.195 port 57654 May 27 02:49:28.653576 sshd-session[4669]: pam_unix(sshd:session): session closed for user core May 27 02:49:28.669183 systemd-logind[1877]: Session 13 logged out. Waiting for processes to exit. May 27 02:49:28.669713 systemd[1]: sshd@12-172.31.29.92:22-139.178.68.195:57654.service: Deactivated successfully. May 27 02:49:28.678214 systemd[1]: session-13.scope: Deactivated successfully. May 27 02:49:28.705060 systemd[1]: Started sshd@13-172.31.29.92:22-139.178.68.195:57656.service - OpenSSH per-connection server daemon (139.178.68.195:57656). May 27 02:49:28.708640 systemd-logind[1877]: Removed session 13. May 27 02:49:28.906645 sshd[4681]: Accepted publickey for core from 139.178.68.195 port 57656 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:28.909709 sshd-session[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:28.919005 systemd-logind[1877]: New session 14 of user core. May 27 02:49:28.929217 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 27 02:49:29.172918 sshd[4683]: Connection closed by 139.178.68.195 port 57656 May 27 02:49:29.174026 sshd-session[4681]: pam_unix(sshd:session): session closed for user core May 27 02:49:29.181478 systemd[1]: sshd@13-172.31.29.92:22-139.178.68.195:57656.service: Deactivated successfully. May 27 02:49:29.184999 systemd[1]: session-14.scope: Deactivated successfully. May 27 02:49:29.187350 systemd-logind[1877]: Session 14 logged out. Waiting for processes to exit. May 27 02:49:29.190343 systemd-logind[1877]: Removed session 14. May 27 02:49:32.437022 kubelet[3369]: I0527 02:49:32.436895 3369 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 02:49:32.437022 kubelet[3369]: I0527 02:49:32.436984 3369 container_gc.go:86] "Attempting to delete unused containers" May 27 02:49:32.443026 kubelet[3369]: I0527 02:49:32.442971 3369 image_gc_manager.go:431] "Attempting to delete unused images" May 27 02:49:32.471625 kubelet[3369]: I0527 02:49:32.471558 3369 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 02:49:32.472140 kubelet[3369]: I0527 02:49:32.472024 3369 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-2mvps","kube-system/cilium-2rrgq","kube-system/kube-controller-manager-ip-172-31-29-92","kube-system/kube-proxy-8fnb6","kube-system/kube-apiserver-ip-172-31-29-92","kube-system/kube-scheduler-ip-172-31-29-92"] May 27 02:49:32.472140 kubelet[3369]: E0527 02:49:32.472106 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-2mvps" May 27 02:49:32.472354 kubelet[3369]: E0527 02:49:32.472290 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-2rrgq" May 27 02:49:32.472354 kubelet[3369]: E0527 02:49:32.472322 3369 eviction_manager.go:609] "Eviction manager: cannot evict a 
critical pod" pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:49:32.472575 kubelet[3369]: E0527 02:49:32.472504 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8fnb6" May 27 02:49:32.472575 kubelet[3369]: E0527 02:49:32.472536 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:49:32.472791 kubelet[3369]: E0527 02:49:32.472707 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:49:32.472791 kubelet[3369]: I0527 02:49:32.472749 3369 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 02:49:34.217452 systemd[1]: Started sshd@14-172.31.29.92:22-139.178.68.195:46434.service - OpenSSH per-connection server daemon (139.178.68.195:46434). May 27 02:49:34.415084 sshd[4695]: Accepted publickey for core from 139.178.68.195 port 46434 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:34.417589 sshd-session[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:34.426093 systemd-logind[1877]: New session 15 of user core. May 27 02:49:34.434212 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 02:49:34.683527 sshd[4697]: Connection closed by 139.178.68.195 port 46434 May 27 02:49:34.684615 sshd-session[4695]: pam_unix(sshd:session): session closed for user core May 27 02:49:34.692296 systemd[1]: sshd@14-172.31.29.92:22-139.178.68.195:46434.service: Deactivated successfully. May 27 02:49:34.696417 systemd[1]: session-15.scope: Deactivated successfully. May 27 02:49:34.698790 systemd-logind[1877]: Session 15 logged out. Waiting for processes to exit. May 27 02:49:34.702117 systemd-logind[1877]: Removed session 15. 
May 27 02:49:39.737698 systemd[1]: Started sshd@15-172.31.29.92:22-139.178.68.195:46444.service - OpenSSH per-connection server daemon (139.178.68.195:46444). May 27 02:49:39.945053 sshd[4709]: Accepted publickey for core from 139.178.68.195 port 46444 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:39.947499 sshd-session[4709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:39.955807 systemd-logind[1877]: New session 16 of user core. May 27 02:49:39.965275 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 02:49:40.214213 sshd[4711]: Connection closed by 139.178.68.195 port 46444 May 27 02:49:40.215310 sshd-session[4709]: pam_unix(sshd:session): session closed for user core May 27 02:49:40.226540 systemd[1]: sshd@15-172.31.29.92:22-139.178.68.195:46444.service: Deactivated successfully. May 27 02:49:40.230417 systemd[1]: session-16.scope: Deactivated successfully. May 27 02:49:40.232167 systemd-logind[1877]: Session 16 logged out. Waiting for processes to exit. May 27 02:49:40.235905 systemd-logind[1877]: Removed session 16. 
May 27 02:49:42.517500 kubelet[3369]: I0527 02:49:42.516618 3369 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 02:49:42.517500 kubelet[3369]: I0527 02:49:42.516685 3369 container_gc.go:86] "Attempting to delete unused containers" May 27 02:49:42.523615 kubelet[3369]: I0527 02:49:42.523560 3369 image_gc_manager.go:431] "Attempting to delete unused images" May 27 02:49:42.579693 kubelet[3369]: I0527 02:49:42.579611 3369 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 02:49:42.580640 kubelet[3369]: I0527 02:49:42.580492 3369 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-2mvps","kube-system/cilium-2rrgq","kube-system/kube-controller-manager-ip-172-31-29-92","kube-system/kube-proxy-8fnb6","kube-system/kube-apiserver-ip-172-31-29-92","kube-system/kube-scheduler-ip-172-31-29-92"] May 27 02:49:42.581166 kubelet[3369]: E0527 02:49:42.580928 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-operator-6c4d7847fc-2mvps" May 27 02:49:42.581166 kubelet[3369]: E0527 02:49:42.581113 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-2rrgq" May 27 02:49:42.581567 kubelet[3369]: E0527 02:49:42.581402 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:49:42.581567 kubelet[3369]: E0527 02:49:42.581436 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8fnb6" May 27 02:49:42.581904 kubelet[3369]: E0527 02:49:42.581761 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:49:42.581904 kubelet[3369]: E0527 02:49:42.581838 3369 eviction_manager.go:609] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:49:42.581904 kubelet[3369]: I0527 02:49:42.581862 3369 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 02:49:45.255071 systemd[1]: Started sshd@16-172.31.29.92:22-139.178.68.195:47360.service - OpenSSH per-connection server daemon (139.178.68.195:47360). May 27 02:49:45.464346 sshd[4730]: Accepted publickey for core from 139.178.68.195 port 47360 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:45.466808 sshd-session[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:45.475071 systemd-logind[1877]: New session 17 of user core. May 27 02:49:45.481305 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 02:49:45.731003 sshd[4732]: Connection closed by 139.178.68.195 port 47360 May 27 02:49:45.731812 sshd-session[4730]: pam_unix(sshd:session): session closed for user core May 27 02:49:45.738895 systemd[1]: sshd@16-172.31.29.92:22-139.178.68.195:47360.service: Deactivated successfully. May 27 02:49:45.743463 systemd[1]: session-17.scope: Deactivated successfully. May 27 02:49:45.746252 systemd-logind[1877]: Session 17 logged out. Waiting for processes to exit. May 27 02:49:45.749313 systemd-logind[1877]: Removed session 17. May 27 02:49:50.772660 systemd[1]: Started sshd@17-172.31.29.92:22-139.178.68.195:47370.service - OpenSSH per-connection server daemon (139.178.68.195:47370). May 27 02:49:50.970806 sshd[4748]: Accepted publickey for core from 139.178.68.195 port 47370 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:50.974062 sshd-session[4748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:50.982001 systemd-logind[1877]: New session 18 of user core. May 27 02:49:51.003215 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 27 02:49:51.254059 sshd[4750]: Connection closed by 139.178.68.195 port 47370 May 27 02:49:51.255854 sshd-session[4748]: pam_unix(sshd:session): session closed for user core May 27 02:49:51.263870 systemd[1]: sshd@17-172.31.29.92:22-139.178.68.195:47370.service: Deactivated successfully. May 27 02:49:51.268507 systemd[1]: session-18.scope: Deactivated successfully. May 27 02:49:51.271349 systemd-logind[1877]: Session 18 logged out. Waiting for processes to exit. May 27 02:49:51.275552 systemd-logind[1877]: Removed session 18. May 27 02:49:51.288649 systemd[1]: Started sshd@18-172.31.29.92:22-139.178.68.195:47372.service - OpenSSH per-connection server daemon (139.178.68.195:47372). May 27 02:49:51.496217 sshd[4762]: Accepted publickey for core from 139.178.68.195 port 47372 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:51.499733 sshd-session[4762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:51.508448 systemd-logind[1877]: New session 19 of user core. May 27 02:49:51.519250 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 02:49:51.820281 sshd[4766]: Connection closed by 139.178.68.195 port 47372 May 27 02:49:51.821407 sshd-session[4762]: pam_unix(sshd:session): session closed for user core May 27 02:49:51.827614 systemd[1]: sshd@18-172.31.29.92:22-139.178.68.195:47372.service: Deactivated successfully. May 27 02:49:51.832201 systemd[1]: session-19.scope: Deactivated successfully. May 27 02:49:51.836724 systemd-logind[1877]: Session 19 logged out. Waiting for processes to exit. May 27 02:49:51.838823 systemd-logind[1877]: Removed session 19. May 27 02:49:51.858435 systemd[1]: Started sshd@19-172.31.29.92:22-139.178.68.195:47376.service - OpenSSH per-connection server daemon (139.178.68.195:47376). 
May 27 02:49:52.066647 sshd[4775]: Accepted publickey for core from 139.178.68.195 port 47376 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:52.069285 sshd-session[4775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:52.079095 systemd-logind[1877]: New session 20 of user core. May 27 02:49:52.085216 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 02:49:52.614736 kubelet[3369]: I0527 02:49:52.614654 3369 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" May 27 02:49:52.614736 kubelet[3369]: I0527 02:49:52.614736 3369 container_gc.go:86] "Attempting to delete unused containers" May 27 02:49:52.620851 kubelet[3369]: I0527 02:49:52.620688 3369 image_gc_manager.go:431] "Attempting to delete unused images" May 27 02:49:52.624483 kubelet[3369]: I0527 02:49:52.623688 3369 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4" size=16948420 runtimeHandler="" May 27 02:49:52.625702 containerd[1931]: time="2025-05-27T02:49:52.625573755Z" level=info msg="RemoveImage \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 27 02:49:52.628267 containerd[1931]: time="2025-05-27T02:49:52.628161192Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.3\"" May 27 02:49:52.630354 containerd[1931]: time="2025-05-27T02:49:52.630267632Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\"" May 27 02:49:52.632335 containerd[1931]: time="2025-05-27T02:49:52.632278793Z" level=info msg="RemoveImage \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" returns successfully" May 27 02:49:52.632573 containerd[1931]: time="2025-05-27T02:49:52.632490999Z" level=info msg="ImageDelete event 
name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 27 02:49:52.633071 kubelet[3369]: I0527 02:49:52.632932 3369 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82" size=67941650 runtimeHandler="" May 27 02:49:52.633767 containerd[1931]: time="2025-05-27T02:49:52.633482227Z" level=info msg="RemoveImage \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 27 02:49:52.635830 containerd[1931]: time="2025-05-27T02:49:52.635771567Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.16-0\"" May 27 02:49:52.637829 containerd[1931]: time="2025-05-27T02:49:52.637774876Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\"" May 27 02:49:52.639838 containerd[1931]: time="2025-05-27T02:49:52.639712056Z" level=info msg="RemoveImage \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" returns successfully" May 27 02:49:52.639838 containerd[1931]: time="2025-05-27T02:49:52.639802125Z" level=info msg="ImageDelete event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 27 02:49:52.664977 kubelet[3369]: I0527 02:49:52.664869 3369 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" May 27 02:49:52.665380 kubelet[3369]: I0527 02:49:52.665325 3369 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/cilium-operator-6c4d7847fc-2mvps","kube-system/cilium-2rrgq","kube-system/kube-controller-manager-ip-172-31-29-92","kube-system/kube-proxy-8fnb6","kube-system/kube-apiserver-ip-172-31-29-92","kube-system/kube-scheduler-ip-172-31-29-92"] May 27 02:49:52.665582 kubelet[3369]: E0527 02:49:52.665528 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/cilium-operator-6c4d7847fc-2mvps" May 27 02:49:52.665753 kubelet[3369]: E0527 02:49:52.665674 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/cilium-2rrgq" May 27 02:49:52.665753 kubelet[3369]: E0527 02:49:52.665707 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-ip-172-31-29-92" May 27 02:49:52.666128 kubelet[3369]: E0527 02:49:52.665729 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-8fnb6" May 27 02:49:52.666128 kubelet[3369]: E0527 02:49:52.665999 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-ip-172-31-29-92" May 27 02:49:52.666128 kubelet[3369]: E0527 02:49:52.666025 3369 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-ip-172-31-29-92" May 27 02:49:52.666640 kubelet[3369]: I0527 02:49:52.666604 3369 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" May 27 02:49:53.529076 sshd[4777]: Connection closed by 139.178.68.195 port 47376 May 27 02:49:53.529880 sshd-session[4775]: pam_unix(sshd:session): session closed for user core May 27 02:49:53.540620 systemd[1]: sshd@19-172.31.29.92:22-139.178.68.195:47376.service: Deactivated successfully. May 27 02:49:53.547362 systemd[1]: session-20.scope: Deactivated successfully. May 27 02:49:53.557288 systemd-logind[1877]: Session 20 logged out. Waiting for processes to exit. May 27 02:49:53.580465 systemd[1]: Started sshd@20-172.31.29.92:22-139.178.68.195:36298.service - OpenSSH per-connection server daemon (139.178.68.195:36298). May 27 02:49:53.586335 systemd-logind[1877]: Removed session 20. 
May 27 02:49:53.780720 sshd[4794]: Accepted publickey for core from 139.178.68.195 port 36298 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:53.782290 sshd-session[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:53.791030 systemd-logind[1877]: New session 21 of user core. May 27 02:49:53.801210 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 02:49:54.285300 sshd[4796]: Connection closed by 139.178.68.195 port 36298 May 27 02:49:54.288204 sshd-session[4794]: pam_unix(sshd:session): session closed for user core May 27 02:49:54.296936 systemd[1]: sshd@20-172.31.29.92:22-139.178.68.195:36298.service: Deactivated successfully. May 27 02:49:54.302898 systemd[1]: session-21.scope: Deactivated successfully. May 27 02:49:54.305849 systemd-logind[1877]: Session 21 logged out. Waiting for processes to exit. May 27 02:49:54.321638 systemd[1]: Started sshd@21-172.31.29.92:22-139.178.68.195:36314.service - OpenSSH per-connection server daemon (139.178.68.195:36314). May 27 02:49:54.324367 systemd-logind[1877]: Removed session 21. May 27 02:49:54.522545 sshd[4806]: Accepted publickey for core from 139.178.68.195 port 36314 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:49:54.525550 sshd-session[4806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:49:54.533730 systemd-logind[1877]: New session 22 of user core. May 27 02:49:54.543339 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 02:49:54.786827 sshd[4808]: Connection closed by 139.178.68.195 port 36314 May 27 02:49:54.787644 sshd-session[4806]: pam_unix(sshd:session): session closed for user core May 27 02:49:54.794683 systemd[1]: sshd@21-172.31.29.92:22-139.178.68.195:36314.service: Deactivated successfully. May 27 02:49:54.798304 systemd[1]: session-22.scope: Deactivated successfully. 
May 27 02:49:54.800252 systemd-logind[1877]: Session 22 logged out. Waiting for processes to exit. May 27 02:49:54.804035 systemd-logind[1877]: Removed session 22. May 27 02:49:59.828378 systemd[1]: Started sshd@22-172.31.29.92:22-139.178.68.195:36320.service - OpenSSH per-connection server daemon (139.178.68.195:36320). May 27 02:50:00.029517 sshd[4822]: Accepted publickey for core from 139.178.68.195 port 36320 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:50:00.032534 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:50:00.041568 systemd-logind[1877]: New session 23 of user core. May 27 02:50:00.049226 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 02:50:00.302140 sshd[4824]: Connection closed by 139.178.68.195 port 36320 May 27 02:50:00.302672 sshd-session[4822]: pam_unix(sshd:session): session closed for user core May 27 02:50:00.308818 systemd[1]: sshd@22-172.31.29.92:22-139.178.68.195:36320.service: Deactivated successfully. May 27 02:50:00.313311 systemd[1]: session-23.scope: Deactivated successfully. May 27 02:50:00.318490 systemd-logind[1877]: Session 23 logged out. Waiting for processes to exit. May 27 02:50:00.320680 systemd-logind[1877]: Removed session 23. May 27 02:50:05.340168 systemd[1]: Started sshd@23-172.31.29.92:22-139.178.68.195:34018.service - OpenSSH per-connection server daemon (139.178.68.195:34018). May 27 02:50:05.532540 sshd[4838]: Accepted publickey for core from 139.178.68.195 port 34018 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:50:05.534183 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:50:05.545092 systemd-logind[1877]: New session 24 of user core. May 27 02:50:05.552483 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 27 02:50:05.803985 sshd[4840]: Connection closed by 139.178.68.195 port 34018 May 27 02:50:05.804052 sshd-session[4838]: pam_unix(sshd:session): session closed for user core May 27 02:50:05.811051 systemd-logind[1877]: Session 24 logged out. Waiting for processes to exit. May 27 02:50:05.813386 systemd[1]: sshd@23-172.31.29.92:22-139.178.68.195:34018.service: Deactivated successfully. May 27 02:50:05.817710 systemd[1]: session-24.scope: Deactivated successfully. May 27 02:50:05.823079 systemd-logind[1877]: Removed session 24. May 27 02:50:10.844648 systemd[1]: Started sshd@24-172.31.29.92:22-139.178.68.195:34032.service - OpenSSH per-connection server daemon (139.178.68.195:34032). May 27 02:50:11.043874 sshd[4852]: Accepted publickey for core from 139.178.68.195 port 34032 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:50:11.046434 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:50:11.055091 systemd-logind[1877]: New session 25 of user core. May 27 02:50:11.064309 systemd[1]: Started session-25.scope - Session 25 of User core. May 27 02:50:11.320824 sshd[4854]: Connection closed by 139.178.68.195 port 34032 May 27 02:50:11.321811 sshd-session[4852]: pam_unix(sshd:session): session closed for user core May 27 02:50:11.328690 systemd[1]: sshd@24-172.31.29.92:22-139.178.68.195:34032.service: Deactivated successfully. May 27 02:50:11.332438 systemd[1]: session-25.scope: Deactivated successfully. May 27 02:50:11.334698 systemd-logind[1877]: Session 25 logged out. Waiting for processes to exit. May 27 02:50:11.337765 systemd-logind[1877]: Removed session 25. May 27 02:50:16.365336 systemd[1]: Started sshd@25-172.31.29.92:22-139.178.68.195:49538.service - OpenSSH per-connection server daemon (139.178.68.195:49538). 
May 27 02:50:16.575538 sshd[4868]: Accepted publickey for core from 139.178.68.195 port 49538 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:50:16.578876 sshd-session[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:50:16.587451 systemd-logind[1877]: New session 26 of user core. May 27 02:50:16.593273 systemd[1]: Started session-26.scope - Session 26 of User core. May 27 02:50:16.839904 sshd[4870]: Connection closed by 139.178.68.195 port 49538 May 27 02:50:16.840740 sshd-session[4868]: pam_unix(sshd:session): session closed for user core May 27 02:50:16.847020 systemd-logind[1877]: Session 26 logged out. Waiting for processes to exit. May 27 02:50:16.847044 systemd[1]: sshd@25-172.31.29.92:22-139.178.68.195:49538.service: Deactivated successfully. May 27 02:50:16.851266 systemd[1]: session-26.scope: Deactivated successfully. May 27 02:50:16.858107 systemd-logind[1877]: Removed session 26. May 27 02:50:16.878840 systemd[1]: Started sshd@26-172.31.29.92:22-139.178.68.195:49542.service - OpenSSH per-connection server daemon (139.178.68.195:49542). May 27 02:50:17.082090 sshd[4882]: Accepted publickey for core from 139.178.68.195 port 49542 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:50:17.084628 sshd-session[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:50:17.094497 systemd-logind[1877]: New session 27 of user core. May 27 02:50:17.105224 systemd[1]: Started session-27.scope - Session 27 of User core. 
May 27 02:50:19.849557 containerd[1931]: time="2025-05-27T02:50:19.849475321Z" level=info msg="StopContainer for \"32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f\" with timeout 30 (s)" May 27 02:50:19.851272 containerd[1931]: time="2025-05-27T02:50:19.851210437Z" level=info msg="Stop container \"32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f\" with signal terminated" May 27 02:50:19.881724 systemd[1]: cri-containerd-32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f.scope: Deactivated successfully. May 27 02:50:19.887717 containerd[1931]: time="2025-05-27T02:50:19.887636485Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 02:50:19.889838 containerd[1931]: time="2025-05-27T02:50:19.889529773Z" level=info msg="received exit event container_id:\"32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f\" id:\"32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f\" pid:3915 exited_at:{seconds:1748314219 nanos:888933733}" May 27 02:50:19.890220 containerd[1931]: time="2025-05-27T02:50:19.889759297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f\" id:\"32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f\" pid:3915 exited_at:{seconds:1748314219 nanos:888933733}" May 27 02:50:19.900160 containerd[1931]: time="2025-05-27T02:50:19.899920237Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" id:\"ba511211f74713b41e0c21af216160072e3042b860ed384d156b8660d9400c4f\" pid:4908 exited_at:{seconds:1748314219 nanos:898839937}" May 27 02:50:19.906711 containerd[1931]: time="2025-05-27T02:50:19.906351421Z" level=info 
msg="StopContainer for \"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" with timeout 2 (s)" May 27 02:50:19.907603 containerd[1931]: time="2025-05-27T02:50:19.907564213Z" level=info msg="Stop container \"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" with signal terminated" May 27 02:50:19.931360 systemd-networkd[1818]: lxc_health: Link DOWN May 27 02:50:19.931378 systemd-networkd[1818]: lxc_health: Lost carrier May 27 02:50:19.963587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f-rootfs.mount: Deactivated successfully. May 27 02:50:19.970885 systemd[1]: cri-containerd-d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2.scope: Deactivated successfully. May 27 02:50:19.971506 systemd[1]: cri-containerd-d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2.scope: Consumed 13.913s CPU time, 133.4M memory peak, 152K read from disk, 12.9M written to disk. 
May 27 02:50:19.974644 containerd[1931]: time="2025-05-27T02:50:19.974169349Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" id:\"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" pid:3988 exited_at:{seconds:1748314219 nanos:971459701}" May 27 02:50:19.975407 containerd[1931]: time="2025-05-27T02:50:19.975251341Z" level=info msg="received exit event container_id:\"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" id:\"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" pid:3988 exited_at:{seconds:1748314219 nanos:971459701}" May 27 02:50:20.003938 containerd[1931]: time="2025-05-27T02:50:20.003723898Z" level=info msg="StopContainer for \"32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f\" returns successfully" May 27 02:50:20.006383 containerd[1931]: time="2025-05-27T02:50:20.005474710Z" level=info msg="StopPodSandbox for \"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\"" May 27 02:50:20.006383 containerd[1931]: time="2025-05-27T02:50:20.005658502Z" level=info msg="Container to stop \"32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:50:20.027105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2-rootfs.mount: Deactivated successfully. May 27 02:50:20.031239 systemd[1]: cri-containerd-1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc.scope: Deactivated successfully. 
May 27 02:50:20.034488 containerd[1931]: time="2025-05-27T02:50:20.032714410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\" id:\"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\" pid:3506 exit_status:137 exited_at:{seconds:1748314220 nanos:30408394}" May 27 02:50:20.058631 containerd[1931]: time="2025-05-27T02:50:20.058393198Z" level=info msg="StopContainer for \"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" returns successfully" May 27 02:50:20.060491 containerd[1931]: time="2025-05-27T02:50:20.059805802Z" level=info msg="StopPodSandbox for \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\"" May 27 02:50:20.060491 containerd[1931]: time="2025-05-27T02:50:20.060172222Z" level=info msg="Container to stop \"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:50:20.060491 containerd[1931]: time="2025-05-27T02:50:20.060203086Z" level=info msg="Container to stop \"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:50:20.060491 containerd[1931]: time="2025-05-27T02:50:20.060346918Z" level=info msg="Container to stop \"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:50:20.060491 containerd[1931]: time="2025-05-27T02:50:20.060374662Z" level=info msg="Container to stop \"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:50:20.060838 containerd[1931]: time="2025-05-27T02:50:20.060398326Z" level=info msg="Container to stop \"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" May 27 02:50:20.081195 systemd[1]: cri-containerd-fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f.scope: Deactivated successfully. May 27 02:50:20.108843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc-rootfs.mount: Deactivated successfully. May 27 02:50:20.116325 containerd[1931]: time="2025-05-27T02:50:20.116241514Z" level=info msg="shim disconnected" id=1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc namespace=k8s.io May 27 02:50:20.117141 containerd[1931]: time="2025-05-27T02:50:20.116301706Z" level=warning msg="cleaning up after shim disconnected" id=1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc namespace=k8s.io May 27 02:50:20.117141 containerd[1931]: time="2025-05-27T02:50:20.116627770Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 02:50:20.145301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f-rootfs.mount: Deactivated successfully. 
May 27 02:50:20.148465 containerd[1931]: time="2025-05-27T02:50:20.147919162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" id:\"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" pid:3480 exit_status:137 exited_at:{seconds:1748314220 nanos:86126194}" May 27 02:50:20.150257 containerd[1931]: time="2025-05-27T02:50:20.150189874Z" level=info msg="received exit event sandbox_id:\"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\" exit_status:137 exited_at:{seconds:1748314220 nanos:30408394}" May 27 02:50:20.153162 containerd[1931]: time="2025-05-27T02:50:20.152080186Z" level=info msg="TearDown network for sandbox \"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\" successfully" May 27 02:50:20.153162 containerd[1931]: time="2025-05-27T02:50:20.152128618Z" level=info msg="StopPodSandbox for \"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\" returns successfully" May 27 02:50:20.157983 containerd[1931]: time="2025-05-27T02:50:20.157136650Z" level=info msg="received exit event sandbox_id:\"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" exit_status:137 exited_at:{seconds:1748314220 nanos:86126194}" May 27 02:50:20.157904 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc-shm.mount: Deactivated successfully. 
May 27 02:50:20.159091 containerd[1931]: time="2025-05-27T02:50:20.159045862Z" level=info msg="shim disconnected" id=fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f namespace=k8s.io May 27 02:50:20.159371 containerd[1931]: time="2025-05-27T02:50:20.159239146Z" level=warning msg="cleaning up after shim disconnected" id=fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f namespace=k8s.io May 27 02:50:20.161149 containerd[1931]: time="2025-05-27T02:50:20.160513258Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 02:50:20.165924 containerd[1931]: time="2025-05-27T02:50:20.165527710Z" level=info msg="TearDown network for sandbox \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" successfully" May 27 02:50:20.166831 containerd[1931]: time="2025-05-27T02:50:20.166781866Z" level=info msg="StopPodSandbox for \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" returns successfully" May 27 02:50:20.327795 kubelet[3369]: I0527 02:50:20.327752 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90477c99-4629-46f4-9970-205b5b5856b4-hubble-tls\") pod \"90477c99-4629-46f4-9970-205b5b5856b4\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " May 27 02:50:20.329721 kubelet[3369]: I0527 02:50:20.328490 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5pw2\" (UniqueName: \"kubernetes.io/projected/facea54c-c136-45df-8b53-16fbd783d7e5-kube-api-access-r5pw2\") pod \"facea54c-c136-45df-8b53-16fbd783d7e5\" (UID: \"facea54c-c136-45df-8b53-16fbd783d7e5\") " May 27 02:50:20.329721 kubelet[3369]: I0527 02:50:20.328555 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-lib-modules\") pod \"90477c99-4629-46f4-9970-205b5b5856b4\" (UID: 
\"90477c99-4629-46f4-9970-205b5b5856b4\") " May 27 02:50:20.329721 kubelet[3369]: I0527 02:50:20.328590 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-hostproc\") pod \"90477c99-4629-46f4-9970-205b5b5856b4\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " May 27 02:50:20.329721 kubelet[3369]: I0527 02:50:20.328629 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-cilium-run\") pod \"90477c99-4629-46f4-9970-205b5b5856b4\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " May 27 02:50:20.329721 kubelet[3369]: I0527 02:50:20.328666 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-host-proc-sys-kernel\") pod \"90477c99-4629-46f4-9970-205b5b5856b4\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " May 27 02:50:20.329721 kubelet[3369]: I0527 02:50:20.328710 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/facea54c-c136-45df-8b53-16fbd783d7e5-cilium-config-path\") pod \"facea54c-c136-45df-8b53-16fbd783d7e5\" (UID: \"facea54c-c136-45df-8b53-16fbd783d7e5\") " May 27 02:50:20.330188 kubelet[3369]: I0527 02:50:20.328752 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90477c99-4629-46f4-9970-205b5b5856b4-clustermesh-secrets\") pod \"90477c99-4629-46f4-9970-205b5b5856b4\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " May 27 02:50:20.330188 kubelet[3369]: I0527 02:50:20.328789 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-bpf-maps\") pod \"90477c99-4629-46f4-9970-205b5b5856b4\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " May 27 02:50:20.330188 kubelet[3369]: I0527 02:50:20.328821 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-cilium-cgroup\") pod \"90477c99-4629-46f4-9970-205b5b5856b4\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " May 27 02:50:20.330188 kubelet[3369]: I0527 02:50:20.328857 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90477c99-4629-46f4-9970-205b5b5856b4-cilium-config-path\") pod \"90477c99-4629-46f4-9970-205b5b5856b4\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " May 27 02:50:20.330188 kubelet[3369]: I0527 02:50:20.328897 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-xtables-lock\") pod \"90477c99-4629-46f4-9970-205b5b5856b4\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " May 27 02:50:20.330188 kubelet[3369]: I0527 02:50:20.328935 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25xcs\" (UniqueName: \"kubernetes.io/projected/90477c99-4629-46f4-9970-205b5b5856b4-kube-api-access-25xcs\") pod \"90477c99-4629-46f4-9970-205b5b5856b4\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " May 27 02:50:20.330503 kubelet[3369]: I0527 02:50:20.329003 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-cni-path\") pod \"90477c99-4629-46f4-9970-205b5b5856b4\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " May 27 02:50:20.330503 kubelet[3369]: I0527 02:50:20.329041 3369 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-etc-cni-netd\") pod \"90477c99-4629-46f4-9970-205b5b5856b4\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " May 27 02:50:20.330503 kubelet[3369]: I0527 02:50:20.329077 3369 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-host-proc-sys-net\") pod \"90477c99-4629-46f4-9970-205b5b5856b4\" (UID: \"90477c99-4629-46f4-9970-205b5b5856b4\") " May 27 02:50:20.330503 kubelet[3369]: I0527 02:50:20.329180 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "90477c99-4629-46f4-9970-205b5b5856b4" (UID: "90477c99-4629-46f4-9970-205b5b5856b4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:50:20.331486 kubelet[3369]: I0527 02:50:20.331387 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "90477c99-4629-46f4-9970-205b5b5856b4" (UID: "90477c99-4629-46f4-9970-205b5b5856b4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:50:20.331486 kubelet[3369]: I0527 02:50:20.331471 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "90477c99-4629-46f4-9970-205b5b5856b4" (UID: "90477c99-4629-46f4-9970-205b5b5856b4"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:50:20.331683 kubelet[3369]: I0527 02:50:20.331509 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-hostproc" (OuterVolumeSpecName: "hostproc") pod "90477c99-4629-46f4-9970-205b5b5856b4" (UID: "90477c99-4629-46f4-9970-205b5b5856b4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:50:20.331683 kubelet[3369]: I0527 02:50:20.331545 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "90477c99-4629-46f4-9970-205b5b5856b4" (UID: "90477c99-4629-46f4-9970-205b5b5856b4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:50:20.331683 kubelet[3369]: I0527 02:50:20.331579 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "90477c99-4629-46f4-9970-205b5b5856b4" (UID: "90477c99-4629-46f4-9970-205b5b5856b4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:50:20.333003 kubelet[3369]: I0527 02:50:20.332677 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "90477c99-4629-46f4-9970-205b5b5856b4" (UID: "90477c99-4629-46f4-9970-205b5b5856b4"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:50:20.334371 kubelet[3369]: I0527 02:50:20.334311 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "90477c99-4629-46f4-9970-205b5b5856b4" (UID: "90477c99-4629-46f4-9970-205b5b5856b4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:50:20.337282 kubelet[3369]: I0527 02:50:20.337209 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-cni-path" (OuterVolumeSpecName: "cni-path") pod "90477c99-4629-46f4-9970-205b5b5856b4" (UID: "90477c99-4629-46f4-9970-205b5b5856b4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:50:20.337518 kubelet[3369]: I0527 02:50:20.337373 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "90477c99-4629-46f4-9970-205b5b5856b4" (UID: "90477c99-4629-46f4-9970-205b5b5856b4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 02:50:20.338302 kubelet[3369]: I0527 02:50:20.338235 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90477c99-4629-46f4-9970-205b5b5856b4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "90477c99-4629-46f4-9970-205b5b5856b4" (UID: "90477c99-4629-46f4-9970-205b5b5856b4"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 02:50:20.340352 kubelet[3369]: I0527 02:50:20.340279 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/facea54c-c136-45df-8b53-16fbd783d7e5-kube-api-access-r5pw2" (OuterVolumeSpecName: "kube-api-access-r5pw2") pod "facea54c-c136-45df-8b53-16fbd783d7e5" (UID: "facea54c-c136-45df-8b53-16fbd783d7e5"). InnerVolumeSpecName "kube-api-access-r5pw2". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 02:50:20.344984 kubelet[3369]: I0527 02:50:20.344871 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90477c99-4629-46f4-9970-205b5b5856b4-kube-api-access-25xcs" (OuterVolumeSpecName: "kube-api-access-25xcs") pod "90477c99-4629-46f4-9970-205b5b5856b4" (UID: "90477c99-4629-46f4-9970-205b5b5856b4"). InnerVolumeSpecName "kube-api-access-25xcs". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 02:50:20.345852 kubelet[3369]: I0527 02:50:20.345782 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90477c99-4629-46f4-9970-205b5b5856b4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "90477c99-4629-46f4-9970-205b5b5856b4" (UID: "90477c99-4629-46f4-9970-205b5b5856b4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 02:50:20.346051 kubelet[3369]: I0527 02:50:20.346008 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90477c99-4629-46f4-9970-205b5b5856b4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "90477c99-4629-46f4-9970-205b5b5856b4" (UID: "90477c99-4629-46f4-9970-205b5b5856b4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 02:50:20.348100 kubelet[3369]: I0527 02:50:20.348018 3369 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/facea54c-c136-45df-8b53-16fbd783d7e5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "facea54c-c136-45df-8b53-16fbd783d7e5" (UID: "facea54c-c136-45df-8b53-16fbd783d7e5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 02:50:20.430556 kubelet[3369]: I0527 02:50:20.430265 3369 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-xtables-lock\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.430556 kubelet[3369]: I0527 02:50:20.430318 3369 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-25xcs\" (UniqueName: \"kubernetes.io/projected/90477c99-4629-46f4-9970-205b5b5856b4-kube-api-access-25xcs\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.430556 kubelet[3369]: I0527 02:50:20.430347 3369 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-etc-cni-netd\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.430556 kubelet[3369]: I0527 02:50:20.430369 3369 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-cni-path\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.430556 kubelet[3369]: I0527 02:50:20.430389 3369 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-host-proc-sys-net\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.430556 kubelet[3369]: I0527 02:50:20.430412 3369 reconciler_common.go:299] "Volume detached 
for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-hostproc\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.430556 kubelet[3369]: I0527 02:50:20.430432 3369 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-cilium-run\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.431250 kubelet[3369]: I0527 02:50:20.431034 3369 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90477c99-4629-46f4-9970-205b5b5856b4-hubble-tls\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.431250 kubelet[3369]: I0527 02:50:20.431068 3369 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r5pw2\" (UniqueName: \"kubernetes.io/projected/facea54c-c136-45df-8b53-16fbd783d7e5-kube-api-access-r5pw2\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.431250 kubelet[3369]: I0527 02:50:20.431088 3369 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-lib-modules\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.431250 kubelet[3369]: I0527 02:50:20.431109 3369 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/facea54c-c136-45df-8b53-16fbd783d7e5-cilium-config-path\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.431250 kubelet[3369]: I0527 02:50:20.431135 3369 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-host-proc-sys-kernel\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.431250 kubelet[3369]: I0527 02:50:20.431155 3369 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/90477c99-4629-46f4-9970-205b5b5856b4-clustermesh-secrets\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.431250 kubelet[3369]: I0527 02:50:20.431173 3369 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-cilium-cgroup\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.431250 kubelet[3369]: I0527 02:50:20.431193 3369 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90477c99-4629-46f4-9970-205b5b5856b4-cilium-config-path\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.431653 kubelet[3369]: I0527 02:50:20.431213 3369 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90477c99-4629-46f4-9970-205b5b5856b4-bpf-maps\") on node \"ip-172-31-29-92\" DevicePath \"\"" May 27 02:50:20.959369 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f-shm.mount: Deactivated successfully. May 27 02:50:20.959561 systemd[1]: var-lib-kubelet-pods-facea54c\x2dc136\x2d45df\x2d8b53\x2d16fbd783d7e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr5pw2.mount: Deactivated successfully. May 27 02:50:20.959757 systemd[1]: var-lib-kubelet-pods-90477c99\x2d4629\x2d46f4\x2d9970\x2d205b5b5856b4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d25xcs.mount: Deactivated successfully. May 27 02:50:20.959896 systemd[1]: var-lib-kubelet-pods-90477c99\x2d4629\x2d46f4\x2d9970\x2d205b5b5856b4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 27 02:50:20.960058 systemd[1]: var-lib-kubelet-pods-90477c99\x2d4629\x2d46f4\x2d9970\x2d205b5b5856b4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 27 02:50:21.049971 kubelet[3369]: I0527 02:50:21.049911 3369 scope.go:117] "RemoveContainer" containerID="32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f" May 27 02:50:21.055626 containerd[1931]: time="2025-05-27T02:50:21.055223687Z" level=info msg="RemoveContainer for \"32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f\"" May 27 02:50:21.067554 containerd[1931]: time="2025-05-27T02:50:21.067460771Z" level=info msg="RemoveContainer for \"32c40778e0dab2c4077b62b3b00f9301b0b341e47c21e0063f2f274f8449b01f\" returns successfully" May 27 02:50:21.071984 kubelet[3369]: I0527 02:50:21.070907 3369 scope.go:117] "RemoveContainer" containerID="d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2" May 27 02:50:21.073190 systemd[1]: Removed slice kubepods-besteffort-podfacea54c_c136_45df_8b53_16fbd783d7e5.slice - libcontainer container kubepods-besteffort-podfacea54c_c136_45df_8b53_16fbd783d7e5.slice. May 27 02:50:21.080880 containerd[1931]: time="2025-05-27T02:50:21.080821559Z" level=info msg="RemoveContainer for \"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\"" May 27 02:50:21.087400 systemd[1]: Removed slice kubepods-burstable-pod90477c99_4629_46f4_9970_205b5b5856b4.slice - libcontainer container kubepods-burstable-pod90477c99_4629_46f4_9970_205b5b5856b4.slice. May 27 02:50:21.087630 systemd[1]: kubepods-burstable-pod90477c99_4629_46f4_9970_205b5b5856b4.slice: Consumed 14.089s CPU time, 133.9M memory peak, 152K read from disk, 12.9M written to disk. 
May 27 02:50:21.098931 containerd[1931]: time="2025-05-27T02:50:21.098492231Z" level=info msg="RemoveContainer for \"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" returns successfully" May 27 02:50:21.101425 kubelet[3369]: I0527 02:50:21.101141 3369 scope.go:117] "RemoveContainer" containerID="d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1" May 27 02:50:21.111200 containerd[1931]: time="2025-05-27T02:50:21.111124331Z" level=info msg="RemoveContainer for \"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1\"" May 27 02:50:21.132117 containerd[1931]: time="2025-05-27T02:50:21.131883863Z" level=info msg="RemoveContainer for \"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1\" returns successfully" May 27 02:50:21.136935 kubelet[3369]: I0527 02:50:21.136310 3369 scope.go:117] "RemoveContainer" containerID="460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac" May 27 02:50:21.143467 containerd[1931]: time="2025-05-27T02:50:21.143395199Z" level=info msg="RemoveContainer for \"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac\"" May 27 02:50:21.153704 containerd[1931]: time="2025-05-27T02:50:21.153629099Z" level=info msg="RemoveContainer for \"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac\" returns successfully" May 27 02:50:21.154549 kubelet[3369]: I0527 02:50:21.154505 3369 scope.go:117] "RemoveContainer" containerID="e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789" May 27 02:50:21.159604 containerd[1931]: time="2025-05-27T02:50:21.159419423Z" level=info msg="RemoveContainer for \"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789\"" May 27 02:50:21.166897 containerd[1931]: time="2025-05-27T02:50:21.166819175Z" level=info msg="RemoveContainer for \"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789\" returns successfully" May 27 02:50:21.167330 kubelet[3369]: I0527 02:50:21.167298 3369 scope.go:117] 
"RemoveContainer" containerID="fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5" May 27 02:50:21.170749 containerd[1931]: time="2025-05-27T02:50:21.170686283Z" level=info msg="RemoveContainer for \"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5\"" May 27 02:50:21.177819 containerd[1931]: time="2025-05-27T02:50:21.177740939Z" level=info msg="RemoveContainer for \"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5\" returns successfully" May 27 02:50:21.178224 kubelet[3369]: I0527 02:50:21.178136 3369 scope.go:117] "RemoveContainer" containerID="d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2" May 27 02:50:21.178612 containerd[1931]: time="2025-05-27T02:50:21.178559963Z" level=error msg="ContainerStatus for \"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\": not found" May 27 02:50:21.179080 kubelet[3369]: E0527 02:50:21.179015 3369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\": not found" containerID="d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2" May 27 02:50:21.179242 kubelet[3369]: I0527 02:50:21.179096 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2"} err="failed to get container status \"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\": rpc error: code = NotFound desc = an error occurred when try to find container \"d52e00feb46c392c2964d302a4be48569824aca9fdef6b71eb6ff367dd18dde2\": not found" May 27 02:50:21.179321 kubelet[3369]: I0527 02:50:21.179265 3369 scope.go:117] "RemoveContainer" 
containerID="d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1" May 27 02:50:21.179671 containerd[1931]: time="2025-05-27T02:50:21.179602439Z" level=error msg="ContainerStatus for \"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1\": not found" May 27 02:50:21.179996 kubelet[3369]: E0527 02:50:21.179923 3369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1\": not found" containerID="d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1" May 27 02:50:21.180085 kubelet[3369]: I0527 02:50:21.180009 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1"} err="failed to get container status \"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1\": rpc error: code = NotFound desc = an error occurred when try to find container \"d57147f2f418e07f543e3c07ff71428329b9a4120e63be211d0a2a5abb3b6cf1\": not found" May 27 02:50:21.180085 kubelet[3369]: I0527 02:50:21.180043 3369 scope.go:117] "RemoveContainer" containerID="460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac" May 27 02:50:21.180469 containerd[1931]: time="2025-05-27T02:50:21.180423851Z" level=error msg="ContainerStatus for \"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac\": not found" May 27 02:50:21.180892 kubelet[3369]: E0527 02:50:21.180814 3369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac\": not found" containerID="460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac" May 27 02:50:21.180892 kubelet[3369]: I0527 02:50:21.180861 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac"} err="failed to get container status \"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"460c132048dacd4b8ddab46946be2c2148b7b5f1e12eb050d07977815c3450ac\": not found" May 27 02:50:21.180892 kubelet[3369]: I0527 02:50:21.180893 3369 scope.go:117] "RemoveContainer" containerID="e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789" May 27 02:50:21.181647 containerd[1931]: time="2025-05-27T02:50:21.181572227Z" level=error msg="ContainerStatus for \"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789\": not found" May 27 02:50:21.182086 kubelet[3369]: E0527 02:50:21.181849 3369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789\": not found" containerID="e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789" May 27 02:50:21.182086 kubelet[3369]: I0527 02:50:21.181888 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789"} err="failed to get container status \"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"e0d331c38919a6a175c80bd11e287f9a35361b2fdc13679eb8a8b3465a9d8789\": not found" May 27 02:50:21.182086 kubelet[3369]: I0527 02:50:21.181920 3369 scope.go:117] "RemoveContainer" containerID="fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5" May 27 02:50:21.182475 containerd[1931]: time="2025-05-27T02:50:21.182316335Z" level=error msg="ContainerStatus for \"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5\": not found" May 27 02:50:21.182850 kubelet[3369]: E0527 02:50:21.182810 3369 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5\": not found" containerID="fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5" May 27 02:50:21.182919 kubelet[3369]: I0527 02:50:21.182858 3369 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5"} err="failed to get container status \"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb275733e91be30c19f58fa1ef6385e791d9d5e882e13cf98b8c5c18e43239d5\": not found" May 27 02:50:21.447970 kubelet[3369]: I0527 02:50:21.447901 3369 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90477c99-4629-46f4-9970-205b5b5856b4" path="/var/lib/kubelet/pods/90477c99-4629-46f4-9970-205b5b5856b4/volumes" May 27 02:50:21.449328 kubelet[3369]: I0527 02:50:21.449275 3369 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="facea54c-c136-45df-8b53-16fbd783d7e5" 
path="/var/lib/kubelet/pods/facea54c-c136-45df-8b53-16fbd783d7e5/volumes" May 27 02:50:21.677244 kubelet[3369]: E0527 02:50:21.677129 3369 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 02:50:21.781259 sshd[4885]: Connection closed by 139.178.68.195 port 49542 May 27 02:50:21.781681 sshd-session[4882]: pam_unix(sshd:session): session closed for user core May 27 02:50:21.789266 systemd[1]: sshd@26-172.31.29.92:22-139.178.68.195:49542.service: Deactivated successfully. May 27 02:50:21.792850 systemd[1]: session-27.scope: Deactivated successfully. May 27 02:50:21.793362 systemd[1]: session-27.scope: Consumed 1.993s CPU time, 22.6M memory peak. May 27 02:50:21.795055 systemd-logind[1877]: Session 27 logged out. Waiting for processes to exit. May 27 02:50:21.798382 systemd-logind[1877]: Removed session 27. May 27 02:50:21.818568 systemd[1]: Started sshd@27-172.31.29.92:22-139.178.68.195:49558.service - OpenSSH per-connection server daemon (139.178.68.195:49558). May 27 02:50:22.020124 sshd[5033]: Accepted publickey for core from 139.178.68.195 port 49558 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:50:22.022535 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:50:22.031016 systemd-logind[1877]: New session 28 of user core. May 27 02:50:22.050201 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 27 02:50:22.772636 ntpd[1872]: Deleting interface #11 lxc_health, fe80::4c0d:1cff:fecd:c7f%8#123, interface stats: received=0, sent=0, dropped=0, active_time=119 secs May 27 02:50:22.773390 ntpd[1872]: 27 May 02:50:22 ntpd[1872]: Deleting interface #11 lxc_health, fe80::4c0d:1cff:fecd:c7f%8#123, interface stats: received=0, sent=0, dropped=0, active_time=119 secs May 27 02:50:23.342066 sshd[5036]: Connection closed by 139.178.68.195 port 49558 May 27 02:50:23.342852 sshd-session[5033]: pam_unix(sshd:session): session closed for user core May 27 02:50:23.355052 systemd-logind[1877]: Session 28 logged out. Waiting for processes to exit. May 27 02:50:23.359177 systemd[1]: sshd@27-172.31.29.92:22-139.178.68.195:49558.service: Deactivated successfully. May 27 02:50:23.370440 systemd[1]: session-28.scope: Deactivated successfully. May 27 02:50:23.371439 systemd[1]: session-28.scope: Consumed 1.089s CPU time, 21.6M memory peak. May 27 02:50:23.382694 kubelet[3369]: I0527 02:50:23.382605 3369 memory_manager.go:355] "RemoveStaleState removing state" podUID="90477c99-4629-46f4-9970-205b5b5856b4" containerName="cilium-agent" May 27 02:50:23.382694 kubelet[3369]: I0527 02:50:23.382656 3369 memory_manager.go:355] "RemoveStaleState removing state" podUID="facea54c-c136-45df-8b53-16fbd783d7e5" containerName="cilium-operator" May 27 02:50:23.400035 systemd-logind[1877]: Removed session 28. May 27 02:50:23.408398 systemd[1]: Started sshd@28-172.31.29.92:22-139.178.68.195:49566.service - OpenSSH per-connection server daemon (139.178.68.195:49566). 
May 27 02:50:23.415372 kubelet[3369]: I0527 02:50:23.415302 3369 status_manager.go:890] "Failed to get status for pod" podUID="6fd1ff4b-b629-4551-8f17-b18190687024" pod="kube-system/cilium-57787" err="pods \"cilium-57787\" is forbidden: User \"system:node:ip-172-31-29-92\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-92' and this object" May 27 02:50:23.415525 kubelet[3369]: W0527 02:50:23.415433 3369 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-29-92" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-92' and this object May 27 02:50:23.415525 kubelet[3369]: E0527 02:50:23.415477 3369 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-29-92\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-92' and this object" logger="UnhandledError" May 27 02:50:23.415670 kubelet[3369]: W0527 02:50:23.415563 3369 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-29-92" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-92' and this object May 27 02:50:23.415670 kubelet[3369]: E0527 02:50:23.415589 3369 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ip-172-31-29-92\" cannot list resource \"secrets\" in API group \"\" in the 
namespace \"kube-system\": no relationship found between node 'ip-172-31-29-92' and this object" logger="UnhandledError" May 27 02:50:23.415670 kubelet[3369]: W0527 02:50:23.415666 3369 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-29-92" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-92' and this object May 27 02:50:23.415817 kubelet[3369]: E0527 02:50:23.415692 3369 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-29-92\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-92' and this object" logger="UnhandledError" May 27 02:50:23.415817 kubelet[3369]: W0527 02:50:23.415768 3369 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-29-92" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-29-92' and this object May 27 02:50:23.415817 kubelet[3369]: E0527 02:50:23.415792 3369 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-29-92\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-29-92' and this object" logger="UnhandledError" May 27 02:50:23.432167 systemd[1]: Created slice kubepods-burstable-pod6fd1ff4b_b629_4551_8f17_b18190687024.slice - libcontainer container 
kubepods-burstable-pod6fd1ff4b_b629_4551_8f17_b18190687024.slice. May 27 02:50:23.451035 kubelet[3369]: I0527 02:50:23.450932 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6fd1ff4b-b629-4551-8f17-b18190687024-cilium-run\") pod \"cilium-57787\" (UID: \"6fd1ff4b-b629-4551-8f17-b18190687024\") " pod="kube-system/cilium-57787" May 27 02:50:23.451172 kubelet[3369]: I0527 02:50:23.451041 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6fd1ff4b-b629-4551-8f17-b18190687024-hostproc\") pod \"cilium-57787\" (UID: \"6fd1ff4b-b629-4551-8f17-b18190687024\") " pod="kube-system/cilium-57787" May 27 02:50:23.451172 kubelet[3369]: I0527 02:50:23.451086 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6fd1ff4b-b629-4551-8f17-b18190687024-hubble-tls\") pod \"cilium-57787\" (UID: \"6fd1ff4b-b629-4551-8f17-b18190687024\") " pod="kube-system/cilium-57787" May 27 02:50:23.451172 kubelet[3369]: I0527 02:50:23.451123 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fd1ff4b-b629-4551-8f17-b18190687024-lib-modules\") pod \"cilium-57787\" (UID: \"6fd1ff4b-b629-4551-8f17-b18190687024\") " pod="kube-system/cilium-57787" May 27 02:50:23.451172 kubelet[3369]: I0527 02:50:23.451162 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8grl\" (UniqueName: \"kubernetes.io/projected/6fd1ff4b-b629-4551-8f17-b18190687024-kube-api-access-h8grl\") pod \"cilium-57787\" (UID: \"6fd1ff4b-b629-4551-8f17-b18190687024\") " pod="kube-system/cilium-57787" May 27 02:50:23.451414 kubelet[3369]: I0527 02:50:23.451202 3369 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6fd1ff4b-b629-4551-8f17-b18190687024-cilium-cgroup\") pod \"cilium-57787\" (UID: \"6fd1ff4b-b629-4551-8f17-b18190687024\") " pod="kube-system/cilium-57787" May 27 02:50:23.451414 kubelet[3369]: I0527 02:50:23.451263 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6fd1ff4b-b629-4551-8f17-b18190687024-host-proc-sys-net\") pod \"cilium-57787\" (UID: \"6fd1ff4b-b629-4551-8f17-b18190687024\") " pod="kube-system/cilium-57787" May 27 02:50:23.451414 kubelet[3369]: I0527 02:50:23.451301 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6fd1ff4b-b629-4551-8f17-b18190687024-bpf-maps\") pod \"cilium-57787\" (UID: \"6fd1ff4b-b629-4551-8f17-b18190687024\") " pod="kube-system/cilium-57787" May 27 02:50:23.451414 kubelet[3369]: I0527 02:50:23.451337 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6fd1ff4b-b629-4551-8f17-b18190687024-cni-path\") pod \"cilium-57787\" (UID: \"6fd1ff4b-b629-4551-8f17-b18190687024\") " pod="kube-system/cilium-57787" May 27 02:50:23.451414 kubelet[3369]: I0527 02:50:23.451375 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6fd1ff4b-b629-4551-8f17-b18190687024-clustermesh-secrets\") pod \"cilium-57787\" (UID: \"6fd1ff4b-b629-4551-8f17-b18190687024\") " pod="kube-system/cilium-57787" May 27 02:50:23.451414 kubelet[3369]: I0527 02:50:23.451412 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/6fd1ff4b-b629-4551-8f17-b18190687024-cilium-ipsec-secrets\") pod \"cilium-57787\" (UID: \"6fd1ff4b-b629-4551-8f17-b18190687024\") " pod="kube-system/cilium-57787" May 27 02:50:23.452182 kubelet[3369]: I0527 02:50:23.451448 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6fd1ff4b-b629-4551-8f17-b18190687024-host-proc-sys-kernel\") pod \"cilium-57787\" (UID: \"6fd1ff4b-b629-4551-8f17-b18190687024\") " pod="kube-system/cilium-57787" May 27 02:50:23.452182 kubelet[3369]: I0527 02:50:23.451489 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6fd1ff4b-b629-4551-8f17-b18190687024-etc-cni-netd\") pod \"cilium-57787\" (UID: \"6fd1ff4b-b629-4551-8f17-b18190687024\") " pod="kube-system/cilium-57787" May 27 02:50:23.452182 kubelet[3369]: I0527 02:50:23.451526 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fd1ff4b-b629-4551-8f17-b18190687024-xtables-lock\") pod \"cilium-57787\" (UID: \"6fd1ff4b-b629-4551-8f17-b18190687024\") " pod="kube-system/cilium-57787" May 27 02:50:23.452182 kubelet[3369]: I0527 02:50:23.451566 3369 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fd1ff4b-b629-4551-8f17-b18190687024-cilium-config-path\") pod \"cilium-57787\" (UID: \"6fd1ff4b-b629-4551-8f17-b18190687024\") " pod="kube-system/cilium-57787" May 27 02:50:23.648981 sshd[5046]: Accepted publickey for core from 139.178.68.195 port 49566 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:50:23.652840 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:50:23.662050 
systemd-logind[1877]: New session 29 of user core. May 27 02:50:23.671226 systemd[1]: Started session-29.scope - Session 29 of User core. May 27 02:50:23.790798 sshd[5049]: Connection closed by 139.178.68.195 port 49566 May 27 02:50:23.791567 sshd-session[5046]: pam_unix(sshd:session): session closed for user core May 27 02:50:23.798921 systemd[1]: sshd@28-172.31.29.92:22-139.178.68.195:49566.service: Deactivated successfully. May 27 02:50:23.804368 systemd[1]: session-29.scope: Deactivated successfully. May 27 02:50:23.807802 systemd-logind[1877]: Session 29 logged out. Waiting for processes to exit. May 27 02:50:23.810842 systemd-logind[1877]: Removed session 29. May 27 02:50:23.830433 systemd[1]: Started sshd@29-172.31.29.92:22-139.178.68.195:33410.service - OpenSSH per-connection server daemon (139.178.68.195:33410). May 27 02:50:24.027125 sshd[5056]: Accepted publickey for core from 139.178.68.195 port 33410 ssh2: RSA SHA256:wB7DXbDl54cvXXypqfSM11xNUGSlmUqWSyH8J9Yllv0 May 27 02:50:24.029541 sshd-session[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:50:24.040037 systemd-logind[1877]: New session 30 of user core. May 27 02:50:24.045209 systemd[1]: Started session-30.scope - Session 30 of User core. May 27 02:50:24.553869 kubelet[3369]: E0527 02:50:24.553149 3369 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 27 02:50:24.553869 kubelet[3369]: E0527 02:50:24.553189 3369 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-57787: failed to sync secret cache: timed out waiting for the condition May 27 02:50:24.553869 kubelet[3369]: E0527 02:50:24.553292 3369 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6fd1ff4b-b629-4551-8f17-b18190687024-hubble-tls podName:6fd1ff4b-b629-4551-8f17-b18190687024 nodeName:}" failed. 
No retries permitted until 2025-05-27 02:50:25.053265668 +0000 UTC m=+153.966232297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/6fd1ff4b-b629-4551-8f17-b18190687024-hubble-tls") pod "cilium-57787" (UID: "6fd1ff4b-b629-4551-8f17-b18190687024") : failed to sync secret cache: timed out waiting for the condition May 27 02:50:24.553869 kubelet[3369]: E0527 02:50:24.553336 3369 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition May 27 02:50:24.553869 kubelet[3369]: E0527 02:50:24.553385 3369 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6fd1ff4b-b629-4551-8f17-b18190687024-cilium-ipsec-secrets podName:6fd1ff4b-b629-4551-8f17-b18190687024 nodeName:}" failed. No retries permitted until 2025-05-27 02:50:25.053369948 +0000 UTC m=+153.966336565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/6fd1ff4b-b629-4551-8f17-b18190687024-cilium-ipsec-secrets") pod "cilium-57787" (UID: "6fd1ff4b-b629-4551-8f17-b18190687024") : failed to sync secret cache: timed out waiting for the condition May 27 02:50:24.554815 kubelet[3369]: E0527 02:50:24.554769 3369 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition May 27 02:50:24.554981 kubelet[3369]: E0527 02:50:24.554885 3369 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6fd1ff4b-b629-4551-8f17-b18190687024-clustermesh-secrets podName:6fd1ff4b-b629-4551-8f17-b18190687024 nodeName:}" failed. No retries permitted until 2025-05-27 02:50:25.054853592 +0000 UTC m=+153.967820233 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/6fd1ff4b-b629-4551-8f17-b18190687024-clustermesh-secrets") pod "cilium-57787" (UID: "6fd1ff4b-b629-4551-8f17-b18190687024") : failed to sync secret cache: timed out waiting for the condition May 27 02:50:24.703523 kubelet[3369]: I0527 02:50:24.703435 3369 setters.go:602] "Node became not ready" node="ip-172-31-29-92" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-27T02:50:24Z","lastTransitionTime":"2025-05-27T02:50:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 27 02:50:25.243143 containerd[1931]: time="2025-05-27T02:50:25.243057256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-57787,Uid:6fd1ff4b-b629-4551-8f17-b18190687024,Namespace:kube-system,Attempt:0,}" May 27 02:50:25.283798 containerd[1931]: time="2025-05-27T02:50:25.283622332Z" level=info msg="connecting to shim b03840a08ab3414b5914f20993d864a01a04af9715bc1692d8e0be89ce1039f8" address="unix:///run/containerd/s/08462b4bd46d0e7cd8ec1c89c5e4523c643e6696ae13a15b6ece4bb6165bb466" namespace=k8s.io protocol=ttrpc version=3 May 27 02:50:25.330258 systemd[1]: Started cri-containerd-b03840a08ab3414b5914f20993d864a01a04af9715bc1692d8e0be89ce1039f8.scope - libcontainer container b03840a08ab3414b5914f20993d864a01a04af9715bc1692d8e0be89ce1039f8. 
May 27 02:50:25.379291 containerd[1931]: time="2025-05-27T02:50:25.379223668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-57787,Uid:6fd1ff4b-b629-4551-8f17-b18190687024,Namespace:kube-system,Attempt:0,} returns sandbox id \"b03840a08ab3414b5914f20993d864a01a04af9715bc1692d8e0be89ce1039f8\"" May 27 02:50:25.385749 containerd[1931]: time="2025-05-27T02:50:25.385666588Z" level=info msg="CreateContainer within sandbox \"b03840a08ab3414b5914f20993d864a01a04af9715bc1692d8e0be89ce1039f8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 02:50:25.399678 containerd[1931]: time="2025-05-27T02:50:25.399625552Z" level=info msg="Container 94c1117596d594bf4ff87ca53f1d651295a2859adc12174b3d21fb542981a1c3: CDI devices from CRI Config.CDIDevices: []" May 27 02:50:25.412938 containerd[1931]: time="2025-05-27T02:50:25.412803700Z" level=info msg="CreateContainer within sandbox \"b03840a08ab3414b5914f20993d864a01a04af9715bc1692d8e0be89ce1039f8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"94c1117596d594bf4ff87ca53f1d651295a2859adc12174b3d21fb542981a1c3\"" May 27 02:50:25.414194 containerd[1931]: time="2025-05-27T02:50:25.414093796Z" level=info msg="StartContainer for \"94c1117596d594bf4ff87ca53f1d651295a2859adc12174b3d21fb542981a1c3\"" May 27 02:50:25.416648 containerd[1931]: time="2025-05-27T02:50:25.416588176Z" level=info msg="connecting to shim 94c1117596d594bf4ff87ca53f1d651295a2859adc12174b3d21fb542981a1c3" address="unix:///run/containerd/s/08462b4bd46d0e7cd8ec1c89c5e4523c643e6696ae13a15b6ece4bb6165bb466" protocol=ttrpc version=3 May 27 02:50:25.452264 systemd[1]: Started cri-containerd-94c1117596d594bf4ff87ca53f1d651295a2859adc12174b3d21fb542981a1c3.scope - libcontainer container 94c1117596d594bf4ff87ca53f1d651295a2859adc12174b3d21fb542981a1c3. 
May 27 02:50:25.511749 containerd[1931]: time="2025-05-27T02:50:25.511536137Z" level=info msg="StartContainer for \"94c1117596d594bf4ff87ca53f1d651295a2859adc12174b3d21fb542981a1c3\" returns successfully" May 27 02:50:25.528733 systemd[1]: cri-containerd-94c1117596d594bf4ff87ca53f1d651295a2859adc12174b3d21fb542981a1c3.scope: Deactivated successfully. May 27 02:50:25.533971 containerd[1931]: time="2025-05-27T02:50:25.533899085Z" level=info msg="received exit event container_id:\"94c1117596d594bf4ff87ca53f1d651295a2859adc12174b3d21fb542981a1c3\" id:\"94c1117596d594bf4ff87ca53f1d651295a2859adc12174b3d21fb542981a1c3\" pid:5125 exited_at:{seconds:1748314225 nanos:533368505}" May 27 02:50:25.533971 containerd[1931]: time="2025-05-27T02:50:25.534238193Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94c1117596d594bf4ff87ca53f1d651295a2859adc12174b3d21fb542981a1c3\" id:\"94c1117596d594bf4ff87ca53f1d651295a2859adc12174b3d21fb542981a1c3\" pid:5125 exited_at:{seconds:1748314225 nanos:533368505}" May 27 02:50:26.076393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3316186720.mount: Deactivated successfully. 
May 27 02:50:26.095986 containerd[1931]: time="2025-05-27T02:50:26.095704060Z" level=info msg="CreateContainer within sandbox \"b03840a08ab3414b5914f20993d864a01a04af9715bc1692d8e0be89ce1039f8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 02:50:26.116031 containerd[1931]: time="2025-05-27T02:50:26.113963692Z" level=info msg="Container 513e8b7f5c20ba42db1a06d3d9bdefe4874ef414d1139280d576cd59312a8145: CDI devices from CRI Config.CDIDevices: []" May 27 02:50:26.133146 containerd[1931]: time="2025-05-27T02:50:26.133071532Z" level=info msg="CreateContainer within sandbox \"b03840a08ab3414b5914f20993d864a01a04af9715bc1692d8e0be89ce1039f8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"513e8b7f5c20ba42db1a06d3d9bdefe4874ef414d1139280d576cd59312a8145\"" May 27 02:50:26.134268 containerd[1931]: time="2025-05-27T02:50:26.134183620Z" level=info msg="StartContainer for \"513e8b7f5c20ba42db1a06d3d9bdefe4874ef414d1139280d576cd59312a8145\"" May 27 02:50:26.136844 containerd[1931]: time="2025-05-27T02:50:26.136731004Z" level=info msg="connecting to shim 513e8b7f5c20ba42db1a06d3d9bdefe4874ef414d1139280d576cd59312a8145" address="unix:///run/containerd/s/08462b4bd46d0e7cd8ec1c89c5e4523c643e6696ae13a15b6ece4bb6165bb466" protocol=ttrpc version=3 May 27 02:50:26.176350 systemd[1]: Started cri-containerd-513e8b7f5c20ba42db1a06d3d9bdefe4874ef414d1139280d576cd59312a8145.scope - libcontainer container 513e8b7f5c20ba42db1a06d3d9bdefe4874ef414d1139280d576cd59312a8145. May 27 02:50:26.239240 containerd[1931]: time="2025-05-27T02:50:26.239106197Z" level=info msg="StartContainer for \"513e8b7f5c20ba42db1a06d3d9bdefe4874ef414d1139280d576cd59312a8145\" returns successfully" May 27 02:50:26.278806 systemd[1]: cri-containerd-513e8b7f5c20ba42db1a06d3d9bdefe4874ef414d1139280d576cd59312a8145.scope: Deactivated successfully. 
May 27 02:50:26.282712 containerd[1931]: time="2025-05-27T02:50:26.282649061Z" level=info msg="received exit event container_id:\"513e8b7f5c20ba42db1a06d3d9bdefe4874ef414d1139280d576cd59312a8145\" id:\"513e8b7f5c20ba42db1a06d3d9bdefe4874ef414d1139280d576cd59312a8145\" pid:5170 exited_at:{seconds:1748314226 nanos:282182321}" May 27 02:50:26.286170 containerd[1931]: time="2025-05-27T02:50:26.285406757Z" level=info msg="TaskExit event in podsandbox handler container_id:\"513e8b7f5c20ba42db1a06d3d9bdefe4874ef414d1139280d576cd59312a8145\" id:\"513e8b7f5c20ba42db1a06d3d9bdefe4874ef414d1139280d576cd59312a8145\" pid:5170 exited_at:{seconds:1748314226 nanos:282182321}" May 27 02:50:26.363629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-513e8b7f5c20ba42db1a06d3d9bdefe4874ef414d1139280d576cd59312a8145-rootfs.mount: Deactivated successfully. May 27 02:50:26.678695 kubelet[3369]: E0527 02:50:26.678543 3369 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 02:50:27.104066 containerd[1931]: time="2025-05-27T02:50:27.103987397Z" level=info msg="CreateContainer within sandbox \"b03840a08ab3414b5914f20993d864a01a04af9715bc1692d8e0be89ce1039f8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 02:50:27.129176 containerd[1931]: time="2025-05-27T02:50:27.129101117Z" level=info msg="Container a48416fd71ca0c216b0ac655cbd8a16017c7ce22084648f1a8dcf4112d230422: CDI devices from CRI Config.CDIDevices: []" May 27 02:50:27.150384 containerd[1931]: time="2025-05-27T02:50:27.150303821Z" level=info msg="CreateContainer within sandbox \"b03840a08ab3414b5914f20993d864a01a04af9715bc1692d8e0be89ce1039f8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a48416fd71ca0c216b0ac655cbd8a16017c7ce22084648f1a8dcf4112d230422\"" May 27 02:50:27.151651 containerd[1931]: 
time="2025-05-27T02:50:27.151585325Z" level=info msg="StartContainer for \"a48416fd71ca0c216b0ac655cbd8a16017c7ce22084648f1a8dcf4112d230422\"" May 27 02:50:27.154831 containerd[1931]: time="2025-05-27T02:50:27.154708697Z" level=info msg="connecting to shim a48416fd71ca0c216b0ac655cbd8a16017c7ce22084648f1a8dcf4112d230422" address="unix:///run/containerd/s/08462b4bd46d0e7cd8ec1c89c5e4523c643e6696ae13a15b6ece4bb6165bb466" protocol=ttrpc version=3 May 27 02:50:27.194345 systemd[1]: Started cri-containerd-a48416fd71ca0c216b0ac655cbd8a16017c7ce22084648f1a8dcf4112d230422.scope - libcontainer container a48416fd71ca0c216b0ac655cbd8a16017c7ce22084648f1a8dcf4112d230422. May 27 02:50:27.287012 systemd[1]: cri-containerd-a48416fd71ca0c216b0ac655cbd8a16017c7ce22084648f1a8dcf4112d230422.scope: Deactivated successfully. May 27 02:50:27.291156 containerd[1931]: time="2025-05-27T02:50:27.291017802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a48416fd71ca0c216b0ac655cbd8a16017c7ce22084648f1a8dcf4112d230422\" id:\"a48416fd71ca0c216b0ac655cbd8a16017c7ce22084648f1a8dcf4112d230422\" pid:5214 exited_at:{seconds:1748314227 nanos:289288398}" May 27 02:50:27.292249 containerd[1931]: time="2025-05-27T02:50:27.291827298Z" level=info msg="received exit event container_id:\"a48416fd71ca0c216b0ac655cbd8a16017c7ce22084648f1a8dcf4112d230422\" id:\"a48416fd71ca0c216b0ac655cbd8a16017c7ce22084648f1a8dcf4112d230422\" pid:5214 exited_at:{seconds:1748314227 nanos:289288398}" May 27 02:50:27.292605 containerd[1931]: time="2025-05-27T02:50:27.292568742Z" level=info msg="StartContainer for \"a48416fd71ca0c216b0ac655cbd8a16017c7ce22084648f1a8dcf4112d230422\" returns successfully" May 27 02:50:27.335255 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a48416fd71ca0c216b0ac655cbd8a16017c7ce22084648f1a8dcf4112d230422-rootfs.mount: Deactivated successfully. 
May 27 02:50:28.110828 containerd[1931]: time="2025-05-27T02:50:28.110745978Z" level=info msg="CreateContainer within sandbox \"b03840a08ab3414b5914f20993d864a01a04af9715bc1692d8e0be89ce1039f8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 02:50:28.136227 containerd[1931]: time="2025-05-27T02:50:28.136157094Z" level=info msg="Container ee1bb5f01c1766fd469ae31385ea88046afe2fbb9cdc0abb296abd4232df86d0: CDI devices from CRI Config.CDIDevices: []" May 27 02:50:28.156971 containerd[1931]: time="2025-05-27T02:50:28.156626406Z" level=info msg="CreateContainer within sandbox \"b03840a08ab3414b5914f20993d864a01a04af9715bc1692d8e0be89ce1039f8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ee1bb5f01c1766fd469ae31385ea88046afe2fbb9cdc0abb296abd4232df86d0\"" May 27 02:50:28.160656 containerd[1931]: time="2025-05-27T02:50:28.160606014Z" level=info msg="StartContainer for \"ee1bb5f01c1766fd469ae31385ea88046afe2fbb9cdc0abb296abd4232df86d0\"" May 27 02:50:28.163637 containerd[1931]: time="2025-05-27T02:50:28.163573122Z" level=info msg="connecting to shim ee1bb5f01c1766fd469ae31385ea88046afe2fbb9cdc0abb296abd4232df86d0" address="unix:///run/containerd/s/08462b4bd46d0e7cd8ec1c89c5e4523c643e6696ae13a15b6ece4bb6165bb466" protocol=ttrpc version=3 May 27 02:50:28.201238 systemd[1]: Started cri-containerd-ee1bb5f01c1766fd469ae31385ea88046afe2fbb9cdc0abb296abd4232df86d0.scope - libcontainer container ee1bb5f01c1766fd469ae31385ea88046afe2fbb9cdc0abb296abd4232df86d0. May 27 02:50:28.257837 systemd[1]: cri-containerd-ee1bb5f01c1766fd469ae31385ea88046afe2fbb9cdc0abb296abd4232df86d0.scope: Deactivated successfully. 
May 27 02:50:28.259983 containerd[1931]: time="2025-05-27T02:50:28.259574071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee1bb5f01c1766fd469ae31385ea88046afe2fbb9cdc0abb296abd4232df86d0\" id:\"ee1bb5f01c1766fd469ae31385ea88046afe2fbb9cdc0abb296abd4232df86d0\" pid:5256 exited_at:{seconds:1748314228 nanos:258108343}" May 27 02:50:28.260606 containerd[1931]: time="2025-05-27T02:50:28.260559223Z" level=info msg="received exit event container_id:\"ee1bb5f01c1766fd469ae31385ea88046afe2fbb9cdc0abb296abd4232df86d0\" id:\"ee1bb5f01c1766fd469ae31385ea88046afe2fbb9cdc0abb296abd4232df86d0\" pid:5256 exited_at:{seconds:1748314228 nanos:258108343}" May 27 02:50:28.276341 containerd[1931]: time="2025-05-27T02:50:28.276293755Z" level=info msg="StartContainer for \"ee1bb5f01c1766fd469ae31385ea88046afe2fbb9cdc0abb296abd4232df86d0\" returns successfully" May 27 02:50:28.303838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee1bb5f01c1766fd469ae31385ea88046afe2fbb9cdc0abb296abd4232df86d0-rootfs.mount: Deactivated successfully. 
May 27 02:50:29.121988 containerd[1931]: time="2025-05-27T02:50:29.121022167Z" level=info msg="CreateContainer within sandbox \"b03840a08ab3414b5914f20993d864a01a04af9715bc1692d8e0be89ce1039f8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 02:50:29.144688 containerd[1931]: time="2025-05-27T02:50:29.144286567Z" level=info msg="Container 8a663ee12ff8d4e1f1010bc1b67b4a5275e4bae1a000ef9d8ea0d83f92489772: CDI devices from CRI Config.CDIDevices: []" May 27 02:50:29.164874 containerd[1931]: time="2025-05-27T02:50:29.164803327Z" level=info msg="CreateContainer within sandbox \"b03840a08ab3414b5914f20993d864a01a04af9715bc1692d8e0be89ce1039f8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8a663ee12ff8d4e1f1010bc1b67b4a5275e4bae1a000ef9d8ea0d83f92489772\"" May 27 02:50:29.166018 containerd[1931]: time="2025-05-27T02:50:29.165700111Z" level=info msg="StartContainer for \"8a663ee12ff8d4e1f1010bc1b67b4a5275e4bae1a000ef9d8ea0d83f92489772\"" May 27 02:50:29.169294 containerd[1931]: time="2025-05-27T02:50:29.169109071Z" level=info msg="connecting to shim 8a663ee12ff8d4e1f1010bc1b67b4a5275e4bae1a000ef9d8ea0d83f92489772" address="unix:///run/containerd/s/08462b4bd46d0e7cd8ec1c89c5e4523c643e6696ae13a15b6ece4bb6165bb466" protocol=ttrpc version=3 May 27 02:50:29.203647 systemd[1]: Started cri-containerd-8a663ee12ff8d4e1f1010bc1b67b4a5275e4bae1a000ef9d8ea0d83f92489772.scope - libcontainer container 8a663ee12ff8d4e1f1010bc1b67b4a5275e4bae1a000ef9d8ea0d83f92489772. 
May 27 02:50:29.275133 containerd[1931]: time="2025-05-27T02:50:29.275084300Z" level=info msg="StartContainer for \"8a663ee12ff8d4e1f1010bc1b67b4a5275e4bae1a000ef9d8ea0d83f92489772\" returns successfully" May 27 02:50:29.398603 containerd[1931]: time="2025-05-27T02:50:29.396583400Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a663ee12ff8d4e1f1010bc1b67b4a5275e4bae1a000ef9d8ea0d83f92489772\" id:\"4cb41e0bbc8e0115da199909834d6aae131519db3aacd7b265a93bb2f7543f2d\" pid:5322 exited_at:{seconds:1748314229 nanos:395821568}" May 27 02:50:30.052032 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 27 02:50:30.598318 containerd[1931]: time="2025-05-27T02:50:30.598234750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a663ee12ff8d4e1f1010bc1b67b4a5275e4bae1a000ef9d8ea0d83f92489772\" id:\"8bf41d9f641935d9c3fe52b2df3eb2212c28f3c45510ad361b4456efc656c480\" pid:5398 exit_status:1 exited_at:{seconds:1748314230 nanos:597535690}" May 27 02:50:32.902674 containerd[1931]: time="2025-05-27T02:50:32.902591750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a663ee12ff8d4e1f1010bc1b67b4a5275e4bae1a000ef9d8ea0d83f92489772\" id:\"5290e8e46dcc3cecfc413cb2b996017def417505d4a3b7d6eec1c2b1d915b964\" pid:5522 exit_status:1 exited_at:{seconds:1748314232 nanos:901250438}" May 27 02:50:34.169386 systemd-networkd[1818]: lxc_health: Link UP May 27 02:50:34.180690 (udev-worker)[5813]: Network interface NamePolicy= disabled on kernel command line. 
May 27 02:50:34.189322 systemd-networkd[1818]: lxc_health: Gained carrier
May 27 02:50:35.136235 containerd[1931]: time="2025-05-27T02:50:35.135716617Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a663ee12ff8d4e1f1010bc1b67b4a5275e4bae1a000ef9d8ea0d83f92489772\" id:\"97e87d78ad1b335848327a3ea915a005c72e68575993af32b9480016b88132c7\" pid:5845 exited_at:{seconds:1748314235 nanos:133384129}"
May 27 02:50:35.148928 kubelet[3369]: E0527 02:50:35.148859 3369 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40846->127.0.0.1:35569: write tcp 127.0.0.1:40846->127.0.0.1:35569: write: broken pipe
May 27 02:50:35.294166 kubelet[3369]: I0527 02:50:35.294021 3369 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-57787" podStartSLOduration=12.294000073 podStartE2EDuration="12.294000073s" podCreationTimestamp="2025-05-27 02:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:50:30.16662434 +0000 UTC m=+159.079590957" watchObservedRunningTime="2025-05-27 02:50:35.294000073 +0000 UTC m=+164.206966726"
May 27 02:50:35.629050 systemd-networkd[1818]: lxc_health: Gained IPv6LL
May 27 02:50:37.410634 containerd[1931]: time="2025-05-27T02:50:37.410556232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a663ee12ff8d4e1f1010bc1b67b4a5275e4bae1a000ef9d8ea0d83f92489772\" id:\"6aa9faa578675f74176c2d584ce90c044ca7ad9e49e2c26461c527a00e66b298\" pid:5879 exited_at:{seconds:1748314237 nanos:407988160}"
May 27 02:50:37.416910 kubelet[3369]: E0527 02:50:37.416852 3369 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50382->127.0.0.1:35569: write tcp 127.0.0.1:50382->127.0.0.1:35569: write: broken pipe
May 27 02:50:37.772707 ntpd[1872]: Listen normally on 12 lxc_health [fe80::204f:17ff:feeb:aa1%10]:123
May 27 02:50:37.773306 ntpd[1872]: 27 May 02:50:37 ntpd[1872]: Listen normally on 12 lxc_health [fe80::204f:17ff:feeb:aa1%10]:123
May 27 02:50:39.648449 containerd[1931]: time="2025-05-27T02:50:39.648385327Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a663ee12ff8d4e1f1010bc1b67b4a5275e4bae1a000ef9d8ea0d83f92489772\" id:\"cbd9226c144779ed468d39eb17da47d0a4e65e32703301831f3211db7b27939b\" pid:5904 exited_at:{seconds:1748314239 nanos:647972011}"
May 27 02:50:39.724126 sshd[5058]: Connection closed by 139.178.68.195 port 33410
May 27 02:50:39.725045 sshd-session[5056]: pam_unix(sshd:session): session closed for user core
May 27 02:50:39.734924 systemd[1]: sshd@29-172.31.29.92:22-139.178.68.195:33410.service: Deactivated successfully.
May 27 02:50:39.742639 systemd[1]: session-30.scope: Deactivated successfully.
May 27 02:50:39.746486 systemd-logind[1877]: Session 30 logged out. Waiting for processes to exit.
May 27 02:50:39.751634 systemd-logind[1877]: Removed session 30.
May 27 02:50:51.413872 containerd[1931]: time="2025-05-27T02:50:51.413533362Z" level=info msg="StopPodSandbox for \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\""
May 27 02:50:51.413872 containerd[1931]: time="2025-05-27T02:50:51.413741022Z" level=info msg="TearDown network for sandbox \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" successfully"
May 27 02:50:51.413872 containerd[1931]: time="2025-05-27T02:50:51.413765190Z" level=info msg="StopPodSandbox for \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" returns successfully"
May 27 02:50:51.414726 containerd[1931]: time="2025-05-27T02:50:51.414548382Z" level=info msg="RemovePodSandbox for \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\""
May 27 02:50:51.414726 containerd[1931]: time="2025-05-27T02:50:51.414620634Z" level=info msg="Forcibly stopping sandbox \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\""
May 27 02:50:51.414886 containerd[1931]: time="2025-05-27T02:50:51.414806766Z" level=info msg="TearDown network for sandbox \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" successfully"
May 27 02:50:51.417093 containerd[1931]: time="2025-05-27T02:50:51.417016290Z" level=info msg="Ensure that sandbox fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f in task-service has been cleanup successfully"
May 27 02:50:51.424687 containerd[1931]: time="2025-05-27T02:50:51.424614918Z" level=info msg="RemovePodSandbox \"fbf9255393c13ffadb50b2dccba4a61f0fc4b4d5a619433f929dea42fc9d129f\" returns successfully"
May 27 02:50:51.426003 containerd[1931]: time="2025-05-27T02:50:51.425761710Z" level=info msg="StopPodSandbox for \"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\""
May 27 02:50:51.426003 containerd[1931]: time="2025-05-27T02:50:51.425933394Z" level=info msg="TearDown network for sandbox \"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\" successfully"
May 27 02:50:51.426003 containerd[1931]: time="2025-05-27T02:50:51.425985162Z" level=info msg="StopPodSandbox for \"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\" returns successfully"
May 27 02:50:51.426987 containerd[1931]: time="2025-05-27T02:50:51.426422970Z" level=info msg="RemovePodSandbox for \"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\""
May 27 02:50:51.426987 containerd[1931]: time="2025-05-27T02:50:51.426470310Z" level=info msg="Forcibly stopping sandbox \"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\""
May 27 02:50:51.426987 containerd[1931]: time="2025-05-27T02:50:51.426591582Z" level=info msg="TearDown network for sandbox \"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\" successfully"
May 27 02:50:51.428635 containerd[1931]: time="2025-05-27T02:50:51.428591190Z" level=info msg="Ensure that sandbox 1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc in task-service has been cleanup successfully"
May 27 02:50:51.435633 containerd[1931]: time="2025-05-27T02:50:51.435589854Z" level=info msg="RemovePodSandbox \"1b140523a348e60d0abdb5c781ea11c3ad96fee4e028b3d08e5a0a10525f0ffc\" returns successfully"
May 27 02:50:53.132109 systemd[1]: cri-containerd-802bd8cfc2c8fb953e08b9ac7e5df076fba9647cc26fa0dac87789d5e57b9cbe.scope: Deactivated successfully.
May 27 02:50:53.132998 systemd[1]: cri-containerd-802bd8cfc2c8fb953e08b9ac7e5df076fba9647cc26fa0dac87789d5e57b9cbe.scope: Consumed 6.915s CPU time, 55.1M memory peak.
May 27 02:50:53.136159 containerd[1931]: time="2025-05-27T02:50:53.136082994Z" level=info msg="received exit event container_id:\"802bd8cfc2c8fb953e08b9ac7e5df076fba9647cc26fa0dac87789d5e57b9cbe\" id:\"802bd8cfc2c8fb953e08b9ac7e5df076fba9647cc26fa0dac87789d5e57b9cbe\" pid:3029 exit_status:1 exited_at:{seconds:1748314253 nanos:135480714}"
May 27 02:50:53.136681 containerd[1931]: time="2025-05-27T02:50:53.136619334Z" level=info msg="TaskExit event in podsandbox handler container_id:\"802bd8cfc2c8fb953e08b9ac7e5df076fba9647cc26fa0dac87789d5e57b9cbe\" id:\"802bd8cfc2c8fb953e08b9ac7e5df076fba9647cc26fa0dac87789d5e57b9cbe\" pid:3029 exit_status:1 exited_at:{seconds:1748314253 nanos:135480714}"
May 27 02:50:53.179868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-802bd8cfc2c8fb953e08b9ac7e5df076fba9647cc26fa0dac87789d5e57b9cbe-rootfs.mount: Deactivated successfully.
May 27 02:50:54.215466 kubelet[3369]: I0527 02:50:54.215407 3369 scope.go:117] "RemoveContainer" containerID="802bd8cfc2c8fb953e08b9ac7e5df076fba9647cc26fa0dac87789d5e57b9cbe"
May 27 02:50:54.219174 containerd[1931]: time="2025-05-27T02:50:54.219053060Z" level=info msg="CreateContainer within sandbox \"b28752aeed8e348a8fa388017c72454e745aacd5389e455890a5a87455c82b83\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 27 02:50:54.237192 containerd[1931]: time="2025-05-27T02:50:54.236237024Z" level=info msg="Container f05fe20b92b790f2969ee1a049cc0c0e83ec8a76981dad221429ca12a8897954: CDI devices from CRI Config.CDIDevices: []"
May 27 02:50:54.252128 containerd[1931]: time="2025-05-27T02:50:54.252070772Z" level=info msg="CreateContainer within sandbox \"b28752aeed8e348a8fa388017c72454e745aacd5389e455890a5a87455c82b83\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f05fe20b92b790f2969ee1a049cc0c0e83ec8a76981dad221429ca12a8897954\""
May 27 02:50:54.253069 containerd[1931]: time="2025-05-27T02:50:54.252971456Z" level=info msg="StartContainer for \"f05fe20b92b790f2969ee1a049cc0c0e83ec8a76981dad221429ca12a8897954\""
May 27 02:50:54.255353 containerd[1931]: time="2025-05-27T02:50:54.255284948Z" level=info msg="connecting to shim f05fe20b92b790f2969ee1a049cc0c0e83ec8a76981dad221429ca12a8897954" address="unix:///run/containerd/s/7b911af82feb6e4afac637e29e4f6e4e6f3a212a993f925c998a82314dfacdc7" protocol=ttrpc version=3
May 27 02:50:54.300756 systemd[1]: Started cri-containerd-f05fe20b92b790f2969ee1a049cc0c0e83ec8a76981dad221429ca12a8897954.scope - libcontainer container f05fe20b92b790f2969ee1a049cc0c0e83ec8a76981dad221429ca12a8897954.
May 27 02:50:54.391883 containerd[1931]: time="2025-05-27T02:50:54.391826204Z" level=info msg="StartContainer for \"f05fe20b92b790f2969ee1a049cc0c0e83ec8a76981dad221429ca12a8897954\" returns successfully"
May 27 02:50:55.065602 kubelet[3369]: E0527 02:50:55.064731 3369 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-92?timeout=10s\": context deadline exceeded"
May 27 02:50:59.315918 systemd[1]: cri-containerd-8f17f4f474436da3739246965ab9f1e60fe3fb9f32118251e0c7fea3f0c577d2.scope: Deactivated successfully.
May 27 02:50:59.318168 systemd[1]: cri-containerd-8f17f4f474436da3739246965ab9f1e60fe3fb9f32118251e0c7fea3f0c577d2.scope: Consumed 5.899s CPU time, 21.1M memory peak.
May 27 02:50:59.323005 containerd[1931]: time="2025-05-27T02:50:59.322697101Z" level=info msg="received exit event container_id:\"8f17f4f474436da3739246965ab9f1e60fe3fb9f32118251e0c7fea3f0c577d2\" id:\"8f17f4f474436da3739246965ab9f1e60fe3fb9f32118251e0c7fea3f0c577d2\" pid:3041 exit_status:1 exited_at:{seconds:1748314259 nanos:322316173}"
May 27 02:50:59.323548 containerd[1931]: time="2025-05-27T02:50:59.323356549Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f17f4f474436da3739246965ab9f1e60fe3fb9f32118251e0c7fea3f0c577d2\" id:\"8f17f4f474436da3739246965ab9f1e60fe3fb9f32118251e0c7fea3f0c577d2\" pid:3041 exit_status:1 exited_at:{seconds:1748314259 nanos:322316173}"
May 27 02:50:59.361729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f17f4f474436da3739246965ab9f1e60fe3fb9f32118251e0c7fea3f0c577d2-rootfs.mount: Deactivated successfully.
May 27 02:51:00.242615 kubelet[3369]: I0527 02:51:00.242578 3369 scope.go:117] "RemoveContainer" containerID="8f17f4f474436da3739246965ab9f1e60fe3fb9f32118251e0c7fea3f0c577d2"
May 27 02:51:00.246283 containerd[1931]: time="2025-05-27T02:51:00.246228037Z" level=info msg="CreateContainer within sandbox \"1c8908360615640fb6597535ee7485d79d43dbfaca8053c2ba1d197d08aa1bc1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 27 02:51:00.262623 containerd[1931]: time="2025-05-27T02:51:00.261323966Z" level=info msg="Container aef374fdc19aeaa66dbbefc1b22b0b2c29b012819ed78212f9130d3304b3520b: CDI devices from CRI Config.CDIDevices: []"
May 27 02:51:00.270126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1287471663.mount: Deactivated successfully.
May 27 02:51:00.280932 containerd[1931]: time="2025-05-27T02:51:00.280881014Z" level=info msg="CreateContainer within sandbox \"1c8908360615640fb6597535ee7485d79d43dbfaca8053c2ba1d197d08aa1bc1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"aef374fdc19aeaa66dbbefc1b22b0b2c29b012819ed78212f9130d3304b3520b\""
May 27 02:51:00.282648 containerd[1931]: time="2025-05-27T02:51:00.282285470Z" level=info msg="StartContainer for \"aef374fdc19aeaa66dbbefc1b22b0b2c29b012819ed78212f9130d3304b3520b\""
May 27 02:51:00.284318 containerd[1931]: time="2025-05-27T02:51:00.284252426Z" level=info msg="connecting to shim aef374fdc19aeaa66dbbefc1b22b0b2c29b012819ed78212f9130d3304b3520b" address="unix:///run/containerd/s/896ec9898fa4ed96b05a4514cac12513b2cb487b9cc8b88130603a587270a062" protocol=ttrpc version=3
May 27 02:51:00.319241 systemd[1]: Started cri-containerd-aef374fdc19aeaa66dbbefc1b22b0b2c29b012819ed78212f9130d3304b3520b.scope - libcontainer container aef374fdc19aeaa66dbbefc1b22b0b2c29b012819ed78212f9130d3304b3520b.
May 27 02:51:00.398870 containerd[1931]: time="2025-05-27T02:51:00.398786558Z" level=info msg="StartContainer for \"aef374fdc19aeaa66dbbefc1b22b0b2c29b012819ed78212f9130d3304b3520b\" returns successfully"
May 27 02:51:05.065854 kubelet[3369]: E0527 02:51:05.065766 3369 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-92?timeout=10s\": context deadline exceeded"