May 14 23:49:03.220594 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
May 14 23:49:03.220637 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed May 14 22:22:56 -00 2025
May 14 23:49:03.220662 kernel: KASLR disabled due to lack of seed
May 14 23:49:03.220678 kernel: efi: EFI v2.7 by EDK II
May 14 23:49:03.220694 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a733a98 MEMRESERVE=0x78557598
May 14 23:49:03.220709 kernel: secureboot: Secure boot disabled
May 14 23:49:03.220727 kernel: ACPI: Early table checksum verification disabled
May 14 23:49:03.220742 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
May 14 23:49:03.220758 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
May 14 23:49:03.220773 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 14 23:49:03.220793 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
May 14 23:49:03.220809 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 14 23:49:03.220825 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
May 14 23:49:03.220840 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
May 14 23:49:03.220859 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
May 14 23:49:03.220880 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 14 23:49:03.220897 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
May 14 23:49:03.220913 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
May 14 23:49:03.220930 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
May 14 23:49:03.220946 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
May 14 23:49:03.220963 kernel: printk: bootconsole [uart0] enabled
May 14 23:49:03.220979 kernel: NUMA: Failed to initialise from firmware
May 14 23:49:03.220995 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
May 14 23:49:03.221012 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
May 14 23:49:03.221028 kernel: Zone ranges:
May 14 23:49:03.221044 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 14 23:49:03.221064 kernel: DMA32 empty
May 14 23:49:03.221081 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
May 14 23:49:03.221097 kernel: Movable zone start for each node
May 14 23:49:03.221136 kernel: Early memory node ranges
May 14 23:49:03.221153 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
May 14 23:49:03.221170 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
May 14 23:49:03.221186 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
May 14 23:49:03.221203 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
May 14 23:49:03.221219 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
May 14 23:49:03.221235 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
May 14 23:49:03.221251 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
May 14 23:49:03.221267 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
May 14 23:49:03.221289 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
May 14 23:49:03.221306 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
May 14 23:49:03.221330 kernel: psci: probing for conduit method from ACPI.
May 14 23:49:03.221347 kernel: psci: PSCIv1.0 detected in firmware.
May 14 23:49:03.221364 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 23:49:03.221385 kernel: psci: Trusted OS migration not required
May 14 23:49:03.221402 kernel: psci: SMC Calling Convention v1.1
May 14 23:49:03.221419 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 14 23:49:03.221437 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 14 23:49:03.221459 kernel: pcpu-alloc: [0] 0 [0] 1
May 14 23:49:03.221479 kernel: Detected PIPT I-cache on CPU0
May 14 23:49:03.221523 kernel: CPU features: detected: GIC system register CPU interface
May 14 23:49:03.221544 kernel: CPU features: detected: Spectre-v2
May 14 23:49:03.221561 kernel: CPU features: detected: Spectre-v3a
May 14 23:49:03.221578 kernel: CPU features: detected: Spectre-BHB
May 14 23:49:03.221595 kernel: CPU features: detected: ARM erratum 1742098
May 14 23:49:03.221612 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
May 14 23:49:03.221635 kernel: alternatives: applying boot alternatives
May 14 23:49:03.221655 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9
May 14 23:49:03.221673 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 23:49:03.221691 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 23:49:03.221708 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 23:49:03.221725 kernel: Fallback order for Node 0: 0
May 14 23:49:03.221742 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
May 14 23:49:03.221760 kernel: Policy zone: Normal
May 14 23:49:03.221777 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 23:49:03.221794 kernel: software IO TLB: area num 2.
May 14 23:49:03.221816 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
May 14 23:49:03.221834 kernel: Memory: 3821176K/4030464K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 209288K reserved, 0K cma-reserved)
May 14 23:49:03.221852 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 14 23:49:03.221870 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 23:49:03.221888 kernel: rcu: RCU event tracing is enabled.
May 14 23:49:03.221906 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 14 23:49:03.221925 kernel: Trampoline variant of Tasks RCU enabled.
May 14 23:49:03.221944 kernel: Tracing variant of Tasks RCU enabled.
May 14 23:49:03.221962 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 23:49:03.221980 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 14 23:49:03.221998 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 23:49:03.222021 kernel: GICv3: 96 SPIs implemented
May 14 23:49:03.222038 kernel: GICv3: 0 Extended SPIs implemented
May 14 23:49:03.222055 kernel: Root IRQ handler: gic_handle_irq
May 14 23:49:03.222073 kernel: GICv3: GICv3 features: 16 PPIs
May 14 23:49:03.222090 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
May 14 23:49:03.222158 kernel: ITS [mem 0x10080000-0x1009ffff]
May 14 23:49:03.222179 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
May 14 23:49:03.222197 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
May 14 23:49:03.222215 kernel: GICv3: using LPI property table @0x00000004000d0000
May 14 23:49:03.222232 kernel: ITS: Using hypervisor restricted LPI range [128]
May 14 23:49:03.222250 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
May 14 23:49:03.222267 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 23:49:03.222319 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
May 14 23:49:03.222338 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
May 14 23:49:03.222356 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
May 14 23:49:03.222373 kernel: Console: colour dummy device 80x25
May 14 23:49:03.222391 kernel: printk: console [tty1] enabled
May 14 23:49:03.222408 kernel: ACPI: Core revision 20230628
May 14 23:49:03.222426 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
May 14 23:49:03.222444 kernel: pid_max: default: 32768 minimum: 301
May 14 23:49:03.222461 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 23:49:03.222479 kernel: landlock: Up and running.
May 14 23:49:03.222502 kernel: SELinux: Initializing.
May 14 23:49:03.222520 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:49:03.222538 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 23:49:03.222555 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 23:49:03.222573 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 23:49:03.222590 kernel: rcu: Hierarchical SRCU implementation.
May 14 23:49:03.222608 kernel: rcu: Max phase no-delay instances is 400.
May 14 23:49:03.222626 kernel: Platform MSI: ITS@0x10080000 domain created
May 14 23:49:03.222647 kernel: PCI/MSI: ITS@0x10080000 domain created
May 14 23:49:03.222665 kernel: Remapping and enabling EFI services.
May 14 23:49:03.222682 kernel: smp: Bringing up secondary CPUs ...
May 14 23:49:03.222699 kernel: Detected PIPT I-cache on CPU1
May 14 23:49:03.222717 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
May 14 23:49:03.222734 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
May 14 23:49:03.222752 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
May 14 23:49:03.222769 kernel: smp: Brought up 1 node, 2 CPUs
May 14 23:49:03.222786 kernel: SMP: Total of 2 processors activated.
May 14 23:49:03.222803 kernel: CPU features: detected: 32-bit EL0 Support
May 14 23:49:03.222825 kernel: CPU features: detected: 32-bit EL1 Support
May 14 23:49:03.222843 kernel: CPU features: detected: CRC32 instructions
May 14 23:49:03.222871 kernel: CPU: All CPU(s) started at EL1
May 14 23:49:03.222893 kernel: alternatives: applying system-wide alternatives
May 14 23:49:03.222911 kernel: devtmpfs: initialized
May 14 23:49:03.222929 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 23:49:03.222947 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 14 23:49:03.222965 kernel: pinctrl core: initialized pinctrl subsystem
May 14 23:49:03.222984 kernel: SMBIOS 3.0.0 present.
May 14 23:49:03.223006 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
May 14 23:49:03.223025 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 23:49:03.223043 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 23:49:03.223062 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 23:49:03.223080 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 23:49:03.223098 kernel: audit: initializing netlink subsys (disabled)
May 14 23:49:03.223166 kernel: audit: type=2000 audit(0.218:1): state=initialized audit_enabled=0 res=1
May 14 23:49:03.223191 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 23:49:03.223210 kernel: cpuidle: using governor menu
May 14 23:49:03.223228 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 23:49:03.223247 kernel: ASID allocator initialised with 65536 entries
May 14 23:49:03.223265 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 23:49:03.223283 kernel: Serial: AMBA PL011 UART driver
May 14 23:49:03.223301 kernel: Modules: 17744 pages in range for non-PLT usage
May 14 23:49:03.223319 kernel: Modules: 509264 pages in range for PLT usage
May 14 23:49:03.223338 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 23:49:03.223360 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 23:49:03.223379 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 23:49:03.223398 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 23:49:03.223416 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 23:49:03.223434 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 23:49:03.223452 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 23:49:03.223471 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 23:49:03.223489 kernel: ACPI: Added _OSI(Module Device)
May 14 23:49:03.223507 kernel: ACPI: Added _OSI(Processor Device)
May 14 23:49:03.223530 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 23:49:03.223548 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 23:49:03.223566 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 23:49:03.223603 kernel: ACPI: Interpreter enabled
May 14 23:49:03.223622 kernel: ACPI: Using GIC for interrupt routing
May 14 23:49:03.223640 kernel: ACPI: MCFG table detected, 1 entries
May 14 23:49:03.223659 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
May 14 23:49:03.224001 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 23:49:03.224247 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 23:49:03.224451 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 23:49:03.224647 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
May 14 23:49:03.224844 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
May 14 23:49:03.224869 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
May 14 23:49:03.224888 kernel: acpiphp: Slot [1] registered
May 14 23:49:03.224906 kernel: acpiphp: Slot [2] registered
May 14 23:49:03.224924 kernel: acpiphp: Slot [3] registered
May 14 23:49:03.224948 kernel: acpiphp: Slot [4] registered
May 14 23:49:03.224967 kernel: acpiphp: Slot [5] registered
May 14 23:49:03.224985 kernel: acpiphp: Slot [6] registered
May 14 23:49:03.225003 kernel: acpiphp: Slot [7] registered
May 14 23:49:03.225021 kernel: acpiphp: Slot [8] registered
May 14 23:49:03.225039 kernel: acpiphp: Slot [9] registered
May 14 23:49:03.225058 kernel: acpiphp: Slot [10] registered
May 14 23:49:03.225076 kernel: acpiphp: Slot [11] registered
May 14 23:49:03.225094 kernel: acpiphp: Slot [12] registered
May 14 23:49:03.225132 kernel: acpiphp: Slot [13] registered
May 14 23:49:03.225158 kernel: acpiphp: Slot [14] registered
May 14 23:49:03.225176 kernel: acpiphp: Slot [15] registered
May 14 23:49:03.225194 kernel: acpiphp: Slot [16] registered
May 14 23:49:03.225212 kernel: acpiphp: Slot [17] registered
May 14 23:49:03.225230 kernel: acpiphp: Slot [18] registered
May 14 23:49:03.225249 kernel: acpiphp: Slot [19] registered
May 14 23:49:03.225267 kernel: acpiphp: Slot [20] registered
May 14 23:49:03.225285 kernel: acpiphp: Slot [21] registered
May 14 23:49:03.225303 kernel: acpiphp: Slot [22] registered
May 14 23:49:03.225325 kernel: acpiphp: Slot [23] registered
May 14 23:49:03.225344 kernel: acpiphp: Slot [24] registered
May 14 23:49:03.225362 kernel: acpiphp: Slot [25] registered
May 14 23:49:03.225380 kernel: acpiphp: Slot [26] registered
May 14 23:49:03.225398 kernel: acpiphp: Slot [27] registered
May 14 23:49:03.225416 kernel: acpiphp: Slot [28] registered
May 14 23:49:03.225434 kernel: acpiphp: Slot [29] registered
May 14 23:49:03.225452 kernel: acpiphp: Slot [30] registered
May 14 23:49:03.225469 kernel: acpiphp: Slot [31] registered
May 14 23:49:03.225487 kernel: PCI host bridge to bus 0000:00
May 14 23:49:03.225693 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
May 14 23:49:03.225872 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 23:49:03.226051 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
May 14 23:49:03.226256 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
May 14 23:49:03.226493 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
May 14 23:49:03.226729 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
May 14 23:49:03.226945 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
May 14 23:49:03.227203 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 14 23:49:03.227423 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
May 14 23:49:03.227651 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
May 14 23:49:03.227877 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 14 23:49:03.228085 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
May 14 23:49:03.228318 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
May 14 23:49:03.228563 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
May 14 23:49:03.228778 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
May 14 23:49:03.228995 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
May 14 23:49:03.229240 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
May 14 23:49:03.229448 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
May 14 23:49:03.229667 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
May 14 23:49:03.229880 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
May 14 23:49:03.230079 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
May 14 23:49:03.230292 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 23:49:03.230478 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
May 14 23:49:03.230502 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 14 23:49:03.230522 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 14 23:49:03.230541 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 14 23:49:03.230559 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 14 23:49:03.230577 kernel: iommu: Default domain type: Translated
May 14 23:49:03.230603 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 23:49:03.230622 kernel: efivars: Registered efivars operations
May 14 23:49:03.230640 kernel: vgaarb: loaded
May 14 23:49:03.230658 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 23:49:03.230677 kernel: VFS: Disk quotas dquot_6.6.0
May 14 23:49:03.230695 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 23:49:03.230714 kernel: pnp: PnP ACPI init
May 14 23:49:03.230933 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
May 14 23:49:03.230964 kernel: pnp: PnP ACPI: found 1 devices
May 14 23:49:03.230983 kernel: NET: Registered PF_INET protocol family
May 14 23:49:03.231002 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 23:49:03.231021 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 23:49:03.231039 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 23:49:03.231058 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 23:49:03.231077 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 23:49:03.231095 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 23:49:03.231155 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:49:03.231181 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 23:49:03.231200 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 23:49:03.231218 kernel: PCI: CLS 0 bytes, default 64
May 14 23:49:03.231236 kernel: kvm [1]: HYP mode not available
May 14 23:49:03.231255 kernel: Initialise system trusted keyrings
May 14 23:49:03.231273 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 23:49:03.231292 kernel: Key type asymmetric registered
May 14 23:49:03.231310 kernel: Asymmetric key parser 'x509' registered
May 14 23:49:03.231328 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 14 23:49:03.231351 kernel: io scheduler mq-deadline registered
May 14 23:49:03.231369 kernel: io scheduler kyber registered
May 14 23:49:03.231387 kernel: io scheduler bfq registered
May 14 23:49:03.231668 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
May 14 23:49:03.231698 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 14 23:49:03.231717 kernel: ACPI: button: Power Button [PWRB]
May 14 23:49:03.231735 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
May 14 23:49:03.231754 kernel: ACPI: button: Sleep Button [SLPB]
May 14 23:49:03.231779 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 23:49:03.231798 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 14 23:49:03.232006 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
May 14 23:49:03.232031 kernel: printk: console [ttyS0] disabled
May 14 23:49:03.232050 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
May 14 23:49:03.232069 kernel: printk: console [ttyS0] enabled
May 14 23:49:03.232087 kernel: printk: bootconsole [uart0] disabled
May 14 23:49:03.232202 kernel: thunder_xcv, ver 1.0
May 14 23:49:03.232224 kernel: thunder_bgx, ver 1.0
May 14 23:49:03.232243 kernel: nicpf, ver 1.0
May 14 23:49:03.232268 kernel: nicvf, ver 1.0
May 14 23:49:03.232481 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 23:49:03.232669 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T23:49:02 UTC (1747266542)
May 14 23:49:03.232694 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 23:49:03.232713 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
May 14 23:49:03.232732 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 14 23:49:03.232750 kernel: watchdog: Hard watchdog permanently disabled
May 14 23:49:03.232773 kernel: NET: Registered PF_INET6 protocol family
May 14 23:49:03.232792 kernel: Segment Routing with IPv6
May 14 23:49:03.232811 kernel: In-situ OAM (IOAM) with IPv6
May 14 23:49:03.232829 kernel: NET: Registered PF_PACKET protocol family
May 14 23:49:03.232847 kernel: Key type dns_resolver registered
May 14 23:49:03.232865 kernel: registered taskstats version 1
May 14 23:49:03.232883 kernel: Loading compiled-in X.509 certificates
May 14 23:49:03.232901 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: cdb7ce3984a1665183e8a6ab3419833bc5e4e7f4'
May 14 23:49:03.232920 kernel: Key type .fscrypt registered
May 14 23:49:03.232937 kernel: Key type fscrypt-provisioning registered
May 14 23:49:03.232960 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 23:49:03.232978 kernel: ima: Allocated hash algorithm: sha1
May 14 23:49:03.232996 kernel: ima: No architecture policies found
May 14 23:49:03.233015 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 23:49:03.233033 kernel: clk: Disabling unused clocks
May 14 23:49:03.233051 kernel: Freeing unused kernel memory: 38336K
May 14 23:49:03.233070 kernel: Run /init as init process
May 14 23:49:03.233088 kernel: with arguments:
May 14 23:49:03.233124 kernel: /init
May 14 23:49:03.233151 kernel: with environment:
May 14 23:49:03.233169 kernel: HOME=/
May 14 23:49:03.233187 kernel: TERM=linux
May 14 23:49:03.233205 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 23:49:03.233225 systemd[1]: Successfully made /usr/ read-only.
May 14 23:49:03.233250 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 23:49:03.233271 systemd[1]: Detected virtualization amazon.
May 14 23:49:03.233295 systemd[1]: Detected architecture arm64.
May 14 23:49:03.233315 systemd[1]: Running in initrd.
May 14 23:49:03.233334 systemd[1]: No hostname configured, using default hostname.
May 14 23:49:03.233355 systemd[1]: Hostname set to .
May 14 23:49:03.233374 systemd[1]: Initializing machine ID from VM UUID.
May 14 23:49:03.233394 systemd[1]: Queued start job for default target initrd.target.
May 14 23:49:03.233414 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:49:03.233434 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:49:03.233455 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 23:49:03.233479 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:49:03.233500 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 23:49:03.233521 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 23:49:03.233543 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 23:49:03.233563 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 23:49:03.233583 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:49:03.233608 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:49:03.233629 systemd[1]: Reached target paths.target - Path Units.
May 14 23:49:03.233649 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:49:03.233668 systemd[1]: Reached target swap.target - Swaps.
May 14 23:49:03.233688 systemd[1]: Reached target timers.target - Timer Units.
May 14 23:49:03.233708 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 23:49:03.233728 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 23:49:03.233748 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 23:49:03.233768 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 23:49:03.233792 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:49:03.233812 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 23:49:03.233832 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:49:03.233852 systemd[1]: Reached target sockets.target - Socket Units.
May 14 23:49:03.233872 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 23:49:03.233892 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 23:49:03.233912 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 23:49:03.233932 systemd[1]: Starting systemd-fsck-usr.service...
May 14 23:49:03.233955 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:49:03.233975 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:49:03.233995 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:03.234015 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 23:49:03.234035 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:49:03.234056 systemd[1]: Finished systemd-fsck-usr.service.
May 14 23:49:03.234080 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 23:49:03.234098 kernel: Bridge firewalling registered
May 14 23:49:03.234139 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 23:49:03.234160 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 23:49:03.234217 systemd-journald[252]: Collecting audit messages is disabled.
May 14 23:49:03.234266 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:03.234287 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 23:49:03.234307 systemd-journald[252]: Journal started
May 14 23:49:03.234343 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2692a9a3982acd96167407dfef6b5d) is 8M, max 75.3M, 67.3M free.
May 14 23:49:03.167248 systemd-modules-load[253]: Inserted module 'overlay'
May 14 23:49:03.199768 systemd-modules-load[253]: Inserted module 'br_netfilter'
May 14 23:49:03.245820 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:49:03.255632 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:49:03.262712 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:49:03.276128 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:49:03.298381 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:49:03.317210 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:49:03.332462 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:49:03.338209 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:49:03.349377 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 23:49:03.378878 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:49:03.386994 dracut-cmdline[288]: dracut-dracut-053
May 14 23:49:03.394696 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bfa141d6f8686d8fe96245516ecbaee60c938beef41636c397e3939a2c9a6ed9
May 14 23:49:03.410421 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:49:03.486763 systemd-resolved[300]: Positive Trust Anchors:
May 14 23:49:03.486799 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 23:49:03.486860 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 23:49:03.553142 kernel: SCSI subsystem initialized
May 14 23:49:03.561140 kernel: Loading iSCSI transport class v2.0-870.
May 14 23:49:03.573463 kernel: iscsi: registered transport (tcp)
May 14 23:49:03.595138 kernel: iscsi: registered transport (qla4xxx)
May 14 23:49:03.595212 kernel: QLogic iSCSI HBA Driver
May 14 23:49:03.684141 kernel: random: crng init done
May 14 23:49:03.684416 systemd-resolved[300]: Defaulting to hostname 'linux'.
May 14 23:49:03.690828 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 23:49:03.696597 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 23:49:03.714556 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 23:49:03.731776 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 23:49:03.762831 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 23:49:03.762914 kernel: device-mapper: uevent: version 1.0.3
May 14 23:49:03.762941 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 14 23:49:03.829162 kernel: raid6: neonx8 gen() 6562 MB/s
May 14 23:49:03.846135 kernel: raid6: neonx4 gen() 6470 MB/s
May 14 23:49:03.863134 kernel: raid6: neonx2 gen() 5370 MB/s
May 14 23:49:03.880134 kernel: raid6: neonx1 gen() 3931 MB/s
May 14 23:49:03.897134 kernel: raid6: int64x8 gen() 3600 MB/s
May 14 23:49:03.914134 kernel: raid6: int64x4 gen() 3687 MB/s
May 14 23:49:03.931134 kernel: raid6: int64x2 gen() 3562 MB/s
May 14 23:49:03.948962 kernel: raid6: int64x1 gen() 2761 MB/s
May 14 23:49:03.948994 kernel: raid6: using algorithm neonx8 gen() 6562 MB/s
May 14 23:49:03.966932 kernel: raid6: .... xor() 4811 MB/s, rmw enabled
May 14 23:49:03.966969 kernel: raid6: using neon recovery algorithm
May 14 23:49:03.974929 kernel: xor: measuring software checksum speed
May 14 23:49:03.974988 kernel: 8regs : 12926 MB/sec
May 14 23:49:03.976137 kernel: 32regs : 12057 MB/sec
May 14 23:49:03.978203 kernel: arm64_neon : 8916 MB/sec
May 14 23:49:03.978237 kernel: xor: using function: 8regs (12926 MB/sec)
May 14 23:49:04.060150 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 23:49:04.079119 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 23:49:04.091480 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:49:04.126769 systemd-udevd[472]: Using default interface naming scheme 'v255'.
May 14 23:49:04.136359 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:49:04.150861 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 23:49:04.182146 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
May 14 23:49:04.239167 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 23:49:04.252546 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 23:49:04.366167 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:49:04.377992 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 23:49:04.418353 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 23:49:04.424493 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 23:49:04.438466 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:49:04.443365 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 23:49:04.458428 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 23:49:04.505617 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 23:49:04.559169 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 14 23:49:04.564687 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
May 14 23:49:04.579808 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 14 23:49:04.580263 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 14 23:49:04.589135 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:6f:64:cd:88:6f
May 14 23:49:04.589753 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 23:49:04.589994 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:49:04.593998 (udev-worker)[525]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:49:04.595834 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:49:04.607258 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:49:04.637237 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
May 14 23:49:04.637288 kernel: nvme nvme0: pci function 0000:00:04.0
May 14 23:49:04.612670 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:04.622349 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:04.649635 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 14 23:49:04.650939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:04.655098 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 23:49:04.666466 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 23:49:04.666504 kernel: GPT:9289727 != 16777215
May 14 23:49:04.666529 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 23:49:04.666553 kernel: GPT:9289727 != 16777215
May 14 23:49:04.666588 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 23:49:04.666613 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 14 23:49:04.682750 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:04.695487 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 23:49:04.732512 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:49:04.776154 kernel: BTRFS: device fsid 369506fd-904a-45c2-a4ab-2d03e7866799 devid 1 transid 44 /dev/nvme0n1p3 scanned by (udev-worker) (533)
May 14 23:49:04.804158 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (532)
May 14 23:49:04.814932 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
May 14 23:49:04.917990 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
May 14 23:49:04.958529 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 14 23:49:04.976535 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
May 14 23:49:04.976713 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
May 14 23:49:05.004469 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 23:49:05.022721 disk-uuid[666]: Primary Header is updated.
May 14 23:49:05.022721 disk-uuid[666]: Secondary Entries is updated.
May 14 23:49:05.022721 disk-uuid[666]: Secondary Header is updated.
May 14 23:49:05.035172 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 14 23:49:06.057147 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 14 23:49:06.059885 disk-uuid[667]: The operation has completed successfully.
May 14 23:49:06.255966 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 23:49:06.256214 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 23:49:06.343400 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 23:49:06.367642 sh[928]: Success
May 14 23:49:06.392185 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 14 23:49:06.529262 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 23:49:06.546330 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 23:49:06.558332 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 23:49:06.588123 kernel: BTRFS info (device dm-0): first mount of filesystem 369506fd-904a-45c2-a4ab-2d03e7866799
May 14 23:49:06.588186 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:06.588213 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 14 23:49:06.591120 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 14 23:49:06.591155 kernel: BTRFS info (device dm-0): using free space tree
May 14 23:49:06.618143 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 14 23:49:06.634468 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 23:49:06.639833 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 23:49:06.653345 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 23:49:06.664405 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 23:49:06.712943 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:06.713014 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:06.714546 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 14 23:49:06.722131 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 14 23:49:06.730180 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:06.733778 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 23:49:06.746938 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 23:49:06.899737 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 23:49:06.914681 ignition[1025]: Ignition 2.20.0
May 14 23:49:06.914708 ignition[1025]: Stage: fetch-offline
May 14 23:49:06.915361 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:49:06.925893 ignition[1025]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:06.925932 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 14 23:49:06.936920 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 23:49:06.926403 ignition[1025]: Ignition finished successfully
May 14 23:49:06.985395 systemd-networkd[1130]: lo: Link UP
May 14 23:49:06.985409 systemd-networkd[1130]: lo: Gained carrier
May 14 23:49:06.988629 systemd-networkd[1130]: Enumeration completed
May 14 23:49:06.988818 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:49:06.989776 systemd-networkd[1130]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:06.989784 systemd-networkd[1130]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:49:07.001046 systemd-networkd[1130]: eth0: Link UP
May 14 23:49:07.001058 systemd-networkd[1130]: eth0: Gained carrier
May 14 23:49:07.001075 systemd-networkd[1130]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:07.016566 systemd[1]: Reached target network.target - Network.
May 14 23:49:07.035244 systemd-networkd[1130]: eth0: DHCPv4 address 172.31.17.61/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 14 23:49:07.038775 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 14 23:49:07.068517 ignition[1135]: Ignition 2.20.0
May 14 23:49:07.069016 ignition[1135]: Stage: fetch
May 14 23:49:07.069641 ignition[1135]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:07.069666 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 14 23:49:07.069872 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 14 23:49:07.105551 ignition[1135]: PUT result: OK
May 14 23:49:07.110891 ignition[1135]: parsed url from cmdline: ""
May 14 23:49:07.110957 ignition[1135]: no config URL provided
May 14 23:49:07.110975 ignition[1135]: reading system config file "/usr/lib/ignition/user.ign"
May 14 23:49:07.111002 ignition[1135]: no config at "/usr/lib/ignition/user.ign"
May 14 23:49:07.111035 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 14 23:49:07.113323 ignition[1135]: PUT result: OK
May 14 23:49:07.113406 ignition[1135]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 14 23:49:07.124858 ignition[1135]: GET result: OK
May 14 23:49:07.125020 ignition[1135]: parsing config with SHA512: b47c706afcfed11308a20f854c42491a08d60b8d9974630241be40fa6846503ba7e0b9508ae92c06dbcd95e0dc747650c25bb6f10db180fe855fbd34dcf45463
May 14 23:49:07.134645 unknown[1135]: fetched base config from "system"
May 14 23:49:07.135430 ignition[1135]: fetch: fetch complete
May 14 23:49:07.134662 unknown[1135]: fetched base config from "system"
May 14 23:49:07.135442 ignition[1135]: fetch: fetch passed
May 14 23:49:07.134676 unknown[1135]: fetched user config from "aws"
May 14 23:49:07.135524 ignition[1135]: Ignition finished successfully
May 14 23:49:07.140430 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 14 23:49:07.160393 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 23:49:07.190254 ignition[1143]: Ignition 2.20.0
May 14 23:49:07.191095 ignition[1143]: Stage: kargs
May 14 23:49:07.191767 ignition[1143]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:07.191793 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 14 23:49:07.191948 ignition[1143]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 14 23:49:07.194786 ignition[1143]: PUT result: OK
May 14 23:49:07.202533 ignition[1143]: kargs: kargs passed
May 14 23:49:07.202634 ignition[1143]: Ignition finished successfully
May 14 23:49:07.212420 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 23:49:07.226861 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 23:49:07.249792 ignition[1150]: Ignition 2.20.0
May 14 23:49:07.249825 ignition[1150]: Stage: disks
May 14 23:49:07.250764 ignition[1150]: no configs at "/usr/lib/ignition/base.d"
May 14 23:49:07.250790 ignition[1150]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 14 23:49:07.250958 ignition[1150]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 14 23:49:07.255000 ignition[1150]: PUT result: OK
May 14 23:49:07.265888 ignition[1150]: disks: disks passed
May 14 23:49:07.265983 ignition[1150]: Ignition finished successfully
May 14 23:49:07.269349 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 23:49:07.274413 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 23:49:07.279666 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 23:49:07.282622 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:49:07.285062 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 23:49:07.287647 systemd[1]: Reached target basic.target - Basic System.
May 14 23:49:07.315458 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 23:49:07.374344 systemd-fsck[1158]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 14 23:49:07.384895 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 23:49:07.401304 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 23:49:07.490297 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 737cda88-7069-47ce-b2bc-d891099a68fb r/w with ordered data mode. Quota mode: none.
May 14 23:49:07.491599 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 23:49:07.498828 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 23:49:07.519301 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:49:07.529526 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 23:49:07.540745 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 23:49:07.540843 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 23:49:07.540898 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:49:07.575159 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1177)
May 14 23:49:07.579392 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:07.579469 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:07.581570 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 14 23:49:07.586721 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 23:49:07.597901 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 23:49:07.604297 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 14 23:49:07.607632 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:49:07.721870 initrd-setup-root[1201]: cut: /sysroot/etc/passwd: No such file or directory
May 14 23:49:07.734015 initrd-setup-root[1208]: cut: /sysroot/etc/group: No such file or directory
May 14 23:49:07.746473 initrd-setup-root[1215]: cut: /sysroot/etc/shadow: No such file or directory
May 14 23:49:07.757475 initrd-setup-root[1222]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 23:49:07.949506 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 23:49:07.966441 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 23:49:07.975411 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 23:49:07.998298 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 23:49:08.002131 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:08.029282 systemd-networkd[1130]: eth0: Gained IPv6LL
May 14 23:49:08.050383 ignition[1290]: INFO : Ignition 2.20.0
May 14 23:49:08.053386 ignition[1290]: INFO : Stage: mount
May 14 23:49:08.053386 ignition[1290]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:49:08.053386 ignition[1290]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 14 23:49:08.053386 ignition[1290]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 14 23:49:08.066168 ignition[1290]: INFO : PUT result: OK
May 14 23:49:08.072144 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 23:49:08.080820 ignition[1290]: INFO : mount: mount passed
May 14 23:49:08.082971 ignition[1290]: INFO : Ignition finished successfully
May 14 23:49:08.087255 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 23:49:08.106979 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 23:49:08.125540 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 23:49:08.153137 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1302)
May 14 23:49:08.156873 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 02f9d4a0-2ee9-4834-b15d-b55399b9ff01
May 14 23:49:08.156916 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 14 23:49:08.156942 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 14 23:49:08.164148 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 14 23:49:08.168864 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 23:49:08.201634 ignition[1320]: INFO : Ignition 2.20.0
May 14 23:49:08.201634 ignition[1320]: INFO : Stage: files
May 14 23:49:08.206836 ignition[1320]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:49:08.206836 ignition[1320]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 14 23:49:08.206836 ignition[1320]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 14 23:49:08.217169 ignition[1320]: INFO : PUT result: OK
May 14 23:49:08.227507 ignition[1320]: DEBUG : files: compiled without relabeling support, skipping
May 14 23:49:08.231645 ignition[1320]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 23:49:08.231645 ignition[1320]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 23:49:08.240821 ignition[1320]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 23:49:08.244206 ignition[1320]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 23:49:08.244206 ignition[1320]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 23:49:08.243337 unknown[1320]: wrote ssh authorized keys file for user: core
May 14 23:49:08.254623 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 23:49:08.254623 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 14 23:49:08.336911 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 23:49:08.470870 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 23:49:08.470870 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 23:49:08.481533 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 14 23:49:09.174256 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 14 23:49:09.434764 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 23:49:09.434764 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 14 23:49:09.443753 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 14 23:49:09.443753 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:49:09.443753 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 23:49:09.443753 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:49:09.443753 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 23:49:09.464546 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:49:09.468864 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 23:49:09.473397 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:49:09.478057 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 23:49:09.483018 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 23:49:09.483018 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 23:49:09.483018 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 23:49:09.483018 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 14 23:49:09.861437 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 14 23:49:10.218433 ignition[1320]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 23:49:10.218433 ignition[1320]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 14 23:49:10.229207 ignition[1320]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:49:10.229207 ignition[1320]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 23:49:10.229207 ignition[1320]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 14 23:49:10.229207 ignition[1320]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 14 23:49:10.229207 ignition[1320]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 14 23:49:10.229207 ignition[1320]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:49:10.229207 ignition[1320]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 23:49:10.229207 ignition[1320]: INFO : files: files passed
May 14 23:49:10.229207 ignition[1320]: INFO : Ignition finished successfully
May 14 23:49:10.247842 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 23:49:10.271529 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 23:49:10.282250 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 23:49:10.293703 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 23:49:10.296743 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 23:49:10.318495 initrd-setup-root-after-ignition[1348]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:49:10.318495 initrd-setup-root-after-ignition[1348]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:49:10.328791 initrd-setup-root-after-ignition[1352]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 23:49:10.336203 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:49:10.342542 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 23:49:10.361457 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 23:49:10.412553 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 23:49:10.412777 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 23:49:10.417328 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 23:49:10.420187 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 23:49:10.422500 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 23:49:10.444461 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 23:49:10.472092 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:49:10.483506 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 23:49:10.515908 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 23:49:10.516170 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 23:49:10.525858 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 23:49:10.528297 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:49:10.530777 systemd[1]: Stopped target timers.target - Timer Units.
May 14 23:49:10.532732 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 23:49:10.532841 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 23:49:10.535581 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 23:49:10.542038 systemd[1]: Stopped target basic.target - Basic System.
May 14 23:49:10.546777 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 23:49:10.549597 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 23:49:10.553547 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 23:49:10.556703 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 23:49:10.560688 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 23:49:10.565772 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 23:49:10.571665 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 23:49:10.574366 systemd[1]: Stopped target swap.target - Swaps.
May 14 23:49:10.576759 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 23:49:10.576892 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 23:49:10.610028 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 23:49:10.612882 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:49:10.615936 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 23:49:10.623807 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:49:10.627021 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 23:49:10.627187 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 23:49:10.630378 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 23:49:10.630509 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 23:49:10.641731 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 23:49:10.641852 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 23:49:10.661265 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 23:49:10.664740 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 23:49:10.664886 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:49:10.684630 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 23:49:10.690352 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 23:49:10.690509 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:49:10.695072 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 23:49:10.695238 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 23:49:10.714649 ignition[1373]: INFO : Ignition 2.20.0
May 14 23:49:10.720063 ignition[1373]: INFO : Stage: umount
May 14 23:49:10.720063 ignition[1373]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 23:49:10.720063 ignition[1373]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 14 23:49:10.720063 ignition[1373]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 14 23:49:10.720063 ignition[1373]: INFO : PUT result: OK
May 14 23:49:10.748379 ignition[1373]: INFO : umount: umount passed
May 14 23:49:10.748379 ignition[1373]: INFO : Ignition finished successfully
May 14 23:49:10.751453 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 23:49:10.751723 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 23:49:10.765459 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 23:49:10.766981 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 23:49:10.767533 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 23:49:10.777891 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 23:49:10.778021 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 23:49:10.780747 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 14 23:49:10.780856 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 14 23:49:10.783477 systemd[1]: Stopped target network.target - Network.
May 14 23:49:10.785793 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 23:49:10.785935 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 23:49:10.789250 systemd[1]: Stopped target paths.target - Path Units.
May 14 23:49:10.791492 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 23:49:10.814217 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:49:10.816943 systemd[1]: Stopped target slices.target - Slice Units.
May 14 23:49:10.819379 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 23:49:10.826687 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 23:49:10.826780 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 23:49:10.829991 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 23:49:10.831056 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 23:49:10.846661 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 23:49:10.856093 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 23:49:10.859717 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 23:49:10.859835 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 23:49:10.863597 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 23:49:10.880291 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 23:49:10.889313 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 23:49:10.891230 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 23:49:10.917084 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 23:49:10.917937 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 23:49:10.918269 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 23:49:10.934300 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 23:49:10.934901 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 23:49:10.937935 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 23:49:10.945417 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 23:49:10.945593 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:49:10.952857 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 23:49:10.953007 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 23:49:10.972380 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 23:49:10.974805 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 23:49:10.974962 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 23:49:10.979510 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 23:49:10.979655 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 23:49:10.993902 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 23:49:10.994026 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 23:49:11.006829 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 23:49:11.006956 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:49:11.020903 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:49:11.026782 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 23:49:11.026917 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 23:49:11.050858 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 23:49:11.052059 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 23:49:11.062319 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 23:49:11.062804 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:49:11.069866 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 23:49:11.069955 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 23:49:11.075472 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 23:49:11.075559 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:49:11.087735 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 23:49:11.087836 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 23:49:11.090566 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 23:49:11.090650 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 23:49:11.093227 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 23:49:11.093307 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 23:49:11.113437 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 23:49:11.116522 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 23:49:11.116641 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:49:11.128602 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 23:49:11.128705 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:11.145072 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 14 23:49:11.145218 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 14 23:49:11.145826 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 23:49:11.146318 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 23:49:11.161882 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 23:49:11.182474 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 23:49:11.196494 systemd[1]: Switching root.
May 14 23:49:11.233833 systemd-journald[252]: Journal stopped
May 14 23:49:13.394167 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
May 14 23:49:13.394295 kernel: SELinux: policy capability network_peer_controls=1
May 14 23:49:13.394338 kernel: SELinux: policy capability open_perms=1
May 14 23:49:13.394374 kernel: SELinux: policy capability extended_socket_class=1
May 14 23:49:13.394404 kernel: SELinux: policy capability always_check_network=0
May 14 23:49:13.394433 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 23:49:13.394462 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 23:49:13.394492 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 23:49:13.394531 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 23:49:13.394560 kernel: audit: type=1403 audit(1747266551.576:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 23:49:13.394598 systemd[1]: Successfully loaded SELinux policy in 50.958ms.
May 14 23:49:13.394650 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 30.200ms.
May 14 23:49:13.394685 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 23:49:13.394717 systemd[1]: Detected virtualization amazon.
May 14 23:49:13.394748 systemd[1]: Detected architecture arm64.
May 14 23:49:13.394780 systemd[1]: Detected first boot.
May 14 23:49:13.394810 systemd[1]: Initializing machine ID from VM UUID.
May 14 23:49:13.394839 zram_generator::config[1417]: No configuration found.
May 14 23:49:13.394872 kernel: NET: Registered PF_VSOCK protocol family
May 14 23:49:13.394902 systemd[1]: Populated /etc with preset unit settings.
May 14 23:49:13.394938 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 23:49:13.394971 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 23:49:13.395002 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 23:49:13.395033 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 23:49:13.395063 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 23:49:13.395095 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 23:49:13.395262 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 23:49:13.395296 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 23:49:13.395329 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 23:49:13.395369 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 23:49:13.395400 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 23:49:13.395429 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 23:49:13.395461 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 23:49:13.395491 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 23:49:13.395520 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 23:49:13.395568 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 23:49:13.395601 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 23:49:13.395649 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 23:49:13.395681 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 14 23:49:13.395713 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 23:49:13.395744 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 23:49:13.395776 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 23:49:13.395807 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 23:49:13.395837 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 23:49:13.395865 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 23:49:13.395899 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 23:49:13.395931 systemd[1]: Reached target slices.target - Slice Units.
May 14 23:49:13.395961 systemd[1]: Reached target swap.target - Swaps.
May 14 23:49:13.395991 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 23:49:13.396019 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 23:49:13.396051 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 23:49:13.396082 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 23:49:13.396132 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 23:49:13.396166 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 23:49:13.396196 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 23:49:13.396231 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 23:49:13.396260 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 23:49:13.396288 systemd[1]: Mounting media.mount - External Media Directory...
May 14 23:49:13.396317 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 23:49:13.396347 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 23:49:13.396376 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 23:49:13.396405 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 23:49:13.396440 systemd[1]: Reached target machines.target - Containers.
May 14 23:49:13.396474 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 23:49:13.396504 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:49:13.396534 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 23:49:13.396563 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 23:49:13.396592 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:49:13.396620 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 23:49:13.396650 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:49:13.396680 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 23:49:13.396708 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:49:13.396751 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 23:49:13.396780 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 23:49:13.396809 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 23:49:13.396838 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 23:49:13.396867 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 23:49:13.396897 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:49:13.396924 kernel: fuse: init (API version 7.39)
May 14 23:49:13.396953 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 23:49:13.396986 kernel: ACPI: bus type drm_connector registered
May 14 23:49:13.397013 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 23:49:13.397041 kernel: loop: module loaded
May 14 23:49:13.397068 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 23:49:13.399136 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 23:49:13.399175 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 23:49:13.399208 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 23:49:13.399245 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 23:49:13.399275 systemd[1]: Stopped verity-setup.service.
May 14 23:49:13.399303 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 23:49:13.399332 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 23:49:13.399361 systemd[1]: Mounted media.mount - External Media Directory.
May 14 23:49:13.399390 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 23:49:13.399424 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 23:49:13.399459 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 23:49:13.399531 systemd-journald[1503]: Collecting audit messages is disabled.
May 14 23:49:13.399600 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 23:49:13.399632 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 23:49:13.399668 systemd-journald[1503]: Journal started
May 14 23:49:13.399716 systemd-journald[1503]: Runtime Journal (/run/log/journal/ec2692a9a3982acd96167407dfef6b5d) is 8M, max 75.3M, 67.3M free.
May 14 23:49:12.766148 systemd[1]: Queued start job for default target multi-user.target.
May 14 23:49:12.779342 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 14 23:49:12.780325 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 23:49:13.406045 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 23:49:13.408061 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 23:49:13.408494 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 23:49:13.414442 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:49:13.414820 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:49:13.420516 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 23:49:13.420893 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 23:49:13.426223 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:49:13.426618 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:49:13.432622 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 23:49:13.432992 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 23:49:13.438413 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:49:13.438798 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:49:13.444663 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 23:49:13.450094 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 23:49:13.456443 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 23:49:13.462653 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 23:49:13.489844 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 23:49:13.502391 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 23:49:13.517796 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 23:49:13.523700 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 23:49:13.523775 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 23:49:13.532197 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 23:49:13.549447 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 14 23:49:13.558060 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 23:49:13.562939 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:49:13.575419 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 23:49:13.582478 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 23:49:13.588592 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:49:13.597437 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 23:49:13.604634 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:49:13.614955 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 23:49:13.624615 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 23:49:13.641413 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 23:49:13.653353 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 23:49:13.666895 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 23:49:13.677004 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 23:49:13.683080 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 14 23:49:13.701263 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 23:49:13.706831 systemd-journald[1503]: Time spent on flushing to /var/log/journal/ec2692a9a3982acd96167407dfef6b5d is 74.276ms for 923 entries.
May 14 23:49:13.706831 systemd-journald[1503]: System Journal (/var/log/journal/ec2692a9a3982acd96167407dfef6b5d) is 8M, max 195.6M, 187.6M free.
May 14 23:49:13.792079 systemd-journald[1503]: Received client request to flush runtime journal.
May 14 23:49:13.792222 kernel: loop0: detected capacity change from 0 to 123192
May 14 23:49:13.792260 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 23:49:13.713509 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 23:49:13.743449 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 23:49:13.754503 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 14 23:49:13.801003 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 23:49:13.820741 udevadm[1562]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 14 23:49:13.837541 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 23:49:13.846383 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 23:49:13.857180 kernel: loop1: detected capacity change from 0 to 189592
May 14 23:49:13.861320 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 23:49:13.909098 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 23:49:13.931051 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 23:49:14.015512 systemd-tmpfiles[1573]: ACLs are not supported, ignoring.
May 14 23:49:14.018204 systemd-tmpfiles[1573]: ACLs are not supported, ignoring.
May 14 23:49:14.038996 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 23:49:14.165209 kernel: loop2: detected capacity change from 0 to 53784
May 14 23:49:14.219963 kernel: loop3: detected capacity change from 0 to 113512
May 14 23:49:14.298848 kernel: loop4: detected capacity change from 0 to 123192
May 14 23:49:14.330180 kernel: loop5: detected capacity change from 0 to 189592
May 14 23:49:14.377167 kernel: loop6: detected capacity change from 0 to 53784
May 14 23:49:14.410187 kernel: loop7: detected capacity change from 0 to 113512
May 14 23:49:14.444089 (sd-merge)[1579]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 14 23:49:14.445724 (sd-merge)[1579]: Merged extensions into '/usr'.
May 14 23:49:14.461712 systemd[1]: Reload requested from client PID 1553 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 23:49:14.461750 systemd[1]: Reloading...
May 14 23:49:14.600199 ldconfig[1547]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 23:49:14.660839 zram_generator::config[1603]: No configuration found.
May 14 23:49:14.968739 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:49:15.161160 systemd[1]: Reloading finished in 696 ms.
May 14 23:49:15.183813 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 23:49:15.187624 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 23:49:15.191570 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 23:49:15.208786 systemd[1]: Starting ensure-sysext.service...
May 14 23:49:15.216552 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 23:49:15.230488 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 23:49:15.270811 systemd[1]: Reload requested from client PID 1660 ('systemctl') (unit ensure-sysext.service)...
May 14 23:49:15.270847 systemd[1]: Reloading...
May 14 23:49:15.284154 systemd-tmpfiles[1661]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 23:49:15.285336 systemd-tmpfiles[1661]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 23:49:15.287512 systemd-tmpfiles[1661]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 23:49:15.288556 systemd-tmpfiles[1661]: ACLs are not supported, ignoring.
May 14 23:49:15.288882 systemd-tmpfiles[1661]: ACLs are not supported, ignoring.
May 14 23:49:15.299866 systemd-tmpfiles[1661]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:49:15.300081 systemd-tmpfiles[1661]: Skipping /boot
May 14 23:49:15.328256 systemd-tmpfiles[1661]: Detected autofs mount point /boot during canonicalization of boot.
May 14 23:49:15.328444 systemd-tmpfiles[1661]: Skipping /boot
May 14 23:49:15.407337 systemd-udevd[1662]: Using default interface naming scheme 'v255'.
May 14 23:49:15.507750 zram_generator::config[1692]: No configuration found.
May 14 23:49:15.654054 (udev-worker)[1711]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:49:15.961186 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (1696)
May 14 23:49:16.011849 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 23:49:16.206937 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 23:49:16.208071 systemd[1]: Reloading finished in 936 ms.
May 14 23:49:16.229222 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 23:49:16.237694 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 23:49:16.338287 systemd[1]: Finished ensure-sysext.service.
May 14 23:49:16.370181 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 14 23:49:16.404941 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 14 23:49:16.415454 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:49:16.432303 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 23:49:16.437179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 23:49:16.440603 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 14 23:49:16.454434 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 23:49:16.463397 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 23:49:16.471479 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 23:49:16.479353 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 23:49:16.483853 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 23:49:16.488605 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 23:49:16.493471 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 23:49:16.503012 lvm[1861]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:49:16.501361 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 23:49:16.512308 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 23:49:16.525550 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 23:49:16.530290 systemd[1]: Reached target time-set.target - System Time Set.
May 14 23:49:16.561406 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 23:49:16.567501 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 23:49:16.571545 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 23:49:16.572052 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 23:49:16.642695 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 23:49:16.643404 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 23:49:16.643828 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 23:49:16.649025 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 23:49:16.661226 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 23:49:16.667646 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 23:49:16.669225 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 23:49:16.681008 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 14 23:49:16.702448 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 23:49:16.703285 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 23:49:16.708896 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 23:49:16.723017 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 23:49:16.726653 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 23:49:16.738423 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 14 23:49:16.738687 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 23:49:16.752547 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 23:49:16.787328 lvm[1897]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 23:49:16.790064 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 23:49:16.790469 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 23:49:16.812716 augenrules[1906]: No rules
May 14 23:49:16.814426 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:49:16.817013 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:49:16.822535 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 23:49:16.862320 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 14 23:49:16.864800 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 23:49:16.908797 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 23:49:17.013421 systemd-networkd[1872]: lo: Link UP
May 14 23:49:17.013443 systemd-networkd[1872]: lo: Gained carrier
May 14 23:49:17.016656 systemd-networkd[1872]: Enumeration completed
May 14 23:49:17.016844 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 23:49:17.023433 systemd-resolved[1873]: Positive Trust Anchors:
May 14 23:49:17.023937 systemd-networkd[1872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:17.023960 systemd-networkd[1872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 23:49:17.024170 systemd-resolved[1873]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 23:49:17.024369 systemd-resolved[1873]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 23:49:17.028596 systemd-networkd[1872]: eth0: Link UP
May 14 23:49:17.031382 systemd-networkd[1872]: eth0: Gained carrier
May 14 23:49:17.031436 systemd-networkd[1872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 23:49:17.032520 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 23:49:17.034419 systemd-resolved[1873]: Defaulting to hostname 'linux'.
May 14 23:49:17.048500 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 23:49:17.052913 systemd-networkd[1872]: eth0: DHCPv4 address 172.31.17.61/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 14 23:49:17.054763 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 23:49:17.060772 systemd[1]: Reached target network.target - Network.
May 14 23:49:17.063803 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 23:49:17.066839 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 23:49:17.069251 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 23:49:17.071835 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 23:49:17.074659 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 23:49:17.077148 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 23:49:17.079824 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 23:49:17.083310 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 23:49:17.083363 systemd[1]: Reached target paths.target - Path Units. May 14 23:49:17.085798 systemd[1]: Reached target timers.target - Timer Units. May 14 23:49:17.090049 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 23:49:17.095703 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 23:49:17.103480 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 23:49:17.107011 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 23:49:17.110207 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 23:49:17.122567 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 23:49:17.127651 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 23:49:17.133182 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 23:49:17.136689 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 23:49:17.141013 systemd[1]: Reached target sockets.target - Socket Units. May 14 23:49:17.143457 systemd[1]: Reached target basic.target - Basic System. 
May 14 23:49:17.146026 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 23:49:17.146127 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 23:49:17.165392 systemd[1]: Starting containerd.service - containerd container runtime... May 14 23:49:17.170792 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 14 23:49:17.176543 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 23:49:17.186385 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 23:49:17.192813 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 23:49:17.195632 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 23:49:17.202535 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 23:49:17.212559 systemd[1]: Started ntpd.service - Network Time Service. May 14 23:49:17.221370 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 23:49:17.227685 systemd[1]: Starting setup-oem.service - Setup OEM... May 14 23:49:17.236415 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 23:49:17.244486 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 23:49:17.255426 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 23:49:17.259813 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 23:49:17.262009 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
May 14 23:49:17.267833 systemd[1]: Starting update-engine.service - Update Engine... May 14 23:49:17.276239 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 23:49:17.311188 jq[1933]: false May 14 23:49:17.304822 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 23:49:17.307386 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 23:49:17.330666 dbus-daemon[1932]: [system] SELinux support is enabled May 14 23:49:17.333934 dbus-daemon[1932]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1872 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 14 23:49:17.341397 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 23:49:17.352865 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 23:49:17.352912 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 23:49:17.358403 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 23:49:17.364899 dbus-daemon[1932]: [system] Successfully activated service 'org.freedesktop.systemd1' May 14 23:49:17.358447 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 23:49:17.375294 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 14 23:49:17.387005 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 14 23:49:17.387549 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 23:49:17.430252 jq[1946]: true May 14 23:49:17.430912 (ntainerd)[1965]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 23:49:17.479368 systemd[1]: motdgen.service: Deactivated successfully. May 14 23:49:17.481095 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 23:49:17.494390 extend-filesystems[1934]: Found loop4 May 14 23:49:17.504258 extend-filesystems[1934]: Found loop5 May 14 23:49:17.504258 extend-filesystems[1934]: Found loop6 May 14 23:49:17.504258 extend-filesystems[1934]: Found loop7 May 14 23:49:17.504258 extend-filesystems[1934]: Found nvme0n1 May 14 23:49:17.504258 extend-filesystems[1934]: Found nvme0n1p1 May 14 23:49:17.504258 extend-filesystems[1934]: Found nvme0n1p2 May 14 23:49:17.504258 extend-filesystems[1934]: Found nvme0n1p3 May 14 23:49:17.504258 extend-filesystems[1934]: Found usr May 14 23:49:17.504258 extend-filesystems[1934]: Found nvme0n1p4 May 14 23:49:17.504258 extend-filesystems[1934]: Found nvme0n1p6 May 14 23:49:17.504258 extend-filesystems[1934]: Found nvme0n1p7 May 14 23:49:17.504258 extend-filesystems[1934]: Found nvme0n1p9 May 14 23:49:17.504258 extend-filesystems[1934]: Checking size of /dev/nvme0n1p9 May 14 23:49:17.562181 tar[1957]: linux-arm64/helm May 14 23:49:17.565171 jq[1968]: true May 14 23:49:17.587338 update_engine[1944]: I20250514 23:49:17.586666 1944 main.cc:92] Flatcar Update Engine starting May 14 23:49:17.605191 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 23:49:17.615508 systemd[1]: Started update-engine.service - Update Engine. 
May 14 23:49:17.623186 update_engine[1944]: I20250514 23:49:17.621189 1944 update_check_scheduler.cc:74] Next update check in 11m48s
May 14 23:49:17.626319 extend-filesystems[1934]: Resized partition /dev/nvme0n1p9
May 14 23:49:17.642405 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 23:49:17.662165 extend-filesystems[1986]: resize2fs 1.47.1 (20-May-2024)
May 14 23:49:17.658615 ntpd[1936]: ntpd 4.2.8p17@1.4004-o Wed May 14 21:39:21 UTC 2025 (1): Starting
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: ntpd 4.2.8p17@1.4004-o Wed May 14 21:39:21 UTC 2025 (1): Starting
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: ----------------------------------------------------
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: ntp-4 is maintained by Network Time Foundation,
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: corporation. Support and training for ntp-4 are
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: available at https://www.nwtime.org/support
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: ----------------------------------------------------
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: proto: precision = 0.096 usec (-23)
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: basedate set to 2025-05-02
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: gps base set to 2025-05-04 (week 2365)
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: Listen and drop on 0 v6wildcard [::]:123
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: Listen normally on 2 lo 127.0.0.1:123
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: Listen normally on 3 eth0 172.31.17.61:123
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: Listen normally on 4 lo [::1]:123
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: bind(21) AF_INET6 fe80::46f:64ff:fecd:886f%2#123 flags 0x11 failed: Cannot assign requested address
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: unable to create socket on eth0 (5) for fe80::46f:64ff:fecd:886f%2#123
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: failed to init interface for address fe80::46f:64ff:fecd:886f%2
May 14 23:49:17.686750 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: Listening on routing socket on fd #21 for interface updates
May 14 23:49:17.670196 systemd[1]: Finished setup-oem.service - Setup OEM.
May 14 23:49:17.658665 ntpd[1936]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 14 23:49:17.700879 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 14 23:49:17.700879 ntpd[1936]: 14 May 23:49:17 ntpd[1936]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 14 23:49:17.658684 ntpd[1936]: ---------------------------------------------------- May 14 23:49:17.658703 ntpd[1936]: ntp-4 is maintained by Network Time Foundation, May 14 23:49:17.658721 ntpd[1936]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 14 23:49:17.658738 ntpd[1936]: corporation. Support and training for ntp-4 are May 14 23:49:17.658757 ntpd[1936]: available at https://www.nwtime.org/support May 14 23:49:17.658775 ntpd[1936]: ---------------------------------------------------- May 14 23:49:17.672746 ntpd[1936]: proto: precision = 0.096 usec (-23) May 14 23:49:17.676593 ntpd[1936]: basedate set to 2025-05-02 May 14 23:49:17.676625 ntpd[1936]: gps base set to 2025-05-04 (week 2365) May 14 23:49:17.681144 ntpd[1936]: Listen and drop on 0 v6wildcard [::]:123 May 14 23:49:17.681236 ntpd[1936]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 14 23:49:17.681495 ntpd[1936]: Listen normally on 2 lo 127.0.0.1:123 May 14 23:49:17.681556 ntpd[1936]: Listen normally on 3 eth0 172.31.17.61:123 May 14 23:49:17.681625 ntpd[1936]: Listen normally on 4 lo [::1]:123 May 14 23:49:17.681700 ntpd[1936]: bind(21) AF_INET6 fe80::46f:64ff:fecd:886f%2#123 flags 0x11 failed: Cannot assign requested address May 14 23:49:17.681737 ntpd[1936]: unable to create socket on eth0 (5) for fe80::46f:64ff:fecd:886f%2#123 May 14 23:49:17.681763 ntpd[1936]: failed to init interface for address fe80::46f:64ff:fecd:886f%2 May 14 23:49:17.681812 ntpd[1936]: Listening on routing socket on fd #21 for interface updates May 14 23:49:17.691015 ntpd[1936]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 14 23:49:17.705308 kernel: EXT4-fs (nvme0n1p9): resizing filesystem 
from 553472 to 1489915 blocks May 14 23:49:17.691074 ntpd[1936]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 14 23:49:17.763733 coreos-metadata[1931]: May 14 23:49:17.763 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 14 23:49:17.769795 coreos-metadata[1931]: May 14 23:49:17.769 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 14 23:49:17.770648 coreos-metadata[1931]: May 14 23:49:17.770 INFO Fetch successful May 14 23:49:17.770648 coreos-metadata[1931]: May 14 23:49:17.770 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 14 23:49:17.774947 coreos-metadata[1931]: May 14 23:49:17.774 INFO Fetch successful May 14 23:49:17.774947 coreos-metadata[1931]: May 14 23:49:17.774 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 14 23:49:17.775869 coreos-metadata[1931]: May 14 23:49:17.775 INFO Fetch successful May 14 23:49:17.775869 coreos-metadata[1931]: May 14 23:49:17.775 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 14 23:49:17.779175 coreos-metadata[1931]: May 14 23:49:17.779 INFO Fetch successful May 14 23:49:17.779175 coreos-metadata[1931]: May 14 23:49:17.779 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 14 23:49:17.784643 coreos-metadata[1931]: May 14 23:49:17.783 INFO Fetch failed with 404: resource not found May 14 23:49:17.784643 coreos-metadata[1931]: May 14 23:49:17.783 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 14 23:49:17.785382 coreos-metadata[1931]: May 14 23:49:17.785 INFO Fetch successful May 14 23:49:17.785382 coreos-metadata[1931]: May 14 23:49:17.785 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 14 23:49:17.787209 coreos-metadata[1931]: May 14 23:49:17.787 INFO Fetch successful May 14 23:49:17.787209 
coreos-metadata[1931]: May 14 23:49:17.787 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 14 23:49:17.792961 coreos-metadata[1931]: May 14 23:49:17.792 INFO Fetch successful May 14 23:49:17.792961 coreos-metadata[1931]: May 14 23:49:17.792 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 14 23:49:17.800760 coreos-metadata[1931]: May 14 23:49:17.798 INFO Fetch successful May 14 23:49:17.800760 coreos-metadata[1931]: May 14 23:49:17.798 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 14 23:49:17.800760 coreos-metadata[1931]: May 14 23:49:17.799 INFO Fetch successful May 14 23:49:17.880249 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 14 23:49:17.903023 extend-filesystems[1986]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 14 23:49:17.903023 extend-filesystems[1986]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 23:49:17.903023 extend-filesystems[1986]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. May 14 23:49:17.973030 bash[2014]: Updated "/home/core/.ssh/authorized_keys" May 14 23:49:17.987898 extend-filesystems[1934]: Resized filesystem in /dev/nvme0n1p9 May 14 23:49:17.923643 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 23:49:17.926209 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 23:49:17.944705 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 23:49:17.957306 systemd-logind[1943]: Watching system buttons on /dev/input/event0 (Power Button) May 14 23:49:17.957342 systemd-logind[1943]: Watching system buttons on /dev/input/event1 (Sleep Button) May 14 23:49:17.960438 systemd-logind[1943]: New seat seat0. May 14 23:49:17.973779 systemd[1]: Started systemd-logind.service - User Login Management. 
May 14 23:49:17.981198 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 14 23:49:17.996531 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 23:49:18.042774 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (1701) May 14 23:49:18.036904 systemd[1]: Starting sshkeys.service... May 14 23:49:18.066412 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 14 23:49:18.079774 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 14 23:49:18.231222 containerd[1965]: time="2025-05-14T23:49:18.228233938Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 14 23:49:18.244980 locksmithd[1985]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 23:49:18.322945 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 14 23:49:18.333595 dbus-daemon[1932]: [system] Successfully activated service 'org.freedesktop.hostname1' May 14 23:49:18.337565 dbus-daemon[1932]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1956 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 14 23:49:18.380163 containerd[1965]: time="2025-05-14T23:49:18.377241454Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 14 23:49:18.405127 systemd[1]: Starting polkit.service - Authorization Manager... May 14 23:49:18.421232 containerd[1965]: time="2025-05-14T23:49:18.421136723Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:18.421232 containerd[1965]: time="2025-05-14T23:49:18.421222283Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 14 23:49:18.421436 containerd[1965]: time="2025-05-14T23:49:18.421261823Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 14 23:49:18.421693 containerd[1965]: time="2025-05-14T23:49:18.421622615Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 14 23:49:18.421779 containerd[1965]: time="2025-05-14T23:49:18.421696547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 14 23:49:18.422181 containerd[1965]: time="2025-05-14T23:49:18.421852211Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:18.422181 containerd[1965]: time="2025-05-14T23:49:18.421896647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 14 23:49:18.422493 containerd[1965]: time="2025-05-14T23:49:18.422345987Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:18.422493 containerd[1965]: time="2025-05-14T23:49:18.422407079Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 14 23:49:18.422493 containerd[1965]: time="2025-05-14T23:49:18.422448167Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:18.422493 containerd[1965]: time="2025-05-14T23:49:18.422474831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 14 23:49:18.422789 containerd[1965]: time="2025-05-14T23:49:18.422717783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 14 23:49:18.429160 containerd[1965]: time="2025-05-14T23:49:18.428616251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 14 23:49:18.430378 containerd[1965]: time="2025-05-14T23:49:18.430071311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 23:49:18.430378 containerd[1965]: time="2025-05-14T23:49:18.430371887Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 14 23:49:18.435145 containerd[1965]: time="2025-05-14T23:49:18.432604475Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 14 23:49:18.436152 containerd[1965]: time="2025-05-14T23:49:18.435338015Z" level=info msg="metadata content store policy set" policy=shared May 14 23:49:18.440619 containerd[1965]: time="2025-05-14T23:49:18.440540939Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 14 23:49:18.440787 containerd[1965]: time="2025-05-14T23:49:18.440755451Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 May 14 23:49:18.440868 containerd[1965]: time="2025-05-14T23:49:18.440805719Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 14 23:49:18.440868 containerd[1965]: time="2025-05-14T23:49:18.440845775Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 14 23:49:18.440996 containerd[1965]: time="2025-05-14T23:49:18.440878571Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 14 23:49:18.445156 containerd[1965]: time="2025-05-14T23:49:18.443209979Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 14 23:49:18.445156 containerd[1965]: time="2025-05-14T23:49:18.443698607Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 14 23:49:18.445156 containerd[1965]: time="2025-05-14T23:49:18.444039779Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 14 23:49:18.446553 containerd[1965]: time="2025-05-14T23:49:18.446465375Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 14 23:49:18.446686 containerd[1965]: time="2025-05-14T23:49:18.446563043Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 14 23:49:18.446686 containerd[1965]: time="2025-05-14T23:49:18.446603027Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 14 23:49:18.446686 containerd[1965]: time="2025-05-14T23:49:18.446637059Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 May 14 23:49:18.446686 containerd[1965]: time="2025-05-14T23:49:18.446671967Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 14 23:49:18.446889 containerd[1965]: time="2025-05-14T23:49:18.446712767Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 14 23:49:18.446889 containerd[1965]: time="2025-05-14T23:49:18.446747447Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 14 23:49:18.446889 containerd[1965]: time="2025-05-14T23:49:18.446779211Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 23:49:18.446889 containerd[1965]: time="2025-05-14T23:49:18.446808839Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 23:49:18.446889 containerd[1965]: time="2025-05-14T23:49:18.446837591Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 23:49:18.446889 containerd[1965]: time="2025-05-14T23:49:18.446880587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 23:49:18.447195 containerd[1965]: time="2025-05-14T23:49:18.446916407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 14 23:49:18.447195 containerd[1965]: time="2025-05-14T23:49:18.446948447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 23:49:18.447195 containerd[1965]: time="2025-05-14T23:49:18.446982731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1
May 14 23:49:18.447195 containerd[1965]: time="2025-05-14T23:49:18.447013343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 14 23:49:18.447195 containerd[1965]: time="2025-05-14T23:49:18.447046259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 14 23:49:18.447195 containerd[1965]: time="2025-05-14T23:49:18.447075635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 14 23:49:18.449770 containerd[1965]: time="2025-05-14T23:49:18.449205467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 14 23:49:18.449770 containerd[1965]: time="2025-05-14T23:49:18.449276963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 14 23:49:18.449770 containerd[1965]: time="2025-05-14T23:49:18.449319203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 14 23:49:18.449770 containerd[1965]: time="2025-05-14T23:49:18.449357219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 14 23:49:18.449770 containerd[1965]: time="2025-05-14T23:49:18.449387807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 14 23:49:18.449770 containerd[1965]: time="2025-05-14T23:49:18.449417351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 14 23:49:18.449770 containerd[1965]: time="2025-05-14T23:49:18.449456171Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 14 23:49:18.449770 containerd[1965]: time="2025-05-14T23:49:18.449509487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 14 23:49:18.449770 containerd[1965]: time="2025-05-14T23:49:18.449543075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 14 23:49:18.449770 containerd[1965]: time="2025-05-14T23:49:18.449583803Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 14 23:49:18.450333 containerd[1965]: time="2025-05-14T23:49:18.449978003Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 14 23:49:18.454248 containerd[1965]: time="2025-05-14T23:49:18.453351599Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 14 23:49:18.454248 containerd[1965]: time="2025-05-14T23:49:18.453418607Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 14 23:49:18.454248 containerd[1965]: time="2025-05-14T23:49:18.453452999Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 14 23:49:18.462170 containerd[1965]: time="2025-05-14T23:49:18.453478607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 14 23:49:18.467488 containerd[1965]: time="2025-05-14T23:49:18.461080679Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 14 23:49:18.467488 containerd[1965]: time="2025-05-14T23:49:18.462882923Z" level=info msg="NRI interface is disabled by configuration."
May 14 23:49:18.467488 containerd[1965]: time="2025-05-14T23:49:18.462931667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 14 23:49:18.465980 polkitd[2076]: Started polkitd version 121
May 14 23:49:18.468749 containerd[1965]: time="2025-05-14T23:49:18.466745843Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 14 23:49:18.468749 containerd[1965]: time="2025-05-14T23:49:18.466876583Z" level=info msg="Connect containerd service"
May 14 23:49:18.468749 containerd[1965]: time="2025-05-14T23:49:18.466981067Z" level=info msg="using legacy CRI server"
May 14 23:49:18.468749 containerd[1965]: time="2025-05-14T23:49:18.467012171Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 14 23:49:18.468749 containerd[1965]: time="2025-05-14T23:49:18.467465615Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 14 23:49:18.473174 containerd[1965]: time="2025-05-14T23:49:18.472436351Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 14 23:49:18.489799 containerd[1965]: time="2025-05-14T23:49:18.476372171Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 14 23:49:18.489799 containerd[1965]: time="2025-05-14T23:49:18.476512595Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 14 23:49:18.489799 containerd[1965]: time="2025-05-14T23:49:18.476572067Z" level=info msg="Start subscribing containerd event"
May 14 23:49:18.489799 containerd[1965]: time="2025-05-14T23:49:18.476632967Z" level=info msg="Start recovering state"
May 14 23:49:18.489799 containerd[1965]: time="2025-05-14T23:49:18.476755943Z" level=info msg="Start event monitor"
May 14 23:49:18.489799 containerd[1965]: time="2025-05-14T23:49:18.476780747Z" level=info msg="Start snapshots syncer"
May 14 23:49:18.489799 containerd[1965]: time="2025-05-14T23:49:18.476802551Z" level=info msg="Start cni network conf syncer for default"
May 14 23:49:18.489799 containerd[1965]: time="2025-05-14T23:49:18.476821391Z" level=info msg="Start streaming server"
May 14 23:49:18.489799 containerd[1965]: time="2025-05-14T23:49:18.477298703Z" level=info msg="containerd successfully booted in 0.257468s"
May 14 23:49:18.477086 systemd[1]: Started containerd.service - containerd container runtime.
May 14 23:49:18.494346 coreos-metadata[2037]: May 14 23:49:18.493 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 14 23:49:18.500426 coreos-metadata[2037]: May 14 23:49:18.497 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
May 14 23:49:18.500579 coreos-metadata[2037]: May 14 23:49:18.500 INFO Fetch successful
May 14 23:49:18.500579 coreos-metadata[2037]: May 14 23:49:18.500 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
May 14 23:49:18.507201 coreos-metadata[2037]: May 14 23:49:18.505 INFO Fetch successful
May 14 23:49:18.509623 unknown[2037]: wrote ssh authorized keys file for user: core
May 14 23:49:18.524362 polkitd[2076]: Loading rules from directory /etc/polkit-1/rules.d
May 14 23:49:18.524542 polkitd[2076]: Loading rules from directory /usr/share/polkit-1/rules.d
May 14 23:49:18.530371 polkitd[2076]: Finished loading, compiling and executing 2 rules
May 14 23:49:18.536413 dbus-daemon[1932]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 14 23:49:18.540536 polkitd[2076]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 14 23:49:18.568397 systemd[1]: Started polkit.service - Authorization Manager.
May 14 23:49:18.588321 systemd-networkd[1872]: eth0: Gained IPv6LL
May 14 23:49:18.606785 update-ssh-keys[2109]: Updated "/home/core/.ssh/authorized_keys"
May 14 23:49:18.607592 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 14 23:49:18.614807 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 14 23:49:18.636598 systemd[1]: Finished sshkeys.service.
May 14 23:49:18.660808 systemd-hostnamed[1956]: Hostname set to (transient)
May 14 23:49:18.661615 systemd-resolved[1873]: System hostname changed to 'ip-172-31-17-61'.
May 14 23:49:18.672639 systemd[1]: Reached target network-online.target - Network is Online.
May 14 23:49:18.709329 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
May 14 23:49:18.719095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:49:18.728579 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 14 23:49:18.902601 amazon-ssm-agent[2126]: Initializing new seelog logger
May 14 23:49:18.914017 amazon-ssm-agent[2126]: New Seelog Logger Creation Complete
May 14 23:49:18.914017 amazon-ssm-agent[2126]: 2025/05/14 23:49:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:18.914017 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:18.914017 amazon-ssm-agent[2126]: 2025/05/14 23:49:18 processing appconfig overrides
May 14 23:49:18.914017 amazon-ssm-agent[2126]: 2025/05/14 23:49:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:18.914017 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:18.914017 amazon-ssm-agent[2126]: 2025/05/14 23:49:18 processing appconfig overrides
May 14 23:49:18.918935 amazon-ssm-agent[2126]: 2025/05/14 23:49:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:18.919084 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:18.919391 amazon-ssm-agent[2126]: 2025/05/14 23:49:18 processing appconfig overrides
May 14 23:49:18.920159 amazon-ssm-agent[2126]: 2025-05-14 23:49:18 INFO Proxy environment variables:
May 14 23:49:18.936472 amazon-ssm-agent[2126]: 2025/05/14 23:49:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:18.944963 amazon-ssm-agent[2126]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
May 14 23:49:18.944963 amazon-ssm-agent[2126]: 2025/05/14 23:49:18 processing appconfig overrides
May 14 23:49:18.965073 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 14 23:49:19.022019 amazon-ssm-agent[2126]: 2025-05-14 23:49:18 INFO https_proxy:
May 14 23:49:19.121601 amazon-ssm-agent[2126]: 2025-05-14 23:49:18 INFO http_proxy:
May 14 23:49:19.222441 amazon-ssm-agent[2126]: 2025-05-14 23:49:18 INFO no_proxy:
May 14 23:49:19.321218 amazon-ssm-agent[2126]: 2025-05-14 23:49:18 INFO Checking if agent identity type OnPrem can be assumed
May 14 23:49:19.419911 amazon-ssm-agent[2126]: 2025-05-14 23:49:18 INFO Checking if agent identity type EC2 can be assumed
May 14 23:49:19.522699 amazon-ssm-agent[2126]: 2025-05-14 23:49:19 INFO Agent will take identity from EC2
May 14 23:49:19.624139 amazon-ssm-agent[2126]: 2025-05-14 23:49:19 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 14 23:49:19.709634 sshd_keygen[1966]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 14 23:49:19.721964 amazon-ssm-agent[2126]: 2025-05-14 23:49:19 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 14 23:49:19.722546 tar[1957]: linux-arm64/LICENSE
May 14 23:49:19.724021 tar[1957]: linux-arm64/README.md
May 14 23:49:19.763223 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 14 23:49:19.791846 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 14 23:49:19.807948 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 14 23:49:19.816590 systemd[1]: Started sshd@0-172.31.17.61:22-139.178.89.65:49828.service - OpenSSH per-connection server daemon (139.178.89.65:49828).
May 14 23:49:19.825481 amazon-ssm-agent[2126]: 2025-05-14 23:49:19 INFO [amazon-ssm-agent] using named pipe channel for IPC
May 14 23:49:19.866766 systemd[1]: issuegen.service: Deactivated successfully.
May 14 23:49:19.867307 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 14 23:49:19.885081 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 14 23:49:19.917500 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 14 23:49:19.922357 amazon-ssm-agent[2126]: 2025-05-14 23:49:19 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
May 14 23:49:19.939013 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 14 23:49:19.958633 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 14 23:49:19.965143 systemd[1]: Reached target getty.target - Login Prompts.
May 14 23:49:20.024092 amazon-ssm-agent[2126]: 2025-05-14 23:49:19 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
May 14 23:49:20.099793 sshd[2169]: Accepted publickey for core from 139.178.89.65 port 49828 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:20.105983 sshd-session[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:20.125461 amazon-ssm-agent[2126]: 2025-05-14 23:49:19 INFO [amazon-ssm-agent] Starting Core Agent
May 14 23:49:20.131272 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 14 23:49:20.145689 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 14 23:49:20.179228 systemd-logind[1943]: New session 1 of user core.
May 14 23:49:20.199358 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 14 23:49:20.218815 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 14 23:49:20.230318 amazon-ssm-agent[2126]: 2025-05-14 23:49:19 INFO [amazon-ssm-agent] registrar detected. Attempting registration
May 14 23:49:20.242764 (systemd)[2180]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 14 23:49:20.252737 systemd-logind[1943]: New session c1 of user core.
May 14 23:49:20.330864 amazon-ssm-agent[2126]: 2025-05-14 23:49:19 INFO [Registrar] Starting registrar module
May 14 23:49:20.433194 amazon-ssm-agent[2126]: 2025-05-14 23:49:19 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
May 14 23:49:20.642968 systemd[2180]: Queued start job for default target default.target.
May 14 23:49:20.653075 systemd[2180]: Created slice app.slice - User Application Slice.
May 14 23:49:20.653182 systemd[2180]: Reached target paths.target - Paths.
May 14 23:49:20.653290 systemd[2180]: Reached target timers.target - Timers.
May 14 23:49:20.660429 ntpd[1936]: Listen normally on 6 eth0 [fe80::46f:64ff:fecd:886f%2]:123
May 14 23:49:20.660920 ntpd[1936]: 14 May 23:49:20 ntpd[1936]: Listen normally on 6 eth0 [fe80::46f:64ff:fecd:886f%2]:123
May 14 23:49:20.664588 systemd[2180]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 14 23:49:20.706674 systemd[2180]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 14 23:49:20.706992 systemd[2180]: Reached target sockets.target - Sockets.
May 14 23:49:20.707162 systemd[2180]: Reached target basic.target - Basic System.
May 14 23:49:20.707267 systemd[2180]: Reached target default.target - Main User Target.
May 14 23:49:20.707332 systemd[2180]: Startup finished in 424ms.
May 14 23:49:20.708553 systemd[1]: Started user@500.service - User Manager for UID 500.
May 14 23:49:20.727448 systemd[1]: Started session-1.scope - Session 1 of User core.
May 14 23:49:20.864444 amazon-ssm-agent[2126]: 2025-05-14 23:49:20 INFO [EC2Identity] EC2 registration was successful.
May 14 23:49:20.902151 amazon-ssm-agent[2126]: 2025-05-14 23:49:20 INFO [CredentialRefresher] credentialRefresher has started
May 14 23:49:20.902819 amazon-ssm-agent[2126]: 2025-05-14 23:49:20 INFO [CredentialRefresher] Starting credentials refresher loop
May 14 23:49:20.902819 amazon-ssm-agent[2126]: 2025-05-14 23:49:20 INFO EC2RoleProvider Successfully connected with instance profile role credentials
May 14 23:49:20.906689 systemd[1]: Started sshd@1-172.31.17.61:22-139.178.89.65:49836.service - OpenSSH per-connection server daemon (139.178.89.65:49836).
May 14 23:49:20.964856 amazon-ssm-agent[2126]: 2025-05-14 23:49:20 INFO [CredentialRefresher] Next credential rotation will be in 31.41664967006667 minutes
May 14 23:49:21.116236 sshd[2191]: Accepted publickey for core from 139.178.89.65 port 49836 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:21.119017 sshd-session[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:21.129242 systemd-logind[1943]: New session 2 of user core.
May 14 23:49:21.141445 systemd[1]: Started session-2.scope - Session 2 of User core.
May 14 23:49:21.274686 sshd[2193]: Connection closed by 139.178.89.65 port 49836
May 14 23:49:21.277443 sshd-session[2191]: pam_unix(sshd:session): session closed for user core
May 14 23:49:21.285072 systemd[1]: sshd@1-172.31.17.61:22-139.178.89.65:49836.service: Deactivated successfully.
May 14 23:49:21.288885 systemd[1]: session-2.scope: Deactivated successfully.
May 14 23:49:21.292029 systemd-logind[1943]: Session 2 logged out. Waiting for processes to exit.
May 14 23:49:21.294689 systemd-logind[1943]: Removed session 2.
May 14 23:49:21.322038 systemd[1]: Started sshd@2-172.31.17.61:22-139.178.89.65:49842.service - OpenSSH per-connection server daemon (139.178.89.65:49842).
May 14 23:49:21.519885 sshd[2199]: Accepted publickey for core from 139.178.89.65 port 49842 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:21.523348 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:21.535246 systemd-logind[1943]: New session 3 of user core.
May 14 23:49:21.540486 systemd[1]: Started session-3.scope - Session 3 of User core.
May 14 23:49:21.677246 sshd[2201]: Connection closed by 139.178.89.65 port 49842
May 14 23:49:21.677849 sshd-session[2199]: pam_unix(sshd:session): session closed for user core
May 14 23:49:21.686613 systemd[1]: sshd@2-172.31.17.61:22-139.178.89.65:49842.service: Deactivated successfully.
May 14 23:49:21.690770 systemd[1]: session-3.scope: Deactivated successfully.
May 14 23:49:21.692464 systemd-logind[1943]: Session 3 logged out. Waiting for processes to exit.
May 14 23:49:21.695447 systemd-logind[1943]: Removed session 3.
May 14 23:49:21.937301 amazon-ssm-agent[2126]: 2025-05-14 23:49:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
May 14 23:49:22.037268 amazon-ssm-agent[2126]: 2025-05-14 23:49:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2207) started
May 14 23:49:22.137785 amazon-ssm-agent[2126]: 2025-05-14 23:49:21 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
May 14 23:49:22.276554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:49:22.283529 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:49:22.284299 systemd[1]: Reached target multi-user.target - Multi-User System.
May 14 23:49:22.289307 systemd[1]: Startup finished in 1.074s (kernel) + 8.791s (initrd) + 10.762s (userspace) = 20.628s.
May 14 23:49:23.637514 kubelet[2221]: E0514 23:49:23.637421 2221 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:49:23.642347 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:49:23.643427 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:49:23.644357 systemd[1]: kubelet.service: Consumed 1.341s CPU time, 235.5M memory peak.
May 14 23:49:24.929615 systemd-resolved[1873]: Clock change detected. Flushing caches.
May 14 23:49:31.984824 systemd[1]: Started sshd@3-172.31.17.61:22-139.178.89.65:33290.service - OpenSSH per-connection server daemon (139.178.89.65:33290).
May 14 23:49:32.179087 sshd[2234]: Accepted publickey for core from 139.178.89.65 port 33290 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:32.181536 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:32.190424 systemd-logind[1943]: New session 4 of user core.
May 14 23:49:32.197323 systemd[1]: Started session-4.scope - Session 4 of User core.
May 14 23:49:32.322976 sshd[2236]: Connection closed by 139.178.89.65 port 33290
May 14 23:49:32.323819 sshd-session[2234]: pam_unix(sshd:session): session closed for user core
May 14 23:49:32.330466 systemd[1]: sshd@3-172.31.17.61:22-139.178.89.65:33290.service: Deactivated successfully.
May 14 23:49:32.334795 systemd[1]: session-4.scope: Deactivated successfully.
May 14 23:49:32.336196 systemd-logind[1943]: Session 4 logged out. Waiting for processes to exit.
May 14 23:49:32.338430 systemd-logind[1943]: Removed session 4.
May 14 23:49:32.365610 systemd[1]: Started sshd@4-172.31.17.61:22-139.178.89.65:33298.service - OpenSSH per-connection server daemon (139.178.89.65:33298).
May 14 23:49:32.558217 sshd[2242]: Accepted publickey for core from 139.178.89.65 port 33298 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:32.560603 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:32.569310 systemd-logind[1943]: New session 5 of user core.
May 14 23:49:32.581357 systemd[1]: Started session-5.scope - Session 5 of User core.
May 14 23:49:32.700100 sshd[2244]: Connection closed by 139.178.89.65 port 33298
May 14 23:49:32.700958 sshd-session[2242]: pam_unix(sshd:session): session closed for user core
May 14 23:49:32.706093 systemd-logind[1943]: Session 5 logged out. Waiting for processes to exit.
May 14 23:49:32.706409 systemd[1]: sshd@4-172.31.17.61:22-139.178.89.65:33298.service: Deactivated successfully.
May 14 23:49:32.709283 systemd[1]: session-5.scope: Deactivated successfully.
May 14 23:49:32.712838 systemd-logind[1943]: Removed session 5.
May 14 23:49:32.743557 systemd[1]: Started sshd@5-172.31.17.61:22-139.178.89.65:33302.service - OpenSSH per-connection server daemon (139.178.89.65:33302).
May 14 23:49:32.927265 sshd[2250]: Accepted publickey for core from 139.178.89.65 port 33302 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:32.930321 sshd-session[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:32.938172 systemd-logind[1943]: New session 6 of user core.
May 14 23:49:32.950329 systemd[1]: Started session-6.scope - Session 6 of User core.
May 14 23:49:33.076923 sshd[2252]: Connection closed by 139.178.89.65 port 33302
May 14 23:49:33.076806 sshd-session[2250]: pam_unix(sshd:session): session closed for user core
May 14 23:49:33.081937 systemd[1]: sshd@5-172.31.17.61:22-139.178.89.65:33302.service: Deactivated successfully.
May 14 23:49:33.084977 systemd[1]: session-6.scope: Deactivated successfully.
May 14 23:49:33.087987 systemd-logind[1943]: Session 6 logged out. Waiting for processes to exit.
May 14 23:49:33.089986 systemd-logind[1943]: Removed session 6.
May 14 23:49:33.128513 systemd[1]: Started sshd@6-172.31.17.61:22-139.178.89.65:33316.service - OpenSSH per-connection server daemon (139.178.89.65:33316).
May 14 23:49:33.305737 sshd[2258]: Accepted publickey for core from 139.178.89.65 port 33316 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:33.308491 sshd-session[2258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:33.315948 systemd-logind[1943]: New session 7 of user core.
May 14 23:49:33.325338 systemd[1]: Started session-7.scope - Session 7 of User core.
May 14 23:49:33.443998 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 14 23:49:33.445400 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:49:33.465400 sudo[2261]: pam_unix(sudo:session): session closed for user root
May 14 23:49:33.488290 sshd[2260]: Connection closed by 139.178.89.65 port 33316
May 14 23:49:33.489320 sshd-session[2258]: pam_unix(sshd:session): session closed for user core
May 14 23:49:33.495993 systemd[1]: sshd@6-172.31.17.61:22-139.178.89.65:33316.service: Deactivated successfully.
May 14 23:49:33.499762 systemd[1]: session-7.scope: Deactivated successfully.
May 14 23:49:33.501654 systemd-logind[1943]: Session 7 logged out. Waiting for processes to exit.
May 14 23:49:33.503601 systemd-logind[1943]: Removed session 7.
May 14 23:49:33.531567 systemd[1]: Started sshd@7-172.31.17.61:22-139.178.89.65:33320.service - OpenSSH per-connection server daemon (139.178.89.65:33320).
May 14 23:49:33.720022 sshd[2267]: Accepted publickey for core from 139.178.89.65 port 33320 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:33.722448 sshd-session[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:33.731150 systemd-logind[1943]: New session 8 of user core.
May 14 23:49:33.737359 systemd[1]: Started session-8.scope - Session 8 of User core.
May 14 23:49:33.840572 sudo[2271]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 14 23:49:33.841746 sudo[2271]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:49:33.848119 sudo[2271]: pam_unix(sudo:session): session closed for user root
May 14 23:49:33.857830 sudo[2270]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 14 23:49:33.858458 sudo[2270]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:49:33.891234 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 23:49:33.935428 augenrules[2293]: No rules
May 14 23:49:33.938181 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 23:49:33.938626 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 23:49:33.941016 sudo[2270]: pam_unix(sudo:session): session closed for user root
May 14 23:49:33.942248 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 14 23:49:33.954446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 23:49:33.965273 sshd[2269]: Connection closed by 139.178.89.65 port 33320
May 14 23:49:33.965491 sshd-session[2267]: pam_unix(sshd:session): session closed for user core
May 14 23:49:33.981666 systemd[1]: sshd@7-172.31.17.61:22-139.178.89.65:33320.service: Deactivated successfully.
May 14 23:49:33.986795 systemd[1]: session-8.scope: Deactivated successfully.
May 14 23:49:33.989174 systemd-logind[1943]: Session 8 logged out. Waiting for processes to exit.
May 14 23:49:34.015586 systemd[1]: Started sshd@8-172.31.17.61:22-139.178.89.65:33324.service - OpenSSH per-connection server daemon (139.178.89.65:33324).
May 14 23:49:34.018261 systemd-logind[1943]: Removed session 8.
May 14 23:49:34.216139 sshd[2304]: Accepted publickey for core from 139.178.89.65 port 33324 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:49:34.219436 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:49:34.247383 systemd-logind[1943]: New session 9 of user core.
May 14 23:49:34.252384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 23:49:34.257003 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 23:49:34.258549 systemd[1]: Started session-9.scope - Session 9 of User core.
May 14 23:49:34.331411 kubelet[2311]: E0514 23:49:34.331316 2311 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 23:49:34.338776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 23:49:34.339282 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 23:49:34.341233 systemd[1]: kubelet.service: Consumed 274ms CPU time, 94.6M memory peak.
May 14 23:49:34.367671 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 14 23:49:34.368320 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 23:49:34.917533 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 14 23:49:34.917685 (dockerd)[2339]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 14 23:49:35.265206 dockerd[2339]: time="2025-05-14T23:49:35.263738295Z" level=info msg="Starting up"
May 14 23:49:35.397356 dockerd[2339]: time="2025-05-14T23:49:35.397297647Z" level=info msg="Loading containers: start."
May 14 23:49:35.623208 kernel: Initializing XFRM netlink socket
May 14 23:49:35.655247 (udev-worker)[2363]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:49:35.756310 systemd-networkd[1872]: docker0: Link UP
May 14 23:49:35.794526 dockerd[2339]: time="2025-05-14T23:49:35.794372537Z" level=info msg="Loading containers: done."
May 14 23:49:35.818717 dockerd[2339]: time="2025-05-14T23:49:35.818639382Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 14 23:49:35.818969 dockerd[2339]: time="2025-05-14T23:49:35.818787954Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 14 23:49:35.819028 dockerd[2339]: time="2025-05-14T23:49:35.819005598Z" level=info msg="Daemon has completed initialization"
May 14 23:49:35.868145 dockerd[2339]: time="2025-05-14T23:49:35.868039782Z" level=info msg="API listen on /run/docker.sock"
May 14 23:49:35.868597 systemd[1]: Started docker.service - Docker Application Container Engine.
May 14 23:49:37.366779 containerd[1965]: time="2025-05-14T23:49:37.366724517Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 14 23:49:38.002781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2522056850.mount: Deactivated successfully.
May 14 23:49:39.253743 containerd[1965]: time="2025-05-14T23:49:39.253055815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:39.255770 containerd[1965]: time="2025-05-14T23:49:39.255693055Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554608"
May 14 23:49:39.258775 containerd[1965]: time="2025-05-14T23:49:39.258726871Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:39.265207 containerd[1965]: time="2025-05-14T23:49:39.265122643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:39.269190 containerd[1965]: time="2025-05-14T23:49:39.267752335Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.900964362s"
May 14 23:49:39.269190 containerd[1965]: time="2025-05-14T23:49:39.267837067Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\""
May 14 23:49:39.270202 containerd[1965]: time="2025-05-14T23:49:39.270148003Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 14 23:49:40.802395 containerd[1965]: time="2025-05-14T23:49:40.802102210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:40.804114 containerd[1965]: time="2025-05-14T23:49:40.804028606Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458978"
May 14 23:49:40.804779 containerd[1965]: time="2025-05-14T23:49:40.804465598Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:40.812279 containerd[1965]: time="2025-05-14T23:49:40.812182198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:40.814646 containerd[1965]: time="2025-05-14T23:49:40.814453174Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.544243359s"
May 14 23:49:40.814646 containerd[1965]: time="2025-05-14T23:49:40.814511110Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\""
May 14 23:49:40.815715 containerd[1965]: time="2025-05-14T23:49:40.815430370Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 14 23:49:42.004354 containerd[1965]: time="2025-05-14T23:49:42.004286252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:42.006384 containerd[1965]: time="2025-05-14T23:49:42.006318128Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125813"
May 14 23:49:42.007108 containerd[1965]: time="2025-05-14T23:49:42.006817532Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:42.012200 containerd[1965]: time="2025-05-14T23:49:42.012122756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 23:49:42.014578 containerd[1965]: time="2025-05-14T23:49:42.014391344Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.19890833s"
May 14 23:49:42.014578 containerd[1965]: time="2025-05-14T23:49:42.014441192Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\""
May 14 23:49:42.015355 containerd[1965]: time="2025-05-14T23:49:42.015290576Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 14 23:49:43.196974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount91427940.mount: Deactivated successfully.
May 14 23:49:43.743158 containerd[1965]: time="2025-05-14T23:49:43.743104225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:49:43.745304 containerd[1965]: time="2025-05-14T23:49:43.745236721Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871917" May 14 23:49:43.746629 containerd[1965]: time="2025-05-14T23:49:43.746558821Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:49:43.750192 containerd[1965]: time="2025-05-14T23:49:43.750113665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:49:43.751614 containerd[1965]: time="2025-05-14T23:49:43.751429285Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.735945593s" May 14 23:49:43.751614 containerd[1965]: time="2025-05-14T23:49:43.751479517Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 14 23:49:43.752952 containerd[1965]: time="2025-05-14T23:49:43.752533165Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 23:49:44.274587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2277344177.mount: Deactivated successfully. 
May 14 23:49:44.589972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 23:49:44.600285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:49:44.943400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:49:44.954602 (kubelet)[2627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:49:45.039828 kubelet[2627]: E0514 23:49:45.039628 2627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:49:45.045510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:49:45.045848 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:49:45.046681 systemd[1]: kubelet.service: Consumed 282ms CPU time, 93M memory peak. 
May 14 23:49:45.547233 containerd[1965]: time="2025-05-14T23:49:45.547169090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:49:45.551043 containerd[1965]: time="2025-05-14T23:49:45.550793654Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" May 14 23:49:45.555701 containerd[1965]: time="2025-05-14T23:49:45.555641666Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:49:45.564717 containerd[1965]: time="2025-05-14T23:49:45.564602966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:49:45.567135 containerd[1965]: time="2025-05-14T23:49:45.567047006Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.814457741s" May 14 23:49:45.567135 containerd[1965]: time="2025-05-14T23:49:45.567126686Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 23:49:45.568198 containerd[1965]: time="2025-05-14T23:49:45.567708362Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 23:49:46.159815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3858746582.mount: Deactivated successfully. 
May 14 23:49:46.172132 containerd[1965]: time="2025-05-14T23:49:46.171720889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:49:46.173592 containerd[1965]: time="2025-05-14T23:49:46.173510977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 14 23:49:46.176133 containerd[1965]: time="2025-05-14T23:49:46.176038753Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:49:46.182540 containerd[1965]: time="2025-05-14T23:49:46.182444821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:49:46.184803 containerd[1965]: time="2025-05-14T23:49:46.184168117Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 616.405323ms" May 14 23:49:46.184803 containerd[1965]: time="2025-05-14T23:49:46.184223185Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 14 23:49:46.185730 containerd[1965]: time="2025-05-14T23:49:46.185354689Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 14 23:49:46.764791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2685654324.mount: Deactivated successfully. 
May 14 23:49:48.621093 containerd[1965]: time="2025-05-14T23:49:48.619530533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:49:48.622477 containerd[1965]: time="2025-05-14T23:49:48.622414829Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" May 14 23:49:48.623504 containerd[1965]: time="2025-05-14T23:49:48.623464625Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:49:48.629571 containerd[1965]: time="2025-05-14T23:49:48.629507225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:49:48.632473 containerd[1965]: time="2025-05-14T23:49:48.632407553Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.447001684s" May 14 23:49:48.632473 containerd[1965]: time="2025-05-14T23:49:48.632461889Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 14 23:49:48.951905 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 14 23:49:55.049027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 14 23:49:55.058506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:49:55.369487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 23:49:55.374409 (kubelet)[2746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 23:49:55.456116 kubelet[2746]: E0514 23:49:55.455467 2746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 23:49:55.460037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 23:49:55.461451 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 23:49:55.462256 systemd[1]: kubelet.service: Consumed 256ms CPU time, 94.9M memory peak. May 14 23:49:56.263862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:49:56.264477 systemd[1]: kubelet.service: Consumed 256ms CPU time, 94.9M memory peak. May 14 23:49:56.273588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:49:56.332488 systemd[1]: Reload requested from client PID 2760 ('systemctl') (unit session-9.scope)... May 14 23:49:56.332521 systemd[1]: Reloading... May 14 23:49:56.601119 zram_generator::config[2808]: No configuration found. May 14 23:49:56.839937 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:49:57.068379 systemd[1]: Reloading finished in 735 ms. May 14 23:49:57.162383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 23:49:57.174722 (kubelet)[2859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:49:57.175971 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:49:57.178536 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:49:57.178975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:49:57.179054 systemd[1]: kubelet.service: Consumed 206ms CPU time, 81.8M memory peak. May 14 23:49:57.189651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:49:57.469341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:49:57.480629 (kubelet)[2871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:49:57.549755 kubelet[2871]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:49:57.549755 kubelet[2871]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 23:49:57.549755 kubelet[2871]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 23:49:57.550348 kubelet[2871]: I0514 23:49:57.549910 2871 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:49:59.602039 kubelet[2871]: I0514 23:49:59.601897 2871 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 23:49:59.602039 kubelet[2871]: I0514 23:49:59.601953 2871 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:49:59.604116 kubelet[2871]: I0514 23:49:59.603656 2871 server.go:929] "Client rotation is on, will bootstrap in background" May 14 23:49:59.654234 kubelet[2871]: E0514 23:49:59.654185 2871 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.17.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.17.61:6443: connect: connection refused" logger="UnhandledError" May 14 23:49:59.656825 kubelet[2871]: I0514 23:49:59.656785 2871 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:49:59.667429 kubelet[2871]: E0514 23:49:59.667382 2871 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 23:49:59.667731 kubelet[2871]: I0514 23:49:59.667709 2871 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 23:49:59.674520 kubelet[2871]: I0514 23:49:59.674418 2871 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 23:49:59.674696 kubelet[2871]: I0514 23:49:59.674662 2871 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 23:49:59.675051 kubelet[2871]: I0514 23:49:59.674995 2871 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:49:59.675359 kubelet[2871]: I0514 23:49:59.675043 2871 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":2} May 14 23:49:59.675533 kubelet[2871]: I0514 23:49:59.675405 2871 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:49:59.675533 kubelet[2871]: I0514 23:49:59.675432 2871 container_manager_linux.go:300] "Creating device plugin manager" May 14 23:49:59.675638 kubelet[2871]: I0514 23:49:59.675616 2871 state_mem.go:36] "Initialized new in-memory state store" May 14 23:49:59.679202 kubelet[2871]: I0514 23:49:59.678626 2871 kubelet.go:408] "Attempting to sync node with API server" May 14 23:49:59.679202 kubelet[2871]: I0514 23:49:59.678669 2871 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:49:59.679202 kubelet[2871]: I0514 23:49:59.678730 2871 kubelet.go:314] "Adding apiserver pod source" May 14 23:49:59.679202 kubelet[2871]: I0514 23:49:59.678750 2871 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:49:59.684846 kubelet[2871]: W0514 23:49:59.684766 2871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-61&limit=500&resourceVersion=0": dial tcp 172.31.17.61:6443: connect: connection refused May 14 23:49:59.684971 kubelet[2871]: E0514 23:49:59.684861 2871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-61&limit=500&resourceVersion=0\": dial tcp 172.31.17.61:6443: connect: connection refused" logger="UnhandledError" May 14 23:49:59.687178 kubelet[2871]: W0514 23:49:59.686661 2871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.61:6443: connect: connection refused May 14 
23:49:59.687178 kubelet[2871]: E0514 23:49:59.686757 2871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.61:6443: connect: connection refused" logger="UnhandledError" May 14 23:49:59.687178 kubelet[2871]: I0514 23:49:59.686916 2871 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:49:59.689883 kubelet[2871]: I0514 23:49:59.689829 2871 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:49:59.691777 kubelet[2871]: W0514 23:49:59.691351 2871 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 23:49:59.696753 kubelet[2871]: I0514 23:49:59.696714 2871 server.go:1269] "Started kubelet" May 14 23:49:59.697798 kubelet[2871]: I0514 23:49:59.697740 2871 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:49:59.700841 kubelet[2871]: I0514 23:49:59.700794 2871 server.go:460] "Adding debug handlers to kubelet server" May 14 23:49:59.704508 kubelet[2871]: I0514 23:49:59.700812 2871 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:49:59.706912 kubelet[2871]: I0514 23:49:59.706806 2871 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:49:59.708210 kubelet[2871]: I0514 23:49:59.707226 2871 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:49:59.708210 kubelet[2871]: I0514 23:49:59.707730 2871 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 23:49:59.708419 kubelet[2871]: E0514 23:49:59.708308 2871 kubelet_node_status.go:453] "Error getting the 
current node from lister" err="node \"ip-172-31-17-61\" not found" May 14 23:49:59.709425 kubelet[2871]: I0514 23:49:59.709362 2871 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 23:49:59.709550 kubelet[2871]: I0514 23:49:59.709492 2871 reconciler.go:26] "Reconciler: start to sync state" May 14 23:49:59.709957 kubelet[2871]: I0514 23:49:59.709922 2871 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:49:59.718742 kubelet[2871]: E0514 23:49:59.715436 2871 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.17.61:6443/api/v1/namespaces/default/events\": dial tcp 172.31.17.61:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-17-61.183f89b70ad335f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-17-61,UID:ip-172-31-17-61,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-17-61,},FirstTimestamp:2025-05-14 23:49:59.696676336 +0000 UTC m=+2.209887708,LastTimestamp:2025-05-14 23:49:59.696676336 +0000 UTC m=+2.209887708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-17-61,}" May 14 23:49:59.718742 kubelet[2871]: W0514 23:49:59.717542 2871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.61:6443: connect: connection refused May 14 23:49:59.718742 kubelet[2871]: E0514 23:49:59.717624 2871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://172.31.17.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.61:6443: connect: connection refused" logger="UnhandledError" May 14 23:49:59.719570 kubelet[2871]: E0514 23:49:59.719506 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-61?timeout=10s\": dial tcp 172.31.17.61:6443: connect: connection refused" interval="200ms" May 14 23:49:59.725796 kubelet[2871]: I0514 23:49:59.725757 2871 factory.go:221] Registration of the containerd container factory successfully May 14 23:49:59.725976 kubelet[2871]: I0514 23:49:59.725957 2871 factory.go:221] Registration of the systemd container factory successfully May 14 23:49:59.726266 kubelet[2871]: I0514 23:49:59.726233 2871 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:49:59.735121 kubelet[2871]: I0514 23:49:59.734291 2871 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:49:59.736337 kubelet[2871]: I0514 23:49:59.736285 2871 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 23:49:59.736337 kubelet[2871]: I0514 23:49:59.736332 2871 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 23:49:59.736505 kubelet[2871]: I0514 23:49:59.736364 2871 kubelet.go:2321] "Starting kubelet main sync loop" May 14 23:49:59.736505 kubelet[2871]: E0514 23:49:59.736433 2871 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:49:59.748667 kubelet[2871]: W0514 23:49:59.748582 2871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.61:6443: connect: connection refused May 14 23:49:59.748848 kubelet[2871]: E0514 23:49:59.748683 2871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.61:6443: connect: connection refused" logger="UnhandledError" May 14 23:49:59.749269 kubelet[2871]: E0514 23:49:59.748909 2871 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:49:59.763056 kubelet[2871]: I0514 23:49:59.762985 2871 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 23:49:59.763056 kubelet[2871]: I0514 23:49:59.763049 2871 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 23:49:59.763303 kubelet[2871]: I0514 23:49:59.763115 2871 state_mem.go:36] "Initialized new in-memory state store" May 14 23:49:59.767130 kubelet[2871]: I0514 23:49:59.767084 2871 policy_none.go:49] "None policy: Start" May 14 23:49:59.768543 kubelet[2871]: I0514 23:49:59.768506 2871 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 23:49:59.768642 kubelet[2871]: I0514 23:49:59.768565 2871 state_mem.go:35] "Initializing new in-memory state store" May 14 23:49:59.783703 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 23:49:59.802891 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 23:49:59.808519 kubelet[2871]: E0514 23:49:59.808470 2871 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-17-61\" not found" May 14 23:49:59.809292 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 23:49:59.816695 kubelet[2871]: I0514 23:49:59.816659 2871 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:49:59.817434 kubelet[2871]: I0514 23:49:59.817131 2871 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:49:59.817434 kubelet[2871]: I0514 23:49:59.817157 2871 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:49:59.819010 kubelet[2871]: I0514 23:49:59.818645 2871 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:49:59.822663 kubelet[2871]: E0514 23:49:59.822614 2871 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-17-61\" not found" May 14 23:49:59.856976 systemd[1]: Created slice kubepods-burstable-pod8a7ecf532871b1fe480fc8a71b12bf5f.slice - libcontainer container kubepods-burstable-pod8a7ecf532871b1fe480fc8a71b12bf5f.slice. May 14 23:49:59.876050 systemd[1]: Created slice kubepods-burstable-pod1200944c8d368c0c50120ac4533ab757.slice - libcontainer container kubepods-burstable-pod1200944c8d368c0c50120ac4533ab757.slice. May 14 23:49:59.887785 systemd[1]: Created slice kubepods-burstable-pod7c90004aeeb1fc55227c83f59b49319b.slice - libcontainer container kubepods-burstable-pod7c90004aeeb1fc55227c83f59b49319b.slice. 
May 14 23:49:59.919562 kubelet[2871]: I0514 23:49:59.919496 2871 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-61" May 14 23:49:59.920312 kubelet[2871]: E0514 23:49:59.920100 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-61?timeout=10s\": dial tcp 172.31.17.61:6443: connect: connection refused" interval="400ms" May 14 23:49:59.920312 kubelet[2871]: E0514 23:49:59.920273 2871 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.61:6443/api/v1/nodes\": dial tcp 172.31.17.61:6443: connect: connection refused" node="ip-172-31-17-61" May 14 23:50:00.010672 kubelet[2871]: I0514 23:50:00.010582 2871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1200944c8d368c0c50120ac4533ab757-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-61\" (UID: \"1200944c8d368c0c50120ac4533ab757\") " pod="kube-system/kube-controller-manager-ip-172-31-17-61" May 14 23:50:00.010672 kubelet[2871]: I0514 23:50:00.010637 2871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a7ecf532871b1fe480fc8a71b12bf5f-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-61\" (UID: \"8a7ecf532871b1fe480fc8a71b12bf5f\") " pod="kube-system/kube-apiserver-ip-172-31-17-61" May 14 23:50:00.011128 kubelet[2871]: I0514 23:50:00.010679 2871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a7ecf532871b1fe480fc8a71b12bf5f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-61\" (UID: \"8a7ecf532871b1fe480fc8a71b12bf5f\") " pod="kube-system/kube-apiserver-ip-172-31-17-61" May 14 23:50:00.011128 
kubelet[2871]: I0514 23:50:00.010722 2871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1200944c8d368c0c50120ac4533ab757-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-61\" (UID: \"1200944c8d368c0c50120ac4533ab757\") " pod="kube-system/kube-controller-manager-ip-172-31-17-61" May 14 23:50:00.011128 kubelet[2871]: I0514 23:50:00.010759 2871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1200944c8d368c0c50120ac4533ab757-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-61\" (UID: \"1200944c8d368c0c50120ac4533ab757\") " pod="kube-system/kube-controller-manager-ip-172-31-17-61" May 14 23:50:00.011128 kubelet[2871]: I0514 23:50:00.010798 2871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c90004aeeb1fc55227c83f59b49319b-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-61\" (UID: \"7c90004aeeb1fc55227c83f59b49319b\") " pod="kube-system/kube-scheduler-ip-172-31-17-61" May 14 23:50:00.011128 kubelet[2871]: I0514 23:50:00.010832 2871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a7ecf532871b1fe480fc8a71b12bf5f-ca-certs\") pod \"kube-apiserver-ip-172-31-17-61\" (UID: \"8a7ecf532871b1fe480fc8a71b12bf5f\") " pod="kube-system/kube-apiserver-ip-172-31-17-61" May 14 23:50:00.011382 kubelet[2871]: I0514 23:50:00.010881 2871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1200944c8d368c0c50120ac4533ab757-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-61\" (UID: \"1200944c8d368c0c50120ac4533ab757\") " 
pod="kube-system/kube-controller-manager-ip-172-31-17-61" May 14 23:50:00.011382 kubelet[2871]: I0514 23:50:00.010914 2871 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1200944c8d368c0c50120ac4533ab757-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-61\" (UID: \"1200944c8d368c0c50120ac4533ab757\") " pod="kube-system/kube-controller-manager-ip-172-31-17-61" May 14 23:50:00.122842 kubelet[2871]: I0514 23:50:00.122674 2871 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-61" May 14 23:50:00.123756 kubelet[2871]: E0514 23:50:00.123699 2871 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.61:6443/api/v1/nodes\": dial tcp 172.31.17.61:6443: connect: connection refused" node="ip-172-31-17-61" May 14 23:50:00.172824 containerd[1965]: time="2025-05-14T23:50:00.172768383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-61,Uid:8a7ecf532871b1fe480fc8a71b12bf5f,Namespace:kube-system,Attempt:0,}" May 14 23:50:00.182816 containerd[1965]: time="2025-05-14T23:50:00.182380359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-61,Uid:1200944c8d368c0c50120ac4533ab757,Namespace:kube-system,Attempt:0,}" May 14 23:50:00.192913 containerd[1965]: time="2025-05-14T23:50:00.192862971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-61,Uid:7c90004aeeb1fc55227c83f59b49319b,Namespace:kube-system,Attempt:0,}" May 14 23:50:00.321408 kubelet[2871]: E0514 23:50:00.321332 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-61?timeout=10s\": dial tcp 172.31.17.61:6443: connect: connection refused" interval="800ms" May 14 23:50:00.526804 kubelet[2871]: I0514 
23:50:00.526650 2871 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-61" May 14 23:50:00.527919 kubelet[2871]: E0514 23:50:00.527803 2871 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.61:6443/api/v1/nodes\": dial tcp 172.31.17.61:6443: connect: connection refused" node="ip-172-31-17-61" May 14 23:50:00.688876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4189774830.mount: Deactivated successfully. May 14 23:50:00.693518 kubelet[2871]: W0514 23:50:00.693427 2871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.17.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-61&limit=500&resourceVersion=0": dial tcp 172.31.17.61:6443: connect: connection refused May 14 23:50:00.694099 kubelet[2871]: E0514 23:50:00.693553 2871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.17.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-17-61&limit=500&resourceVersion=0\": dial tcp 172.31.17.61:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:00.706209 containerd[1965]: time="2025-05-14T23:50:00.706147301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:50:00.713685 containerd[1965]: time="2025-05-14T23:50:00.713597813Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 14 23:50:00.718452 containerd[1965]: time="2025-05-14T23:50:00.718333697Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 
23:50:00.721937 containerd[1965]: time="2025-05-14T23:50:00.721864193Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:50:00.723906 containerd[1965]: time="2025-05-14T23:50:00.723843245Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:50:00.726348 containerd[1965]: time="2025-05-14T23:50:00.726186077Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:50:00.728362 containerd[1965]: time="2025-05-14T23:50:00.728295353Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 14 23:50:00.730711 containerd[1965]: time="2025-05-14T23:50:00.730636577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 23:50:00.732914 containerd[1965]: time="2025-05-14T23:50:00.732341681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 559.46567ms" May 14 23:50:00.742904 containerd[1965]: time="2025-05-14T23:50:00.742539653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.052254ms" May 14 23:50:00.744584 containerd[1965]: time="2025-05-14T23:50:00.744508625Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.373098ms" May 14 23:50:00.867469 kubelet[2871]: W0514 23:50:00.865263 2871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.17.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.17.61:6443: connect: connection refused May 14 23:50:00.867469 kubelet[2871]: E0514 23:50:00.865345 2871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.17.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.17.61:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:00.931530 containerd[1965]: time="2025-05-14T23:50:00.931042314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:50:00.931530 containerd[1965]: time="2025-05-14T23:50:00.931211778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:50:00.931530 containerd[1965]: time="2025-05-14T23:50:00.931264122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:00.931774 containerd[1965]: time="2025-05-14T23:50:00.931440762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:00.941533 containerd[1965]: time="2025-05-14T23:50:00.940973910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:50:00.941533 containerd[1965]: time="2025-05-14T23:50:00.941095770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:50:00.941533 containerd[1965]: time="2025-05-14T23:50:00.941135526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:00.942228 containerd[1965]: time="2025-05-14T23:50:00.941711922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:00.946401 containerd[1965]: time="2025-05-14T23:50:00.945167082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:50:00.946401 containerd[1965]: time="2025-05-14T23:50:00.945264222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:50:00.946401 containerd[1965]: time="2025-05-14T23:50:00.945292746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:00.946401 containerd[1965]: time="2025-05-14T23:50:00.945467778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:00.947341 kubelet[2871]: W0514 23:50:00.947268 2871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.17.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.17.61:6443: connect: connection refused May 14 23:50:00.947616 kubelet[2871]: E0514 23:50:00.947554 2871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.17.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.17.61:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:00.991410 systemd[1]: Started cri-containerd-7bd55e925492f750946a8d7d52483658837dd7d43a4adcb18747fef5eb2cddfa.scope - libcontainer container 7bd55e925492f750946a8d7d52483658837dd7d43a4adcb18747fef5eb2cddfa. May 14 23:50:01.008421 systemd[1]: Started cri-containerd-baf6f29df0f20ed1ce5e1e150046cee57e8f3bf8f0ac4efbe2349745937fe836.scope - libcontainer container baf6f29df0f20ed1ce5e1e150046cee57e8f3bf8f0ac4efbe2349745937fe836. May 14 23:50:01.012683 systemd[1]: Started cri-containerd-ca17bee3cca6f100be1aec3c87c38988a931bd32fb9f998da00f3f236b61a55f.scope - libcontainer container ca17bee3cca6f100be1aec3c87c38988a931bd32fb9f998da00f3f236b61a55f. 
May 14 23:50:01.114139 containerd[1965]: time="2025-05-14T23:50:01.114039915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-17-61,Uid:7c90004aeeb1fc55227c83f59b49319b,Namespace:kube-system,Attempt:0,} returns sandbox id \"baf6f29df0f20ed1ce5e1e150046cee57e8f3bf8f0ac4efbe2349745937fe836\"" May 14 23:50:01.122703 kubelet[2871]: E0514 23:50:01.121839 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.17.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-61?timeout=10s\": dial tcp 172.31.17.61:6443: connect: connection refused" interval="1.6s" May 14 23:50:01.125290 containerd[1965]: time="2025-05-14T23:50:01.125219067Z" level=info msg="CreateContainer within sandbox \"baf6f29df0f20ed1ce5e1e150046cee57e8f3bf8f0ac4efbe2349745937fe836\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 23:50:01.134550 containerd[1965]: time="2025-05-14T23:50:01.134467851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-17-61,Uid:1200944c8d368c0c50120ac4533ab757,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca17bee3cca6f100be1aec3c87c38988a931bd32fb9f998da00f3f236b61a55f\"" May 14 23:50:01.146838 containerd[1965]: time="2025-05-14T23:50:01.146790831Z" level=info msg="CreateContainer within sandbox \"ca17bee3cca6f100be1aec3c87c38988a931bd32fb9f998da00f3f236b61a55f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 23:50:01.151105 containerd[1965]: time="2025-05-14T23:50:01.150999087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-17-61,Uid:8a7ecf532871b1fe480fc8a71b12bf5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bd55e925492f750946a8d7d52483658837dd7d43a4adcb18747fef5eb2cddfa\"" May 14 23:50:01.155529 containerd[1965]: time="2025-05-14T23:50:01.155300199Z" level=info msg="CreateContainer within sandbox 
\"7bd55e925492f750946a8d7d52483658837dd7d43a4adcb18747fef5eb2cddfa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 23:50:01.172876 containerd[1965]: time="2025-05-14T23:50:01.172711216Z" level=info msg="CreateContainer within sandbox \"baf6f29df0f20ed1ce5e1e150046cee57e8f3bf8f0ac4efbe2349745937fe836\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0771bc3617f979f2755ff254bbbd4e63c5fc1ba347b6200e547c1fc06ffdc734\"" May 14 23:50:01.174036 containerd[1965]: time="2025-05-14T23:50:01.173842900Z" level=info msg="StartContainer for \"0771bc3617f979f2755ff254bbbd4e63c5fc1ba347b6200e547c1fc06ffdc734\"" May 14 23:50:01.193198 containerd[1965]: time="2025-05-14T23:50:01.192966772Z" level=info msg="CreateContainer within sandbox \"ca17bee3cca6f100be1aec3c87c38988a931bd32fb9f998da00f3f236b61a55f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"066df360d139681524984ae7e171beab5ef0871addba5177e071df09d5c37cd4\"" May 14 23:50:01.193992 containerd[1965]: time="2025-05-14T23:50:01.193782064Z" level=info msg="StartContainer for \"066df360d139681524984ae7e171beab5ef0871addba5177e071df09d5c37cd4\"" May 14 23:50:01.206518 containerd[1965]: time="2025-05-14T23:50:01.206283196Z" level=info msg="CreateContainer within sandbox \"7bd55e925492f750946a8d7d52483658837dd7d43a4adcb18747fef5eb2cddfa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3f1d58f104111671ef6f0a8e1f5f9db981b80380606197a1bf46970e64eb0db3\"" May 14 23:50:01.208133 containerd[1965]: time="2025-05-14T23:50:01.207596908Z" level=info msg="StartContainer for \"3f1d58f104111671ef6f0a8e1f5f9db981b80380606197a1bf46970e64eb0db3\"" May 14 23:50:01.237910 systemd[1]: Started cri-containerd-0771bc3617f979f2755ff254bbbd4e63c5fc1ba347b6200e547c1fc06ffdc734.scope - libcontainer container 0771bc3617f979f2755ff254bbbd4e63c5fc1ba347b6200e547c1fc06ffdc734. 
May 14 23:50:01.254992 kubelet[2871]: W0514 23:50:01.254908 2871 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.17.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.17.61:6443: connect: connection refused May 14 23:50:01.255438 kubelet[2871]: E0514 23:50:01.255354 2871 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.17.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.17.61:6443: connect: connection refused" logger="UnhandledError" May 14 23:50:01.297405 systemd[1]: Started cri-containerd-066df360d139681524984ae7e171beab5ef0871addba5177e071df09d5c37cd4.scope - libcontainer container 066df360d139681524984ae7e171beab5ef0871addba5177e071df09d5c37cd4. May 14 23:50:01.303774 systemd[1]: Started cri-containerd-3f1d58f104111671ef6f0a8e1f5f9db981b80380606197a1bf46970e64eb0db3.scope - libcontainer container 3f1d58f104111671ef6f0a8e1f5f9db981b80380606197a1bf46970e64eb0db3. 
May 14 23:50:01.331426 kubelet[2871]: I0514 23:50:01.331374 2871 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-61" May 14 23:50:01.332316 kubelet[2871]: E0514 23:50:01.332129 2871 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.17.61:6443/api/v1/nodes\": dial tcp 172.31.17.61:6443: connect: connection refused" node="ip-172-31-17-61" May 14 23:50:01.374437 containerd[1965]: time="2025-05-14T23:50:01.373230869Z" level=info msg="StartContainer for \"0771bc3617f979f2755ff254bbbd4e63c5fc1ba347b6200e547c1fc06ffdc734\" returns successfully" May 14 23:50:01.456562 containerd[1965]: time="2025-05-14T23:50:01.456485441Z" level=info msg="StartContainer for \"3f1d58f104111671ef6f0a8e1f5f9db981b80380606197a1bf46970e64eb0db3\" returns successfully" May 14 23:50:01.457425 containerd[1965]: time="2025-05-14T23:50:01.456618617Z" level=info msg="StartContainer for \"066df360d139681524984ae7e171beab5ef0871addba5177e071df09d5c37cd4\" returns successfully" May 14 23:50:02.934036 update_engine[1944]: I20250514 23:50:02.933108 1944 update_attempter.cc:509] Updating boot flags... 
May 14 23:50:02.936583 kubelet[2871]: I0514 23:50:02.935191 2871 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-61" May 14 23:50:03.098121 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (3162) May 14 23:50:03.567202 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (3161) May 14 23:50:04.071094 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (3161) May 14 23:50:06.575424 kubelet[2871]: E0514 23:50:06.575365 2871 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-17-61\" not found" node="ip-172-31-17-61" May 14 23:50:06.673050 kubelet[2871]: I0514 23:50:06.672609 2871 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-17-61" May 14 23:50:06.673050 kubelet[2871]: E0514 23:50:06.672663 2871 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-17-61\": node \"ip-172-31-17-61\" not found" May 14 23:50:06.689824 kubelet[2871]: I0514 23:50:06.689778 2871 apiserver.go:52] "Watching apiserver" May 14 23:50:06.710001 kubelet[2871]: I0514 23:50:06.709886 2871 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 23:50:08.976521 systemd[1]: Reload requested from client PID 3419 ('systemctl') (unit session-9.scope)... May 14 23:50:08.976561 systemd[1]: Reloading... May 14 23:50:09.203129 zram_generator::config[3473]: No configuration found. May 14 23:50:09.456343 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 23:50:09.729244 systemd[1]: Reloading finished in 751 ms. 
May 14 23:50:09.787997 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:09.812050 systemd[1]: kubelet.service: Deactivated successfully. May 14 23:50:09.812627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:09.812728 systemd[1]: kubelet.service: Consumed 3.005s CPU time, 118.8M memory peak. May 14 23:50:09.823635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 23:50:10.126675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 23:50:10.142663 (kubelet)[3524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 23:50:10.254862 kubelet[3524]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 23:50:10.254862 kubelet[3524]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 23:50:10.257505 kubelet[3524]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 23:50:10.257505 kubelet[3524]: I0514 23:50:10.255492 3524 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 23:50:10.271189 kubelet[3524]: I0514 23:50:10.271137 3524 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 23:50:10.271578 kubelet[3524]: I0514 23:50:10.271526 3524 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 23:50:10.272494 kubelet[3524]: I0514 23:50:10.272448 3524 server.go:929] "Client rotation is on, will bootstrap in background" May 14 23:50:10.275867 kubelet[3524]: I0514 23:50:10.275821 3524 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 23:50:10.285817 kubelet[3524]: I0514 23:50:10.284872 3524 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 23:50:10.294368 kubelet[3524]: E0514 23:50:10.294300 3524 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 14 23:50:10.294368 kubelet[3524]: I0514 23:50:10.294363 3524 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 14 23:50:10.300795 kubelet[3524]: I0514 23:50:10.300582 3524 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 23:50:10.300970 kubelet[3524]: I0514 23:50:10.300842 3524 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 23:50:10.302285 kubelet[3524]: I0514 23:50:10.302190 3524 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 23:50:10.302636 kubelet[3524]: I0514 23:50:10.302272 3524 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-17-61","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":2} May 14 23:50:10.302849 kubelet[3524]: I0514 23:50:10.302651 3524 topology_manager.go:138] "Creating topology manager with none policy" May 14 23:50:10.302849 kubelet[3524]: I0514 23:50:10.302677 3524 container_manager_linux.go:300] "Creating device plugin manager" May 14 23:50:10.302849 kubelet[3524]: I0514 23:50:10.302744 3524 state_mem.go:36] "Initialized new in-memory state store" May 14 23:50:10.303056 kubelet[3524]: I0514 23:50:10.302984 3524 kubelet.go:408] "Attempting to sync node with API server" May 14 23:50:10.303056 kubelet[3524]: I0514 23:50:10.303027 3524 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 23:50:10.303629 sudo[3537]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 23:50:10.304474 sudo[3537]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 23:50:10.306275 kubelet[3524]: I0514 23:50:10.305171 3524 kubelet.go:314] "Adding apiserver pod source" May 14 23:50:10.306275 kubelet[3524]: I0514 23:50:10.305229 3524 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 23:50:10.312620 kubelet[3524]: I0514 23:50:10.310127 3524 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 14 23:50:10.312620 kubelet[3524]: I0514 23:50:10.310968 3524 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 23:50:10.318977 kubelet[3524]: I0514 23:50:10.318014 3524 server.go:1269] "Started kubelet" May 14 23:50:10.332107 kubelet[3524]: I0514 23:50:10.329521 3524 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 23:50:10.340856 kubelet[3524]: I0514 23:50:10.340767 3524 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 23:50:10.367793 kubelet[3524]: I0514 23:50:10.367684 3524 server.go:460] "Adding debug handlers to kubelet 
server" May 14 23:50:10.390593 kubelet[3524]: I0514 23:50:10.342521 3524 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 23:50:10.391005 kubelet[3524]: I0514 23:50:10.341431 3524 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 23:50:10.392254 kubelet[3524]: I0514 23:50:10.392217 3524 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 23:50:10.402217 kubelet[3524]: E0514 23:50:10.348919 3524 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-17-61\" not found" May 14 23:50:10.417859 kubelet[3524]: I0514 23:50:10.348680 3524 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 23:50:10.426606 kubelet[3524]: I0514 23:50:10.405527 3524 factory.go:221] Registration of the systemd container factory successfully May 14 23:50:10.427401 kubelet[3524]: I0514 23:50:10.426900 3524 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 23:50:10.427753 kubelet[3524]: I0514 23:50:10.348657 3524 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 23:50:10.428411 kubelet[3524]: I0514 23:50:10.428381 3524 reconciler.go:26] "Reconciler: start to sync state" May 14 23:50:10.454324 kubelet[3524]: I0514 23:50:10.452197 3524 factory.go:221] Registration of the containerd container factory successfully May 14 23:50:10.470284 kubelet[3524]: E0514 23:50:10.469856 3524 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 23:50:10.479757 kubelet[3524]: I0514 23:50:10.479709 3524 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 23:50:10.491028 kubelet[3524]: I0514 23:50:10.490976 3524 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 23:50:10.491273 kubelet[3524]: I0514 23:50:10.491249 3524 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 23:50:10.491916 kubelet[3524]: I0514 23:50:10.491409 3524 kubelet.go:2321] "Starting kubelet main sync loop" May 14 23:50:10.491916 kubelet[3524]: E0514 23:50:10.491500 3524 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 23:50:10.591814 kubelet[3524]: E0514 23:50:10.591750 3524 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 23:50:10.600168 kubelet[3524]: I0514 23:50:10.599173 3524 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 23:50:10.600168 kubelet[3524]: I0514 23:50:10.599207 3524 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 23:50:10.600168 kubelet[3524]: I0514 23:50:10.599290 3524 state_mem.go:36] "Initialized new in-memory state store" May 14 23:50:10.600168 kubelet[3524]: I0514 23:50:10.599545 3524 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 23:50:10.600168 kubelet[3524]: I0514 23:50:10.599567 3524 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 23:50:10.600168 kubelet[3524]: I0514 23:50:10.599603 3524 policy_none.go:49] "None policy: Start" May 14 23:50:10.601770 kubelet[3524]: I0514 23:50:10.601735 3524 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 23:50:10.602046 kubelet[3524]: I0514 23:50:10.601985 3524 state_mem.go:35] "Initializing new in-memory state store" May 
14 23:50:10.603098 kubelet[3524]: I0514 23:50:10.602885 3524 state_mem.go:75] "Updated machine memory state" May 14 23:50:10.614391 kubelet[3524]: I0514 23:50:10.613993 3524 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 23:50:10.615947 kubelet[3524]: I0514 23:50:10.615703 3524 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 23:50:10.616280 kubelet[3524]: I0514 23:50:10.616161 3524 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 23:50:10.617404 kubelet[3524]: I0514 23:50:10.617314 3524 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 23:50:10.752238 kubelet[3524]: I0514 23:50:10.751606 3524 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-17-61" May 14 23:50:10.768444 kubelet[3524]: I0514 23:50:10.766817 3524 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-17-61" May 14 23:50:10.768444 kubelet[3524]: I0514 23:50:10.766981 3524 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-17-61" May 14 23:50:10.807030 kubelet[3524]: E0514 23:50:10.806986 3524 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-17-61\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-61" May 14 23:50:10.830947 kubelet[3524]: I0514 23:50:10.830887 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c90004aeeb1fc55227c83f59b49319b-kubeconfig\") pod \"kube-scheduler-ip-172-31-17-61\" (UID: \"7c90004aeeb1fc55227c83f59b49319b\") " pod="kube-system/kube-scheduler-ip-172-31-17-61" May 14 23:50:10.831124 kubelet[3524]: I0514 23:50:10.830956 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/8a7ecf532871b1fe480fc8a71b12bf5f-ca-certs\") pod \"kube-apiserver-ip-172-31-17-61\" (UID: \"8a7ecf532871b1fe480fc8a71b12bf5f\") " pod="kube-system/kube-apiserver-ip-172-31-17-61" May 14 23:50:10.831124 kubelet[3524]: I0514 23:50:10.830995 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a7ecf532871b1fe480fc8a71b12bf5f-k8s-certs\") pod \"kube-apiserver-ip-172-31-17-61\" (UID: \"8a7ecf532871b1fe480fc8a71b12bf5f\") " pod="kube-system/kube-apiserver-ip-172-31-17-61" May 14 23:50:10.831124 kubelet[3524]: I0514 23:50:10.831030 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1200944c8d368c0c50120ac4533ab757-k8s-certs\") pod \"kube-controller-manager-ip-172-31-17-61\" (UID: \"1200944c8d368c0c50120ac4533ab757\") " pod="kube-system/kube-controller-manager-ip-172-31-17-61" May 14 23:50:10.831124 kubelet[3524]: I0514 23:50:10.831090 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1200944c8d368c0c50120ac4533ab757-kubeconfig\") pod \"kube-controller-manager-ip-172-31-17-61\" (UID: \"1200944c8d368c0c50120ac4533ab757\") " pod="kube-system/kube-controller-manager-ip-172-31-17-61" May 14 23:50:10.831341 kubelet[3524]: I0514 23:50:10.831133 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a7ecf532871b1fe480fc8a71b12bf5f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-17-61\" (UID: \"8a7ecf532871b1fe480fc8a71b12bf5f\") " pod="kube-system/kube-apiserver-ip-172-31-17-61" May 14 23:50:10.831341 kubelet[3524]: I0514 23:50:10.831171 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1200944c8d368c0c50120ac4533ab757-ca-certs\") pod \"kube-controller-manager-ip-172-31-17-61\" (UID: \"1200944c8d368c0c50120ac4533ab757\") " pod="kube-system/kube-controller-manager-ip-172-31-17-61" May 14 23:50:10.831341 kubelet[3524]: I0514 23:50:10.831209 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1200944c8d368c0c50120ac4533ab757-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-17-61\" (UID: \"1200944c8d368c0c50120ac4533ab757\") " pod="kube-system/kube-controller-manager-ip-172-31-17-61" May 14 23:50:10.831341 kubelet[3524]: I0514 23:50:10.831245 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1200944c8d368c0c50120ac4533ab757-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-17-61\" (UID: \"1200944c8d368c0c50120ac4533ab757\") " pod="kube-system/kube-controller-manager-ip-172-31-17-61" May 14 23:50:11.251582 sudo[3537]: pam_unix(sudo:session): session closed for user root May 14 23:50:11.320028 kubelet[3524]: I0514 23:50:11.319482 3524 apiserver.go:52] "Watching apiserver" May 14 23:50:11.419213 kubelet[3524]: I0514 23:50:11.418164 3524 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 23:50:11.453785 kubelet[3524]: I0514 23:50:11.453301 3524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-17-61" podStartSLOduration=1.453240735 podStartE2EDuration="1.453240735s" podCreationTimestamp="2025-05-14 23:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:11.449272167 +0000 UTC m=+1.298873228" watchObservedRunningTime="2025-05-14 
23:50:11.453240735 +0000 UTC m=+1.302841808" May 14 23:50:11.453785 kubelet[3524]: I0514 23:50:11.453588 3524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-17-61" podStartSLOduration=2.453566883 podStartE2EDuration="2.453566883s" podCreationTimestamp="2025-05-14 23:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:11.401677262 +0000 UTC m=+1.251278383" watchObservedRunningTime="2025-05-14 23:50:11.453566883 +0000 UTC m=+1.303167956" May 14 23:50:11.548307 kubelet[3524]: I0514 23:50:11.546371 3524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-17-61" podStartSLOduration=1.546336831 podStartE2EDuration="1.546336831s" podCreationTimestamp="2025-05-14 23:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:11.493422147 +0000 UTC m=+1.343023208" watchObservedRunningTime="2025-05-14 23:50:11.546336831 +0000 UTC m=+1.395937868" May 14 23:50:11.608884 kubelet[3524]: E0514 23:50:11.608583 3524 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-17-61\" already exists" pod="kube-system/kube-apiserver-ip-172-31-17-61" May 14 23:50:11.610150 kubelet[3524]: E0514 23:50:11.609696 3524 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-17-61\" already exists" pod="kube-system/kube-scheduler-ip-172-31-17-61" May 14 23:50:14.951096 sudo[2320]: pam_unix(sudo:session): session closed for user root May 14 23:50:14.975131 sshd[2317]: Connection closed by 139.178.89.65 port 33324 May 14 23:50:14.976018 sshd-session[2304]: pam_unix(sshd:session): session closed for user core May 14 23:50:14.983057 systemd[1]: sshd@8-172.31.17.61:22-139.178.89.65:33324.service: Deactivated 
successfully. May 14 23:50:14.988362 systemd[1]: session-9.scope: Deactivated successfully. May 14 23:50:14.988953 systemd[1]: session-9.scope: Consumed 12.148s CPU time, 261.7M memory peak. May 14 23:50:14.991888 systemd-logind[1943]: Session 9 logged out. Waiting for processes to exit. May 14 23:50:14.994708 systemd-logind[1943]: Removed session 9. May 14 23:50:15.882404 kubelet[3524]: I0514 23:50:15.882343 3524 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 23:50:15.883107 containerd[1965]: time="2025-05-14T23:50:15.882997017Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 23:50:15.885666 kubelet[3524]: I0514 23:50:15.885467 3524 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 23:50:16.368039 kubelet[3524]: W0514 23:50:16.367945 3524 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-17-61" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-61' and this object May 14 23:50:16.368039 kubelet[3524]: E0514 23:50:16.368023 3524 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-17-61\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-61' and this object" logger="UnhandledError" May 14 23:50:16.371749 kubelet[3524]: W0514 23:50:16.370045 3524 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-17-61" cannot list resource "configmaps" in API group "" in the namespace 
"kube-system": no relationship found between node 'ip-172-31-17-61' and this object May 14 23:50:16.371749 kubelet[3524]: E0514 23:50:16.370378 3524 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-17-61\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-61' and this object" logger="UnhandledError" May 14 23:50:16.375731 systemd[1]: Created slice kubepods-besteffort-pode375b92e_26e3_4b51_aa5e_36a16e514afa.slice - libcontainer container kubepods-besteffort-pode375b92e_26e3_4b51_aa5e_36a16e514afa.slice. May 14 23:50:16.426616 systemd[1]: Created slice kubepods-burstable-pod0f3f6420_5b97_43a1_be0c_8e023da75b13.slice - libcontainer container kubepods-burstable-pod0f3f6420_5b97_43a1_be0c_8e023da75b13.slice. May 14 23:50:16.469350 kubelet[3524]: I0514 23:50:16.469249 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-host-proc-sys-net\") pod \"cilium-jvllv\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " pod="kube-system/cilium-jvllv" May 14 23:50:16.469350 kubelet[3524]: I0514 23:50:16.469380 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-host-proc-sys-kernel\") pod \"cilium-jvllv\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " pod="kube-system/cilium-jvllv" May 14 23:50:16.469766 kubelet[3524]: I0514 23:50:16.469429 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hwzg\" (UniqueName: 
\"kubernetes.io/projected/e375b92e-26e3-4b51-aa5e-36a16e514afa-kube-api-access-2hwzg\") pod \"kube-proxy-d76fr\" (UID: \"e375b92e-26e3-4b51-aa5e-36a16e514afa\") " pod="kube-system/kube-proxy-d76fr" May 14 23:50:16.469766 kubelet[3524]: I0514 23:50:16.469472 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-bpf-maps\") pod \"cilium-jvllv\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " pod="kube-system/cilium-jvllv" May 14 23:50:16.469766 kubelet[3524]: I0514 23:50:16.469509 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0f3f6420-5b97-43a1-be0c-8e023da75b13-clustermesh-secrets\") pod \"cilium-jvllv\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " pod="kube-system/cilium-jvllv" May 14 23:50:16.469766 kubelet[3524]: I0514 23:50:16.469546 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xbls\" (UniqueName: \"kubernetes.io/projected/0f3f6420-5b97-43a1-be0c-8e023da75b13-kube-api-access-5xbls\") pod \"cilium-jvllv\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " pod="kube-system/cilium-jvllv" May 14 23:50:16.469766 kubelet[3524]: I0514 23:50:16.469583 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e375b92e-26e3-4b51-aa5e-36a16e514afa-kube-proxy\") pod \"kube-proxy-d76fr\" (UID: \"e375b92e-26e3-4b51-aa5e-36a16e514afa\") " pod="kube-system/kube-proxy-d76fr" May 14 23:50:16.470338 kubelet[3524]: I0514 23:50:16.469620 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-lib-modules\") pod \"cilium-jvllv\" 
(UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " pod="kube-system/cilium-jvllv" May 14 23:50:16.470338 kubelet[3524]: I0514 23:50:16.469657 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-etc-cni-netd\") pod \"cilium-jvllv\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " pod="kube-system/cilium-jvllv" May 14 23:50:16.470338 kubelet[3524]: I0514 23:50:16.469695 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e375b92e-26e3-4b51-aa5e-36a16e514afa-lib-modules\") pod \"kube-proxy-d76fr\" (UID: \"e375b92e-26e3-4b51-aa5e-36a16e514afa\") " pod="kube-system/kube-proxy-d76fr" May 14 23:50:16.470338 kubelet[3524]: I0514 23:50:16.469730 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-hostproc\") pod \"cilium-jvllv\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " pod="kube-system/cilium-jvllv" May 14 23:50:16.470338 kubelet[3524]: I0514 23:50:16.469774 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-cni-path\") pod \"cilium-jvllv\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " pod="kube-system/cilium-jvllv" May 14 23:50:16.470338 kubelet[3524]: I0514 23:50:16.469812 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e375b92e-26e3-4b51-aa5e-36a16e514afa-xtables-lock\") pod \"kube-proxy-d76fr\" (UID: \"e375b92e-26e3-4b51-aa5e-36a16e514afa\") " pod="kube-system/kube-proxy-d76fr" May 14 23:50:16.470889 kubelet[3524]: I0514 23:50:16.469847 3524 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-cilium-cgroup\") pod \"cilium-jvllv\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " pod="kube-system/cilium-jvllv" May 14 23:50:16.470889 kubelet[3524]: I0514 23:50:16.469883 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0f3f6420-5b97-43a1-be0c-8e023da75b13-hubble-tls\") pod \"cilium-jvllv\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " pod="kube-system/cilium-jvllv" May 14 23:50:16.470889 kubelet[3524]: I0514 23:50:16.469946 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-cilium-run\") pod \"cilium-jvllv\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " pod="kube-system/cilium-jvllv" May 14 23:50:16.470889 kubelet[3524]: I0514 23:50:16.469986 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-xtables-lock\") pod \"cilium-jvllv\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " pod="kube-system/cilium-jvllv" May 14 23:50:16.470889 kubelet[3524]: I0514 23:50:16.470024 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f3f6420-5b97-43a1-be0c-8e023da75b13-cilium-config-path\") pod \"cilium-jvllv\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " pod="kube-system/cilium-jvllv" May 14 23:50:17.014749 systemd[1]: Created slice kubepods-besteffort-pod99fee54d_ab70_4ec3_a226_bb9c31d872ab.slice - libcontainer container 
kubepods-besteffort-pod99fee54d_ab70_4ec3_a226_bb9c31d872ab.slice. May 14 23:50:17.077060 kubelet[3524]: I0514 23:50:17.076333 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99fee54d-ab70-4ec3-a226-bb9c31d872ab-cilium-config-path\") pod \"cilium-operator-5d85765b45-5cb8z\" (UID: \"99fee54d-ab70-4ec3-a226-bb9c31d872ab\") " pod="kube-system/cilium-operator-5d85765b45-5cb8z" May 14 23:50:17.077060 kubelet[3524]: I0514 23:50:17.076430 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn7w6\" (UniqueName: \"kubernetes.io/projected/99fee54d-ab70-4ec3-a226-bb9c31d872ab-kube-api-access-zn7w6\") pod \"cilium-operator-5d85765b45-5cb8z\" (UID: \"99fee54d-ab70-4ec3-a226-bb9c31d872ab\") " pod="kube-system/cilium-operator-5d85765b45-5cb8z" May 14 23:50:17.596520 containerd[1965]: time="2025-05-14T23:50:17.596439297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d76fr,Uid:e375b92e-26e3-4b51-aa5e-36a16e514afa,Namespace:kube-system,Attempt:0,}" May 14 23:50:17.626086 containerd[1965]: time="2025-05-14T23:50:17.626000157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5cb8z,Uid:99fee54d-ab70-4ec3-a226-bb9c31d872ab,Namespace:kube-system,Attempt:0,}" May 14 23:50:17.644273 containerd[1965]: time="2025-05-14T23:50:17.643643169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jvllv,Uid:0f3f6420-5b97-43a1-be0c-8e023da75b13,Namespace:kube-system,Attempt:0,}" May 14 23:50:17.663132 containerd[1965]: time="2025-05-14T23:50:17.660685101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:50:17.663132 containerd[1965]: time="2025-05-14T23:50:17.660796425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:50:17.663132 containerd[1965]: time="2025-05-14T23:50:17.660834081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:17.663132 containerd[1965]: time="2025-05-14T23:50:17.661022541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:17.745611 containerd[1965]: time="2025-05-14T23:50:17.742479250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:50:17.745611 containerd[1965]: time="2025-05-14T23:50:17.742608070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:50:17.748011 containerd[1965]: time="2025-05-14T23:50:17.747731734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:17.748409 systemd[1]: Started cri-containerd-22dc5c0194ae55fc1336644b18bb2f075bd038fc8fd2d75ee5e97ca5abead705.scope - libcontainer container 22dc5c0194ae55fc1336644b18bb2f075bd038fc8fd2d75ee5e97ca5abead705. May 14 23:50:17.752124 containerd[1965]: time="2025-05-14T23:50:17.751263238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:17.780804 containerd[1965]: time="2025-05-14T23:50:17.780594394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:50:17.781149 containerd[1965]: time="2025-05-14T23:50:17.780853618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:50:17.781149 containerd[1965]: time="2025-05-14T23:50:17.780939382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:17.782032 containerd[1965]: time="2025-05-14T23:50:17.781274338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:50:17.817474 systemd[1]: Started cri-containerd-540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601.scope - libcontainer container 540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601. May 14 23:50:17.860039 systemd[1]: Started cri-containerd-86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d.scope - libcontainer container 86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d. May 14 23:50:17.868408 containerd[1965]: time="2025-05-14T23:50:17.868344946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d76fr,Uid:e375b92e-26e3-4b51-aa5e-36a16e514afa,Namespace:kube-system,Attempt:0,} returns sandbox id \"22dc5c0194ae55fc1336644b18bb2f075bd038fc8fd2d75ee5e97ca5abead705\"" May 14 23:50:17.885693 containerd[1965]: time="2025-05-14T23:50:17.885595679Z" level=info msg="CreateContainer within sandbox \"22dc5c0194ae55fc1336644b18bb2f075bd038fc8fd2d75ee5e97ca5abead705\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 23:50:17.968749 containerd[1965]: time="2025-05-14T23:50:17.968695355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5cb8z,Uid:99fee54d-ab70-4ec3-a226-bb9c31d872ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601\"" May 14 23:50:17.976465 containerd[1965]: time="2025-05-14T23:50:17.976349099Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 23:50:17.992210 containerd[1965]: time="2025-05-14T23:50:17.992099615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jvllv,Uid:0f3f6420-5b97-43a1-be0c-8e023da75b13,Namespace:kube-system,Attempt:0,} returns sandbox id \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\"" May 14 23:50:17.993431 containerd[1965]: time="2025-05-14T23:50:17.993316103Z" level=info msg="CreateContainer within sandbox \"22dc5c0194ae55fc1336644b18bb2f075bd038fc8fd2d75ee5e97ca5abead705\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9321f805e33e68a3a099d9b14a7643a353c655e83b0214b450895fb498e000c6\"" May 14 23:50:17.995362 containerd[1965]: time="2025-05-14T23:50:17.995103083Z" level=info msg="StartContainer for \"9321f805e33e68a3a099d9b14a7643a353c655e83b0214b450895fb498e000c6\"" May 14 23:50:18.057517 systemd[1]: Started cri-containerd-9321f805e33e68a3a099d9b14a7643a353c655e83b0214b450895fb498e000c6.scope - libcontainer container 9321f805e33e68a3a099d9b14a7643a353c655e83b0214b450895fb498e000c6. May 14 23:50:18.134820 containerd[1965]: time="2025-05-14T23:50:18.133670072Z" level=info msg="StartContainer for \"9321f805e33e68a3a099d9b14a7643a353c655e83b0214b450895fb498e000c6\" returns successfully" May 14 23:50:19.334697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921848740.mount: Deactivated successfully. 
May 14 23:50:20.517920 kubelet[3524]: I0514 23:50:20.517350 3524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d76fr" podStartSLOduration=4.517288248 podStartE2EDuration="4.517288248s" podCreationTimestamp="2025-05-14 23:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:18.62080411 +0000 UTC m=+8.470405135" watchObservedRunningTime="2025-05-14 23:50:20.517288248 +0000 UTC m=+10.366889297" May 14 23:50:21.329149 containerd[1965]: time="2025-05-14T23:50:21.328775940Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:21.331044 containerd[1965]: time="2025-05-14T23:50:21.330943296Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 14 23:50:21.333532 containerd[1965]: time="2025-05-14T23:50:21.333447852Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:21.336592 containerd[1965]: time="2025-05-14T23:50:21.336394332Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.359937185s" May 14 23:50:21.336592 containerd[1965]: time="2025-05-14T23:50:21.336455376Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 14 23:50:21.340373 containerd[1965]: time="2025-05-14T23:50:21.340292028Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 23:50:21.341891 containerd[1965]: time="2025-05-14T23:50:21.341808276Z" level=info msg="CreateContainer within sandbox \"540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 23:50:21.373567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3588312188.mount: Deactivated successfully. May 14 23:50:21.380634 containerd[1965]: time="2025-05-14T23:50:21.380145588Z" level=info msg="CreateContainer within sandbox \"540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512\"" May 14 23:50:21.381881 containerd[1965]: time="2025-05-14T23:50:21.381842556Z" level=info msg="StartContainer for \"92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512\"" May 14 23:50:21.438391 systemd[1]: Started cri-containerd-92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512.scope - libcontainer container 92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512. May 14 23:50:21.488683 containerd[1965]: time="2025-05-14T23:50:21.488611644Z" level=info msg="StartContainer for \"92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512\" returns successfully" May 14 23:50:27.171981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4034303856.mount: Deactivated successfully. 
May 14 23:50:29.898352 containerd[1965]: time="2025-05-14T23:50:29.898238590Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:29.900728 containerd[1965]: time="2025-05-14T23:50:29.900571990Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 14 23:50:29.903131 containerd[1965]: time="2025-05-14T23:50:29.902984686Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 23:50:29.907483 containerd[1965]: time="2025-05-14T23:50:29.907221934Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.566844334s" May 14 23:50:29.907483 containerd[1965]: time="2025-05-14T23:50:29.907300042Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 14 23:50:29.913252 containerd[1965]: time="2025-05-14T23:50:29.912901702Z" level=info msg="CreateContainer within sandbox \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 23:50:29.944342 containerd[1965]: time="2025-05-14T23:50:29.943912522Z" level=info msg="CreateContainer within sandbox 
\"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe\"" May 14 23:50:29.945230 containerd[1965]: time="2025-05-14T23:50:29.945157498Z" level=info msg="StartContainer for \"8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe\"" May 14 23:50:30.015427 systemd[1]: Started cri-containerd-8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe.scope - libcontainer container 8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe. May 14 23:50:30.065424 containerd[1965]: time="2025-05-14T23:50:30.065341699Z" level=info msg="StartContainer for \"8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe\" returns successfully" May 14 23:50:30.088717 systemd[1]: cri-containerd-8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe.scope: Deactivated successfully. May 14 23:50:30.089281 systemd[1]: cri-containerd-8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe.scope: Consumed 41ms CPU time, 6.5M memory peak, 2.1M written to disk. May 14 23:50:30.672473 kubelet[3524]: I0514 23:50:30.672214 3524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-5cb8z" podStartSLOduration=11.306363521 podStartE2EDuration="14.672190846s" podCreationTimestamp="2025-05-14 23:50:16 +0000 UTC" firstStartedPulling="2025-05-14 23:50:17.972399407 +0000 UTC m=+7.822000444" lastFinishedPulling="2025-05-14 23:50:21.338226744 +0000 UTC m=+11.187827769" observedRunningTime="2025-05-14 23:50:21.653044093 +0000 UTC m=+11.502645142" watchObservedRunningTime="2025-05-14 23:50:30.672190846 +0000 UTC m=+20.521791907" May 14 23:50:30.937825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe-rootfs.mount: Deactivated successfully. 
May 14 23:50:31.329792 containerd[1965]: time="2025-05-14T23:50:31.329683881Z" level=info msg="shim disconnected" id=8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe namespace=k8s.io May 14 23:50:31.329792 containerd[1965]: time="2025-05-14T23:50:31.329791497Z" level=warning msg="cleaning up after shim disconnected" id=8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe namespace=k8s.io May 14 23:50:31.330555 containerd[1965]: time="2025-05-14T23:50:31.329813817Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:50:31.655878 containerd[1965]: time="2025-05-14T23:50:31.655311203Z" level=info msg="CreateContainer within sandbox \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 23:50:31.683297 containerd[1965]: time="2025-05-14T23:50:31.683185943Z" level=info msg="CreateContainer within sandbox \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402\"" May 14 23:50:31.684598 containerd[1965]: time="2025-05-14T23:50:31.684366947Z" level=info msg="StartContainer for \"d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402\"" May 14 23:50:31.753501 systemd[1]: Started cri-containerd-d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402.scope - libcontainer container d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402. May 14 23:50:31.811188 containerd[1965]: time="2025-05-14T23:50:31.810952020Z" level=info msg="StartContainer for \"d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402\" returns successfully" May 14 23:50:31.841741 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 23:50:31.843606 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 14 23:50:31.843886 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 14 23:50:31.854383 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 23:50:31.855315 systemd[1]: cri-containerd-d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402.scope: Deactivated successfully. May 14 23:50:31.903123 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 23:50:31.903949 containerd[1965]: time="2025-05-14T23:50:31.903865056Z" level=info msg="shim disconnected" id=d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402 namespace=k8s.io May 14 23:50:31.904155 containerd[1965]: time="2025-05-14T23:50:31.903946080Z" level=warning msg="cleaning up after shim disconnected" id=d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402 namespace=k8s.io May 14 23:50:31.904155 containerd[1965]: time="2025-05-14T23:50:31.903966984Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:50:31.927967 containerd[1965]: time="2025-05-14T23:50:31.927675012Z" level=warning msg="cleanup warnings time=\"2025-05-14T23:50:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 14 23:50:31.937434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402-rootfs.mount: Deactivated successfully. 
May 14 23:50:32.659322 containerd[1965]: time="2025-05-14T23:50:32.659011908Z" level=info msg="CreateContainer within sandbox \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 23:50:32.709596 containerd[1965]: time="2025-05-14T23:50:32.709408980Z" level=info msg="CreateContainer within sandbox \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69\""
May 14 23:50:32.711870 containerd[1965]: time="2025-05-14T23:50:32.711584316Z" level=info msg="StartContainer for \"039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69\""
May 14 23:50:32.773409 systemd[1]: Started cri-containerd-039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69.scope - libcontainer container 039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69.
May 14 23:50:32.835875 containerd[1965]: time="2025-05-14T23:50:32.835706713Z" level=info msg="StartContainer for \"039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69\" returns successfully"
May 14 23:50:32.844376 systemd[1]: cri-containerd-039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69.scope: Deactivated successfully.
May 14 23:50:32.886154 containerd[1965]: time="2025-05-14T23:50:32.886031905Z" level=info msg="shim disconnected" id=039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69 namespace=k8s.io
May 14 23:50:32.886154 containerd[1965]: time="2025-05-14T23:50:32.886145797Z" level=warning msg="cleaning up after shim disconnected" id=039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69 namespace=k8s.io
May 14 23:50:32.886611 containerd[1965]: time="2025-05-14T23:50:32.886167805Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:50:32.937872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69-rootfs.mount: Deactivated successfully.
May 14 23:50:33.665296 containerd[1965]: time="2025-05-14T23:50:33.665050225Z" level=info msg="CreateContainer within sandbox \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 23:50:33.693120 containerd[1965]: time="2025-05-14T23:50:33.692525089Z" level=info msg="CreateContainer within sandbox \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4\""
May 14 23:50:33.695648 containerd[1965]: time="2025-05-14T23:50:33.695293261Z" level=info msg="StartContainer for \"cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4\""
May 14 23:50:33.759417 systemd[1]: Started cri-containerd-cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4.scope - libcontainer container cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4.
May 14 23:50:33.806197 systemd[1]: cri-containerd-cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4.scope: Deactivated successfully.
May 14 23:50:33.812027 containerd[1965]: time="2025-05-14T23:50:33.811952774Z" level=info msg="StartContainer for \"cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4\" returns successfully"
May 14 23:50:33.848318 containerd[1965]: time="2025-05-14T23:50:33.848206826Z" level=info msg="shim disconnected" id=cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4 namespace=k8s.io
May 14 23:50:33.848598 containerd[1965]: time="2025-05-14T23:50:33.848330678Z" level=warning msg="cleaning up after shim disconnected" id=cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4 namespace=k8s.io
May 14 23:50:33.848598 containerd[1965]: time="2025-05-14T23:50:33.848354966Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:50:33.938081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4-rootfs.mount: Deactivated successfully.
May 14 23:50:34.674360 containerd[1965]: time="2025-05-14T23:50:34.674180858Z" level=info msg="CreateContainer within sandbox \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 23:50:34.706099 containerd[1965]: time="2025-05-14T23:50:34.705884474Z" level=info msg="CreateContainer within sandbox \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805\""
May 14 23:50:34.714157 containerd[1965]: time="2025-05-14T23:50:34.710275166Z" level=info msg="StartContainer for \"de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805\""
May 14 23:50:34.773424 systemd[1]: Started cri-containerd-de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805.scope - libcontainer container de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805.
May 14 23:50:34.830191 containerd[1965]: time="2025-05-14T23:50:34.830127567Z" level=info msg="StartContainer for \"de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805\" returns successfully"
May 14 23:50:35.011832 kubelet[3524]: I0514 23:50:35.011680 3524 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 14 23:50:35.079022 systemd[1]: Created slice kubepods-burstable-pod6a9a3484_9d53_4288_b056_0d928bccfc82.slice - libcontainer container kubepods-burstable-pod6a9a3484_9d53_4288_b056_0d928bccfc82.slice.
May 14 23:50:35.097157 systemd[1]: Created slice kubepods-burstable-podf723dfb2_ccde_416e_b475_283e8a2c6d49.slice - libcontainer container kubepods-burstable-podf723dfb2_ccde_416e_b475_283e8a2c6d49.slice.
May 14 23:50:35.111419 kubelet[3524]: I0514 23:50:35.111165 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f723dfb2-ccde-416e-b475-283e8a2c6d49-config-volume\") pod \"coredns-6f6b679f8f-ll8ml\" (UID: \"f723dfb2-ccde-416e-b475-283e8a2c6d49\") " pod="kube-system/coredns-6f6b679f8f-ll8ml"
May 14 23:50:35.111419 kubelet[3524]: I0514 23:50:35.111234 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a9a3484-9d53-4288-b056-0d928bccfc82-config-volume\") pod \"coredns-6f6b679f8f-nv4sv\" (UID: \"6a9a3484-9d53-4288-b056-0d928bccfc82\") " pod="kube-system/coredns-6f6b679f8f-nv4sv"
May 14 23:50:35.111419 kubelet[3524]: I0514 23:50:35.111281 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9l9q\" (UniqueName: \"kubernetes.io/projected/6a9a3484-9d53-4288-b056-0d928bccfc82-kube-api-access-k9l9q\") pod \"coredns-6f6b679f8f-nv4sv\" (UID: \"6a9a3484-9d53-4288-b056-0d928bccfc82\") " pod="kube-system/coredns-6f6b679f8f-nv4sv"
May 14 23:50:35.111419 kubelet[3524]: I0514 23:50:35.111329 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7489n\" (UniqueName: \"kubernetes.io/projected/f723dfb2-ccde-416e-b475-283e8a2c6d49-kube-api-access-7489n\") pod \"coredns-6f6b679f8f-ll8ml\" (UID: \"f723dfb2-ccde-416e-b475-283e8a2c6d49\") " pod="kube-system/coredns-6f6b679f8f-ll8ml"
May 14 23:50:35.391218 containerd[1965]: time="2025-05-14T23:50:35.391132201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nv4sv,Uid:6a9a3484-9d53-4288-b056-0d928bccfc82,Namespace:kube-system,Attempt:0,}"
May 14 23:50:35.406523 containerd[1965]: time="2025-05-14T23:50:35.405507122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ll8ml,Uid:f723dfb2-ccde-416e-b475-283e8a2c6d49,Namespace:kube-system,Attempt:0,}"
May 14 23:50:35.773937 kubelet[3524]: I0514 23:50:35.773572 3524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jvllv" podStartSLOduration=7.861406804 podStartE2EDuration="19.773523147s" podCreationTimestamp="2025-05-14 23:50:16 +0000 UTC" firstStartedPulling="2025-05-14 23:50:17.997488923 +0000 UTC m=+7.847089960" lastFinishedPulling="2025-05-14 23:50:29.909605266 +0000 UTC m=+19.759206303" observedRunningTime="2025-05-14 23:50:35.772712031 +0000 UTC m=+25.622313116" watchObservedRunningTime="2025-05-14 23:50:35.773523147 +0000 UTC m=+25.623124184"
May 14 23:50:37.749034 systemd-networkd[1872]: cilium_host: Link UP
May 14 23:50:37.750832 systemd-networkd[1872]: cilium_net: Link UP
May 14 23:50:37.751682 systemd-networkd[1872]: cilium_net: Gained carrier
May 14 23:50:37.752051 systemd-networkd[1872]: cilium_host: Gained carrier
May 14 23:50:37.753914 (udev-worker)[4327]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:50:37.753970 (udev-worker)[4328]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:50:37.915801 systemd-networkd[1872]: cilium_host: Gained IPv6LL
May 14 23:50:37.922506 (udev-worker)[4365]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:50:37.933860 systemd-networkd[1872]: cilium_vxlan: Link UP
May 14 23:50:37.933878 systemd-networkd[1872]: cilium_vxlan: Gained carrier
May 14 23:50:38.018333 systemd-networkd[1872]: cilium_net: Gained IPv6LL
May 14 23:50:38.418496 kernel: NET: Registered PF_ALG protocol family
May 14 23:50:39.626367 systemd-networkd[1872]: cilium_vxlan: Gained IPv6LL
May 14 23:50:39.733708 systemd-networkd[1872]: lxc_health: Link UP
May 14 23:50:39.744309 systemd-networkd[1872]: lxc_health: Gained carrier
May 14 23:50:39.746307 (udev-worker)[4371]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:50:40.036569 (udev-worker)[4370]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:50:40.050121 kernel: eth0: renamed from tmp5ad4d
May 14 23:50:40.060241 systemd-networkd[1872]: lxc86745419ce2c: Link UP
May 14 23:50:40.060895 systemd-networkd[1872]: lxc88c8d132e130: Link UP
May 14 23:50:40.071155 kernel: eth0: renamed from tmp1e065
May 14 23:50:40.089903 systemd-networkd[1872]: lxc88c8d132e130: Gained carrier
May 14 23:50:40.090713 systemd-networkd[1872]: lxc86745419ce2c: Gained carrier
May 14 23:50:41.610485 systemd-networkd[1872]: lxc_health: Gained IPv6LL
May 14 23:50:41.866286 systemd-networkd[1872]: lxc88c8d132e130: Gained IPv6LL
May 14 23:50:41.994349 systemd-networkd[1872]: lxc86745419ce2c: Gained IPv6LL
May 14 23:50:44.929440 ntpd[1936]: Listen normally on 7 cilium_host 192.168.0.54:123
May 14 23:50:44.930885 ntpd[1936]: 14 May 23:50:44 ntpd[1936]: Listen normally on 7 cilium_host 192.168.0.54:123
May 14 23:50:44.930885 ntpd[1936]: 14 May 23:50:44 ntpd[1936]: Listen normally on 8 cilium_net [fe80::bc10:6cff:fefa:3a16%4]:123
May 14 23:50:44.930885 ntpd[1936]: 14 May 23:50:44 ntpd[1936]: Listen normally on 9 cilium_host [fe80::d02c:5cff:fe6d:e6ee%5]:123
May 14 23:50:44.930885 ntpd[1936]: 14 May 23:50:44 ntpd[1936]: Listen normally on 10 cilium_vxlan [fe80::7c39:14ff:fef8:f564%6]:123
May 14 23:50:44.930885 ntpd[1936]: 14 May 23:50:44 ntpd[1936]: Listen normally on 11 lxc_health [fe80::cc0e:c4ff:fe17:c75e%8]:123
May 14 23:50:44.930885 ntpd[1936]: 14 May 23:50:44 ntpd[1936]: Listen normally on 12 lxc86745419ce2c [fe80::c4ee:77ff:fe75:2a92%10]:123
May 14 23:50:44.930885 ntpd[1936]: 14 May 23:50:44 ntpd[1936]: Listen normally on 13 lxc88c8d132e130 [fe80::607a:44ff:fec9:e88d%12]:123
May 14 23:50:44.929564 ntpd[1936]: Listen normally on 8 cilium_net [fe80::bc10:6cff:fefa:3a16%4]:123
May 14 23:50:44.929642 ntpd[1936]: Listen normally on 9 cilium_host [fe80::d02c:5cff:fe6d:e6ee%5]:123
May 14 23:50:44.929708 ntpd[1936]: Listen normally on 10 cilium_vxlan [fe80::7c39:14ff:fef8:f564%6]:123
May 14 23:50:44.929774 ntpd[1936]: Listen normally on 11 lxc_health [fe80::cc0e:c4ff:fe17:c75e%8]:123
May 14 23:50:44.929840 ntpd[1936]: Listen normally on 12 lxc86745419ce2c [fe80::c4ee:77ff:fe75:2a92%10]:123
May 14 23:50:44.929904 ntpd[1936]: Listen normally on 13 lxc88c8d132e130 [fe80::607a:44ff:fec9:e88d%12]:123
May 14 23:50:47.167656 systemd[1]: Started sshd@9-172.31.17.61:22-139.178.89.65:59086.service - OpenSSH per-connection server daemon (139.178.89.65:59086).
May 14 23:50:47.360199 sshd[4730]: Accepted publickey for core from 139.178.89.65 port 59086 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:50:47.362522 sshd-session[4730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:50:47.376532 systemd-logind[1943]: New session 10 of user core.
May 14 23:50:47.388844 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 23:50:47.704741 sshd[4732]: Connection closed by 139.178.89.65 port 59086
May 14 23:50:47.705673 sshd-session[4730]: pam_unix(sshd:session): session closed for user core
May 14 23:50:47.712317 systemd[1]: sshd@9-172.31.17.61:22-139.178.89.65:59086.service: Deactivated successfully.
May 14 23:50:47.718755 systemd[1]: session-10.scope: Deactivated successfully.
May 14 23:50:47.725745 systemd-logind[1943]: Session 10 logged out. Waiting for processes to exit.
May 14 23:50:47.728791 systemd-logind[1943]: Removed session 10.
May 14 23:50:48.700119 containerd[1965]: time="2025-05-14T23:50:48.689960116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:50:48.700119 containerd[1965]: time="2025-05-14T23:50:48.691050880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:50:48.700119 containerd[1965]: time="2025-05-14T23:50:48.691411192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:48.700119 containerd[1965]: time="2025-05-14T23:50:48.693328732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:48.773575 systemd[1]: run-containerd-runc-k8s.io-5ad4d1f7fd61c93996de5af27e2e349625cd04162033c72efd397698ad9a66f7-runc.0HFUo2.mount: Deactivated successfully.
May 14 23:50:48.778327 containerd[1965]: time="2025-05-14T23:50:48.766355620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 14 23:50:48.778327 containerd[1965]: time="2025-05-14T23:50:48.766463524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 14 23:50:48.778327 containerd[1965]: time="2025-05-14T23:50:48.766493548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:48.778327 containerd[1965]: time="2025-05-14T23:50:48.766656784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 14 23:50:48.793889 systemd[1]: Started cri-containerd-5ad4d1f7fd61c93996de5af27e2e349625cd04162033c72efd397698ad9a66f7.scope - libcontainer container 5ad4d1f7fd61c93996de5af27e2e349625cd04162033c72efd397698ad9a66f7.
May 14 23:50:48.861418 systemd[1]: Started cri-containerd-1e065fef84ab7997cd8e02d89081b9bb3cf9ae5e8d7cbaaf4509a85efb6862c1.scope - libcontainer container 1e065fef84ab7997cd8e02d89081b9bb3cf9ae5e8d7cbaaf4509a85efb6862c1.
May 14 23:50:48.932757 containerd[1965]: time="2025-05-14T23:50:48.932690537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ll8ml,Uid:f723dfb2-ccde-416e-b475-283e8a2c6d49,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ad4d1f7fd61c93996de5af27e2e349625cd04162033c72efd397698ad9a66f7\""
May 14 23:50:48.942405 containerd[1965]: time="2025-05-14T23:50:48.942331793Z" level=info msg="CreateContainer within sandbox \"5ad4d1f7fd61c93996de5af27e2e349625cd04162033c72efd397698ad9a66f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 23:50:48.986209 containerd[1965]: time="2025-05-14T23:50:48.986024597Z" level=info msg="CreateContainer within sandbox \"5ad4d1f7fd61c93996de5af27e2e349625cd04162033c72efd397698ad9a66f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5e381fb86080a82705c0b755bc3ed3acfe27d66f5b24e0cb4012fb163ec7937c\""
May 14 23:50:48.992109 containerd[1965]: time="2025-05-14T23:50:48.988535117Z" level=info msg="StartContainer for \"5e381fb86080a82705c0b755bc3ed3acfe27d66f5b24e0cb4012fb163ec7937c\""
May 14 23:50:49.006846 containerd[1965]: time="2025-05-14T23:50:49.006778981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nv4sv,Uid:6a9a3484-9d53-4288-b056-0d928bccfc82,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e065fef84ab7997cd8e02d89081b9bb3cf9ae5e8d7cbaaf4509a85efb6862c1\""
May 14 23:50:49.015900 containerd[1965]: time="2025-05-14T23:50:49.015599905Z" level=info msg="CreateContainer within sandbox \"1e065fef84ab7997cd8e02d89081b9bb3cf9ae5e8d7cbaaf4509a85efb6862c1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 23:50:49.063111 containerd[1965]: time="2025-05-14T23:50:49.061877125Z" level=info msg="CreateContainer within sandbox \"1e065fef84ab7997cd8e02d89081b9bb3cf9ae5e8d7cbaaf4509a85efb6862c1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bbffb736b3c53c3c937033c031b0256a23076c0c6b49b8f498d4e917bb9bb4b4\""
May 14 23:50:49.064450 containerd[1965]: time="2025-05-14T23:50:49.064355569Z" level=info msg="StartContainer for \"bbffb736b3c53c3c937033c031b0256a23076c0c6b49b8f498d4e917bb9bb4b4\""
May 14 23:50:49.071267 systemd[1]: Started cri-containerd-5e381fb86080a82705c0b755bc3ed3acfe27d66f5b24e0cb4012fb163ec7937c.scope - libcontainer container 5e381fb86080a82705c0b755bc3ed3acfe27d66f5b24e0cb4012fb163ec7937c.
May 14 23:50:49.148405 systemd[1]: Started cri-containerd-bbffb736b3c53c3c937033c031b0256a23076c0c6b49b8f498d4e917bb9bb4b4.scope - libcontainer container bbffb736b3c53c3c937033c031b0256a23076c0c6b49b8f498d4e917bb9bb4b4.
May 14 23:50:49.181922 containerd[1965]: time="2025-05-14T23:50:49.181871414Z" level=info msg="StartContainer for \"5e381fb86080a82705c0b755bc3ed3acfe27d66f5b24e0cb4012fb163ec7937c\" returns successfully"
May 14 23:50:49.221747 containerd[1965]: time="2025-05-14T23:50:49.221678426Z" level=info msg="StartContainer for \"bbffb736b3c53c3c937033c031b0256a23076c0c6b49b8f498d4e917bb9bb4b4\" returns successfully"
May 14 23:50:49.809312 kubelet[3524]: I0514 23:50:49.808781 3524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-nv4sv" podStartSLOduration=33.808759037 podStartE2EDuration="33.808759037s" podCreationTimestamp="2025-05-14 23:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:49.806354249 +0000 UTC m=+39.655955370" watchObservedRunningTime="2025-05-14 23:50:49.808759037 +0000 UTC m=+39.658360062"
May 14 23:50:52.753535 systemd[1]: Started sshd@10-172.31.17.61:22-139.178.89.65:59102.service - OpenSSH per-connection server daemon (139.178.89.65:59102).
May 14 23:50:52.937048 sshd[4922]: Accepted publickey for core from 139.178.89.65 port 59102 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:50:52.939783 sshd-session[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:50:52.949597 systemd-logind[1943]: New session 11 of user core.
May 14 23:50:52.961361 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 23:50:53.209307 sshd[4924]: Connection closed by 139.178.89.65 port 59102
May 14 23:50:53.210456 sshd-session[4922]: pam_unix(sshd:session): session closed for user core
May 14 23:50:53.216989 systemd[1]: sshd@10-172.31.17.61:22-139.178.89.65:59102.service: Deactivated successfully.
May 14 23:50:53.222323 systemd[1]: session-11.scope: Deactivated successfully.
May 14 23:50:53.224652 systemd-logind[1943]: Session 11 logged out. Waiting for processes to exit.
May 14 23:50:53.227132 systemd-logind[1943]: Removed session 11.
May 14 23:50:58.254605 systemd[1]: Started sshd@11-172.31.17.61:22-139.178.89.65:60420.service - OpenSSH per-connection server daemon (139.178.89.65:60420).
May 14 23:50:58.443846 sshd[4937]: Accepted publickey for core from 139.178.89.65 port 60420 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:50:58.446348 sshd-session[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:50:58.456178 systemd-logind[1943]: New session 12 of user core.
May 14 23:50:58.463386 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 23:50:58.709765 sshd[4939]: Connection closed by 139.178.89.65 port 60420
May 14 23:50:58.710641 sshd-session[4937]: pam_unix(sshd:session): session closed for user core
May 14 23:50:58.715615 systemd-logind[1943]: Session 12 logged out. Waiting for processes to exit.
May 14 23:50:58.716295 systemd[1]: sshd@11-172.31.17.61:22-139.178.89.65:60420.service: Deactivated successfully.
May 14 23:50:58.720796 systemd[1]: session-12.scope: Deactivated successfully.
May 14 23:50:58.725280 systemd-logind[1943]: Removed session 12.
May 14 23:51:03.752637 systemd[1]: Started sshd@12-172.31.17.61:22-139.178.89.65:60426.service - OpenSSH per-connection server daemon (139.178.89.65:60426).
May 14 23:51:03.932916 sshd[4953]: Accepted publickey for core from 139.178.89.65 port 60426 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:03.935449 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:03.943246 systemd-logind[1943]: New session 13 of user core.
May 14 23:51:03.950346 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 23:51:04.190916 sshd[4955]: Connection closed by 139.178.89.65 port 60426
May 14 23:51:04.190795 sshd-session[4953]: pam_unix(sshd:session): session closed for user core
May 14 23:51:04.196466 systemd[1]: sshd@12-172.31.17.61:22-139.178.89.65:60426.service: Deactivated successfully.
May 14 23:51:04.200788 systemd[1]: session-13.scope: Deactivated successfully.
May 14 23:51:04.204460 systemd-logind[1943]: Session 13 logged out. Waiting for processes to exit.
May 14 23:51:04.206529 systemd-logind[1943]: Removed session 13.
May 14 23:51:04.235508 systemd[1]: Started sshd@13-172.31.17.61:22-139.178.89.65:60438.service - OpenSSH per-connection server daemon (139.178.89.65:60438).
May 14 23:51:04.415527 sshd[4967]: Accepted publickey for core from 139.178.89.65 port 60438 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:04.418571 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:04.428096 systemd-logind[1943]: New session 14 of user core.
May 14 23:51:04.437377 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 23:51:04.756557 sshd[4969]: Connection closed by 139.178.89.65 port 60438
May 14 23:51:04.757656 sshd-session[4967]: pam_unix(sshd:session): session closed for user core
May 14 23:51:04.767917 systemd[1]: sshd@13-172.31.17.61:22-139.178.89.65:60438.service: Deactivated successfully.
May 14 23:51:04.775496 systemd[1]: session-14.scope: Deactivated successfully.
May 14 23:51:04.782086 systemd-logind[1943]: Session 14 logged out. Waiting for processes to exit.
May 14 23:51:04.812787 systemd[1]: Started sshd@14-172.31.17.61:22-139.178.89.65:60448.service - OpenSSH per-connection server daemon (139.178.89.65:60448).
May 14 23:51:04.817377 systemd-logind[1943]: Removed session 14.
May 14 23:51:05.000229 sshd[4978]: Accepted publickey for core from 139.178.89.65 port 60448 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:05.002703 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:05.010851 systemd-logind[1943]: New session 15 of user core.
May 14 23:51:05.020354 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 23:51:05.273249 sshd[4981]: Connection closed by 139.178.89.65 port 60448
May 14 23:51:05.274386 sshd-session[4978]: pam_unix(sshd:session): session closed for user core
May 14 23:51:05.280899 systemd[1]: sshd@14-172.31.17.61:22-139.178.89.65:60448.service: Deactivated successfully.
May 14 23:51:05.287591 systemd[1]: session-15.scope: Deactivated successfully.
May 14 23:51:05.290639 systemd-logind[1943]: Session 15 logged out. Waiting for processes to exit.
May 14 23:51:05.292575 systemd-logind[1943]: Removed session 15.
May 14 23:51:10.316578 systemd[1]: Started sshd@15-172.31.17.61:22-139.178.89.65:40004.service - OpenSSH per-connection server daemon (139.178.89.65:40004).
May 14 23:51:10.511226 sshd[4993]: Accepted publickey for core from 139.178.89.65 port 40004 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:10.514137 sshd-session[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:10.524440 systemd-logind[1943]: New session 16 of user core.
May 14 23:51:10.535412 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 23:51:10.786513 sshd[4997]: Connection closed by 139.178.89.65 port 40004
May 14 23:51:10.785923 sshd-session[4993]: pam_unix(sshd:session): session closed for user core
May 14 23:51:10.791777 systemd[1]: sshd@15-172.31.17.61:22-139.178.89.65:40004.service: Deactivated successfully.
May 14 23:51:10.795614 systemd[1]: session-16.scope: Deactivated successfully.
May 14 23:51:10.798564 systemd-logind[1943]: Session 16 logged out. Waiting for processes to exit.
May 14 23:51:10.800658 systemd-logind[1943]: Removed session 16.
May 14 23:51:15.833608 systemd[1]: Started sshd@16-172.31.17.61:22-139.178.89.65:40020.service - OpenSSH per-connection server daemon (139.178.89.65:40020).
May 14 23:51:16.016709 sshd[5008]: Accepted publickey for core from 139.178.89.65 port 40020 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:16.019223 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:16.028730 systemd-logind[1943]: New session 17 of user core.
May 14 23:51:16.039338 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 23:51:16.283289 sshd[5010]: Connection closed by 139.178.89.65 port 40020
May 14 23:51:16.284419 sshd-session[5008]: pam_unix(sshd:session): session closed for user core
May 14 23:51:16.290465 systemd-logind[1943]: Session 17 logged out. Waiting for processes to exit.
May 14 23:51:16.291096 systemd[1]: sshd@16-172.31.17.61:22-139.178.89.65:40020.service: Deactivated successfully.
May 14 23:51:16.294263 systemd[1]: session-17.scope: Deactivated successfully.
May 14 23:51:16.298974 systemd-logind[1943]: Removed session 17.
May 14 23:51:21.334635 systemd[1]: Started sshd@17-172.31.17.61:22-139.178.89.65:47052.service - OpenSSH per-connection server daemon (139.178.89.65:47052).
May 14 23:51:21.530231 sshd[5027]: Accepted publickey for core from 139.178.89.65 port 47052 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:21.532699 sshd-session[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:21.543512 systemd-logind[1943]: New session 18 of user core.
May 14 23:51:21.548400 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 23:51:21.793820 sshd[5029]: Connection closed by 139.178.89.65 port 47052
May 14 23:51:21.795396 sshd-session[5027]: pam_unix(sshd:session): session closed for user core
May 14 23:51:21.802689 systemd[1]: sshd@17-172.31.17.61:22-139.178.89.65:47052.service: Deactivated successfully.
May 14 23:51:21.807984 systemd[1]: session-18.scope: Deactivated successfully.
May 14 23:51:21.810412 systemd-logind[1943]: Session 18 logged out. Waiting for processes to exit.
May 14 23:51:21.812689 systemd-logind[1943]: Removed session 18.
May 14 23:51:21.835759 systemd[1]: Started sshd@18-172.31.17.61:22-139.178.89.65:47062.service - OpenSSH per-connection server daemon (139.178.89.65:47062).
May 14 23:51:22.027866 sshd[5040]: Accepted publickey for core from 139.178.89.65 port 47062 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:22.030432 sshd-session[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:22.038549 systemd-logind[1943]: New session 19 of user core.
May 14 23:51:22.043433 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 23:51:22.349137 sshd[5042]: Connection closed by 139.178.89.65 port 47062
May 14 23:51:22.350106 sshd-session[5040]: pam_unix(sshd:session): session closed for user core
May 14 23:51:22.357261 systemd[1]: sshd@18-172.31.17.61:22-139.178.89.65:47062.service: Deactivated successfully.
May 14 23:51:22.361767 systemd[1]: session-19.scope: Deactivated successfully.
May 14 23:51:22.363495 systemd-logind[1943]: Session 19 logged out. Waiting for processes to exit.
May 14 23:51:22.365551 systemd-logind[1943]: Removed session 19.
May 14 23:51:22.391589 systemd[1]: Started sshd@19-172.31.17.61:22-139.178.89.65:47070.service - OpenSSH per-connection server daemon (139.178.89.65:47070).
May 14 23:51:22.573524 sshd[5052]: Accepted publickey for core from 139.178.89.65 port 47070 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:22.575936 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:22.586474 systemd-logind[1943]: New session 20 of user core.
May 14 23:51:22.592608 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 23:51:25.173460 sshd[5054]: Connection closed by 139.178.89.65 port 47070
May 14 23:51:25.174652 sshd-session[5052]: pam_unix(sshd:session): session closed for user core
May 14 23:51:25.185835 systemd[1]: sshd@19-172.31.17.61:22-139.178.89.65:47070.service: Deactivated successfully.
May 14 23:51:25.195464 systemd[1]: session-20.scope: Deactivated successfully.
May 14 23:51:25.202035 systemd-logind[1943]: Session 20 logged out. Waiting for processes to exit.
May 14 23:51:25.229808 systemd[1]: Started sshd@20-172.31.17.61:22-139.178.89.65:47078.service - OpenSSH per-connection server daemon (139.178.89.65:47078).
May 14 23:51:25.232784 systemd-logind[1943]: Removed session 20.
May 14 23:51:25.431896 sshd[5071]: Accepted publickey for core from 139.178.89.65 port 47078 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:25.434184 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:25.443600 systemd-logind[1943]: New session 21 of user core.
May 14 23:51:25.452317 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 23:51:25.949282 sshd[5074]: Connection closed by 139.178.89.65 port 47078
May 14 23:51:25.950378 sshd-session[5071]: pam_unix(sshd:session): session closed for user core
May 14 23:51:25.958571 systemd[1]: sshd@20-172.31.17.61:22-139.178.89.65:47078.service: Deactivated successfully.
May 14 23:51:25.963769 systemd[1]: session-21.scope: Deactivated successfully.
May 14 23:51:25.965918 systemd-logind[1943]: Session 21 logged out. Waiting for processes to exit.
May 14 23:51:25.967750 systemd-logind[1943]: Removed session 21.
May 14 23:51:25.993553 systemd[1]: Started sshd@21-172.31.17.61:22-139.178.89.65:47088.service - OpenSSH per-connection server daemon (139.178.89.65:47088).
May 14 23:51:26.171062 sshd[5084]: Accepted publickey for core from 139.178.89.65 port 47088 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:26.173643 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:26.182712 systemd-logind[1943]: New session 22 of user core.
May 14 23:51:26.192327 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 23:51:26.432207 sshd[5086]: Connection closed by 139.178.89.65 port 47088
May 14 23:51:26.435920 sshd-session[5084]: pam_unix(sshd:session): session closed for user core
May 14 23:51:26.442224 systemd[1]: sshd@21-172.31.17.61:22-139.178.89.65:47088.service: Deactivated successfully.
May 14 23:51:26.447367 systemd[1]: session-22.scope: Deactivated successfully.
May 14 23:51:26.448961 systemd-logind[1943]: Session 22 logged out. Waiting for processes to exit.
May 14 23:51:26.451060 systemd-logind[1943]: Removed session 22.
May 14 23:51:31.475425 systemd[1]: Started sshd@22-172.31.17.61:22-139.178.89.65:42758.service - OpenSSH per-connection server daemon (139.178.89.65:42758).
May 14 23:51:31.666784 sshd[5097]: Accepted publickey for core from 139.178.89.65 port 42758 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk
May 14 23:51:31.668496 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 23:51:31.676560 systemd-logind[1943]: New session 23 of user core.
May 14 23:51:31.684336 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 23:51:31.926776 sshd[5099]: Connection closed by 139.178.89.65 port 42758 May 14 23:51:31.927644 sshd-session[5097]: pam_unix(sshd:session): session closed for user core May 14 23:51:31.933586 systemd[1]: sshd@22-172.31.17.61:22-139.178.89.65:42758.service: Deactivated successfully. May 14 23:51:31.937276 systemd[1]: session-23.scope: Deactivated successfully. May 14 23:51:31.939366 systemd-logind[1943]: Session 23 logged out. Waiting for processes to exit. May 14 23:51:31.941909 systemd-logind[1943]: Removed session 23. May 14 23:51:36.973615 systemd[1]: Started sshd@23-172.31.17.61:22-139.178.89.65:40916.service - OpenSSH per-connection server daemon (139.178.89.65:40916). May 14 23:51:37.152514 sshd[5114]: Accepted publickey for core from 139.178.89.65 port 40916 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk May 14 23:51:37.154932 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:51:37.165508 systemd-logind[1943]: New session 24 of user core. May 14 23:51:37.172351 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 23:51:37.409139 sshd[5116]: Connection closed by 139.178.89.65 port 40916 May 14 23:51:37.409966 sshd-session[5114]: pam_unix(sshd:session): session closed for user core May 14 23:51:37.417503 systemd[1]: sshd@23-172.31.17.61:22-139.178.89.65:40916.service: Deactivated successfully. May 14 23:51:37.421898 systemd[1]: session-24.scope: Deactivated successfully. May 14 23:51:37.424612 systemd-logind[1943]: Session 24 logged out. Waiting for processes to exit. May 14 23:51:37.426990 systemd-logind[1943]: Removed session 24. May 14 23:51:42.451582 systemd[1]: Started sshd@24-172.31.17.61:22-139.178.89.65:40926.service - OpenSSH per-connection server daemon (139.178.89.65:40926). 
May 14 23:51:42.643414 sshd[5128]: Accepted publickey for core from 139.178.89.65 port 40926 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk May 14 23:51:42.645858 sshd-session[5128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:51:42.655057 systemd-logind[1943]: New session 25 of user core. May 14 23:51:42.664377 systemd[1]: Started session-25.scope - Session 25 of User core. May 14 23:51:42.901984 sshd[5130]: Connection closed by 139.178.89.65 port 40926 May 14 23:51:42.902838 sshd-session[5128]: pam_unix(sshd:session): session closed for user core May 14 23:51:42.910483 systemd[1]: sshd@24-172.31.17.61:22-139.178.89.65:40926.service: Deactivated successfully. May 14 23:51:42.913888 systemd[1]: session-25.scope: Deactivated successfully. May 14 23:51:42.915771 systemd-logind[1943]: Session 25 logged out. Waiting for processes to exit. May 14 23:51:42.918742 systemd-logind[1943]: Removed session 25. May 14 23:51:47.946604 systemd[1]: Started sshd@25-172.31.17.61:22-139.178.89.65:35630.service - OpenSSH per-connection server daemon (139.178.89.65:35630). May 14 23:51:48.129715 sshd[5141]: Accepted publickey for core from 139.178.89.65 port 35630 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk May 14 23:51:48.132260 sshd-session[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:51:48.141688 systemd-logind[1943]: New session 26 of user core. May 14 23:51:48.148380 systemd[1]: Started session-26.scope - Session 26 of User core. May 14 23:51:48.391140 sshd[5143]: Connection closed by 139.178.89.65 port 35630 May 14 23:51:48.391985 sshd-session[5141]: pam_unix(sshd:session): session closed for user core May 14 23:51:48.398156 systemd[1]: sshd@25-172.31.17.61:22-139.178.89.65:35630.service: Deactivated successfully. May 14 23:51:48.402865 systemd[1]: session-26.scope: Deactivated successfully. May 14 23:51:48.404735 systemd-logind[1943]: Session 26 logged out. 
Waiting for processes to exit. May 14 23:51:48.406985 systemd-logind[1943]: Removed session 26. May 14 23:51:48.433698 systemd[1]: Started sshd@26-172.31.17.61:22-139.178.89.65:35632.service - OpenSSH per-connection server daemon (139.178.89.65:35632). May 14 23:51:48.618461 sshd[5157]: Accepted publickey for core from 139.178.89.65 port 35632 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk May 14 23:51:48.620901 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:51:48.629982 systemd-logind[1943]: New session 27 of user core. May 14 23:51:48.634336 systemd[1]: Started session-27.scope - Session 27 of User core. May 14 23:51:52.751098 kubelet[3524]: I0514 23:51:52.750020 3524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ll8ml" podStartSLOduration=96.750000078 podStartE2EDuration="1m36.750000078s" podCreationTimestamp="2025-05-14 23:50:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:50:49.861854249 +0000 UTC m=+39.711455322" watchObservedRunningTime="2025-05-14 23:51:52.750000078 +0000 UTC m=+102.599601139" May 14 23:51:52.781201 containerd[1965]: time="2025-05-14T23:51:52.781018326Z" level=info msg="StopContainer for \"92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512\" with timeout 30 (s)" May 14 23:51:52.783766 containerd[1965]: time="2025-05-14T23:51:52.783370506Z" level=info msg="Stop container \"92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512\" with signal terminated" May 14 23:51:52.806969 systemd[1]: run-containerd-runc-k8s.io-de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805-runc.fXiCS2.mount: Deactivated successfully. May 14 23:51:52.836378 systemd[1]: cri-containerd-92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512.scope: Deactivated successfully. 
May 14 23:51:52.839093 containerd[1965]: time="2025-05-14T23:51:52.839005110Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 23:51:52.853717 containerd[1965]: time="2025-05-14T23:51:52.853643658Z" level=info msg="StopContainer for \"de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805\" with timeout 2 (s)" May 14 23:51:52.854791 containerd[1965]: time="2025-05-14T23:51:52.854689362Z" level=info msg="Stop container \"de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805\" with signal terminated" May 14 23:51:52.870621 systemd-networkd[1872]: lxc_health: Link DOWN May 14 23:51:52.870641 systemd-networkd[1872]: lxc_health: Lost carrier May 14 23:51:52.910921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512-rootfs.mount: Deactivated successfully. May 14 23:51:52.919642 systemd[1]: cri-containerd-de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805.scope: Deactivated successfully. May 14 23:51:52.920638 systemd[1]: cri-containerd-de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805.scope: Consumed 14.434s CPU time, 126.2M memory peak, 128K read from disk, 12.9M written to disk. 
May 14 23:51:52.924970 containerd[1965]: time="2025-05-14T23:51:52.924476695Z" level=info msg="shim disconnected" id=92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512 namespace=k8s.io May 14 23:51:52.924970 containerd[1965]: time="2025-05-14T23:51:52.924555607Z" level=warning msg="cleaning up after shim disconnected" id=92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512 namespace=k8s.io May 14 23:51:52.924970 containerd[1965]: time="2025-05-14T23:51:52.924578455Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:51:52.965303 containerd[1965]: time="2025-05-14T23:51:52.964896211Z" level=info msg="StopContainer for \"92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512\" returns successfully" May 14 23:51:52.966657 containerd[1965]: time="2025-05-14T23:51:52.966542191Z" level=info msg="StopPodSandbox for \"540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601\"" May 14 23:51:52.966657 containerd[1965]: time="2025-05-14T23:51:52.966620935Z" level=info msg="Container to stop \"92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:51:52.971425 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601-shm.mount: Deactivated successfully. May 14 23:51:52.985690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805-rootfs.mount: Deactivated successfully. May 14 23:51:52.988688 systemd[1]: cri-containerd-540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601.scope: Deactivated successfully. 
May 14 23:51:53.000182 containerd[1965]: time="2025-05-14T23:51:52.999751147Z" level=info msg="shim disconnected" id=de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805 namespace=k8s.io May 14 23:51:53.000182 containerd[1965]: time="2025-05-14T23:51:52.999914347Z" level=warning msg="cleaning up after shim disconnected" id=de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805 namespace=k8s.io May 14 23:51:53.000182 containerd[1965]: time="2025-05-14T23:51:52.999967171Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:51:53.039283 containerd[1965]: time="2025-05-14T23:51:53.038253987Z" level=info msg="shim disconnected" id=540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601 namespace=k8s.io May 14 23:51:53.040011 containerd[1965]: time="2025-05-14T23:51:53.039960999Z" level=warning msg="cleaning up after shim disconnected" id=540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601 namespace=k8s.io May 14 23:51:53.040211 containerd[1965]: time="2025-05-14T23:51:53.040179627Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:51:53.040707 containerd[1965]: time="2025-05-14T23:51:53.040649835Z" level=info msg="StopContainer for \"de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805\" returns successfully" May 14 23:51:53.041762 containerd[1965]: time="2025-05-14T23:51:53.041703795Z" level=info msg="StopPodSandbox for \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\"" May 14 23:51:53.041909 containerd[1965]: time="2025-05-14T23:51:53.041796027Z" level=info msg="Container to stop \"de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:51:53.041909 containerd[1965]: time="2025-05-14T23:51:53.041825559Z" level=info msg="Container to stop \"8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" May 14 23:51:53.041909 containerd[1965]: time="2025-05-14T23:51:53.041869287Z" level=info msg="Container to stop \"039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:51:53.041909 containerd[1965]: time="2025-05-14T23:51:53.041896071Z" level=info msg="Container to stop \"cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:51:53.042240 containerd[1965]: time="2025-05-14T23:51:53.041917107Z" level=info msg="Container to stop \"d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 23:51:53.056324 systemd[1]: cri-containerd-86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d.scope: Deactivated successfully. May 14 23:51:53.081985 containerd[1965]: time="2025-05-14T23:51:53.081708795Z" level=info msg="TearDown network for sandbox \"540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601\" successfully" May 14 23:51:53.081985 containerd[1965]: time="2025-05-14T23:51:53.081755991Z" level=info msg="StopPodSandbox for \"540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601\" returns successfully" May 14 23:51:53.117035 containerd[1965]: time="2025-05-14T23:51:53.115838812Z" level=info msg="shim disconnected" id=86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d namespace=k8s.io May 14 23:51:53.118769 containerd[1965]: time="2025-05-14T23:51:53.115961812Z" level=warning msg="cleaning up after shim disconnected" id=86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d namespace=k8s.io May 14 23:51:53.118769 containerd[1965]: time="2025-05-14T23:51:53.118474156Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:51:53.135327 kubelet[3524]: I0514 23:51:53.135278 3524 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zn7w6\" (UniqueName: \"kubernetes.io/projected/99fee54d-ab70-4ec3-a226-bb9c31d872ab-kube-api-access-zn7w6\") pod \"99fee54d-ab70-4ec3-a226-bb9c31d872ab\" (UID: \"99fee54d-ab70-4ec3-a226-bb9c31d872ab\") " May 14 23:51:53.137112 kubelet[3524]: I0514 23:51:53.135948 3524 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99fee54d-ab70-4ec3-a226-bb9c31d872ab-cilium-config-path\") pod \"99fee54d-ab70-4ec3-a226-bb9c31d872ab\" (UID: \"99fee54d-ab70-4ec3-a226-bb9c31d872ab\") " May 14 23:51:53.145472 kubelet[3524]: I0514 23:51:53.145246 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99fee54d-ab70-4ec3-a226-bb9c31d872ab-kube-api-access-zn7w6" (OuterVolumeSpecName: "kube-api-access-zn7w6") pod "99fee54d-ab70-4ec3-a226-bb9c31d872ab" (UID: "99fee54d-ab70-4ec3-a226-bb9c31d872ab"). InnerVolumeSpecName "kube-api-access-zn7w6". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 23:51:53.146023 containerd[1965]: time="2025-05-14T23:51:53.145954756Z" level=info msg="TearDown network for sandbox \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" successfully" May 14 23:51:53.146023 containerd[1965]: time="2025-05-14T23:51:53.146005852Z" level=info msg="StopPodSandbox for \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" returns successfully" May 14 23:51:53.148455 kubelet[3524]: I0514 23:51:53.148402 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99fee54d-ab70-4ec3-a226-bb9c31d872ab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "99fee54d-ab70-4ec3-a226-bb9c31d872ab" (UID: "99fee54d-ab70-4ec3-a226-bb9c31d872ab"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 23:51:53.236987 kubelet[3524]: I0514 23:51:53.236937 3524 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f3f6420-5b97-43a1-be0c-8e023da75b13-cilium-config-path\") pod \"0f3f6420-5b97-43a1-be0c-8e023da75b13\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " May 14 23:51:53.238258 kubelet[3524]: I0514 23:51:53.237277 3524 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-bpf-maps\") pod \"0f3f6420-5b97-43a1-be0c-8e023da75b13\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " May 14 23:51:53.238258 kubelet[3524]: I0514 23:51:53.237336 3524 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-etc-cni-netd\") pod \"0f3f6420-5b97-43a1-be0c-8e023da75b13\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " May 14 23:51:53.238258 kubelet[3524]: I0514 23:51:53.237378 3524 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0f3f6420-5b97-43a1-be0c-8e023da75b13-clustermesh-secrets\") pod \"0f3f6420-5b97-43a1-be0c-8e023da75b13\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " May 14 23:51:53.238258 kubelet[3524]: I0514 23:51:53.237412 3524 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-cni-path\") pod \"0f3f6420-5b97-43a1-be0c-8e023da75b13\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " May 14 23:51:53.238258 kubelet[3524]: I0514 23:51:53.237458 3524 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-xtables-lock\") pod \"0f3f6420-5b97-43a1-be0c-8e023da75b13\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " May 14 23:51:53.238258 kubelet[3524]: I0514 23:51:53.237494 3524 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-cilium-cgroup\") pod \"0f3f6420-5b97-43a1-be0c-8e023da75b13\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " May 14 23:51:53.238643 kubelet[3524]: I0514 23:51:53.237525 3524 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-cilium-run\") pod \"0f3f6420-5b97-43a1-be0c-8e023da75b13\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " May 14 23:51:53.238643 kubelet[3524]: I0514 23:51:53.237561 3524 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-host-proc-sys-kernel\") pod \"0f3f6420-5b97-43a1-be0c-8e023da75b13\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " May 14 23:51:53.238643 kubelet[3524]: I0514 23:51:53.237597 3524 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xbls\" (UniqueName: \"kubernetes.io/projected/0f3f6420-5b97-43a1-be0c-8e023da75b13-kube-api-access-5xbls\") pod \"0f3f6420-5b97-43a1-be0c-8e023da75b13\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " May 14 23:51:53.238643 kubelet[3524]: I0514 23:51:53.237631 3524 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-host-proc-sys-net\") pod \"0f3f6420-5b97-43a1-be0c-8e023da75b13\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " May 14 23:51:53.238643 kubelet[3524]: I0514 
23:51:53.237662 3524 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-lib-modules\") pod \"0f3f6420-5b97-43a1-be0c-8e023da75b13\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " May 14 23:51:53.238643 kubelet[3524]: I0514 23:51:53.237695 3524 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-hostproc\") pod \"0f3f6420-5b97-43a1-be0c-8e023da75b13\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " May 14 23:51:53.238980 kubelet[3524]: I0514 23:51:53.237733 3524 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0f3f6420-5b97-43a1-be0c-8e023da75b13-hubble-tls\") pod \"0f3f6420-5b97-43a1-be0c-8e023da75b13\" (UID: \"0f3f6420-5b97-43a1-be0c-8e023da75b13\") " May 14 23:51:53.238980 kubelet[3524]: I0514 23:51:53.237795 3524 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zn7w6\" (UniqueName: \"kubernetes.io/projected/99fee54d-ab70-4ec3-a226-bb9c31d872ab-kube-api-access-zn7w6\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.238980 kubelet[3524]: I0514 23:51:53.237821 3524 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99fee54d-ab70-4ec3-a226-bb9c31d872ab-cilium-config-path\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.240796 kubelet[3524]: I0514 23:51:53.240670 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0f3f6420-5b97-43a1-be0c-8e023da75b13" (UID: "0f3f6420-5b97-43a1-be0c-8e023da75b13"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:51:53.241106 kubelet[3524]: I0514 23:51:53.240933 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0f3f6420-5b97-43a1-be0c-8e023da75b13" (UID: "0f3f6420-5b97-43a1-be0c-8e023da75b13"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:51:53.243133 kubelet[3524]: I0514 23:51:53.242864 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-cni-path" (OuterVolumeSpecName: "cni-path") pod "0f3f6420-5b97-43a1-be0c-8e023da75b13" (UID: "0f3f6420-5b97-43a1-be0c-8e023da75b13"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:51:53.243133 kubelet[3524]: I0514 23:51:53.242949 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0f3f6420-5b97-43a1-be0c-8e023da75b13" (UID: "0f3f6420-5b97-43a1-be0c-8e023da75b13"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:51:53.243133 kubelet[3524]: I0514 23:51:53.242990 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0f3f6420-5b97-43a1-be0c-8e023da75b13" (UID: "0f3f6420-5b97-43a1-be0c-8e023da75b13"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:51:53.243133 kubelet[3524]: I0514 23:51:53.243026 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0f3f6420-5b97-43a1-be0c-8e023da75b13" (UID: "0f3f6420-5b97-43a1-be0c-8e023da75b13"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:51:53.243133 kubelet[3524]: I0514 23:51:53.243060 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0f3f6420-5b97-43a1-be0c-8e023da75b13" (UID: "0f3f6420-5b97-43a1-be0c-8e023da75b13"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:51:53.246267 kubelet[3524]: I0514 23:51:53.245910 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f3f6420-5b97-43a1-be0c-8e023da75b13-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0f3f6420-5b97-43a1-be0c-8e023da75b13" (UID: "0f3f6420-5b97-43a1-be0c-8e023da75b13"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 23:51:53.246509 kubelet[3524]: I0514 23:51:53.246392 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0f3f6420-5b97-43a1-be0c-8e023da75b13" (UID: "0f3f6420-5b97-43a1-be0c-8e023da75b13"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:51:53.246691 kubelet[3524]: I0514 23:51:53.246638 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0f3f6420-5b97-43a1-be0c-8e023da75b13" (UID: "0f3f6420-5b97-43a1-be0c-8e023da75b13"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:51:53.246956 kubelet[3524]: I0514 23:51:53.246881 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-hostproc" (OuterVolumeSpecName: "hostproc") pod "0f3f6420-5b97-43a1-be0c-8e023da75b13" (UID: "0f3f6420-5b97-43a1-be0c-8e023da75b13"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 23:51:53.247194 kubelet[3524]: I0514 23:51:53.246929 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f3f6420-5b97-43a1-be0c-8e023da75b13-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0f3f6420-5b97-43a1-be0c-8e023da75b13" (UID: "0f3f6420-5b97-43a1-be0c-8e023da75b13"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 23:51:53.250940 kubelet[3524]: I0514 23:51:53.250853 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f3f6420-5b97-43a1-be0c-8e023da75b13-kube-api-access-5xbls" (OuterVolumeSpecName: "kube-api-access-5xbls") pod "0f3f6420-5b97-43a1-be0c-8e023da75b13" (UID: "0f3f6420-5b97-43a1-be0c-8e023da75b13"). InnerVolumeSpecName "kube-api-access-5xbls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 23:51:53.251168 kubelet[3524]: I0514 23:51:53.251109 3524 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f3f6420-5b97-43a1-be0c-8e023da75b13-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0f3f6420-5b97-43a1-be0c-8e023da75b13" (UID: "0f3f6420-5b97-43a1-be0c-8e023da75b13"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 23:51:53.339036 kubelet[3524]: I0514 23:51:53.338973 3524 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-host-proc-sys-kernel\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.339036 kubelet[3524]: I0514 23:51:53.339027 3524 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5xbls\" (UniqueName: \"kubernetes.io/projected/0f3f6420-5b97-43a1-be0c-8e023da75b13-kube-api-access-5xbls\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.339279 kubelet[3524]: I0514 23:51:53.339050 3524 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-host-proc-sys-net\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.339279 kubelet[3524]: I0514 23:51:53.339099 3524 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-lib-modules\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.339279 kubelet[3524]: I0514 23:51:53.339125 3524 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-hostproc\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.339279 kubelet[3524]: I0514 23:51:53.339144 3524 reconciler_common.go:288] 
"Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0f3f6420-5b97-43a1-be0c-8e023da75b13-hubble-tls\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.339279 kubelet[3524]: I0514 23:51:53.339164 3524 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f3f6420-5b97-43a1-be0c-8e023da75b13-cilium-config-path\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.339279 kubelet[3524]: I0514 23:51:53.339184 3524 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-bpf-maps\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.339279 kubelet[3524]: I0514 23:51:53.339204 3524 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-etc-cni-netd\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.339279 kubelet[3524]: I0514 23:51:53.339225 3524 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0f3f6420-5b97-43a1-be0c-8e023da75b13-clustermesh-secrets\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.339672 kubelet[3524]: I0514 23:51:53.339244 3524 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-cni-path\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.339672 kubelet[3524]: I0514 23:51:53.339263 3524 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-cilium-cgroup\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.339672 kubelet[3524]: I0514 23:51:53.339286 3524 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-cilium-run\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.339672 kubelet[3524]: I0514 23:51:53.339307 3524 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f3f6420-5b97-43a1-be0c-8e023da75b13-xtables-lock\") on node \"ip-172-31-17-61\" DevicePath \"\"" May 14 23:51:53.793095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d-rootfs.mount: Deactivated successfully. May 14 23:51:53.793345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601-rootfs.mount: Deactivated successfully. May 14 23:51:53.793482 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d-shm.mount: Deactivated successfully. May 14 23:51:53.793625 systemd[1]: var-lib-kubelet-pods-0f3f6420\x2d5b97\x2d43a1\x2dbe0c\x2d8e023da75b13-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5xbls.mount: Deactivated successfully. May 14 23:51:53.793775 systemd[1]: var-lib-kubelet-pods-99fee54d\x2dab70\x2d4ec3\x2da226\x2dbb9c31d872ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzn7w6.mount: Deactivated successfully. May 14 23:51:53.793913 systemd[1]: var-lib-kubelet-pods-0f3f6420\x2d5b97\x2d43a1\x2dbe0c\x2d8e023da75b13-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 23:51:53.794047 systemd[1]: var-lib-kubelet-pods-0f3f6420\x2d5b97\x2d43a1\x2dbe0c\x2d8e023da75b13-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 14 23:51:53.953263 kubelet[3524]: I0514 23:51:53.953230 3524 scope.go:117] "RemoveContainer" containerID="de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805" May 14 23:51:53.961144 containerd[1965]: time="2025-05-14T23:51:53.960184256Z" level=info msg="RemoveContainer for \"de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805\"" May 14 23:51:53.976149 systemd[1]: Removed slice kubepods-burstable-pod0f3f6420_5b97_43a1_be0c_8e023da75b13.slice - libcontainer container kubepods-burstable-pod0f3f6420_5b97_43a1_be0c_8e023da75b13.slice. May 14 23:51:53.978041 containerd[1965]: time="2025-05-14T23:51:53.976210952Z" level=info msg="RemoveContainer for \"de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805\" returns successfully" May 14 23:51:53.976403 systemd[1]: kubepods-burstable-pod0f3f6420_5b97_43a1_be0c_8e023da75b13.slice: Consumed 14.596s CPU time, 126.7M memory peak, 128K read from disk, 15M written to disk. May 14 23:51:53.980687 kubelet[3524]: I0514 23:51:53.980477 3524 scope.go:117] "RemoveContainer" containerID="cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4" May 14 23:51:53.984752 systemd[1]: Removed slice kubepods-besteffort-pod99fee54d_ab70_4ec3_a226_bb9c31d872ab.slice - libcontainer container kubepods-besteffort-pod99fee54d_ab70_4ec3_a226_bb9c31d872ab.slice. 
May 14 23:51:53.987930 containerd[1965]: time="2025-05-14T23:51:53.987305564Z" level=info msg="RemoveContainer for \"cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4\"" May 14 23:51:53.995152 containerd[1965]: time="2025-05-14T23:51:53.995044424Z" level=info msg="RemoveContainer for \"cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4\" returns successfully" May 14 23:51:53.995572 kubelet[3524]: I0514 23:51:53.995421 3524 scope.go:117] "RemoveContainer" containerID="039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69" May 14 23:51:53.997869 containerd[1965]: time="2025-05-14T23:51:53.997762160Z" level=info msg="RemoveContainer for \"039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69\"" May 14 23:51:54.015900 containerd[1965]: time="2025-05-14T23:51:54.012718936Z" level=info msg="RemoveContainer for \"039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69\" returns successfully" May 14 23:51:54.016032 kubelet[3524]: I0514 23:51:54.014324 3524 scope.go:117] "RemoveContainer" containerID="d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402" May 14 23:51:54.022365 containerd[1965]: time="2025-05-14T23:51:54.021820912Z" level=info msg="RemoveContainer for \"d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402\"" May 14 23:51:54.032864 containerd[1965]: time="2025-05-14T23:51:54.032813104Z" level=info msg="RemoveContainer for \"d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402\" returns successfully" May 14 23:51:54.034229 kubelet[3524]: I0514 23:51:54.034189 3524 scope.go:117] "RemoveContainer" containerID="8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe" May 14 23:51:54.039109 containerd[1965]: time="2025-05-14T23:51:54.038439076Z" level=info msg="RemoveContainer for \"8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe\"" May 14 23:51:54.048452 containerd[1965]: time="2025-05-14T23:51:54.047315224Z" level=info msg="RemoveContainer 
for \"8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe\" returns successfully" May 14 23:51:54.051307 kubelet[3524]: I0514 23:51:54.051258 3524 scope.go:117] "RemoveContainer" containerID="de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805" May 14 23:51:54.052768 containerd[1965]: time="2025-05-14T23:51:54.051993052Z" level=error msg="ContainerStatus for \"de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805\": not found" May 14 23:51:54.053191 kubelet[3524]: E0514 23:51:54.052990 3524 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805\": not found" containerID="de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805" May 14 23:51:54.054683 kubelet[3524]: I0514 23:51:54.053127 3524 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805"} err="failed to get container status \"de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805\": rpc error: code = NotFound desc = an error occurred when try to find container \"de68f560994143c6a467ca36eeaaa4da39d07554759aee320e7f160fa8800805\": not found" May 14 23:51:54.054683 kubelet[3524]: I0514 23:51:54.053458 3524 scope.go:117] "RemoveContainer" containerID="cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4" May 14 23:51:54.055986 containerd[1965]: time="2025-05-14T23:51:54.055334212Z" level=error msg="ContainerStatus for \"cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4\": not found" May 14 23:51:54.056157 kubelet[3524]: E0514 23:51:54.055774 3524 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4\": not found" containerID="cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4" May 14 23:51:54.056157 kubelet[3524]: I0514 23:51:54.055824 3524 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4"} err="failed to get container status \"cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd8988affec37a53c9c3f4c16bab772adf11fbdfb9c29e926e752516227af4c4\": not found" May 14 23:51:54.056157 kubelet[3524]: I0514 23:51:54.055861 3524 scope.go:117] "RemoveContainer" containerID="039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69" May 14 23:51:54.058165 containerd[1965]: time="2025-05-14T23:51:54.057867064Z" level=error msg="ContainerStatus for \"039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69\": not found" May 14 23:51:54.058574 kubelet[3524]: E0514 23:51:54.058314 3524 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69\": not found" containerID="039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69" May 14 23:51:54.058574 kubelet[3524]: I0514 23:51:54.058396 3524 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69"} err="failed to get container status \"039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69\": rpc error: code = NotFound desc = an error occurred when try to find container \"039a209c1f0fe7df5398fcef29cc706828ac7b60f61eecb1b633e8c090a0cf69\": not found" May 14 23:51:54.058574 kubelet[3524]: I0514 23:51:54.058461 3524 scope.go:117] "RemoveContainer" containerID="d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402" May 14 23:51:54.059981 kubelet[3524]: E0514 23:51:54.059103 3524 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402\": not found" containerID="d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402" May 14 23:51:54.059981 kubelet[3524]: I0514 23:51:54.059142 3524 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402"} err="failed to get container status \"d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402\": rpc error: code = NotFound desc = an error occurred when try to find container \"d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402\": not found" May 14 23:51:54.059981 kubelet[3524]: I0514 23:51:54.059173 3524 scope.go:117] "RemoveContainer" containerID="8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe" May 14 23:51:54.059981 kubelet[3524]: E0514 23:51:54.059622 3524 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe\": not found" containerID="8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe" May 14 23:51:54.059981 kubelet[3524]: I0514 
23:51:54.059660 3524 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe"} err="failed to get container status \"8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe\": not found" May 14 23:51:54.059981 kubelet[3524]: I0514 23:51:54.059693 3524 scope.go:117] "RemoveContainer" containerID="92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512" May 14 23:51:54.060346 containerd[1965]: time="2025-05-14T23:51:54.058831540Z" level=error msg="ContainerStatus for \"d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d364b43c8167703262799228da0b066ecb4bd3974f263be5af3fddae54aa1402\": not found" May 14 23:51:54.060346 containerd[1965]: time="2025-05-14T23:51:54.059429824Z" level=error msg="ContainerStatus for \"8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e8d56df4c07a06f63b9b1dd6e5a3bb793b269a71a3cbbb7f427c5b55c5f3abe\": not found" May 14 23:51:54.062391 containerd[1965]: time="2025-05-14T23:51:54.062345332Z" level=info msg="RemoveContainer for \"92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512\"" May 14 23:51:54.068649 containerd[1965]: time="2025-05-14T23:51:54.068527732Z" level=info msg="RemoveContainer for \"92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512\" returns successfully" May 14 23:51:54.069056 kubelet[3524]: I0514 23:51:54.068862 3524 scope.go:117] "RemoveContainer" containerID="92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512" May 14 23:51:54.069827 containerd[1965]: time="2025-05-14T23:51:54.069700696Z" 
level=error msg="ContainerStatus for \"92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512\": not found" May 14 23:51:54.070155 kubelet[3524]: E0514 23:51:54.070049 3524 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512\": not found" containerID="92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512" May 14 23:51:54.070155 kubelet[3524]: I0514 23:51:54.070116 3524 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512"} err="failed to get container status \"92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512\": rpc error: code = NotFound desc = an error occurred when try to find container \"92c2ecd588f5088e7e6710b8d6c7d7cc91a7ec4eee691101f86237257363d512\": not found" May 14 23:51:54.499225 kubelet[3524]: I0514 23:51:54.498215 3524 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f3f6420-5b97-43a1-be0c-8e023da75b13" path="/var/lib/kubelet/pods/0f3f6420-5b97-43a1-be0c-8e023da75b13/volumes" May 14 23:51:54.500136 kubelet[3524]: I0514 23:51:54.500089 3524 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99fee54d-ab70-4ec3-a226-bb9c31d872ab" path="/var/lib/kubelet/pods/99fee54d-ab70-4ec3-a226-bb9c31d872ab/volumes" May 14 23:51:54.710233 sshd[5159]: Connection closed by 139.178.89.65 port 35632 May 14 23:51:54.711162 sshd-session[5157]: pam_unix(sshd:session): session closed for user core May 14 23:51:54.716606 systemd-logind[1943]: Session 27 logged out. Waiting for processes to exit. 
May 14 23:51:54.719311 systemd[1]: sshd@26-172.31.17.61:22-139.178.89.65:35632.service: Deactivated successfully. May 14 23:51:54.723323 systemd[1]: session-27.scope: Deactivated successfully. May 14 23:51:54.724988 systemd[1]: session-27.scope: Consumed 3.389s CPU time, 23.7M memory peak. May 14 23:51:54.726944 systemd-logind[1943]: Removed session 27. May 14 23:51:54.749612 systemd[1]: Started sshd@27-172.31.17.61:22-139.178.89.65:35644.service - OpenSSH per-connection server daemon (139.178.89.65:35644). May 14 23:51:54.929384 ntpd[1936]: Deleting interface #11 lxc_health, fe80::cc0e:c4ff:fe17:c75e%8#123, interface stats: received=0, sent=0, dropped=0, active_time=70 secs May 14 23:51:54.929875 ntpd[1936]: 14 May 23:51:54 ntpd[1936]: Deleting interface #11 lxc_health, fe80::cc0e:c4ff:fe17:c75e%8#123, interface stats: received=0, sent=0, dropped=0, active_time=70 secs May 14 23:51:54.937332 sshd[5321]: Accepted publickey for core from 139.178.89.65 port 35644 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk May 14 23:51:54.939915 sshd-session[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:51:54.949060 systemd-logind[1943]: New session 28 of user core. May 14 23:51:54.957327 systemd[1]: Started session-28.scope - Session 28 of User core. May 14 23:51:55.651911 kubelet[3524]: E0514 23:51:55.651681 3524 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 23:51:56.214127 sshd[5323]: Connection closed by 139.178.89.65 port 35644 May 14 23:51:56.213774 sshd-session[5321]: pam_unix(sshd:session): session closed for user core May 14 23:51:56.223234 systemd[1]: sshd@27-172.31.17.61:22-139.178.89.65:35644.service: Deactivated successfully. May 14 23:51:56.232617 systemd[1]: session-28.scope: Deactivated successfully. 
May 14 23:51:56.234764 systemd[1]: session-28.scope: Consumed 1.067s CPU time, 23.6M memory peak. May 14 23:51:56.242233 systemd-logind[1943]: Session 28 logged out. Waiting for processes to exit. May 14 23:51:56.246327 kubelet[3524]: E0514 23:51:56.245443 3524 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="99fee54d-ab70-4ec3-a226-bb9c31d872ab" containerName="cilium-operator" May 14 23:51:56.246327 kubelet[3524]: E0514 23:51:56.245490 3524 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f3f6420-5b97-43a1-be0c-8e023da75b13" containerName="mount-cgroup" May 14 23:51:56.246327 kubelet[3524]: E0514 23:51:56.245635 3524 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f3f6420-5b97-43a1-be0c-8e023da75b13" containerName="apply-sysctl-overwrites" May 14 23:51:56.246327 kubelet[3524]: E0514 23:51:56.245655 3524 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f3f6420-5b97-43a1-be0c-8e023da75b13" containerName="mount-bpf-fs" May 14 23:51:56.246327 kubelet[3524]: E0514 23:51:56.245674 3524 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f3f6420-5b97-43a1-be0c-8e023da75b13" containerName="clean-cilium-state" May 14 23:51:56.246327 kubelet[3524]: E0514 23:51:56.245715 3524 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f3f6420-5b97-43a1-be0c-8e023da75b13" containerName="cilium-agent" May 14 23:51:56.246327 kubelet[3524]: I0514 23:51:56.245769 3524 memory_manager.go:354] "RemoveStaleState removing state" podUID="99fee54d-ab70-4ec3-a226-bb9c31d872ab" containerName="cilium-operator" May 14 23:51:56.246327 kubelet[3524]: I0514 23:51:56.245824 3524 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f3f6420-5b97-43a1-be0c-8e023da75b13" containerName="cilium-agent" May 14 23:51:56.271020 systemd-logind[1943]: Removed session 28. 
May 14 23:51:56.295286 systemd[1]: Started sshd@28-172.31.17.61:22-139.178.89.65:35654.service - OpenSSH per-connection server daemon (139.178.89.65:35654). May 14 23:51:56.306037 systemd[1]: Created slice kubepods-burstable-pod159e4d4c_e983_4f75_a4e2_d68ca16435f4.slice - libcontainer container kubepods-burstable-pod159e4d4c_e983_4f75_a4e2_d68ca16435f4.slice. May 14 23:51:56.321349 kubelet[3524]: W0514 23:51:56.321200 3524 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-17-61" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-17-61' and this object May 14 23:51:56.321349 kubelet[3524]: E0514 23:51:56.321289 3524 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-17-61\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-17-61' and this object" logger="UnhandledError" May 14 23:51:56.357979 kubelet[3524]: I0514 23:51:56.357273 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/159e4d4c-e983-4f75-a4e2-d68ca16435f4-cilium-run\") pod \"cilium-tzp9z\" (UID: \"159e4d4c-e983-4f75-a4e2-d68ca16435f4\") " pod="kube-system/cilium-tzp9z" May 14 23:51:56.357979 kubelet[3524]: I0514 23:51:56.357340 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/159e4d4c-e983-4f75-a4e2-d68ca16435f4-host-proc-sys-net\") pod \"cilium-tzp9z\" (UID: \"159e4d4c-e983-4f75-a4e2-d68ca16435f4\") " pod="kube-system/cilium-tzp9z" May 14 23:51:56.357979 kubelet[3524]: 
I0514 23:51:56.357380 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/159e4d4c-e983-4f75-a4e2-d68ca16435f4-cilium-config-path\") pod \"cilium-tzp9z\" (UID: \"159e4d4c-e983-4f75-a4e2-d68ca16435f4\") " pod="kube-system/cilium-tzp9z" May 14 23:51:56.357979 kubelet[3524]: I0514 23:51:56.357417 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/159e4d4c-e983-4f75-a4e2-d68ca16435f4-hubble-tls\") pod \"cilium-tzp9z\" (UID: \"159e4d4c-e983-4f75-a4e2-d68ca16435f4\") " pod="kube-system/cilium-tzp9z" May 14 23:51:56.357979 kubelet[3524]: I0514 23:51:56.357456 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/159e4d4c-e983-4f75-a4e2-d68ca16435f4-bpf-maps\") pod \"cilium-tzp9z\" (UID: \"159e4d4c-e983-4f75-a4e2-d68ca16435f4\") " pod="kube-system/cilium-tzp9z" May 14 23:51:56.357979 kubelet[3524]: I0514 23:51:56.357488 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/159e4d4c-e983-4f75-a4e2-d68ca16435f4-hostproc\") pod \"cilium-tzp9z\" (UID: \"159e4d4c-e983-4f75-a4e2-d68ca16435f4\") " pod="kube-system/cilium-tzp9z" May 14 23:51:56.358401 kubelet[3524]: I0514 23:51:56.357523 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/159e4d4c-e983-4f75-a4e2-d68ca16435f4-cilium-ipsec-secrets\") pod \"cilium-tzp9z\" (UID: \"159e4d4c-e983-4f75-a4e2-d68ca16435f4\") " pod="kube-system/cilium-tzp9z" May 14 23:51:56.358401 kubelet[3524]: I0514 23:51:56.357561 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/159e4d4c-e983-4f75-a4e2-d68ca16435f4-lib-modules\") pod \"cilium-tzp9z\" (UID: \"159e4d4c-e983-4f75-a4e2-d68ca16435f4\") " pod="kube-system/cilium-tzp9z" May 14 23:51:56.358401 kubelet[3524]: I0514 23:51:56.357598 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/159e4d4c-e983-4f75-a4e2-d68ca16435f4-host-proc-sys-kernel\") pod \"cilium-tzp9z\" (UID: \"159e4d4c-e983-4f75-a4e2-d68ca16435f4\") " pod="kube-system/cilium-tzp9z" May 14 23:51:56.358401 kubelet[3524]: I0514 23:51:56.357633 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/159e4d4c-e983-4f75-a4e2-d68ca16435f4-clustermesh-secrets\") pod \"cilium-tzp9z\" (UID: \"159e4d4c-e983-4f75-a4e2-d68ca16435f4\") " pod="kube-system/cilium-tzp9z" May 14 23:51:56.358401 kubelet[3524]: I0514 23:51:56.357669 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsm2m\" (UniqueName: \"kubernetes.io/projected/159e4d4c-e983-4f75-a4e2-d68ca16435f4-kube-api-access-lsm2m\") pod \"cilium-tzp9z\" (UID: \"159e4d4c-e983-4f75-a4e2-d68ca16435f4\") " pod="kube-system/cilium-tzp9z" May 14 23:51:56.358672 kubelet[3524]: I0514 23:51:56.357704 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/159e4d4c-e983-4f75-a4e2-d68ca16435f4-cilium-cgroup\") pod \"cilium-tzp9z\" (UID: \"159e4d4c-e983-4f75-a4e2-d68ca16435f4\") " pod="kube-system/cilium-tzp9z" May 14 23:51:56.358672 kubelet[3524]: I0514 23:51:56.357739 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/159e4d4c-e983-4f75-a4e2-d68ca16435f4-xtables-lock\") pod 
\"cilium-tzp9z\" (UID: \"159e4d4c-e983-4f75-a4e2-d68ca16435f4\") " pod="kube-system/cilium-tzp9z" May 14 23:51:56.358672 kubelet[3524]: I0514 23:51:56.357773 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/159e4d4c-e983-4f75-a4e2-d68ca16435f4-cni-path\") pod \"cilium-tzp9z\" (UID: \"159e4d4c-e983-4f75-a4e2-d68ca16435f4\") " pod="kube-system/cilium-tzp9z" May 14 23:51:56.358672 kubelet[3524]: I0514 23:51:56.357808 3524 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/159e4d4c-e983-4f75-a4e2-d68ca16435f4-etc-cni-netd\") pod \"cilium-tzp9z\" (UID: \"159e4d4c-e983-4f75-a4e2-d68ca16435f4\") " pod="kube-system/cilium-tzp9z" May 14 23:51:56.523880 sshd[5333]: Accepted publickey for core from 139.178.89.65 port 35654 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk May 14 23:51:56.527254 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:51:56.536355 systemd-logind[1943]: New session 29 of user core. May 14 23:51:56.545405 systemd[1]: Started session-29.scope - Session 29 of User core. May 14 23:51:56.668129 sshd[5339]: Connection closed by 139.178.89.65 port 35654 May 14 23:51:56.668956 sshd-session[5333]: pam_unix(sshd:session): session closed for user core May 14 23:51:56.676391 systemd[1]: sshd@28-172.31.17.61:22-139.178.89.65:35654.service: Deactivated successfully. May 14 23:51:56.680377 systemd[1]: session-29.scope: Deactivated successfully. May 14 23:51:56.683786 systemd-logind[1943]: Session 29 logged out. Waiting for processes to exit. May 14 23:51:56.685962 systemd-logind[1943]: Removed session 29. May 14 23:51:56.709658 systemd[1]: Started sshd@29-172.31.17.61:22-139.178.89.65:55400.service - OpenSSH per-connection server daemon (139.178.89.65:55400). 
May 14 23:51:56.902513 sshd[5346]: Accepted publickey for core from 139.178.89.65 port 55400 ssh2: RSA SHA256:P5lx8LuVgYRnVINBokzXFUV2F/1CVpmkiH+0ahpdjwk May 14 23:51:56.904927 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 23:51:56.915226 systemd-logind[1943]: New session 30 of user core. May 14 23:51:56.921368 systemd[1]: Started session-30.scope - Session 30 of User core. May 14 23:51:57.459648 kubelet[3524]: E0514 23:51:57.459588 3524 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition May 14 23:51:57.460260 kubelet[3524]: E0514 23:51:57.459718 3524 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/159e4d4c-e983-4f75-a4e2-d68ca16435f4-cilium-config-path podName:159e4d4c-e983-4f75-a4e2-d68ca16435f4 nodeName:}" failed. No retries permitted until 2025-05-14 23:51:57.959686973 +0000 UTC m=+107.809288010 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/159e4d4c-e983-4f75-a4e2-d68ca16435f4-cilium-config-path") pod "cilium-tzp9z" (UID: "159e4d4c-e983-4f75-a4e2-d68ca16435f4") : failed to sync configmap cache: timed out waiting for the condition May 14 23:51:58.115387 containerd[1965]: time="2025-05-14T23:51:58.115304492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tzp9z,Uid:159e4d4c-e983-4f75-a4e2-d68ca16435f4,Namespace:kube-system,Attempt:0,}" May 14 23:51:58.157213 containerd[1965]: time="2025-05-14T23:51:58.156975477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 23:51:58.157213 containerd[1965]: time="2025-05-14T23:51:58.157143825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 23:51:58.157213 containerd[1965]: time="2025-05-14T23:51:58.157194717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:58.157803 containerd[1965]: time="2025-05-14T23:51:58.157540509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 23:51:58.199400 systemd[1]: Started cri-containerd-4df6d27e60b04017a909e41b98478a4b93c6be5a737e7bbdb910fd3c25775df9.scope - libcontainer container 4df6d27e60b04017a909e41b98478a4b93c6be5a737e7bbdb910fd3c25775df9. May 14 23:51:58.238704 containerd[1965]: time="2025-05-14T23:51:58.238654533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tzp9z,Uid:159e4d4c-e983-4f75-a4e2-d68ca16435f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4df6d27e60b04017a909e41b98478a4b93c6be5a737e7bbdb910fd3c25775df9\"" May 14 23:51:58.244696 containerd[1965]: time="2025-05-14T23:51:58.244607469Z" level=info msg="CreateContainer within sandbox \"4df6d27e60b04017a909e41b98478a4b93c6be5a737e7bbdb910fd3c25775df9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 23:51:58.270896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534896225.mount: Deactivated successfully. 
May 14 23:51:58.284852 containerd[1965]: time="2025-05-14T23:51:58.284648373Z" level=info msg="CreateContainer within sandbox \"4df6d27e60b04017a909e41b98478a4b93c6be5a737e7bbdb910fd3c25775df9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fca090bc5a9c2f45b7b8711f66847f86b30a24000831b087ae687825ab53fe34\"" May 14 23:51:58.285845 containerd[1965]: time="2025-05-14T23:51:58.285781941Z" level=info msg="StartContainer for \"fca090bc5a9c2f45b7b8711f66847f86b30a24000831b087ae687825ab53fe34\"" May 14 23:51:58.332365 systemd[1]: Started cri-containerd-fca090bc5a9c2f45b7b8711f66847f86b30a24000831b087ae687825ab53fe34.scope - libcontainer container fca090bc5a9c2f45b7b8711f66847f86b30a24000831b087ae687825ab53fe34. May 14 23:51:58.382655 containerd[1965]: time="2025-05-14T23:51:58.381625714Z" level=info msg="StartContainer for \"fca090bc5a9c2f45b7b8711f66847f86b30a24000831b087ae687825ab53fe34\" returns successfully" May 14 23:51:58.397415 systemd[1]: cri-containerd-fca090bc5a9c2f45b7b8711f66847f86b30a24000831b087ae687825ab53fe34.scope: Deactivated successfully. 
May 14 23:51:58.449596 containerd[1965]: time="2025-05-14T23:51:58.449264878Z" level=info msg="shim disconnected" id=fca090bc5a9c2f45b7b8711f66847f86b30a24000831b087ae687825ab53fe34 namespace=k8s.io May 14 23:51:58.449596 containerd[1965]: time="2025-05-14T23:51:58.449340022Z" level=warning msg="cleaning up after shim disconnected" id=fca090bc5a9c2f45b7b8711f66847f86b30a24000831b087ae687825ab53fe34 namespace=k8s.io May 14 23:51:58.449596 containerd[1965]: time="2025-05-14T23:51:58.449374702Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:51:58.990222 containerd[1965]: time="2025-05-14T23:51:58.988786957Z" level=info msg="CreateContainer within sandbox \"4df6d27e60b04017a909e41b98478a4b93c6be5a737e7bbdb910fd3c25775df9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 23:51:59.015994 containerd[1965]: time="2025-05-14T23:51:59.015938793Z" level=info msg="CreateContainer within sandbox \"4df6d27e60b04017a909e41b98478a4b93c6be5a737e7bbdb910fd3c25775df9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"84bec0ed0117689c8df4729e7967f5b0ea26c1c1640b71bdada684524b4e2e0b\"" May 14 23:51:59.017210 containerd[1965]: time="2025-05-14T23:51:59.017159313Z" level=info msg="StartContainer for \"84bec0ed0117689c8df4729e7967f5b0ea26c1c1640b71bdada684524b4e2e0b\"" May 14 23:51:59.069396 systemd[1]: Started cri-containerd-84bec0ed0117689c8df4729e7967f5b0ea26c1c1640b71bdada684524b4e2e0b.scope - libcontainer container 84bec0ed0117689c8df4729e7967f5b0ea26c1c1640b71bdada684524b4e2e0b. May 14 23:51:59.119113 containerd[1965]: time="2025-05-14T23:51:59.118624461Z" level=info msg="StartContainer for \"84bec0ed0117689c8df4729e7967f5b0ea26c1c1640b71bdada684524b4e2e0b\" returns successfully" May 14 23:51:59.140979 systemd[1]: cri-containerd-84bec0ed0117689c8df4729e7967f5b0ea26c1c1640b71bdada684524b4e2e0b.scope: Deactivated successfully. 
May 14 23:51:59.191557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84bec0ed0117689c8df4729e7967f5b0ea26c1c1640b71bdada684524b4e2e0b-rootfs.mount: Deactivated successfully. May 14 23:51:59.200199 containerd[1965]: time="2025-05-14T23:51:59.200008966Z" level=info msg="shim disconnected" id=84bec0ed0117689c8df4729e7967f5b0ea26c1c1640b71bdada684524b4e2e0b namespace=k8s.io May 14 23:51:59.200199 containerd[1965]: time="2025-05-14T23:51:59.200191894Z" level=warning msg="cleaning up after shim disconnected" id=84bec0ed0117689c8df4729e7967f5b0ea26c1c1640b71bdada684524b4e2e0b namespace=k8s.io May 14 23:51:59.200535 containerd[1965]: time="2025-05-14T23:51:59.200244622Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 23:51:59.225350 containerd[1965]: time="2025-05-14T23:51:59.225212962Z" level=warning msg="cleanup warnings time=\"2025-05-14T23:51:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 14 23:51:59.998020 containerd[1965]: time="2025-05-14T23:51:59.997668506Z" level=info msg="CreateContainer within sandbox \"4df6d27e60b04017a909e41b98478a4b93c6be5a737e7bbdb910fd3c25775df9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 23:52:00.041417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2590296026.mount: Deactivated successfully. 
May 14 23:52:00.044374 containerd[1965]: time="2025-05-14T23:52:00.044282050Z" level=info msg="CreateContainer within sandbox \"4df6d27e60b04017a909e41b98478a4b93c6be5a737e7bbdb910fd3c25775df9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e5dcba52b2d7a97f44fa23c0aa4bf11aa8325df813faf339c6ca05b9b6e1dc31\"" May 14 23:52:00.046777 containerd[1965]: time="2025-05-14T23:52:00.045367246Z" level=info msg="StartContainer for \"e5dcba52b2d7a97f44fa23c0aa4bf11aa8325df813faf339c6ca05b9b6e1dc31\"" May 14 23:52:00.109342 systemd[1]: Started cri-containerd-e5dcba52b2d7a97f44fa23c0aa4bf11aa8325df813faf339c6ca05b9b6e1dc31.scope - libcontainer container e5dcba52b2d7a97f44fa23c0aa4bf11aa8325df813faf339c6ca05b9b6e1dc31. May 14 23:52:00.174257 containerd[1965]: time="2025-05-14T23:52:00.174197111Z" level=info msg="StartContainer for \"e5dcba52b2d7a97f44fa23c0aa4bf11aa8325df813faf339c6ca05b9b6e1dc31\" returns successfully" May 14 23:52:00.179475 systemd[1]: cri-containerd-e5dcba52b2d7a97f44fa23c0aa4bf11aa8325df813faf339c6ca05b9b6e1dc31.scope: Deactivated successfully. May 14 23:52:00.218041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5dcba52b2d7a97f44fa23c0aa4bf11aa8325df813faf339c6ca05b9b6e1dc31-rootfs.mount: Deactivated successfully. 
May 14 23:52:00.225829 containerd[1965]: time="2025-05-14T23:52:00.225696923Z" level=info msg="shim disconnected" id=e5dcba52b2d7a97f44fa23c0aa4bf11aa8325df813faf339c6ca05b9b6e1dc31 namespace=k8s.io
May 14 23:52:00.226171 containerd[1965]: time="2025-05-14T23:52:00.225816935Z" level=warning msg="cleaning up after shim disconnected" id=e5dcba52b2d7a97f44fa23c0aa4bf11aa8325df813faf339c6ca05b9b6e1dc31 namespace=k8s.io
May 14 23:52:00.226171 containerd[1965]: time="2025-05-14T23:52:00.225862919Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:52:00.653660 kubelet[3524]: E0514 23:52:00.653590 3524 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 23:52:01.001468 containerd[1965]: time="2025-05-14T23:52:01.000803579Z" level=info msg="CreateContainer within sandbox \"4df6d27e60b04017a909e41b98478a4b93c6be5a737e7bbdb910fd3c25775df9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 23:52:01.038722 containerd[1965]: time="2025-05-14T23:52:01.038544611Z" level=info msg="CreateContainer within sandbox \"4df6d27e60b04017a909e41b98478a4b93c6be5a737e7bbdb910fd3c25775df9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"240514d008409344c9629935cc55249cac16634aac75a5d925cab2d371e6a92e\""
May 14 23:52:01.040896 containerd[1965]: time="2025-05-14T23:52:01.039435803Z" level=info msg="StartContainer for \"240514d008409344c9629935cc55249cac16634aac75a5d925cab2d371e6a92e\""
May 14 23:52:01.102380 systemd[1]: Started cri-containerd-240514d008409344c9629935cc55249cac16634aac75a5d925cab2d371e6a92e.scope - libcontainer container 240514d008409344c9629935cc55249cac16634aac75a5d925cab2d371e6a92e.
May 14 23:52:01.163284 systemd[1]: cri-containerd-240514d008409344c9629935cc55249cac16634aac75a5d925cab2d371e6a92e.scope: Deactivated successfully.
May 14 23:52:01.170345 containerd[1965]: time="2025-05-14T23:52:01.170188680Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod159e4d4c_e983_4f75_a4e2_d68ca16435f4.slice/cri-containerd-240514d008409344c9629935cc55249cac16634aac75a5d925cab2d371e6a92e.scope/memory.events\": no such file or directory"
May 14 23:52:01.175385 containerd[1965]: time="2025-05-14T23:52:01.175327896Z" level=info msg="StartContainer for \"240514d008409344c9629935cc55249cac16634aac75a5d925cab2d371e6a92e\" returns successfully"
May 14 23:52:01.211249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-240514d008409344c9629935cc55249cac16634aac75a5d925cab2d371e6a92e-rootfs.mount: Deactivated successfully.
May 14 23:52:01.218285 containerd[1965]: time="2025-05-14T23:52:01.218153760Z" level=info msg="shim disconnected" id=240514d008409344c9629935cc55249cac16634aac75a5d925cab2d371e6a92e namespace=k8s.io
May 14 23:52:01.218503 containerd[1965]: time="2025-05-14T23:52:01.218309496Z" level=warning msg="cleaning up after shim disconnected" id=240514d008409344c9629935cc55249cac16634aac75a5d925cab2d371e6a92e namespace=k8s.io
May 14 23:52:01.218503 containerd[1965]: time="2025-05-14T23:52:01.218331972Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:52:02.007820 containerd[1965]: time="2025-05-14T23:52:02.007623816Z" level=info msg="CreateContainer within sandbox \"4df6d27e60b04017a909e41b98478a4b93c6be5a737e7bbdb910fd3c25775df9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 23:52:02.048576 containerd[1965]: time="2025-05-14T23:52:02.048502380Z" level=info msg="CreateContainer within sandbox \"4df6d27e60b04017a909e41b98478a4b93c6be5a737e7bbdb910fd3c25775df9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"97ad6151d16c29a799148cf802640ba86022aad057115d386662294f4af6382d\""
May 14 23:52:02.050302 containerd[1965]: time="2025-05-14T23:52:02.049396896Z" level=info msg="StartContainer for \"97ad6151d16c29a799148cf802640ba86022aad057115d386662294f4af6382d\""
May 14 23:52:02.099372 systemd[1]: Started cri-containerd-97ad6151d16c29a799148cf802640ba86022aad057115d386662294f4af6382d.scope - libcontainer container 97ad6151d16c29a799148cf802640ba86022aad057115d386662294f4af6382d.
May 14 23:52:02.158279 containerd[1965]: time="2025-05-14T23:52:02.158201244Z" level=info msg="StartContainer for \"97ad6151d16c29a799148cf802640ba86022aad057115d386662294f4af6382d\" returns successfully"
May 14 23:52:02.934486 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 14 23:52:03.051630 kubelet[3524]: I0514 23:52:03.051227 3524 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tzp9z" podStartSLOduration=7.051031477 podStartE2EDuration="7.051031477s" podCreationTimestamp="2025-05-14 23:51:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 23:52:03.050421817 +0000 UTC m=+112.900022962" watchObservedRunningTime="2025-05-14 23:52:03.051031477 +0000 UTC m=+112.900632514"
May 14 23:52:03.208917 kubelet[3524]: I0514 23:52:03.208743 3524 setters.go:600] "Node became not ready" node="ip-172-31-17-61" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T23:52:03Z","lastTransitionTime":"2025-05-14T23:52:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 14 23:52:07.108447 systemd-networkd[1872]: lxc_health: Link UP
May 14 23:52:07.118033 (udev-worker)[6186]: Network interface NamePolicy= disabled on kernel command line.
May 14 23:52:07.129851 systemd-networkd[1872]: lxc_health: Gained carrier
May 14 23:52:09.162416 systemd-networkd[1872]: lxc_health: Gained IPv6LL
May 14 23:52:10.459667 containerd[1965]: time="2025-05-14T23:52:10.459440638Z" level=info msg="StopPodSandbox for \"540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601\""
May 14 23:52:10.459667 containerd[1965]: time="2025-05-14T23:52:10.459597094Z" level=info msg="TearDown network for sandbox \"540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601\" successfully"
May 14 23:52:10.459667 containerd[1965]: time="2025-05-14T23:52:10.459621046Z" level=info msg="StopPodSandbox for \"540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601\" returns successfully"
May 14 23:52:10.463505 containerd[1965]: time="2025-05-14T23:52:10.463420498Z" level=info msg="RemovePodSandbox for \"540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601\""
May 14 23:52:10.463505 containerd[1965]: time="2025-05-14T23:52:10.463481398Z" level=info msg="Forcibly stopping sandbox \"540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601\""
May 14 23:52:10.464004 containerd[1965]: time="2025-05-14T23:52:10.463600762Z" level=info msg="TearDown network for sandbox \"540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601\" successfully"
May 14 23:52:10.470632 containerd[1965]: time="2025-05-14T23:52:10.470539750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:52:10.470786 containerd[1965]: time="2025-05-14T23:52:10.470658718Z" level=info msg="RemovePodSandbox \"540a8602c5f820003ca49a4b28721a77aecd6e8de48ea65eeb31191604c69601\" returns successfully"
May 14 23:52:10.472697 containerd[1965]: time="2025-05-14T23:52:10.472422970Z" level=info msg="StopPodSandbox for \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\""
May 14 23:52:10.472697 containerd[1965]: time="2025-05-14T23:52:10.472566814Z" level=info msg="TearDown network for sandbox \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" successfully"
May 14 23:52:10.472697 containerd[1965]: time="2025-05-14T23:52:10.472588714Z" level=info msg="StopPodSandbox for \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" returns successfully"
May 14 23:52:10.474384 containerd[1965]: time="2025-05-14T23:52:10.473442058Z" level=info msg="RemovePodSandbox for \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\""
May 14 23:52:10.474384 containerd[1965]: time="2025-05-14T23:52:10.473494558Z" level=info msg="Forcibly stopping sandbox \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\""
May 14 23:52:10.474384 containerd[1965]: time="2025-05-14T23:52:10.473592274Z" level=info msg="TearDown network for sandbox \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" successfully"
May 14 23:52:10.485785 containerd[1965]: time="2025-05-14T23:52:10.484416514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 14 23:52:10.485785 containerd[1965]: time="2025-05-14T23:52:10.484529746Z" level=info msg="RemovePodSandbox \"86e97dfaec59e0d4481fb4cbdaf9d3a684588c793bb628ead6b21f2a7ba4172d\" returns successfully"
May 14 23:52:11.929494 ntpd[1936]: Listen normally on 14 lxc_health [fe80::9cea:b8ff:fe05:dd87%14]:123
May 14 23:52:11.930180 ntpd[1936]: 14 May 23:52:11 ntpd[1936]: Listen normally on 14 lxc_health [fe80::9cea:b8ff:fe05:dd87%14]:123
May 14 23:52:12.813886 sshd[5348]: Connection closed by 139.178.89.65 port 55400
May 14 23:52:12.815205 sshd-session[5346]: pam_unix(sshd:session): session closed for user core
May 14 23:52:12.824234 systemd[1]: sshd@29-172.31.17.61:22-139.178.89.65:55400.service: Deactivated successfully.
May 14 23:52:12.832016 systemd[1]: session-30.scope: Deactivated successfully.
May 14 23:52:12.835657 systemd-logind[1943]: Session 30 logged out. Waiting for processes to exit.
May 14 23:52:12.840297 systemd-logind[1943]: Removed session 30.
May 14 23:52:27.708390 systemd[1]: cri-containerd-066df360d139681524984ae7e171beab5ef0871addba5177e071df09d5c37cd4.scope: Deactivated successfully.
May 14 23:52:27.708979 systemd[1]: cri-containerd-066df360d139681524984ae7e171beab5ef0871addba5177e071df09d5c37cd4.scope: Consumed 5.658s CPU time, 53.3M memory peak.
May 14 23:52:27.750762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-066df360d139681524984ae7e171beab5ef0871addba5177e071df09d5c37cd4-rootfs.mount: Deactivated successfully.
May 14 23:52:27.761602 containerd[1965]: time="2025-05-14T23:52:27.761480296Z" level=info msg="shim disconnected" id=066df360d139681524984ae7e171beab5ef0871addba5177e071df09d5c37cd4 namespace=k8s.io
May 14 23:52:27.761602 containerd[1965]: time="2025-05-14T23:52:27.761566516Z" level=warning msg="cleaning up after shim disconnected" id=066df360d139681524984ae7e171beab5ef0871addba5177e071df09d5c37cd4 namespace=k8s.io
May 14 23:52:27.761602 containerd[1965]: time="2025-05-14T23:52:27.761585944Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:52:28.087014 kubelet[3524]: I0514 23:52:28.086939 3524 scope.go:117] "RemoveContainer" containerID="066df360d139681524984ae7e171beab5ef0871addba5177e071df09d5c37cd4"
May 14 23:52:28.090480 containerd[1965]: time="2025-05-14T23:52:28.090367429Z" level=info msg="CreateContainer within sandbox \"ca17bee3cca6f100be1aec3c87c38988a931bd32fb9f998da00f3f236b61a55f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 14 23:52:28.118364 containerd[1965]: time="2025-05-14T23:52:28.118288765Z" level=info msg="CreateContainer within sandbox \"ca17bee3cca6f100be1aec3c87c38988a931bd32fb9f998da00f3f236b61a55f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4aa043802d6c0d4b74f8a2f31d9fa67f771e9629077d459199032bb2e1a5bd38\""
May 14 23:52:28.119118 containerd[1965]: time="2025-05-14T23:52:28.118946473Z" level=info msg="StartContainer for \"4aa043802d6c0d4b74f8a2f31d9fa67f771e9629077d459199032bb2e1a5bd38\""
May 14 23:52:28.179374 systemd[1]: Started cri-containerd-4aa043802d6c0d4b74f8a2f31d9fa67f771e9629077d459199032bb2e1a5bd38.scope - libcontainer container 4aa043802d6c0d4b74f8a2f31d9fa67f771e9629077d459199032bb2e1a5bd38.
May 14 23:52:28.250340 containerd[1965]: time="2025-05-14T23:52:28.250270622Z" level=info msg="StartContainer for \"4aa043802d6c0d4b74f8a2f31d9fa67f771e9629077d459199032bb2e1a5bd38\" returns successfully"
May 14 23:52:31.978255 systemd[1]: cri-containerd-0771bc3617f979f2755ff254bbbd4e63c5fc1ba347b6200e547c1fc06ffdc734.scope: Deactivated successfully.
May 14 23:52:31.979376 systemd[1]: cri-containerd-0771bc3617f979f2755ff254bbbd4e63c5fc1ba347b6200e547c1fc06ffdc734.scope: Consumed 3.450s CPU time, 22.5M memory peak.
May 14 23:52:32.018627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0771bc3617f979f2755ff254bbbd4e63c5fc1ba347b6200e547c1fc06ffdc734-rootfs.mount: Deactivated successfully.
May 14 23:52:32.032461 containerd[1965]: time="2025-05-14T23:52:32.032371253Z" level=info msg="shim disconnected" id=0771bc3617f979f2755ff254bbbd4e63c5fc1ba347b6200e547c1fc06ffdc734 namespace=k8s.io
May 14 23:52:32.032461 containerd[1965]: time="2025-05-14T23:52:32.032446949Z" level=warning msg="cleaning up after shim disconnected" id=0771bc3617f979f2755ff254bbbd4e63c5fc1ba347b6200e547c1fc06ffdc734 namespace=k8s.io
May 14 23:52:32.033275 containerd[1965]: time="2025-05-14T23:52:32.032471693Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 14 23:52:32.102281 kubelet[3524]: I0514 23:52:32.102236 3524 scope.go:117] "RemoveContainer" containerID="0771bc3617f979f2755ff254bbbd4e63c5fc1ba347b6200e547c1fc06ffdc734"
May 14 23:52:32.106889 containerd[1965]: time="2025-05-14T23:52:32.106834337Z" level=info msg="CreateContainer within sandbox \"baf6f29df0f20ed1ce5e1e150046cee57e8f3bf8f0ac4efbe2349745937fe836\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 14 23:52:32.138347 containerd[1965]: time="2025-05-14T23:52:32.138182249Z" level=info msg="CreateContainer within sandbox \"baf6f29df0f20ed1ce5e1e150046cee57e8f3bf8f0ac4efbe2349745937fe836\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"bfea724ae4436589d2fa5888eb2bad657abac2fbcf2d61c39b7656c7376b7c13\""
May 14 23:52:32.140959 containerd[1965]: time="2025-05-14T23:52:32.139133969Z" level=info msg="StartContainer for \"bfea724ae4436589d2fa5888eb2bad657abac2fbcf2d61c39b7656c7376b7c13\""
May 14 23:52:32.195346 systemd[1]: Started cri-containerd-bfea724ae4436589d2fa5888eb2bad657abac2fbcf2d61c39b7656c7376b7c13.scope - libcontainer container bfea724ae4436589d2fa5888eb2bad657abac2fbcf2d61c39b7656c7376b7c13.
May 14 23:52:32.267038 containerd[1965]: time="2025-05-14T23:52:32.266900250Z" level=info msg="StartContainer for \"bfea724ae4436589d2fa5888eb2bad657abac2fbcf2d61c39b7656c7376b7c13\" returns successfully"
May 14 23:52:33.392106 kubelet[3524]: E0514 23:52:33.391482 3524 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-61?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 14 23:52:43.392653 kubelet[3524]: E0514 23:52:43.392400 3524 controller.go:195] "Failed to update lease" err="Put \"https://172.31.17.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-17-61?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"