May 17 00:05:00.182967 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
May 17 00:05:00.183029 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 16 22:39:35 -00 2025
May 17 00:05:00.183056 kernel: KASLR disabled due to lack of seed
May 17 00:05:00.183074 kernel: efi: EFI v2.7 by EDK II
May 17 00:05:00.183090 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b000a98 MEMRESERVE=0x7852ee18
May 17 00:05:00.183105 kernel: ACPI: Early table checksum verification disabled
May 17 00:05:00.183124 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
May 17 00:05:00.183140 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
May 17 00:05:00.183156 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 17 00:05:00.183172 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
May 17 00:05:00.183195 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 17 00:05:00.183212 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
May 17 00:05:00.183228 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
May 17 00:05:00.183244 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
May 17 00:05:00.183263 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 17 00:05:00.183284 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
May 17 00:05:00.183302 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
May 17 00:05:00.183318 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
May 17 00:05:00.183335 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
May 17 00:05:00.183352 kernel: printk: bootconsole [uart0] enabled
May 17 00:05:00.183410 kernel: NUMA: Failed to initialise from firmware
May 17 00:05:00.183430 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
May 17 00:05:00.183448 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
May 17 00:05:00.183465 kernel: Zone ranges:
May 17 00:05:00.183482 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 17 00:05:00.183499 kernel: DMA32 empty
May 17 00:05:00.183521 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
May 17 00:05:00.183539 kernel: Movable zone start for each node
May 17 00:05:00.183556 kernel: Early memory node ranges
May 17 00:05:00.183573 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
May 17 00:05:00.183601 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
May 17 00:05:00.183643 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
May 17 00:05:00.183700 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
May 17 00:05:00.183724 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
May 17 00:05:00.183742 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
May 17 00:05:00.183758 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
May 17 00:05:00.183776 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
May 17 00:05:00.183792 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
May 17 00:05:00.183815 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
May 17 00:05:00.183833 kernel: psci: probing for conduit method from ACPI.
May 17 00:05:00.183857 kernel: psci: PSCIv1.0 detected in firmware.
May 17 00:05:00.183875 kernel: psci: Using standard PSCI v0.2 function IDs
May 17 00:05:00.183893 kernel: psci: Trusted OS migration not required
May 17 00:05:00.184055 kernel: psci: SMC Calling Convention v1.1
May 17 00:05:00.184085 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 17 00:05:00.184104 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 17 00:05:00.184122 kernel: pcpu-alloc: [0] 0 [0] 1
May 17 00:05:00.184140 kernel: Detected PIPT I-cache on CPU0
May 17 00:05:00.184158 kernel: CPU features: detected: GIC system register CPU interface
May 17 00:05:00.184175 kernel: CPU features: detected: Spectre-v2
May 17 00:05:00.184193 kernel: CPU features: detected: Spectre-v3a
May 17 00:05:00.184211 kernel: CPU features: detected: Spectre-BHB
May 17 00:05:00.184229 kernel: CPU features: detected: ARM erratum 1742098
May 17 00:05:00.184248 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
May 17 00:05:00.184273 kernel: alternatives: applying boot alternatives
May 17 00:05:00.184293 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d
May 17 00:05:00.184313 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:05:00.184331 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:05:00.184349 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:05:00.184367 kernel: Fallback order for Node 0: 0
May 17 00:05:00.184384 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
May 17 00:05:00.184402 kernel: Policy zone: Normal
May 17 00:05:00.184419 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:05:00.184438 kernel: software IO TLB: area num 2.
May 17 00:05:00.184456 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
May 17 00:05:00.184479 kernel: Memory: 3820152K/4030464K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 210312K reserved, 0K cma-reserved)
May 17 00:05:00.184497 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:05:00.184515 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:05:00.184534 kernel: rcu: RCU event tracing is enabled.
May 17 00:05:00.184552 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:05:00.184570 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:05:00.184588 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:05:00.184606 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:05:00.184623 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:05:00.184641 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 17 00:05:00.184658 kernel: GICv3: 96 SPIs implemented
May 17 00:05:00.184680 kernel: GICv3: 0 Extended SPIs implemented
May 17 00:05:00.184698 kernel: Root IRQ handler: gic_handle_irq
May 17 00:05:00.184715 kernel: GICv3: GICv3 features: 16 PPIs
May 17 00:05:00.184733 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
May 17 00:05:00.184750 kernel: ITS [mem 0x10080000-0x1009ffff]
May 17 00:05:00.184768 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:05:00.184786 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
May 17 00:05:00.184805 kernel: GICv3: using LPI property table @0x00000004000d0000
May 17 00:05:00.184823 kernel: ITS: Using hypervisor restricted LPI range [128]
May 17 00:05:00.184841 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
May 17 00:05:00.184860 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:05:00.184878 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
May 17 00:05:00.184902 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
May 17 00:05:00.186978 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
May 17 00:05:00.187016 kernel: Console: colour dummy device 80x25
May 17 00:05:00.187036 kernel: printk: console [tty1] enabled
May 17 00:05:00.187054 kernel: ACPI: Core revision 20230628
May 17 00:05:00.187073 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
May 17 00:05:00.187091 kernel: pid_max: default: 32768 minimum: 301
May 17 00:05:00.187110 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:05:00.187127 kernel: landlock: Up and running.
May 17 00:05:00.187155 kernel: SELinux: Initializing.
May 17 00:05:00.187173 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:05:00.187192 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:05:00.187210 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:05:00.187228 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:05:00.187246 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:05:00.187264 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:05:00.187282 kernel: Platform MSI: ITS@0x10080000 domain created
May 17 00:05:00.187300 kernel: PCI/MSI: ITS@0x10080000 domain created
May 17 00:05:00.187322 kernel: Remapping and enabling EFI services.
May 17 00:05:00.187340 kernel: smp: Bringing up secondary CPUs ...
May 17 00:05:00.187357 kernel: Detected PIPT I-cache on CPU1
May 17 00:05:00.187396 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
May 17 00:05:00.187415 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
May 17 00:05:00.187433 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
May 17 00:05:00.187451 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:05:00.187468 kernel: SMP: Total of 2 processors activated.
May 17 00:05:00.187486 kernel: CPU features: detected: 32-bit EL0 Support
May 17 00:05:00.187509 kernel: CPU features: detected: 32-bit EL1 Support
May 17 00:05:00.187528 kernel: CPU features: detected: CRC32 instructions
May 17 00:05:00.187546 kernel: CPU: All CPU(s) started at EL1
May 17 00:05:00.187575 kernel: alternatives: applying system-wide alternatives
May 17 00:05:00.187598 kernel: devtmpfs: initialized
May 17 00:05:00.187617 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:05:00.187637 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:05:00.187656 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:05:00.187707 kernel: SMBIOS 3.0.0 present.
May 17 00:05:00.187738 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
May 17 00:05:00.187764 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:05:00.187783 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 17 00:05:00.187802 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 17 00:05:00.187821 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 17 00:05:00.187840 kernel: audit: initializing netlink subsys (disabled)
May 17 00:05:00.187859 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
May 17 00:05:00.187879 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:05:00.187901 kernel: cpuidle: using governor menu
May 17 00:05:00.187952 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 17 00:05:00.187976 kernel: ASID allocator initialised with 65536 entries
May 17 00:05:00.187995 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:05:00.188014 kernel: Serial: AMBA PL011 UART driver
May 17 00:05:00.188033 kernel: Modules: 17504 pages in range for non-PLT usage
May 17 00:05:00.188052 kernel: Modules: 509024 pages in range for PLT usage
May 17 00:05:00.188071 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:05:00.188090 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:05:00.188116 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 17 00:05:00.188135 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 17 00:05:00.188154 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:05:00.188173 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:05:00.188192 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 17 00:05:00.188210 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 17 00:05:00.188229 kernel: ACPI: Added _OSI(Module Device)
May 17 00:05:00.188248 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:05:00.188266 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:05:00.188290 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:05:00.188309 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:05:00.188327 kernel: ACPI: Interpreter enabled
May 17 00:05:00.188347 kernel: ACPI: Using GIC for interrupt routing
May 17 00:05:00.188365 kernel: ACPI: MCFG table detected, 1 entries
May 17 00:05:00.188384 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
May 17 00:05:00.188697 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:05:00.188963 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 17 00:05:00.190217 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 17 00:05:00.190426 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
May 17 00:05:00.190623 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
May 17 00:05:00.190649 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
May 17 00:05:00.190669 kernel: acpiphp: Slot [1] registered
May 17 00:05:00.190688 kernel: acpiphp: Slot [2] registered
May 17 00:05:00.190706 kernel: acpiphp: Slot [3] registered
May 17 00:05:00.190725 kernel: acpiphp: Slot [4] registered
May 17 00:05:00.190753 kernel: acpiphp: Slot [5] registered
May 17 00:05:00.190772 kernel: acpiphp: Slot [6] registered
May 17 00:05:00.190790 kernel: acpiphp: Slot [7] registered
May 17 00:05:00.190809 kernel: acpiphp: Slot [8] registered
May 17 00:05:00.190827 kernel: acpiphp: Slot [9] registered
May 17 00:05:00.190846 kernel: acpiphp: Slot [10] registered
May 17 00:05:00.190864 kernel: acpiphp: Slot [11] registered
May 17 00:05:00.190883 kernel: acpiphp: Slot [12] registered
May 17 00:05:00.190902 kernel: acpiphp: Slot [13] registered
May 17 00:05:00.191943 kernel: acpiphp: Slot [14] registered
May 17 00:05:00.191973 kernel: acpiphp: Slot [15] registered
May 17 00:05:00.191993 kernel: acpiphp: Slot [16] registered
May 17 00:05:00.192011 kernel: acpiphp: Slot [17] registered
May 17 00:05:00.192030 kernel: acpiphp: Slot [18] registered
May 17 00:05:00.192048 kernel: acpiphp: Slot [19] registered
May 17 00:05:00.192067 kernel: acpiphp: Slot [20] registered
May 17 00:05:00.192085 kernel: acpiphp: Slot [21] registered
May 17 00:05:00.192104 kernel: acpiphp: Slot [22] registered
May 17 00:05:00.192122 kernel: acpiphp: Slot [23] registered
May 17 00:05:00.192144 kernel: acpiphp: Slot [24] registered
May 17 00:05:00.192163 kernel: acpiphp: Slot [25] registered
May 17 00:05:00.192181 kernel: acpiphp: Slot [26] registered
May 17 00:05:00.192200 kernel: acpiphp: Slot [27] registered
May 17 00:05:00.192218 kernel: acpiphp: Slot [28] registered
May 17 00:05:00.192237 kernel: acpiphp: Slot [29] registered
May 17 00:05:00.192255 kernel: acpiphp: Slot [30] registered
May 17 00:05:00.192274 kernel: acpiphp: Slot [31] registered
May 17 00:05:00.192292 kernel: PCI host bridge to bus 0000:00
May 17 00:05:00.192501 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
May 17 00:05:00.192740 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 17 00:05:00.194966 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
May 17 00:05:00.195193 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
May 17 00:05:00.195478 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
May 17 00:05:00.195725 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
May 17 00:05:00.196005 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
May 17 00:05:00.196249 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 17 00:05:00.196460 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
May 17 00:05:00.196667 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
May 17 00:05:00.196900 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 17 00:05:00.199783 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
May 17 00:05:00.200103 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
May 17 00:05:00.200324 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
May 17 00:05:00.200528 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
May 17 00:05:00.200730 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
May 17 00:05:00.202227 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
May 17 00:05:00.202520 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
May 17 00:05:00.202728 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
May 17 00:05:00.202965 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
May 17 00:05:00.203205 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
May 17 00:05:00.203424 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 17 00:05:00.203616 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
May 17 00:05:00.203643 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 17 00:05:00.203663 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 17 00:05:00.203682 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 17 00:05:00.203701 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 17 00:05:00.203720 kernel: iommu: Default domain type: Translated
May 17 00:05:00.203739 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 17 00:05:00.203764 kernel: efivars: Registered efivars operations
May 17 00:05:00.203783 kernel: vgaarb: loaded
May 17 00:05:00.203801 kernel: clocksource: Switched to clocksource arch_sys_counter
May 17 00:05:00.203820 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:05:00.203838 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:05:00.203857 kernel: pnp: PnP ACPI init
May 17 00:05:00.204122 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
May 17 00:05:00.204151 kernel: pnp: PnP ACPI: found 1 devices
May 17 00:05:00.204177 kernel: NET: Registered PF_INET protocol family
May 17 00:05:00.204196 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:05:00.204215 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:05:00.204234 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:05:00.204253 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:05:00.204272 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 17 00:05:00.204291 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:05:00.204311 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:05:00.204329 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:05:00.204353 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:05:00.204371 kernel: PCI: CLS 0 bytes, default 64
May 17 00:05:00.204390 kernel: kvm [1]: HYP mode not available
May 17 00:05:00.204408 kernel: Initialise system trusted keyrings
May 17 00:05:00.204427 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:05:00.204446 kernel: Key type asymmetric registered
May 17 00:05:00.204465 kernel: Asymmetric key parser 'x509' registered
May 17 00:05:00.204484 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 17 00:05:00.204503 kernel: io scheduler mq-deadline registered
May 17 00:05:00.204525 kernel: io scheduler kyber registered
May 17 00:05:00.204544 kernel: io scheduler bfq registered
May 17 00:05:00.204759 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
May 17 00:05:00.204787 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 17 00:05:00.204807 kernel: ACPI: button: Power Button [PWRB]
May 17 00:05:00.204826 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
May 17 00:05:00.204844 kernel: ACPI: button: Sleep Button [SLPB]
May 17 00:05:00.204863 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:05:00.204888 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 17 00:05:00.205155 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
May 17 00:05:00.205186 kernel: printk: console [ttyS0] disabled
May 17 00:05:00.205206 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
May 17 00:05:00.205225 kernel: printk: console [ttyS0] enabled
May 17 00:05:00.205244 kernel: printk: bootconsole [uart0] disabled
May 17 00:05:00.205263 kernel: thunder_xcv, ver 1.0
May 17 00:05:00.205283 kernel: thunder_bgx, ver 1.0
May 17 00:05:00.205301 kernel: nicpf, ver 1.0
May 17 00:05:00.205328 kernel: nicvf, ver 1.0
May 17 00:05:00.205544 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 17 00:05:00.205739 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:04:59 UTC (1747440299)
May 17 00:05:00.205766 kernel: hid: raw HID events driver (C) Jiri Kosina
May 17 00:05:00.205786 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
May 17 00:05:00.205805 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 17 00:05:00.205824 kernel: watchdog: Hard watchdog permanently disabled
May 17 00:05:00.205842 kernel: NET: Registered PF_INET6 protocol family
May 17 00:05:00.205867 kernel: Segment Routing with IPv6
May 17 00:05:00.205886 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:05:00.205905 kernel: NET: Registered PF_PACKET protocol family
May 17 00:05:00.206028 kernel: Key type dns_resolver registered
May 17 00:05:00.206050 kernel: registered taskstats version 1
May 17 00:05:00.206069 kernel: Loading compiled-in X.509 certificates
May 17 00:05:00.206088 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 02f7129968574a1ae76b1ee42e7674ea1c42071b'
May 17 00:05:00.206107 kernel: Key type .fscrypt registered
May 17 00:05:00.206125 kernel: Key type fscrypt-provisioning registered
May 17 00:05:00.206150 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:05:00.206170 kernel: ima: Allocated hash algorithm: sha1 May 17 00:05:00.206189 kernel: ima: No architecture policies found May 17 00:05:00.206208 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 00:05:00.206227 kernel: clk: Disabling unused clocks May 17 00:05:00.206245 kernel: Freeing unused kernel memory: 39424K May 17 00:05:00.206264 kernel: Run /init as init process May 17 00:05:00.206282 kernel: with arguments: May 17 00:05:00.206301 kernel: /init May 17 00:05:00.206319 kernel: with environment: May 17 00:05:00.206342 kernel: HOME=/ May 17 00:05:00.206361 kernel: TERM=linux May 17 00:05:00.206379 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:05:00.206402 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:05:00.206427 systemd[1]: Detected virtualization amazon. May 17 00:05:00.206448 systemd[1]: Detected architecture arm64. May 17 00:05:00.206468 systemd[1]: Running in initrd. May 17 00:05:00.206493 systemd[1]: No hostname configured, using default hostname. May 17 00:05:00.206513 systemd[1]: Hostname set to . May 17 00:05:00.206534 systemd[1]: Initializing machine ID from VM UUID. May 17 00:05:00.206555 systemd[1]: Queued start job for default target initrd.target. May 17 00:05:00.206575 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:05:00.206595 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:05:00.206618 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
May 17 00:05:00.206639 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:05:00.206665 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 17 00:05:00.206686 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 17 00:05:00.206710 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 17 00:05:00.206731 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 17 00:05:00.206752 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:05:00.206773 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:05:00.206793 systemd[1]: Reached target paths.target - Path Units. May 17 00:05:00.206818 systemd[1]: Reached target slices.target - Slice Units. May 17 00:05:00.206839 systemd[1]: Reached target swap.target - Swaps. May 17 00:05:00.206860 systemd[1]: Reached target timers.target - Timer Units. May 17 00:05:00.206880 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:05:00.206900 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:05:00.206944 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 17 00:05:00.206969 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 17 00:05:00.206990 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 17 00:05:00.207010 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:05:00.207037 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:05:00.207058 systemd[1]: Reached target sockets.target - Socket Units. 
May 17 00:05:00.207078 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 17 00:05:00.207099 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:05:00.207120 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 17 00:05:00.207140 systemd[1]: Starting systemd-fsck-usr.service... May 17 00:05:00.207161 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:05:00.207181 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:05:00.207206 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:05:00.207227 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 17 00:05:00.207248 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:05:00.207312 systemd-journald[251]: Collecting audit messages is disabled. May 17 00:05:00.207373 systemd[1]: Finished systemd-fsck-usr.service. May 17 00:05:00.207401 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 17 00:05:00.207422 systemd-journald[251]: Journal started May 17 00:05:00.207466 systemd-journald[251]: Runtime Journal (/run/log/journal/ec27a9b00af21dc8dac0b49cf90c9254) is 8.0M, max 75.3M, 67.3M free. May 17 00:05:00.186083 systemd-modules-load[252]: Inserted module 'overlay' May 17 00:05:00.215112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:05:00.217519 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:05:00.225955 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
May 17 00:05:00.227432 systemd-modules-load[252]: Inserted module 'br_netfilter' May 17 00:05:00.229331 kernel: Bridge firewalling registered May 17 00:05:00.232298 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 17 00:05:00.240247 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:05:00.247087 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:05:00.257488 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 17 00:05:00.268194 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:05:00.269411 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:05:00.300548 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:05:00.320800 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:05:00.327804 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:05:00.341237 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:05:00.345731 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:05:00.357911 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
May 17 00:05:00.413595 dracut-cmdline[288]: dracut-dracut-053 May 17 00:05:00.421963 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d May 17 00:05:00.420778 systemd-resolved[286]: Positive Trust Anchors: May 17 00:05:00.420801 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:05:00.420865 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:05:00.585942 kernel: SCSI subsystem initialized May 17 00:05:00.591958 kernel: Loading iSCSI transport class v2.0-870. May 17 00:05:00.603955 kernel: iscsi: registered transport (tcp) May 17 00:05:00.626479 kernel: iscsi: registered transport (qla4xxx) May 17 00:05:00.626567 kernel: QLogic iSCSI HBA Driver May 17 00:05:00.695948 kernel: random: crng init done May 17 00:05:00.696274 systemd-resolved[286]: Defaulting to hostname 'linux'. May 17 00:05:00.699754 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
May 17 00:05:00.702051 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:05:00.728341 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:05:00.739217 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:05:00.781254 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:05:00.781339 kernel: device-mapper: uevent: version 1.0.3
May 17 00:05:00.783285 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:05:00.847965 kernel: raid6: neonx8 gen() 6752 MB/s
May 17 00:05:00.864952 kernel: raid6: neonx4 gen() 6525 MB/s
May 17 00:05:00.881949 kernel: raid6: neonx2 gen() 5438 MB/s
May 17 00:05:00.898950 kernel: raid6: neonx1 gen() 3941 MB/s
May 17 00:05:00.915950 kernel: raid6: int64x8 gen() 3820 MB/s
May 17 00:05:00.932951 kernel: raid6: int64x4 gen() 3716 MB/s
May 17 00:05:00.949949 kernel: raid6: int64x2 gen() 3606 MB/s
May 17 00:05:00.967785 kernel: raid6: int64x1 gen() 2764 MB/s
May 17 00:05:00.967828 kernel: raid6: using algorithm neonx8 gen() 6752 MB/s
May 17 00:05:00.985793 kernel: raid6: .... xor() 4810 MB/s, rmw enabled
May 17 00:05:00.985832 kernel: raid6: using neon recovery algorithm
May 17 00:05:00.994234 kernel: xor: measuring software checksum speed
May 17 00:05:00.994289 kernel: 8regs : 10959 MB/sec
May 17 00:05:00.995385 kernel: 32regs : 11559 MB/sec
May 17 00:05:00.996597 kernel: arm64_neon : 9515 MB/sec
May 17 00:05:00.996629 kernel: xor: using function: 32regs (11559 MB/sec)
May 17 00:05:01.080970 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:05:01.100359 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:05:01.111220 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:05:01.148194 systemd-udevd[470]: Using default interface naming scheme 'v255'.
May 17 00:05:01.156735 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:05:01.177402 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:05:01.214250 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
May 17 00:05:01.269904 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:05:01.281259 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:05:01.417369 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:05:01.429549 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:05:01.480451 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:05:01.485096 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:05:01.503580 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:05:01.515646 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:05:01.531713 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:05:01.563736 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:05:01.624072 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:05:01.624322 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:05:01.626312 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:05:01.626551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:05:01.667718 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 17 00:05:01.667758 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
May 17 00:05:01.626778 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:05:01.626909 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:05:01.667997 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:05:01.690406 kernel: ena 0000:00:05.0: ENA device version: 0.10
May 17 00:05:01.690752 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
May 17 00:05:01.703084 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:b3:30:1f:23:ed
May 17 00:05:01.705009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:05:01.712458 (udev-worker)[531]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:05:01.719065 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
May 17 00:05:01.719102 kernel: nvme nvme0: pci function 0000:00:04.0
May 17 00:05:01.719445 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:05:01.735966 kernel: nvme nvme0: 2/0/0 default/read/poll queues
May 17 00:05:01.748001 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:05:01.748062 kernel: GPT:9289727 != 16777215
May 17 00:05:01.748089 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:05:01.749841 kernel: GPT:9289727 != 16777215
May 17 00:05:01.749880 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:05:01.750938 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:05:01.774205 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:05:01.823969 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (529)
May 17 00:05:01.864325 kernel: BTRFS: device fsid 4797bc80-d55e-4b4a-8ede-cb88964b0162 devid 1 transid 43 /dev/nvme0n1p3 scanned by (udev-worker) (536)
May 17 00:05:01.892075 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
May 17 00:05:01.946358 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
May 17 00:05:01.965272 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 17 00:05:02.002387 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
May 17 00:05:02.005166 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
May 17 00:05:02.024189 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:05:02.046587 disk-uuid[662]: Primary Header is updated.
May 17 00:05:02.046587 disk-uuid[662]: Secondary Entries is updated.
May 17 00:05:02.046587 disk-uuid[662]: Secondary Header is updated.
May 17 00:05:02.056966 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:05:02.063953 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:05:02.072954 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:05:03.073954 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
May 17 00:05:03.075768 disk-uuid[663]: The operation has completed successfully.
May 17 00:05:03.255200 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:05:03.255436 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:05:03.312169 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:05:03.320616 sh[1005]: Success
May 17 00:05:03.351978 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 17 00:05:03.450819 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:05:03.467121 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:05:03.475049 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:05:03.516191 kernel: BTRFS info (device dm-0): first mount of filesystem 4797bc80-d55e-4b4a-8ede-cb88964b0162
May 17 00:05:03.516254 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 17 00:05:03.516281 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:05:03.517591 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:05:03.518683 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:05:03.610941 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 17 00:05:03.634514 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:05:03.638359 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:05:03.648216 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:05:03.659217 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:05:03.690476 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:05:03.690551 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:05:03.691996 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:05:03.699002 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:05:03.718430 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:05:03.720776 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:05:03.732547 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:05:03.755247 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:05:03.834837 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:05:03.850171 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:05:03.895591 systemd-networkd[1209]: lo: Link UP
May 17 00:05:03.896070 systemd-networkd[1209]: lo: Gained carrier
May 17 00:05:03.898449 systemd-networkd[1209]: Enumeration completed
May 17 00:05:03.898974 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:05:03.899782 systemd-networkd[1209]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:05:03.899789 systemd-networkd[1209]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:05:03.905703 systemd-networkd[1209]: eth0: Link UP
May 17 00:05:03.905712 systemd-networkd[1209]: eth0: Gained carrier
May 17 00:05:03.905729 systemd-networkd[1209]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:05:03.906132 systemd[1]: Reached target network.target - Network.
May 17 00:05:03.931019 systemd-networkd[1209]: eth0: DHCPv4 address 172.31.24.47/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 17 00:05:04.160146 ignition[1146]: Ignition 2.19.0
May 17 00:05:04.160664 ignition[1146]: Stage: fetch-offline
May 17 00:05:04.161204 ignition[1146]: no configs at "/usr/lib/ignition/base.d"
May 17 00:05:04.161228 ignition[1146]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:05:04.161724 ignition[1146]: Ignition finished successfully
May 17 00:05:04.171405 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:05:04.183201 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 17 00:05:04.212082 ignition[1218]: Ignition 2.19.0
May 17 00:05:04.212111 ignition[1218]: Stage: fetch
May 17 00:05:04.213719 ignition[1218]: no configs at "/usr/lib/ignition/base.d"
May 17 00:05:04.213745 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:05:04.214259 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:05:04.238693 ignition[1218]: PUT result: OK
May 17 00:05:04.241945 ignition[1218]: parsed url from cmdline: ""
May 17 00:05:04.241961 ignition[1218]: no config URL provided
May 17 00:05:04.241978 ignition[1218]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:05:04.242004 ignition[1218]: no config at "/usr/lib/ignition/user.ign"
May 17 00:05:04.242035 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:05:04.246205 ignition[1218]: PUT result: OK
May 17 00:05:04.246284 ignition[1218]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
May 17 00:05:04.257194 unknown[1218]: fetched base config from "system"
May 17 00:05:04.249992 ignition[1218]: GET result: OK
May 17 00:05:04.257212 unknown[1218]: fetched base config from "system"
May 17 00:05:04.250181 ignition[1218]: parsing config with SHA512: b81d26706ad8af3ac487bb7cde0fd84efd6e1cb89417019c07cd83f0a1666fb35ad14c7a99db47b96334fda28590c389dc62138df0138a3c9d5a149a29a6f27e
May 17 00:05:04.257226 unknown[1218]: fetched user config from "aws"
May 17 00:05:04.257873 ignition[1218]: fetch: fetch complete
May 17 00:05:04.265457 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 17 00:05:04.257885 ignition[1218]: fetch: fetch passed
May 17 00:05:04.283407 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:05:04.257996 ignition[1218]: Ignition finished successfully
May 17 00:05:04.310185 ignition[1225]: Ignition 2.19.0
May 17 00:05:04.310666 ignition[1225]: Stage: kargs
May 17 00:05:04.311330 ignition[1225]: no configs at "/usr/lib/ignition/base.d"
May 17 00:05:04.311374 ignition[1225]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:05:04.311544 ignition[1225]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:05:04.314256 ignition[1225]: PUT result: OK
May 17 00:05:04.322883 ignition[1225]: kargs: kargs passed
May 17 00:05:04.323257 ignition[1225]: Ignition finished successfully
May 17 00:05:04.330981 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:05:04.348396 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:05:04.371855 ignition[1231]: Ignition 2.19.0
May 17 00:05:04.372380 ignition[1231]: Stage: disks
May 17 00:05:04.373054 ignition[1231]: no configs at "/usr/lib/ignition/base.d"
May 17 00:05:04.373080 ignition[1231]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:05:04.373260 ignition[1231]: PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:05:04.377579 ignition[1231]: PUT result: OK
May 17 00:05:04.386560 ignition[1231]: disks: disks passed
May 17 00:05:04.386665 ignition[1231]: Ignition finished successfully
May 17 00:05:04.390874 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:05:04.394146 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:05:04.396775 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:05:04.399096 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:05:04.401755 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:05:04.406714 systemd[1]: Reached target basic.target - Basic System.
May 17 00:05:04.423197 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:05:04.465191 systemd-fsck[1240]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 00:05:04.469824 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:05:04.488168 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:05:04.574957 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 50a777b7-c00f-4923-84ce-1c186fc0fd3b r/w with ordered data mode. Quota mode: none.
May 17 00:05:04.575874 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:05:04.579828 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:05:04.591074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:05:04.602165 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:05:04.608472 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 17 00:05:04.608588 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:05:04.608643 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:05:04.618775 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:05:04.640247 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:05:04.649963 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1259)
May 17 00:05:04.650028 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:05:04.653391 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:05:04.653447 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:05:04.661968 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:05:04.663238 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:05:05.023983 initrd-setup-root[1283]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:05:05.033609 initrd-setup-root[1290]: cut: /sysroot/etc/group: No such file or directory
May 17 00:05:05.053366 initrd-setup-root[1297]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:05:05.061555 initrd-setup-root[1304]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:05:05.358484 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:05:05.373601 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:05:05.379034 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:05:05.395814 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:05:05.402957 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:05:05.442407 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:05:05.451887 ignition[1371]: INFO : Ignition 2.19.0
May 17 00:05:05.451887 ignition[1371]: INFO : Stage: mount
May 17 00:05:05.456201 ignition[1371]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:05:05.456201 ignition[1371]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:05:05.456201 ignition[1371]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:05:05.456201 ignition[1371]: INFO : PUT result: OK
May 17 00:05:05.467974 ignition[1371]: INFO : mount: mount passed
May 17 00:05:05.467974 ignition[1371]: INFO : Ignition finished successfully
May 17 00:05:05.474976 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:05:05.484146 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:05:05.595565 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:05:05.615971 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1383)
May 17 00:05:05.619961 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:05:05.620021 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:05:05.620048 kernel: BTRFS info (device nvme0n1p6): using free space tree
May 17 00:05:05.625955 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
May 17 00:05:05.629476 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:05:05.664581 ignition[1400]: INFO : Ignition 2.19.0
May 17 00:05:05.664581 ignition[1400]: INFO : Stage: files
May 17 00:05:05.667896 ignition[1400]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:05:05.667896 ignition[1400]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:05:05.672134 ignition[1400]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:05:05.675300 ignition[1400]: INFO : PUT result: OK
May 17 00:05:05.679900 ignition[1400]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:05:05.686449 ignition[1400]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:05:05.686449 ignition[1400]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:05:05.714664 ignition[1400]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:05:05.717616 ignition[1400]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:05:05.720515 unknown[1400]: wrote ssh authorized keys file for user: core
May 17 00:05:05.723004 ignition[1400]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:05:05.726515 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 17 00:05:05.730686 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 17 00:05:05.848156 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:05:05.857142 systemd-networkd[1209]: eth0: Gained IPv6LL
May 17 00:05:06.000544 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 17 00:05:06.000544 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:05:06.007497 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 17 00:05:06.418766 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 00:05:06.542357 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:05:06.547261 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:05:06.547261 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:05:06.547261 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:05:06.547261 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:05:06.547261 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:05:06.547261 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:05:06.547261 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:05:06.547261 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:05:06.547261 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:05:06.547261 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:05:06.547261 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 17 00:05:06.547261 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 17 00:05:06.547261 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 17 00:05:06.547261 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
May 17 00:05:07.274774 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 17 00:05:07.589887 ignition[1400]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 17 00:05:07.589887 ignition[1400]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 17 00:05:07.600880 ignition[1400]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:05:07.604462 ignition[1400]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:05:07.604462 ignition[1400]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 17 00:05:07.604462 ignition[1400]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:05:07.604462 ignition[1400]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:05:07.615614 ignition[1400]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:05:07.615614 ignition[1400]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:05:07.615614 ignition[1400]: INFO : files: files passed
May 17 00:05:07.615614 ignition[1400]: INFO : Ignition finished successfully
May 17 00:05:07.628975 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 00:05:07.643619 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 00:05:07.649278 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 00:05:07.663629 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:05:07.663854 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 00:05:07.683554 initrd-setup-root-after-ignition[1432]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:05:07.687218 initrd-setup-root-after-ignition[1428]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:05:07.690844 initrd-setup-root-after-ignition[1428]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:05:07.695692 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:05:07.700955 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:05:07.716294 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:05:07.770051 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:05:07.770252 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:05:07.774406 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:05:07.776819 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:05:07.779603 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:05:07.795294 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:05:07.822753 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:05:07.837872 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:05:07.862990 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:05:07.866501 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:05:07.872382 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:05:07.872735 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:05:07.873017 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:05:07.882893 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:05:07.885136 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:05:07.887535 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:05:07.895156 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:05:07.897774 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:05:07.904469 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:05:07.906948 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:05:07.911590 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:05:07.914421 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:05:07.922861 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:05:07.924717 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:05:07.925310 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:05:07.933538 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:05:07.937587 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:05:07.940004 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:05:07.941995 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:05:07.948459 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:05:07.948680 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:05:07.951141 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:05:07.951747 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:05:07.962081 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:05:07.962302 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:05:07.976379 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:05:07.982291 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:05:07.984552 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:05:07.993314 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:05:07.996699 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:05:07.997030 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:05:07.999574 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:05:07.999823 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:05:08.024670 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:05:08.024883 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:05:08.045384 ignition[1452]: INFO : Ignition 2.19.0
May 17 00:05:08.045384 ignition[1452]: INFO : Stage: umount
May 17 00:05:08.050078 ignition[1452]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:05:08.050078 ignition[1452]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
May 17 00:05:08.050078 ignition[1452]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
May 17 00:05:08.061668 ignition[1452]: INFO : PUT result: OK
May 17 00:05:08.064572 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:05:08.069754 ignition[1452]: INFO : umount: umount passed
May 17 00:05:08.071591 ignition[1452]: INFO : Ignition finished successfully
May 17 00:05:08.077419 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:05:08.079227 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:05:08.083861 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:05:08.086030 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 00:05:08.089740 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:05:08.089829 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 00:05:08.091797 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 17 00:05:08.091875 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 17 00:05:08.093835 systemd[1]: Stopped target network.target - Network.
May 17 00:05:08.095518 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:05:08.095600 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:05:08.097983 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:05:08.113147 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:05:08.122039 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:05:08.124606 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:05:08.126299 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:05:08.128216 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:05:08.128296 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:05:08.130471 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:05:08.130540 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:05:08.132429 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:05:08.132511 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 00:05:08.134352 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 00:05:08.134433 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 00:05:08.136664 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 00:05:08.138718 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 00:05:08.140700 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:05:08.140887 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:05:08.142020 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:05:08.142182 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 00:05:08.153060 systemd-networkd[1209]: eth0: DHCPv6 lease lost
May 17 00:05:08.170596 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:05:08.170865 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 00:05:08.176113 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:05:08.176316 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 00:05:08.192734 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:05:08.192822 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:05:08.212144 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 00:05:08.221545 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:05:08.221661 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:05:08.225626 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:05:08.225731 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:05:08.236031 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:05:08.236131 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 00:05:08.238451 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 00:05:08.238531 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:05:08.241450 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:05:08.273189 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:05:08.275015 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 00:05:08.283125 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:05:08.285010 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:05:08.288396 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:05:08.288478 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 00:05:08.291749 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:05:08.291820 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:05:08.294026 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:05:08.294117 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:05:08.296663 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:05:08.296746 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 00:05:08.314148 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:05:08.314245 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:05:08.327197 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 00:05:08.332479 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:05:08.332603 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:05:08.335198 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:05:08.335284 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:05:08.351566 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:05:08.351749 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 00:05:08.354659 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 00:05:08.361176 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 00:05:08.390681 systemd[1]: Switching root.
May 17 00:05:08.455064 systemd-journald[251]: Journal stopped
May 17 00:05:10.787986 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
May 17 00:05:10.788113 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:05:10.788167 kernel: SELinux: policy capability open_perms=1
May 17 00:05:10.788199 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:05:10.788232 kernel: SELinux: policy capability always_check_network=0
May 17 00:05:10.788262 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:05:10.788294 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:05:10.788334 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:05:10.788366 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:05:10.788401 kernel: audit: type=1403 audit(1747440309.034:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:05:10.788434 systemd[1]: Successfully loaded SELinux policy in 67.605ms.
May 17 00:05:10.788480 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.237ms.
May 17 00:05:10.788515 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:05:10.788547 systemd[1]: Detected virtualization amazon.
May 17 00:05:10.788577 systemd[1]: Detected architecture arm64.
May 17 00:05:10.788609 systemd[1]: Detected first boot.
May 17 00:05:10.788641 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:05:10.788673 zram_generator::config[1493]: No configuration found.
May 17 00:05:10.788710 systemd[1]: Populated /etc with preset unit settings.
May 17 00:05:10.788740 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:05:10.788771 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 17 00:05:10.788802 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:05:10.788836 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 00:05:10.788869 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:05:10.788901 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 00:05:10.788957 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 00:05:10.788997 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 00:05:10.789031 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 00:05:10.789065 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 00:05:10.789099 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 00:05:10.789130 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:05:10.789161 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:05:10.789193 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 00:05:10.789225 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 00:05:10.789257 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 00:05:10.789293 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:05:10.789325 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 17 00:05:10.789358 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:05:10.789391 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 17 00:05:10.789421 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 17 00:05:10.789450 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 17 00:05:10.789492 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 00:05:10.789528 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:05:10.789560 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:05:10.789591 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:05:10.789622 systemd[1]: Reached target swap.target - Swaps.
May 17 00:05:10.789652 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 00:05:10.789684 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 00:05:10.789714 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:05:10.789747 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:05:10.789778 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:05:10.789810 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 00:05:10.789844 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 00:05:10.789876 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 00:05:10.789908 systemd[1]: Mounting media.mount - External Media Directory...
May 17 00:05:10.789971 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 00:05:10.790005 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 00:05:10.790036 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 00:05:10.790069 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:05:10.790099 systemd[1]: Reached target machines.target - Containers.
May 17 00:05:10.790135 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 00:05:10.790168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:05:10.790204 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:05:10.790234 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 00:05:10.790263 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:05:10.790294 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:05:10.790326 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:05:10.790355 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 00:05:10.790384 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:05:10.790419 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:05:10.790449 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:05:10.790479 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 17 00:05:10.790510 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:05:10.790540 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:05:10.790569 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:05:10.790600 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:05:10.790630 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 00:05:10.790666 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 00:05:10.790696 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:05:10.790727 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:05:10.790756 systemd[1]: Stopped verity-setup.service.
May 17 00:05:10.790785 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 00:05:10.790815 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 00:05:10.790847 systemd[1]: Mounted media.mount - External Media Directory.
May 17 00:05:10.790877 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 00:05:10.790906 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 00:05:10.790959 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 00:05:10.790994 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:05:10.791024 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:05:10.791053 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 00:05:10.791083 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:05:10.791119 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:05:10.791149 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:05:10.791181 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:05:10.791250 systemd-journald[1571]: Collecting audit messages is disabled.
May 17 00:05:10.791308 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 00:05:10.791353 systemd-journald[1571]: Journal started
May 17 00:05:10.791411 systemd-journald[1571]: Runtime Journal (/run/log/journal/ec27a9b00af21dc8dac0b49cf90c9254) is 8.0M, max 75.3M, 67.3M free.
May 17 00:05:10.231448 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:05:10.796047 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:05:10.303226 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 17 00:05:10.304160 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:05:10.801166 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 00:05:10.821960 kernel: loop: module loaded
May 17 00:05:10.826708 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:05:10.832480 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:05:10.832989 kernel: ACPI: bus type drm_connector registered
May 17 00:05:10.833044 kernel: fuse: init (API version 7.39)
May 17 00:05:10.835639 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:05:10.840423 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:05:10.840987 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:05:10.844694 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:05:10.845074 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 00:05:10.865182 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 00:05:10.878353 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 00:05:10.890094 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 00:05:10.892292 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:05:10.892350 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:05:10.901239 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 00:05:10.910211 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 00:05:10.932340 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 00:05:10.935259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:05:10.939265 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 00:05:10.950373 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 00:05:10.952636 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:05:10.963177 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 00:05:10.965325 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:05:10.972275 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:05:10.978431 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 00:05:10.986184 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 00:05:10.988733 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 00:05:10.992366 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 00:05:11.001642 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 00:05:11.019365 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 00:05:11.062568 kernel: loop0: detected capacity change from 0 to 52536
May 17 00:05:11.070578 systemd-journald[1571]: Time spent on flushing to /var/log/journal/ec27a9b00af21dc8dac0b49cf90c9254 is 138.556ms for 911 entries.
May 17 00:05:11.070578 systemd-journald[1571]: System Journal (/var/log/journal/ec27a9b00af21dc8dac0b49cf90c9254) is 8.0M, max 195.6M, 187.6M free.
May 17 00:05:11.225665 systemd-journald[1571]: Received client request to flush runtime journal.
May 17 00:05:11.227052 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:05:11.227121 kernel: loop1: detected capacity change from 0 to 207008
May 17 00:05:11.071282 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 00:05:11.087729 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 00:05:11.104438 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 00:05:11.160464 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:05:11.201247 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:05:11.210359 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 00:05:11.231954 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 00:05:11.237605 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:05:11.241178 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 00:05:11.277994 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 00:05:11.291179 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:05:11.295354 udevadm[1636]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 17 00:05:11.316965 kernel: loop2: detected capacity change from 0 to 114328
May 17 00:05:11.350282 systemd-tmpfiles[1642]: ACLs are not supported, ignoring.
May 17 00:05:11.350314 systemd-tmpfiles[1642]: ACLs are not supported, ignoring.
May 17 00:05:11.363695 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:05:11.456959 kernel: loop3: detected capacity change from 0 to 114432
May 17 00:05:11.585674 kernel: loop4: detected capacity change from 0 to 52536
May 17 00:05:11.605117 kernel: loop5: detected capacity change from 0 to 207008
May 17 00:05:11.636958 kernel: loop6: detected capacity change from 0 to 114328
May 17 00:05:11.651069 kernel: loop7: detected capacity change from 0 to 114432
May 17 00:05:11.665005 (sd-merge)[1649]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 17 00:05:11.666328 (sd-merge)[1649]: Merged extensions into '/usr'.
May 17 00:05:11.674991 systemd[1]: Reloading requested from client PID 1621 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 00:05:11.675016 systemd[1]: Reloading...
May 17 00:05:11.846954 zram_generator::config[1676]: No configuration found.
May 17 00:05:12.162657 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:05:12.281704 systemd[1]: Reloading finished in 605 ms.
May 17 00:05:12.320992 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 17 00:05:12.324164 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 17 00:05:12.340279 systemd[1]: Starting ensure-sysext.service...
May 17 00:05:12.351570 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:05:12.359497 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:05:12.379067 systemd[1]: Reloading requested from client PID 1727 ('systemctl') (unit ensure-sysext.service)...
May 17 00:05:12.379104 systemd[1]: Reloading...
May 17 00:05:12.405758 systemd-tmpfiles[1728]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:05:12.408343 systemd-tmpfiles[1728]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 17 00:05:12.411693 systemd-tmpfiles[1728]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:05:12.412715 systemd-tmpfiles[1728]: ACLs are not supported, ignoring.
May 17 00:05:12.412868 systemd-tmpfiles[1728]: ACLs are not supported, ignoring.
May 17 00:05:12.443622 systemd-tmpfiles[1728]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:05:12.443884 systemd-tmpfiles[1728]: Skipping /boot
May 17 00:05:12.471696 systemd-tmpfiles[1728]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:05:12.471862 systemd-tmpfiles[1728]: Skipping /boot
May 17 00:05:12.496763 systemd-udevd[1729]: Using default interface naming scheme 'v255'.
May 17 00:05:12.572957 zram_generator::config[1756]: No configuration found.
May 17 00:05:12.724592 (udev-worker)[1769]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:05:12.777260 ldconfig[1613]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:05:12.991635 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:05:13.021965 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (1773)
May 17 00:05:13.180748 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 17 00:05:13.182757 systemd[1]: Reloading finished in 803 ms.
May 17 00:05:13.225390 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:05:13.229814 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 17 00:05:13.232534 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:05:13.306554 systemd[1]: Finished ensure-sysext.service.
May 17 00:05:13.342606 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 17 00:05:13.354996 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 17 00:05:13.364287 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 00:05:13.372257 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 17 00:05:13.376377 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:05:13.383258 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 17 00:05:13.395420 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:05:13.401418 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:05:13.407275 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:05:13.412412 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:05:13.414666 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:05:13.419267 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 17 00:05:13.427966 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 17 00:05:13.439024 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:05:13.451288 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:05:13.454115 systemd[1]: Reached target time-set.target - System Time Set.
May 17 00:05:13.460026 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 17 00:05:13.466284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:05:13.496065 lvm[1928]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:05:13.517320 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:05:13.518000 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:05:13.545570 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 17 00:05:13.548371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:05:13.550070 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:05:13.579768 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:05:13.580225 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:05:13.583189 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 17 00:05:13.589187 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:05:13.598406 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 17 00:05:13.610839 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:05:13.611213 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:05:13.613715 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:05:13.615664 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 17 00:05:13.650782 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 17 00:05:13.663817 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 17 00:05:13.667557 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:05:13.681639 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 17 00:05:13.686068 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 17 00:05:13.694374 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:05:13.696688 augenrules[1967]: No rules
May 17 00:05:13.703148 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 00:05:13.713762 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 17 00:05:13.725990 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 17 00:05:13.732957 lvm[1965]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:05:13.799367 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 17 00:05:13.807027 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:05:13.865684 systemd-networkd[1941]: lo: Link UP
May 17 00:05:13.865707 systemd-networkd[1941]: lo: Gained carrier
May 17 00:05:13.868524 systemd-networkd[1941]: Enumeration completed
May 17 00:05:13.868713 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:05:13.872872 systemd-networkd[1941]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:05:13.872897 systemd-networkd[1941]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:05:13.876236 systemd-networkd[1941]: eth0: Link UP
May 17 00:05:13.876610 systemd-networkd[1941]: eth0: Gained carrier
May 17 00:05:13.876656 systemd-networkd[1941]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:05:13.882114 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 17 00:05:13.893060 systemd-networkd[1941]: eth0: DHCPv4 address 172.31.24.47/20, gateway 172.31.16.1 acquired from 172.31.16.1
May 17 00:05:13.894706 systemd-resolved[1942]: Positive Trust Anchors:
May 17 00:05:13.894744 systemd-resolved[1942]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:05:13.894812 systemd-resolved[1942]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:05:13.914469 systemd-resolved[1942]: Defaulting to hostname 'linux'.
May 17 00:05:13.917648 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:05:13.920062 systemd[1]: Reached target network.target - Network.
May 17 00:05:13.921889 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:05:13.924140 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:05:13.926279 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 17 00:05:13.928650 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 17 00:05:13.931311 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 17 00:05:13.933461 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 17 00:05:13.935762 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 17 00:05:13.938053 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 00:05:13.938106 systemd[1]: Reached target paths.target - Path Units.
May 17 00:05:13.939835 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:05:13.943018 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 17 00:05:13.947569 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 17 00:05:13.959102 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 17 00:05:13.962209 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 17 00:05:13.964465 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:05:13.966368 systemd[1]: Reached target basic.target - Basic System.
May 17 00:05:13.968260 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 17 00:05:13.968323 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 17 00:05:13.976261 systemd[1]: Starting containerd.service - containerd container runtime...
May 17 00:05:13.982505 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 17 00:05:13.997467 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 17 00:05:14.003138 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 17 00:05:14.015344 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 17 00:05:14.017530 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 17 00:05:14.028376 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 17 00:05:14.034010 systemd[1]: Started ntpd.service - Network Time Service.
May 17 00:05:14.040222 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 17 00:05:14.046175 systemd[1]: Starting setup-oem.service - Setup OEM...
May 17 00:05:14.060173 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 17 00:05:14.068266 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 17 00:05:14.078200 systemd[1]: Starting systemd-logind.service - User Login Management...
May 17 00:05:14.082129 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 00:05:14.083722 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 17 00:05:14.092240 systemd[1]: Starting update-engine.service - Update Engine...
May 17 00:05:14.095845 jq[1992]: false
May 17 00:05:14.099171 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 17 00:05:14.107650 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 00:05:14.108039 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 17 00:05:14.176384 ntpd[1995]: ntpd 4.2.8p17@1.4004-o Fri May 16 22:02:25 UTC 2025 (1): Starting
May 17 00:05:14.177885 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: ntpd 4.2.8p17@1.4004-o Fri May 16 22:02:25 UTC 2025 (1): Starting
May 17 00:05:14.177885 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 17 00:05:14.177885 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: ----------------------------------------------------
May 17 00:05:14.177885 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: ntp-4 is maintained by Network Time Foundation,
May 17 00:05:14.177885 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 17 00:05:14.177885 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: corporation. Support and training for ntp-4 are
May 17 00:05:14.177885 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: available at https://www.nwtime.org/support
May 17 00:05:14.177885 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: ----------------------------------------------------
May 17 00:05:14.176450 ntpd[1995]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
May 17 00:05:14.176470 ntpd[1995]: ----------------------------------------------------
May 17 00:05:14.176490 ntpd[1995]: ntp-4 is maintained by Network Time Foundation,
May 17 00:05:14.176509 ntpd[1995]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
May 17 00:05:14.176527 ntpd[1995]: corporation. Support and training for ntp-4 are
May 17 00:05:14.176546 ntpd[1995]: available at https://www.nwtime.org/support
May 17 00:05:14.185530 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: proto: precision = 0.096 usec (-23)
May 17 00:05:14.182658 systemd[1]: motdgen.service: Deactivated successfully.
May 17 00:05:14.176564 ntpd[1995]: ----------------------------------------------------
May 17 00:05:14.185412 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 17 00:05:14.185066 ntpd[1995]: proto: precision = 0.096 usec (-23)
May 17 00:05:14.189378 ntpd[1995]: basedate set to 2025-05-04
May 17 00:05:14.191155 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: basedate set to 2025-05-04
May 17 00:05:14.191155 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: gps base set to 2025-05-04 (week 2365)
May 17 00:05:14.189418 ntpd[1995]: gps base set to 2025-05-04 (week 2365)
May 17 00:05:14.198459 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: Listen and drop on 0 v6wildcard [::]:123
May 17 00:05:14.198459 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 17 00:05:14.198161 ntpd[1995]: Listen and drop on 0 v6wildcard [::]:123
May 17 00:05:14.198252 ntpd[1995]: Listen and drop on 1 v4wildcard 0.0.0.0:123
May 17 00:05:14.201165 ntpd[1995]: Listen normally on 2 lo 127.0.0.1:123
May 17 00:05:14.204061 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: Listen normally on 2 lo 127.0.0.1:123
May 17 00:05:14.204061 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: Listen normally on 3 eth0 172.31.24.47:123
May 17 00:05:14.204061 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: Listen normally on 4 lo [::1]:123
May 17 00:05:14.204061 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: bind(21) AF_INET6 fe80::4b3:30ff:fe1f:23ed%2#123 flags 0x11 failed: Cannot assign requested address
May 17 00:05:14.204061 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: unable to create socket on eth0 (5) for fe80::4b3:30ff:fe1f:23ed%2#123
May 17 00:05:14.204061 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: failed to init interface for address fe80::4b3:30ff:fe1f:23ed%2
May 17 00:05:14.204061 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: Listening on routing socket on fd #21 for interface updates
May 17 00:05:14.203330 ntpd[1995]: Listen normally on 3 eth0 172.31.24.47:123
May 17 00:05:14.203410 ntpd[1995]: Listen normally on 4 lo [::1]:123
May 17 00:05:14.203497 ntpd[1995]: bind(21) AF_INET6 fe80::4b3:30ff:fe1f:23ed%2#123 flags 0x11 failed: Cannot assign requested address
May 17 00:05:14.203538 ntpd[1995]: unable to create socket on eth0 (5) for fe80::4b3:30ff:fe1f:23ed%2#123
May 17 00:05:14.203566 ntpd[1995]: failed to init interface for address fe80::4b3:30ff:fe1f:23ed%2
May 17 00:05:14.203628 ntpd[1995]: Listening on routing socket on fd #21 for interface updates
May 17 00:05:14.214914 ntpd[1995]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 17 00:05:14.216124 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 17 00:05:14.216124 ntpd[1995]: 17 May 00:05:14 ntpd[1995]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 17 00:05:14.215017 ntpd[1995]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 17 00:05:14.233757 update_engine[2004]: I20250517 00:05:14.233043 2004 main.cc:92] Flatcar Update Engine starting
May 17 00:05:14.249229 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 17 00:05:14.251099 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 17 00:05:14.259588 (ntainerd)[2020]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 17 00:05:14.268672 dbus-daemon[1991]: [system] SELinux support is enabled
May 17 00:05:14.270119 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 17 00:05:14.293592 jq[2007]: true
May 17 00:05:14.303968 coreos-metadata[1990]: May 17 00:05:14.303 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 17 00:05:14.296754 dbus-daemon[1991]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1941 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
May 17 00:05:14.308517 update_engine[2004]: I20250517 00:05:14.305589 2004 update_check_scheduler.cc:74] Next update check in 3m26s
May 17 00:05:14.308649 coreos-metadata[1990]: May 17 00:05:14.305 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
May 17 00:05:14.308649 coreos-metadata[1990]: May 17 00:05:14.308 INFO Fetch successful
May 17 00:05:14.308649 coreos-metadata[1990]: May 17 00:05:14.308 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
May 17 00:05:14.317193 dbus-daemon[1991]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 17 00:05:14.320651 tar[2017]: linux-arm64/LICENSE
May 17 00:05:14.320651 tar[2017]: linux-arm64/helm
May 17 00:05:14.321157 coreos-metadata[1990]: May 17 00:05:14.317 INFO Fetch successful
May 17 00:05:14.321157 coreos-metadata[1990]: May 17 00:05:14.319 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
May 17 00:05:14.309534 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:05:14.321433 extend-filesystems[1993]: Found loop4
May 17 00:05:14.321433 extend-filesystems[1993]: Found loop5
May 17 00:05:14.309608 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 17 00:05:14.334127 coreos-metadata[1990]: May 17 00:05:14.325 INFO Fetch successful
May 17 00:05:14.334127 coreos-metadata[1990]: May 17 00:05:14.325 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
May 17 00:05:14.334127 coreos-metadata[1990]: May 17 00:05:14.329 INFO Fetch successful
May 17 00:05:14.334127 coreos-metadata[1990]: May 17 00:05:14.329 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
May 17 00:05:14.334334 extend-filesystems[1993]: Found loop6
May 17 00:05:14.334334 extend-filesystems[1993]: Found loop7
May 17 00:05:14.334334 extend-filesystems[1993]: Found nvme0n1
May 17 00:05:14.334334 extend-filesystems[1993]: Found nvme0n1p1
May 17 00:05:14.334334 extend-filesystems[1993]: Found nvme0n1p2
May 17 00:05:14.334334 extend-filesystems[1993]: Found nvme0n1p3
May 17 00:05:14.334334 extend-filesystems[1993]: Found usr
May 17 00:05:14.334334 extend-filesystems[1993]: Found nvme0n1p4
May 17 00:05:14.334334 extend-filesystems[1993]: Found nvme0n1p6
May 17 00:05:14.334334 extend-filesystems[1993]: Found nvme0n1p7
May 17 00:05:14.334334 extend-filesystems[1993]: Found nvme0n1p9
May 17 00:05:14.334334 extend-filesystems[1993]: Checking size of /dev/nvme0n1p9
May 17 00:05:14.312125 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:05:14.380391 coreos-metadata[1990]: May 17 00:05:14.335 INFO Fetch failed with 404: resource not found
May 17 00:05:14.380391 coreos-metadata[1990]: May 17 00:05:14.335 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
May 17 00:05:14.380391 coreos-metadata[1990]: May 17 00:05:14.337 INFO Fetch successful
May 17 00:05:14.380391 coreos-metadata[1990]: May 17 00:05:14.337 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
May 17 00:05:14.380391 coreos-metadata[1990]: May 17 00:05:14.352 INFO Fetch successful
May 17 00:05:14.380391 coreos-metadata[1990]: May 17 00:05:14.353 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
May 17 00:05:14.380391 coreos-metadata[1990]: May 17 00:05:14.363 INFO Fetch successful
May 17 00:05:14.380391 coreos-metadata[1990]: May 17 00:05:14.363 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
May 17 00:05:14.312165 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 17 00:05:14.317502 systemd[1]: Started update-engine.service - Update Engine.
May 17 00:05:14.326281 systemd[1]: Finished setup-oem.service - Setup OEM.
May 17 00:05:14.362222 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
May 17 00:05:14.371689 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 17 00:05:14.388992 coreos-metadata[1990]: May 17 00:05:14.388 INFO Fetch successful
May 17 00:05:14.388992 coreos-metadata[1990]: May 17 00:05:14.388 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
May 17 00:05:14.393447 coreos-metadata[1990]: May 17 00:05:14.391 INFO Fetch successful
May 17 00:05:14.423890 jq[2033]: true
May 17 00:05:14.451764 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 17 00:05:14.463953 extend-filesystems[1993]: Resized partition /dev/nvme0n1p9
May 17 00:05:14.475462 extend-filesystems[2049]: resize2fs 1.47.1 (20-May-2024)
May 17 00:05:14.498971 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
May 17 00:05:14.542119 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 17 00:05:14.544887 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 17 00:05:14.598983 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
May 17 00:05:14.613331 extend-filesystems[2049]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 17 00:05:14.613331 extend-filesystems[2049]: old_desc_blocks = 1, new_desc_blocks = 1
May 17 00:05:14.613331 extend-filesystems[2049]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
May 17 00:05:14.631632 extend-filesystems[1993]: Resized filesystem in /dev/nvme0n1p9
May 17 00:05:14.624777 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 17 00:05:14.625153 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 17 00:05:14.673470 systemd-logind[2003]: Watching system buttons on /dev/input/event0 (Power Button)
May 17 00:05:14.673522 systemd-logind[2003]: Watching system buttons on /dev/input/event1 (Sleep Button)
May 17 00:05:14.674725 systemd-logind[2003]: New seat seat0.
May 17 00:05:14.689184 bash[2078]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:05:14.694696 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 17 00:05:14.701951 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (1772)
May 17 00:05:14.774208 dbus-daemon[1991]: [system] Successfully activated service 'org.freedesktop.hostname1'
May 17 00:05:14.775408 dbus-daemon[1991]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=2036 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
May 17 00:05:14.817680 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
May 17 00:05:14.845187 systemd[1]: Starting polkit.service - Authorization Manager...
May 17 00:05:14.854253 systemd[1]: Starting sshkeys.service...
May 17 00:05:14.856112 systemd[1]: Started systemd-logind.service - User Login Management.
May 17 00:05:14.864864 locksmithd[2037]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:05:14.919562 polkitd[2095]: Started polkitd version 121
May 17 00:05:14.941810 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 17 00:05:14.953570 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 17 00:05:14.985759 polkitd[2095]: Loading rules from directory /etc/polkit-1/rules.d
May 17 00:05:14.985889 polkitd[2095]: Loading rules from directory /usr/share/polkit-1/rules.d
May 17 00:05:14.994476 polkitd[2095]: Finished loading, compiling and executing 2 rules
May 17 00:05:15.003266 dbus-daemon[1991]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
May 17 00:05:15.003574 systemd[1]: Started polkit.service - Authorization Manager.
May 17 00:05:15.008913 polkitd[2095]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
May 17 00:05:15.077471 systemd-hostnamed[2036]: Hostname set to (transient)
May 17 00:05:15.080015 systemd-resolved[1942]: System hostname changed to 'ip-172-31-24-47'.
May 17 00:05:15.091349 containerd[2020]: time="2025-05-17T00:05:15.090637173Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 17 00:05:15.177179 ntpd[1995]: bind(24) AF_INET6 fe80::4b3:30ff:fe1f:23ed%2#123 flags 0x11 failed: Cannot assign requested address
May 17 00:05:15.177670 ntpd[1995]: 17 May 00:05:15 ntpd[1995]: bind(24) AF_INET6 fe80::4b3:30ff:fe1f:23ed%2#123 flags 0x11 failed: Cannot assign requested address
May 17 00:05:15.177670 ntpd[1995]: 17 May 00:05:15 ntpd[1995]: unable to create socket on eth0 (6) for fe80::4b3:30ff:fe1f:23ed%2#123
May 17 00:05:15.177670 ntpd[1995]: 17 May 00:05:15 ntpd[1995]: failed to init interface for address fe80::4b3:30ff:fe1f:23ed%2
May 17 00:05:15.177245 ntpd[1995]: unable to create socket on eth0 (6) for fe80::4b3:30ff:fe1f:23ed%2#123
May 17 00:05:15.177274 ntpd[1995]: failed to init interface for address fe80::4b3:30ff:fe1f:23ed%2
May 17 00:05:15.199166 coreos-metadata[2119]: May 17 00:05:15.198 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
May 17 00:05:15.199166 coreos-metadata[2119]: May 17 00:05:15.198 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
May 17 00:05:15.200025 coreos-metadata[2119]: May 17 00:05:15.199 INFO Fetch successful
May 17 00:05:15.200025 coreos-metadata[2119]: May 17 00:05:15.199 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
May 17 00:05:15.200025 coreos-metadata[2119]: May 17 00:05:15.199 INFO Fetch successful
May 17 00:05:15.201870 unknown[2119]: wrote ssh authorized keys file for user: core
May 17 00:05:15.261047 update-ssh-keys[2166]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:05:15.265767 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 17 00:05:15.272715 systemd[1]: Finished sshkeys.service.
May 17 00:05:15.316049 containerd[2020]: time="2025-05-17T00:05:15.314991034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.323646 containerd[2020]: time="2025-05-17T00:05:15.322959826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 17 00:05:15.323646 containerd[2020]: time="2025-05-17T00:05:15.323034790Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 17 00:05:15.323646 containerd[2020]: time="2025-05-17T00:05:15.323072710Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 17 00:05:15.323646 containerd[2020]: time="2025-05-17T00:05:15.323381854Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 17 00:05:15.323646 containerd[2020]: time="2025-05-17T00:05:15.323418430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.323646 containerd[2020]: time="2025-05-17T00:05:15.323540842Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:05:15.323646 containerd[2020]: time="2025-05-17T00:05:15.323571130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.324039 containerd[2020]: time="2025-05-17T00:05:15.323853082Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:05:15.324039 containerd[2020]: time="2025-05-17T00:05:15.323886166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.324039 containerd[2020]: time="2025-05-17T00:05:15.323937598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:05:15.324039 containerd[2020]: time="2025-05-17T00:05:15.323967334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.324217 containerd[2020]: time="2025-05-17T00:05:15.324137566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.324638 containerd[2020]: time="2025-05-17T00:05:15.324580018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 17 00:05:15.324853 containerd[2020]: time="2025-05-17T00:05:15.324806638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 17 00:05:15.324964 containerd[2020]: time="2025-05-17T00:05:15.324848866Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 17 00:05:15.327760 containerd[2020]: time="2025-05-17T00:05:15.327120862Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 17 00:05:15.327760 containerd[2020]: time="2025-05-17T00:05:15.327260530Z" level=info msg="metadata content store policy set" policy=shared May 17 00:05:15.336660 containerd[2020]: time="2025-05-17T00:05:15.335662090Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 17 00:05:15.336660 containerd[2020]: time="2025-05-17T00:05:15.335790910Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 17 00:05:15.336660 containerd[2020]: time="2025-05-17T00:05:15.335955658Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 17 00:05:15.336660 containerd[2020]: time="2025-05-17T00:05:15.335995666Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 17 00:05:15.336660 containerd[2020]: time="2025-05-17T00:05:15.336053830Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 17 00:05:15.336660 containerd[2020]: time="2025-05-17T00:05:15.336406402Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 17 00:05:15.340086 containerd[2020]: time="2025-05-17T00:05:15.339998578Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 17 00:05:15.340446 containerd[2020]: time="2025-05-17T00:05:15.340378330Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 17 00:05:15.340506 containerd[2020]: time="2025-05-17T00:05:15.340448890Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 17 00:05:15.340506 containerd[2020]: time="2025-05-17T00:05:15.340483270Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 May 17 00:05:15.340617 containerd[2020]: time="2025-05-17T00:05:15.340517446Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 17 00:05:15.340617 containerd[2020]: time="2025-05-17T00:05:15.340590646Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 17 00:05:15.340705 containerd[2020]: time="2025-05-17T00:05:15.340628722Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 17 00:05:15.340705 containerd[2020]: time="2025-05-17T00:05:15.340661446Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 17 00:05:15.340705 containerd[2020]: time="2025-05-17T00:05:15.340695118Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 17 00:05:15.340825 containerd[2020]: time="2025-05-17T00:05:15.340724770Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 17 00:05:15.340825 containerd[2020]: time="2025-05-17T00:05:15.340753798Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 17 00:05:15.340937 containerd[2020]: time="2025-05-17T00:05:15.340834594Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 17 00:05:15.340937 containerd[2020]: time="2025-05-17T00:05:15.340877326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 17 00:05:15.341027 containerd[2020]: time="2025-05-17T00:05:15.340908982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 May 17 00:05:15.341027 containerd[2020]: time="2025-05-17T00:05:15.340970590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 17 00:05:15.341027 containerd[2020]: time="2025-05-17T00:05:15.341003074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 17 00:05:15.341177 containerd[2020]: time="2025-05-17T00:05:15.341032618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 17 00:05:15.341177 containerd[2020]: time="2025-05-17T00:05:15.341063314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 17 00:05:15.341177 containerd[2020]: time="2025-05-17T00:05:15.341093950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 17 00:05:15.341177 containerd[2020]: time="2025-05-17T00:05:15.341125486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 17 00:05:15.341335 containerd[2020]: time="2025-05-17T00:05:15.341180650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 17 00:05:15.341335 containerd[2020]: time="2025-05-17T00:05:15.341236198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 17 00:05:15.341335 containerd[2020]: time="2025-05-17T00:05:15.341267002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 00:05:15.341335 containerd[2020]: time="2025-05-17T00:05:15.341296990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 17 00:05:15.341505 containerd[2020]: time="2025-05-17T00:05:15.341330782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 May 17 00:05:15.341505 containerd[2020]: time="2025-05-17T00:05:15.341373766Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:05:15.341505 containerd[2020]: time="2025-05-17T00:05:15.341416774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:05:15.341505 containerd[2020]: time="2025-05-17T00:05:15.341445802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:05:15.341505 containerd[2020]: time="2025-05-17T00:05:15.341479210Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:05:15.346249 containerd[2020]: time="2025-05-17T00:05:15.342997162Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:05:15.346249 containerd[2020]: time="2025-05-17T00:05:15.343204666Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:05:15.346249 containerd[2020]: time="2025-05-17T00:05:15.343235458Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:05:15.346249 containerd[2020]: time="2025-05-17T00:05:15.343264630Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:05:15.346249 containerd[2020]: time="2025-05-17T00:05:15.343289470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 17 00:05:15.346249 containerd[2020]: time="2025-05-17T00:05:15.343343950Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 17 00:05:15.346249 containerd[2020]: time="2025-05-17T00:05:15.343379134Z" level=info msg="NRI interface is disabled by configuration." May 17 00:05:15.346249 containerd[2020]: time="2025-05-17T00:05:15.343410982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:05:15.352589 containerd[2020]: time="2025-05-17T00:05:15.352413706Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:05:15.352876 containerd[2020]: time="2025-05-17T00:05:15.352587490Z" level=info msg="Connect containerd service" May 17 00:05:15.352876 containerd[2020]: time="2025-05-17T00:05:15.352696354Z" level=info msg="using legacy CRI server" May 17 00:05:15.352876 containerd[2020]: time="2025-05-17T00:05:15.352727854Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:05:15.353094 containerd[2020]: time="2025-05-17T00:05:15.352941118Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:05:15.356878 containerd[2020]: time="2025-05-17T00:05:15.356614978Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:05:15.362613 containerd[2020]: time="2025-05-17T00:05:15.362047690Z" level=info msg="Start subscribing containerd event" May 17 
00:05:15.363156 containerd[2020]: time="2025-05-17T00:05:15.363099238Z" level=info msg="Start recovering state" May 17 00:05:15.363295 containerd[2020]: time="2025-05-17T00:05:15.363254470Z" level=info msg="Start event monitor" May 17 00:05:15.363402 containerd[2020]: time="2025-05-17T00:05:15.363292294Z" level=info msg="Start snapshots syncer" May 17 00:05:15.363402 containerd[2020]: time="2025-05-17T00:05:15.363339550Z" level=info msg="Start cni network conf syncer for default" May 17 00:05:15.363402 containerd[2020]: time="2025-05-17T00:05:15.363361846Z" level=info msg="Start streaming server" May 17 00:05:15.366560 containerd[2020]: time="2025-05-17T00:05:15.362541574Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:05:15.366560 containerd[2020]: time="2025-05-17T00:05:15.363668422Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:05:15.366560 containerd[2020]: time="2025-05-17T00:05:15.363765982Z" level=info msg="containerd successfully booted in 0.280071s" May 17 00:05:15.363894 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:05:15.630802 sshd_keygen[2042]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 17 00:05:15.677226 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 17 00:05:15.691794 systemd[1]: Starting issuegen.service - Generate /run/issue... May 17 00:05:15.704471 systemd[1]: Started sshd@0-172.31.24.47:22-139.178.89.65:39504.service - OpenSSH per-connection server daemon (139.178.89.65:39504). May 17 00:05:15.713095 systemd-networkd[1941]: eth0: Gained IPv6LL May 17 00:05:15.720890 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 17 00:05:15.725204 systemd[1]: Reached target network-online.target - Network is Online. May 17 00:05:15.740145 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
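The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected on a freshly provisioned node: the CRI plugin keeps running and retries once a CNI config appears in that directory. As an illustrative sketch only (network name, bridge name, and subnet below are made up, not taken from this host), a minimal bridge-plugin conflist of the kind that would satisfy the loader looks like:

```python
import json

# Illustrative CNI conflist; field names follow the CNI spec, but
# "examplenet", "cni0", and the subnet are hypothetical placeholders.
conflist = {
    "cniVersion": "0.4.0",
    "name": "examplenet",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {"type": "host-local", "subnet": "10.22.0.0/16"},
        }
    ],
}

# On a real node this JSON would be written to a file such as
# /etc/cni/net.d/10-examplenet.conflist for containerd's syncer to pick up.
print(json.dumps(conflist, indent=2))
```

The "Start cni network conf syncer for default" entry above is the component that watches /etc/cni/net.d and clears the error once such a file exists.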
May 17 00:05:15.759179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:15.767893 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 17 00:05:15.771258 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:05:15.773066 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:05:15.785447 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:05:15.862779 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:05:15.881902 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:05:15.898535 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:05:15.904321 systemd[1]: Reached target getty.target - Login Prompts. May 17 00:05:15.914268 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 17 00:05:15.935530 amazon-ssm-agent[2208]: Initializing new seelog logger May 17 00:05:15.935530 amazon-ssm-agent[2208]: New Seelog Logger Creation Complete May 17 00:05:15.935530 amazon-ssm-agent[2208]: 2025/05/17 00:05:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.935530 amazon-ssm-agent[2208]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.938227 amazon-ssm-agent[2208]: 2025/05/17 00:05:15 processing appconfig overrides May 17 00:05:15.939142 amazon-ssm-agent[2208]: 2025/05/17 00:05:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.939142 amazon-ssm-agent[2208]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.939142 amazon-ssm-agent[2208]: 2025/05/17 00:05:15 processing appconfig overrides May 17 00:05:15.939292 amazon-ssm-agent[2208]: 2025/05/17 00:05:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.939292 amazon-ssm-agent[2208]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
May 17 00:05:15.939410 amazon-ssm-agent[2208]: 2025/05/17 00:05:15 processing appconfig overrides May 17 00:05:15.941246 amazon-ssm-agent[2208]: 2025-05-17 00:05:15 INFO Proxy environment variables: May 17 00:05:15.944861 amazon-ssm-agent[2208]: 2025/05/17 00:05:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.946952 amazon-ssm-agent[2208]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 17 00:05:15.946952 amazon-ssm-agent[2208]: 2025/05/17 00:05:15 processing appconfig overrides May 17 00:05:16.040531 amazon-ssm-agent[2208]: 2025-05-17 00:05:15 INFO https_proxy: May 17 00:05:16.052758 sshd[2204]: Accepted publickey for core from 139.178.89.65 port 39504 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:16.060830 sshd[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:16.085256 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:05:16.099157 tar[2017]: linux-arm64/README.md May 17 00:05:16.100381 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:05:16.130391 systemd-logind[2003]: New session 1 of user core. May 17 00:05:16.131388 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 17 00:05:16.146163 amazon-ssm-agent[2208]: 2025-05-17 00:05:15 INFO http_proxy: May 17 00:05:16.161993 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 17 00:05:16.175426 systemd[1]: Starting user@500.service - User Manager for UID 500... 
May 17 00:05:16.200603 (systemd)[2235]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:05:16.250809 amazon-ssm-agent[2208]: 2025-05-17 00:05:15 INFO no_proxy: May 17 00:05:16.348354 amazon-ssm-agent[2208]: 2025-05-17 00:05:15 INFO Checking if agent identity type OnPrem can be assumed May 17 00:05:16.443207 systemd[2235]: Queued start job for default target default.target. May 17 00:05:16.448460 amazon-ssm-agent[2208]: 2025-05-17 00:05:15 INFO Checking if agent identity type EC2 can be assumed May 17 00:05:16.450097 systemd[2235]: Created slice app.slice - User Application Slice. May 17 00:05:16.450156 systemd[2235]: Reached target paths.target - Paths. May 17 00:05:16.450190 systemd[2235]: Reached target timers.target - Timers. May 17 00:05:16.453312 systemd[2235]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:05:16.482775 systemd[2235]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:05:16.483095 systemd[2235]: Reached target sockets.target - Sockets. May 17 00:05:16.483130 systemd[2235]: Reached target basic.target - Basic System. May 17 00:05:16.483366 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:05:16.486044 systemd[2235]: Reached target default.target - Main User Target. May 17 00:05:16.486133 systemd[2235]: Startup finished in 267ms. May 17 00:05:16.493200 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:05:16.545999 amazon-ssm-agent[2208]: 2025-05-17 00:05:16 INFO Agent will take identity from EC2 May 17 00:05:16.648962 amazon-ssm-agent[2208]: 2025-05-17 00:05:16 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:05:16.658417 systemd[1]: Started sshd@1-172.31.24.47:22-139.178.89.65:59684.service - OpenSSH per-connection server daemon (139.178.89.65:59684). 
May 17 00:05:16.745418 amazon-ssm-agent[2208]: 2025-05-17 00:05:16 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:05:16.771577 amazon-ssm-agent[2208]: 2025-05-17 00:05:16 INFO [amazon-ssm-agent] using named pipe channel for IPC May 17 00:05:16.771577 amazon-ssm-agent[2208]: 2025-05-17 00:05:16 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 17 00:05:16.771577 amazon-ssm-agent[2208]: 2025-05-17 00:05:16 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 May 17 00:05:16.771799 amazon-ssm-agent[2208]: 2025-05-17 00:05:16 INFO [amazon-ssm-agent] Starting Core Agent May 17 00:05:16.771799 amazon-ssm-agent[2208]: 2025-05-17 00:05:16 INFO [amazon-ssm-agent] registrar detected. Attempting registration May 17 00:05:16.771799 amazon-ssm-agent[2208]: 2025-05-17 00:05:16 INFO [Registrar] Starting registrar module May 17 00:05:16.771799 amazon-ssm-agent[2208]: 2025-05-17 00:05:16 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 17 00:05:16.771799 amazon-ssm-agent[2208]: 2025-05-17 00:05:16 INFO [EC2Identity] EC2 registration was successful. 
May 17 00:05:16.771799 amazon-ssm-agent[2208]: 2025-05-17 00:05:16 INFO [CredentialRefresher] credentialRefresher has started May 17 00:05:16.771799 amazon-ssm-agent[2208]: 2025-05-17 00:05:16 INFO [CredentialRefresher] Starting credentials refresher loop May 17 00:05:16.771799 amazon-ssm-agent[2208]: 2025-05-17 00:05:16 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 17 00:05:16.845174 amazon-ssm-agent[2208]: 2025-05-17 00:05:16 INFO [CredentialRefresher] Next credential rotation will be in 31.01665671416667 minutes May 17 00:05:16.859602 sshd[2248]: Accepted publickey for core from 139.178.89.65 port 59684 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:16.862502 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:16.870331 systemd-logind[2003]: New session 2 of user core. May 17 00:05:16.877229 systemd[1]: Started session-2.scope - Session 2 of User core. May 17 00:05:17.005324 sshd[2248]: pam_unix(sshd:session): session closed for user core May 17 00:05:17.013171 systemd[1]: sshd@1-172.31.24.47:22-139.178.89.65:59684.service: Deactivated successfully. May 17 00:05:17.016623 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:05:17.020133 systemd-logind[2003]: Session 2 logged out. Waiting for processes to exit. May 17 00:05:17.022665 systemd-logind[2003]: Removed session 2. May 17 00:05:17.046601 systemd[1]: Started sshd@2-172.31.24.47:22-139.178.89.65:59696.service - OpenSSH per-connection server daemon (139.178.89.65:59696). May 17 00:05:17.232235 sshd[2256]: Accepted publickey for core from 139.178.89.65 port 59696 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:17.234983 sshd[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:17.244227 systemd-logind[2003]: New session 3 of user core. 
May 17 00:05:17.251191 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:05:17.382207 sshd[2256]: pam_unix(sshd:session): session closed for user core May 17 00:05:17.388844 systemd[1]: sshd@2-172.31.24.47:22-139.178.89.65:59696.service: Deactivated successfully. May 17 00:05:17.393512 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:05:17.399107 systemd-logind[2003]: Session 3 logged out. Waiting for processes to exit. May 17 00:05:17.401700 systemd-logind[2003]: Removed session 3. May 17 00:05:17.556239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:17.559426 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:05:17.561698 systemd[1]: Startup finished in 1.156s (kernel) + 9.219s (initrd) + 8.592s (userspace) = 18.967s. May 17 00:05:17.572607 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:05:17.799092 amazon-ssm-agent[2208]: 2025-05-17 00:05:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 17 00:05:17.899662 amazon-ssm-agent[2208]: 2025-05-17 00:05:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2277) started May 17 00:05:18.000318 amazon-ssm-agent[2208]: 2025-05-17 00:05:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 17 00:05:18.177152 ntpd[1995]: Listen normally on 7 eth0 [fe80::4b3:30ff:fe1f:23ed%2]:123 May 17 00:05:18.178137 ntpd[1995]: 17 May 00:05:18 ntpd[1995]: Listen normally on 7 eth0 [fe80::4b3:30ff:fe1f:23ed%2]:123 May 17 00:05:18.414406 kubelet[2267]: E0517 00:05:18.414315 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file 
/var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:05:18.418953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:05:18.419376 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:05:18.420002 systemd[1]: kubelet.service: Consumed 1.329s CPU time. May 17 00:05:21.339971 systemd-resolved[1942]: Clock change detected. Flushing caches. May 17 00:05:27.578576 systemd[1]: Started sshd@3-172.31.24.47:22-139.178.89.65:58892.service - OpenSSH per-connection server daemon (139.178.89.65:58892). May 17 00:05:27.755174 sshd[2290]: Accepted publickey for core from 139.178.89.65 port 58892 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:27.757790 sshd[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:27.764901 systemd-logind[2003]: New session 4 of user core. May 17 00:05:27.777530 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:05:27.900173 sshd[2290]: pam_unix(sshd:session): session closed for user core May 17 00:05:27.906790 systemd[1]: sshd@3-172.31.24.47:22-139.178.89.65:58892.service: Deactivated successfully. May 17 00:05:27.910246 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:05:27.911517 systemd-logind[2003]: Session 4 logged out. Waiting for processes to exit. May 17 00:05:27.913306 systemd-logind[2003]: Removed session 4. May 17 00:05:27.933833 systemd[1]: Started sshd@4-172.31.24.47:22-139.178.89.65:58898.service - OpenSSH per-connection server daemon (139.178.89.65:58898). 
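The kubelet exit above is a fail-fast on a missing /var/lib/kubelet/config.yaml, a file normally written by `kubeadm init`/`kubeadm join`; until then, systemd keeps rescheduling the unit (the restart counter appears in later entries). A miniature sketch of that fail-fast shape, not the kubelet's actual implementation:

```python
import os

# Loose miniature of the kubelet behaviour logged above: refuse to start
# when the config file is absent. The default path matches the log; the
# function name is hypothetical.
def load_kubelet_config(path="/var/lib/kubelet/config.yaml"):
    if not os.path.exists(path):
        raise FileNotFoundError(
            f"failed to load Kubelet config file {path}: "
            "no such file or directory"
        )
    with open(path) as f:
        return f.read()

try:
    load_kubelet_config("/nonexistent/config.yaml")
except FileNotFoundError as e:
    # systemd records this as: Main process exited, code=exited, status=1/FAILURE
    print("kubelet would exit 1:", e)
```

This is why the failure is benign at this point in boot: the unit is designed to loop until cluster bootstrap materializes the config.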
May 17 00:05:28.110198 sshd[2297]: Accepted publickey for core from 139.178.89.65 port 58898 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:28.112754 sshd[2297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:28.122451 systemd-logind[2003]: New session 5 of user core. May 17 00:05:28.129510 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:05:28.247570 sshd[2297]: pam_unix(sshd:session): session closed for user core May 17 00:05:28.253519 systemd[1]: sshd@4-172.31.24.47:22-139.178.89.65:58898.service: Deactivated successfully. May 17 00:05:28.257536 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:05:28.259013 systemd-logind[2003]: Session 5 logged out. Waiting for processes to exit. May 17 00:05:28.260707 systemd-logind[2003]: Removed session 5. May 17 00:05:28.282335 systemd[1]: Started sshd@5-172.31.24.47:22-139.178.89.65:58912.service - OpenSSH per-connection server daemon (139.178.89.65:58912). May 17 00:05:28.462121 sshd[2304]: Accepted publickey for core from 139.178.89.65 port 58912 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:28.464699 sshd[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:28.472080 systemd-logind[2003]: New session 6 of user core. May 17 00:05:28.480552 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:05:28.584406 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 00:05:28.603602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:28.609545 sshd[2304]: pam_unix(sshd:session): session closed for user core May 17 00:05:28.617242 systemd-logind[2003]: Session 6 logged out. Waiting for processes to exit. May 17 00:05:28.625746 systemd[1]: sshd@5-172.31.24.47:22-139.178.89.65:58912.service: Deactivated successfully. 
May 17 00:05:28.630185 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:05:28.641848 systemd-logind[2003]: Removed session 6. May 17 00:05:28.653704 systemd[1]: Started sshd@6-172.31.24.47:22-139.178.89.65:58928.service - OpenSSH per-connection server daemon (139.178.89.65:58928). May 17 00:05:28.831925 sshd[2314]: Accepted publickey for core from 139.178.89.65 port 58928 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:28.836959 sshd[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:28.848377 systemd-logind[2003]: New session 7 of user core. May 17 00:05:28.855615 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:05:28.942541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:28.944899 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:05:28.989582 sudo[2323]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:05:28.991036 sudo[2323]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:05:29.007817 sudo[2323]: pam_unix(sudo:session): session closed for user root May 17 00:05:29.031340 kubelet[2322]: E0517 00:05:29.031129 2322 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:05:29.034623 sshd[2314]: pam_unix(sshd:session): session closed for user core May 17 00:05:29.038936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:05:29.039442 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 17 00:05:29.042536 systemd[1]: sshd@6-172.31.24.47:22-139.178.89.65:58928.service: Deactivated successfully. May 17 00:05:29.045785 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:05:29.047762 systemd-logind[2003]: Session 7 logged out. Waiting for processes to exit. May 17 00:05:29.051701 systemd-logind[2003]: Removed session 7. May 17 00:05:29.065710 systemd[1]: Started sshd@7-172.31.24.47:22-139.178.89.65:58932.service - OpenSSH per-connection server daemon (139.178.89.65:58932). May 17 00:05:29.248299 sshd[2334]: Accepted publickey for core from 139.178.89.65 port 58932 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:29.251629 sshd[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:29.258570 systemd-logind[2003]: New session 8 of user core. May 17 00:05:29.268530 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:05:29.371397 sudo[2338]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:05:29.372039 sudo[2338]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:05:29.378341 sudo[2338]: pam_unix(sudo:session): session closed for user root May 17 00:05:29.388204 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:05:29.389540 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:05:29.415765 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:05:29.419214 auditctl[2341]: No rules May 17 00:05:29.419912 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:05:29.420307 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:05:29.428917 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
May 17 00:05:29.478423 augenrules[2359]: No rules May 17 00:05:29.480842 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:05:29.483693 sudo[2337]: pam_unix(sudo:session): session closed for user root May 17 00:05:29.507608 sshd[2334]: pam_unix(sshd:session): session closed for user core May 17 00:05:29.513339 systemd[1]: sshd@7-172.31.24.47:22-139.178.89.65:58932.service: Deactivated successfully. May 17 00:05:29.516728 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:05:29.520327 systemd-logind[2003]: Session 8 logged out. Waiting for processes to exit. May 17 00:05:29.522017 systemd-logind[2003]: Removed session 8. May 17 00:05:29.544765 systemd[1]: Started sshd@8-172.31.24.47:22-139.178.89.65:58944.service - OpenSSH per-connection server daemon (139.178.89.65:58944). May 17 00:05:29.715395 sshd[2367]: Accepted publickey for core from 139.178.89.65 port 58944 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:05:29.717942 sshd[2367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:05:29.725128 systemd-logind[2003]: New session 9 of user core. May 17 00:05:29.732518 systemd[1]: Started session-9.scope - Session 9 of User core. May 17 00:05:29.833490 sudo[2370]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:05:29.834199 sudo[2370]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:05:30.404742 systemd[1]: Starting docker.service - Docker Application Container Engine... May 17 00:05:30.407370 (dockerd)[2386]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 17 00:05:30.874974 dockerd[2386]: time="2025-05-17T00:05:30.874869207Z" level=info msg="Starting up" May 17 00:05:31.072577 dockerd[2386]: time="2025-05-17T00:05:31.072070104Z" level=info msg="Loading containers: start." 
May 17 00:05:31.221338 kernel: Initializing XFRM netlink socket May 17 00:05:31.252356 (udev-worker)[2409]: Network interface NamePolicy= disabled on kernel command line. May 17 00:05:31.338035 systemd-networkd[1941]: docker0: Link UP May 17 00:05:31.360652 dockerd[2386]: time="2025-05-17T00:05:31.360575545Z" level=info msg="Loading containers: done." May 17 00:05:31.381162 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck465541177-merged.mount: Deactivated successfully. May 17 00:05:31.387453 dockerd[2386]: time="2025-05-17T00:05:31.387379093Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:05:31.387690 dockerd[2386]: time="2025-05-17T00:05:31.387529201Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 17 00:05:31.387795 dockerd[2386]: time="2025-05-17T00:05:31.387745177Z" level=info msg="Daemon has completed initialization" May 17 00:05:31.451183 dockerd[2386]: time="2025-05-17T00:05:31.450328454Z" level=info msg="API listen on /run/docker.sock" May 17 00:05:31.450616 systemd[1]: Started docker.service - Docker Application Container Engine. May 17 00:05:32.523760 containerd[2020]: time="2025-05-17T00:05:32.523594839Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 17 00:05:33.154145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2721151672.mount: Deactivated successfully. 
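The dockerd entries above bracket the daemon's startup: "Starting up" at 00:05:30.874869207 and "API listen on /run/docker.sock" at roughly 00:05:31.450. Subtracting the two timestamps (truncating the nanosecond fractions to microseconds, since `datetime.fromisoformat` does not accept nine fractional digits) gives the startup latency:

```python
from datetime import datetime

# Parse an RFC 3339 timestamp as logged by dockerd, truncating the
# nanosecond fraction to microseconds for datetime compatibility.
def parse_ts(ts: str) -> datetime:
    base, frac = ts.rstrip("Z").split(".")
    return datetime.fromisoformat(f"{base}.{frac[:6]}")

# Timestamps taken from the two dockerd log lines above.
start = parse_ts("2025-05-17T00:05:30.874869207Z")
ready = parse_ts("2025-05-17T00:05:31.450328454Z")
print(f"dockerd ready in {(ready - start).total_seconds():.3f}s")
```

About 0.58 s from process start to API socket, which includes the "Loading containers" phase and the docker0 link setup visible in the surrounding entries.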
May 17 00:05:35.365840 containerd[2020]: time="2025-05-17T00:05:35.365768873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:35.367895 containerd[2020]: time="2025-05-17T00:05:35.367828781Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=26326311" May 17 00:05:35.369149 containerd[2020]: time="2025-05-17T00:05:35.369096437Z" level=info msg="ImageCreate event name:\"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:35.376553 containerd[2020]: time="2025-05-17T00:05:35.376468793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:35.378950 containerd[2020]: time="2025-05-17T00:05:35.378656357Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"26323111\" in 2.85499925s" May 17 00:05:35.378950 containerd[2020]: time="2025-05-17T00:05:35.378711989Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\"" May 17 00:05:35.379802 containerd[2020]: time="2025-05-17T00:05:35.379670777Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 17 00:05:37.629668 containerd[2020]: time="2025-05-17T00:05:37.629601032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:37.631686 containerd[2020]: time="2025-05-17T00:05:37.631617836Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=22530547" May 17 00:05:37.633233 containerd[2020]: time="2025-05-17T00:05:37.633148772Z" level=info msg="ImageCreate event name:\"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:37.638890 containerd[2020]: time="2025-05-17T00:05:37.638831624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:37.641757 containerd[2020]: time="2025-05-17T00:05:37.641298272Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"24066313\" in 2.261245907s" May 17 00:05:37.641757 containerd[2020]: time="2025-05-17T00:05:37.641353520Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\"" May 17 00:05:37.642842 containerd[2020]: time="2025-05-17T00:05:37.642507332Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 17 00:05:39.208918 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:05:39.215630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:39.575545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
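Each "Pulled image" entry above reports a compressed size in bytes and a wall-clock duration, which allows a back-of-envelope throughput estimate. A sketch using the kube-apiserver and kube-controller-manager figures from the log (the helper name is made up):

```python
# Back-of-envelope registry pull throughput from containerd's
# "Pulled image ... size \"N\" in Ts" log lines above.
def throughput_mib_s(size_bytes: int, duration_s: float) -> float:
    return size_bytes / duration_s / (1024 * 1024)

# (size, duration) pairs as logged for the two pulls above.
pulls = {
    "kube-apiserver:v1.32.5": (26323111, 2.85499925),
    "kube-controller-manager:v1.32.5": (24066313, 2.261245907),
}
for image, (size, secs) in pulls.items():
    print(f"{image}: {throughput_mib_s(size, secs):.2f} MiB/s")
```

Both pulls land in the same high-single-digit MiB/s range, suggesting the durations are network-bound rather than dominated by per-image overhead; the later, much smaller pause:3.10 pull completing in ~553 ms is consistent with that.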
May 17 00:05:39.580466 (kubelet)[2594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:05:39.683569 kubelet[2594]: E0517 00:05:39.683433 2594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:05:39.687815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:05:39.688133 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:05:39.722722 containerd[2020]: time="2025-05-17T00:05:39.722642975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:39.724864 containerd[2020]: time="2025-05-17T00:05:39.724788299Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=17484190" May 17 00:05:39.727418 containerd[2020]: time="2025-05-17T00:05:39.727349891Z" level=info msg="ImageCreate event name:\"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:39.733622 containerd[2020]: time="2025-05-17T00:05:39.733528499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:39.736494 containerd[2020]: time="2025-05-17T00:05:39.735837491Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"19019974\" in 2.093272043s" May 17 00:05:39.736494 containerd[2020]: time="2025-05-17T00:05:39.735898295Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\"" May 17 00:05:39.736926 containerd[2020]: time="2025-05-17T00:05:39.736887047Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 17 00:05:41.140913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3620209981.mount: Deactivated successfully. May 17 00:05:41.717323 containerd[2020]: time="2025-05-17T00:05:41.716826469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:41.719374 containerd[2020]: time="2025-05-17T00:05:41.719064853Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=27377375" May 17 00:05:41.721702 containerd[2020]: time="2025-05-17T00:05:41.721614709Z" level=info msg="ImageCreate event name:\"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:41.726433 containerd[2020]: time="2025-05-17T00:05:41.726333229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:41.728087 containerd[2020]: time="2025-05-17T00:05:41.727887781Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"27376394\" in 1.990838302s" May 17 00:05:41.728087 containerd[2020]: time="2025-05-17T00:05:41.727942465Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\"" May 17 00:05:41.728857 containerd[2020]: time="2025-05-17T00:05:41.728784253Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:05:42.315664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1767268705.mount: Deactivated successfully. May 17 00:05:43.636295 containerd[2020]: time="2025-05-17T00:05:43.635431442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:43.638046 containerd[2020]: time="2025-05-17T00:05:43.637977122Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" May 17 00:05:43.640735 containerd[2020]: time="2025-05-17T00:05:43.640659770Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:43.647123 containerd[2020]: time="2025-05-17T00:05:43.647040074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:43.650843 containerd[2020]: time="2025-05-17T00:05:43.649554710Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.920368265s" May 17 00:05:43.650843 containerd[2020]: time="2025-05-17T00:05:43.649643714Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 17 00:05:43.650843 containerd[2020]: time="2025-05-17T00:05:43.650507834Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:05:44.175896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3867515445.mount: Deactivated successfully. May 17 00:05:44.191317 containerd[2020]: time="2025-05-17T00:05:44.190861357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:44.193066 containerd[2020]: time="2025-05-17T00:05:44.192737065Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" May 17 00:05:44.195217 containerd[2020]: time="2025-05-17T00:05:44.195160009Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:44.201694 containerd[2020]: time="2025-05-17T00:05:44.201613873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:44.203278 containerd[2020]: time="2025-05-17T00:05:44.203200273Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 552.646983ms" May 17 
00:05:44.203428 containerd[2020]: time="2025-05-17T00:05:44.203272981Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 17 00:05:44.204518 containerd[2020]: time="2025-05-17T00:05:44.204049933Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 17 00:05:44.853827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954714567.mount: Deactivated successfully. May 17 00:05:45.261765 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 17 00:05:48.864884 containerd[2020]: time="2025-05-17T00:05:48.864802232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:48.867313 containerd[2020]: time="2025-05-17T00:05:48.867216104Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469" May 17 00:05:48.869522 containerd[2020]: time="2025-05-17T00:05:48.869450432Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:48.876180 containerd[2020]: time="2025-05-17T00:05:48.876080420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:05:48.878681 containerd[2020]: time="2025-05-17T00:05:48.878623064Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.674517991s" May 17 00:05:48.879184 containerd[2020]: 
time="2025-05-17T00:05:48.878834372Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 17 00:05:49.708169 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:05:49.719426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:50.067628 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:50.079006 (kubelet)[2752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:05:50.155296 kubelet[2752]: E0517 00:05:50.154605 2752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:05:50.159086 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:05:50.160499 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:05:55.925005 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:55.937751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:55.987702 systemd[1]: Reloading requested from client PID 2767 ('systemctl') (unit session-9.scope)... May 17 00:05:55.987738 systemd[1]: Reloading... May 17 00:05:56.211300 zram_generator::config[2810]: No configuration found. May 17 00:05:56.458466 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:05:56.631333 systemd[1]: Reloading finished in 642 ms. 
May 17 00:05:56.726862 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:05:56.727358 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:56.740154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:05:57.044614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:05:57.061791 (kubelet)[2871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:05:57.134822 kubelet[2871]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:05:57.134822 kubelet[2871]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:05:57.134822 kubelet[2871]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:05:57.135452 kubelet[2871]: I0517 00:05:57.134928 2871 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:05:58.382577 kubelet[2871]: I0517 00:05:58.382501 2871 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:05:58.382577 kubelet[2871]: I0517 00:05:58.382558 2871 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:05:58.383212 kubelet[2871]: I0517 00:05:58.383074 2871 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:05:58.428158 kubelet[2871]: I0517 00:05:58.427559 2871 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:05:58.428773 kubelet[2871]: E0517 00:05:58.428710 2871 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.24.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.47:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:58.440232 kubelet[2871]: E0517 00:05:58.440173 2871 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:05:58.440232 kubelet[2871]: I0517 00:05:58.440228 2871 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:05:58.445206 kubelet[2871]: I0517 00:05:58.445154 2871 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:05:58.447357 kubelet[2871]: I0517 00:05:58.447278 2871 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:05:58.447666 kubelet[2871]: I0517 00:05:58.447347 2871 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-47","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:05:58.447816 kubelet[2871]: I0517 00:05:58.447692 2871 topology_manager.go:138] "Creating topology manager with none 
policy" May 17 00:05:58.447816 kubelet[2871]: I0517 00:05:58.447713 2871 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:05:58.448018 kubelet[2871]: I0517 00:05:58.447977 2871 state_mem.go:36] "Initialized new in-memory state store" May 17 00:05:58.455782 kubelet[2871]: I0517 00:05:58.455735 2871 kubelet.go:446] "Attempting to sync node with API server" May 17 00:05:58.455782 kubelet[2871]: I0517 00:05:58.455788 2871 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:05:58.456181 kubelet[2871]: I0517 00:05:58.455827 2871 kubelet.go:352] "Adding apiserver pod source" May 17 00:05:58.456181 kubelet[2871]: I0517 00:05:58.455851 2871 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:05:58.463356 kubelet[2871]: W0517 00:05:58.462623 2871 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.47:6443: connect: connection refused May 17 00:05:58.463356 kubelet[2871]: E0517 00:05:58.462722 2871 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.24.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.47:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:58.463561 kubelet[2871]: W0517 00:05:58.463394 2871 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-47&limit=500&resourceVersion=0": dial tcp 172.31.24.47:6443: connect: connection refused May 17 00:05:58.463561 kubelet[2871]: E0517 00:05:58.463449 2871 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Node: failed to list *v1.Node: Get \"https://172.31.24.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-47&limit=500&resourceVersion=0\": dial tcp 172.31.24.47:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:58.463692 kubelet[2871]: I0517 00:05:58.463617 2871 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:05:58.464484 kubelet[2871]: I0517 00:05:58.464432 2871 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:05:58.464598 kubelet[2871]: W0517 00:05:58.464564 2871 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:05:58.466222 kubelet[2871]: I0517 00:05:58.466164 2871 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:05:58.466222 kubelet[2871]: I0517 00:05:58.466228 2871 server.go:1287] "Started kubelet" May 17 00:05:58.474064 kubelet[2871]: E0517 00:05:58.473827 2871 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.47:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.47:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-47.184027bf688f1e28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-47,UID:ip-172-31-24-47,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-47,},FirstTimestamp:2025-05-17 00:05:58.46619908 +0000 UTC m=+1.398132968,LastTimestamp:2025-05-17 00:05:58.46619908 +0000 UTC m=+1.398132968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-47,}" May 17 00:05:58.475128 kubelet[2871]: I0517 00:05:58.475065 2871 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:05:58.482297 kubelet[2871]: I0517 00:05:58.481773 2871 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:05:58.482741 kubelet[2871]: I0517 00:05:58.482715 2871 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:05:58.483276 kubelet[2871]: E0517 00:05:58.483220 2871 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-47\" not found" May 17 00:05:58.483856 kubelet[2871]: I0517 00:05:58.483828 2871 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:05:58.486235 kubelet[2871]: I0517 00:05:58.484041 2871 reconciler.go:26] "Reconciler: start to sync state" May 17 00:05:58.486387 kubelet[2871]: I0517 00:05:58.484968 2871 server.go:479] "Adding debug handlers to kubelet server" May 17 00:05:58.486954 kubelet[2871]: I0517 00:05:58.486899 2871 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:05:58.488510 kubelet[2871]: I0517 00:05:58.485028 2871 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:05:58.488851 kubelet[2871]: I0517 00:05:58.488788 2871 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:05:58.489242 kubelet[2871]: E0517 00:05:58.489055 2871 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:05:58.489457 kubelet[2871]: W0517 00:05:58.489308 2871 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.47:6443: connect: connection refused May 17 00:05:58.489457 kubelet[2871]: E0517 00:05:58.489396 2871 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.47:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:58.489852 kubelet[2871]: E0517 00:05:58.489685 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-47?timeout=10s\": dial tcp 172.31.24.47:6443: connect: connection refused" interval="200ms" May 17 00:05:58.490048 kubelet[2871]: I0517 00:05:58.489908 2871 factory.go:221] Registration of the systemd container factory successfully May 17 00:05:58.490715 kubelet[2871]: I0517 00:05:58.490525 2871 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:05:58.493297 kubelet[2871]: I0517 00:05:58.492986 2871 factory.go:221] Registration of the containerd container factory successfully May 17 00:05:58.533315 kubelet[2871]: I0517 00:05:58.531807 2871 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:05:58.533315 kubelet[2871]: I0517 00:05:58.531844 2871 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:05:58.533315 kubelet[2871]: I0517 00:05:58.531875 2871 state_mem.go:36] "Initialized new 
in-memory state store" May 17 00:05:58.534965 kubelet[2871]: I0517 00:05:58.534892 2871 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:05:58.537189 kubelet[2871]: I0517 00:05:58.537134 2871 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:05:58.537189 kubelet[2871]: I0517 00:05:58.537181 2871 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:05:58.537444 kubelet[2871]: I0517 00:05:58.537216 2871 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:05:58.537444 kubelet[2871]: I0517 00:05:58.537234 2871 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:05:58.537444 kubelet[2871]: E0517 00:05:58.537332 2871 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:05:58.539816 kubelet[2871]: I0517 00:05:58.539386 2871 policy_none.go:49] "None policy: Start" May 17 00:05:58.539816 kubelet[2871]: I0517 00:05:58.539425 2871 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:05:58.539816 kubelet[2871]: I0517 00:05:58.539448 2871 state_mem.go:35] "Initializing new in-memory state store" May 17 00:05:58.547492 kubelet[2871]: W0517 00:05:58.547025 2871 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.47:6443: connect: connection refused May 17 00:05:58.547492 kubelet[2871]: E0517 00:05:58.547109 2871 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.47:6443: connect: 
connection refused" logger="UnhandledError" May 17 00:05:58.556808 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 17 00:05:58.572052 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 17 00:05:58.584366 kubelet[2871]: E0517 00:05:58.584309 2871 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-47\" not found" May 17 00:05:58.588851 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 17 00:05:58.593810 kubelet[2871]: I0517 00:05:58.591932 2871 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:05:58.593810 kubelet[2871]: I0517 00:05:58.593084 2871 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:05:58.593810 kubelet[2871]: I0517 00:05:58.593172 2871 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:05:58.593810 kubelet[2871]: I0517 00:05:58.593661 2871 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:05:58.597109 kubelet[2871]: E0517 00:05:58.597071 2871 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:05:58.597405 kubelet[2871]: E0517 00:05:58.597364 2871 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-47\" not found" May 17 00:05:58.654508 systemd[1]: Created slice kubepods-burstable-pod739792d28ee93c335a757a3699d7a655.slice - libcontainer container kubepods-burstable-pod739792d28ee93c335a757a3699d7a655.slice. 
May 17 00:05:58.675984 kubelet[2871]: E0517 00:05:58.675907 2871 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-47\" not found" node="ip-172-31-24-47" May 17 00:05:58.685832 systemd[1]: Created slice kubepods-burstable-podf026bf96ec566b6163363ff6c4705c42.slice - libcontainer container kubepods-burstable-podf026bf96ec566b6163363ff6c4705c42.slice. May 17 00:05:58.687453 kubelet[2871]: I0517 00:05:58.686653 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/739792d28ee93c335a757a3699d7a655-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-47\" (UID: \"739792d28ee93c335a757a3699d7a655\") " pod="kube-system/kube-apiserver-ip-172-31-24-47" May 17 00:05:58.687453 kubelet[2871]: I0517 00:05:58.686713 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/739792d28ee93c335a757a3699d7a655-ca-certs\") pod \"kube-apiserver-ip-172-31-24-47\" (UID: \"739792d28ee93c335a757a3699d7a655\") " pod="kube-system/kube-apiserver-ip-172-31-24-47" May 17 00:05:58.687453 kubelet[2871]: I0517 00:05:58.686752 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/739792d28ee93c335a757a3699d7a655-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-47\" (UID: \"739792d28ee93c335a757a3699d7a655\") " pod="kube-system/kube-apiserver-ip-172-31-24-47" May 17 00:05:58.687453 kubelet[2871]: I0517 00:05:58.686791 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f026bf96ec566b6163363ff6c4705c42-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-47\" (UID: \"f026bf96ec566b6163363ff6c4705c42\") " 
pod="kube-system/kube-controller-manager-ip-172-31-24-47" May 17 00:05:58.687453 kubelet[2871]: I0517 00:05:58.686833 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f026bf96ec566b6163363ff6c4705c42-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-47\" (UID: \"f026bf96ec566b6163363ff6c4705c42\") " pod="kube-system/kube-controller-manager-ip-172-31-24-47" May 17 00:05:58.688360 kubelet[2871]: I0517 00:05:58.686871 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f026bf96ec566b6163363ff6c4705c42-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-47\" (UID: \"f026bf96ec566b6163363ff6c4705c42\") " pod="kube-system/kube-controller-manager-ip-172-31-24-47" May 17 00:05:58.688360 kubelet[2871]: I0517 00:05:58.686904 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f026bf96ec566b6163363ff6c4705c42-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-47\" (UID: \"f026bf96ec566b6163363ff6c4705c42\") " pod="kube-system/kube-controller-manager-ip-172-31-24-47" May 17 00:05:58.688360 kubelet[2871]: I0517 00:05:58.686939 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f026bf96ec566b6163363ff6c4705c42-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-47\" (UID: \"f026bf96ec566b6163363ff6c4705c42\") " pod="kube-system/kube-controller-manager-ip-172-31-24-47" May 17 00:05:58.688360 kubelet[2871]: I0517 00:05:58.686982 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/8de569161e547d54abedb751c4103d95-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-47\" (UID: \"8de569161e547d54abedb751c4103d95\") " pod="kube-system/kube-scheduler-ip-172-31-24-47" May 17 00:05:58.690681 kubelet[2871]: E0517 00:05:58.690611 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-47?timeout=10s\": dial tcp 172.31.24.47:6443: connect: connection refused" interval="400ms" May 17 00:05:58.696045 kubelet[2871]: E0517 00:05:58.695984 2871 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-47\" not found" node="ip-172-31-24-47" May 17 00:05:58.699612 systemd[1]: Created slice kubepods-burstable-pod8de569161e547d54abedb751c4103d95.slice - libcontainer container kubepods-burstable-pod8de569161e547d54abedb751c4103d95.slice. May 17 00:05:58.703273 kubelet[2871]: I0517 00:05:58.702524 2871 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-47" May 17 00:05:58.703273 kubelet[2871]: E0517 00:05:58.703024 2871 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.47:6443/api/v1/nodes\": dial tcp 172.31.24.47:6443: connect: connection refused" node="ip-172-31-24-47" May 17 00:05:58.705294 kubelet[2871]: E0517 00:05:58.705218 2871 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-47\" not found" node="ip-172-31-24-47" May 17 00:05:58.905224 kubelet[2871]: I0517 00:05:58.905164 2871 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-47" May 17 00:05:58.905716 kubelet[2871]: E0517 00:05:58.905671 2871 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.47:6443/api/v1/nodes\": dial tcp 172.31.24.47:6443: connect: connection refused" 
node="ip-172-31-24-47" May 17 00:05:58.977417 containerd[2020]: time="2025-05-17T00:05:58.977319546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-47,Uid:739792d28ee93c335a757a3699d7a655,Namespace:kube-system,Attempt:0,}" May 17 00:05:58.997885 containerd[2020]: time="2025-05-17T00:05:58.997834687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-47,Uid:f026bf96ec566b6163363ff6c4705c42,Namespace:kube-system,Attempt:0,}" May 17 00:05:59.006903 containerd[2020]: time="2025-05-17T00:05:59.006770787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-47,Uid:8de569161e547d54abedb751c4103d95,Namespace:kube-system,Attempt:0,}" May 17 00:05:59.091477 kubelet[2871]: E0517 00:05:59.091394 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-47?timeout=10s\": dial tcp 172.31.24.47:6443: connect: connection refused" interval="800ms" May 17 00:05:59.285002 kubelet[2871]: W0517 00:05:59.284775 2871 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-47&limit=500&resourceVersion=0": dial tcp 172.31.24.47:6443: connect: connection refused May 17 00:05:59.285002 kubelet[2871]: E0517 00:05:59.284867 2871 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.24.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-47&limit=500&resourceVersion=0\": dial tcp 172.31.24.47:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:59.308943 kubelet[2871]: I0517 00:05:59.308376 2871 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-47" May 17 00:05:59.308943 kubelet[2871]: 
E0517 00:05:59.308861 2871 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.47:6443/api/v1/nodes\": dial tcp 172.31.24.47:6443: connect: connection refused" node="ip-172-31-24-47" May 17 00:05:59.319655 kubelet[2871]: W0517 00:05:59.319565 2871 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.47:6443: connect: connection refused May 17 00:05:59.319802 kubelet[2871]: E0517 00:05:59.319665 2871 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.24.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.47:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:59.530166 kubelet[2871]: W0517 00:05:59.530074 2871 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.24.47:6443: connect: connection refused May 17 00:05:59.530750 kubelet[2871]: E0517 00:05:59.530176 2871 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.24.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.47:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:59.554300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1479179556.mount: Deactivated successfully. 
May 17 00:05:59.572505 containerd[2020]: time="2025-05-17T00:05:59.572412461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:05:59.574637 containerd[2020]: time="2025-05-17T00:05:59.574566821Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:05:59.576538 containerd[2020]: time="2025-05-17T00:05:59.576446825Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 17 00:05:59.578840 containerd[2020]: time="2025-05-17T00:05:59.578784209Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:05:59.580622 containerd[2020]: time="2025-05-17T00:05:59.580574909Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:05:59.583824 containerd[2020]: time="2025-05-17T00:05:59.583346333Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:05:59.587044 containerd[2020]: time="2025-05-17T00:05:59.586950197Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:05:59.595507 containerd[2020]: time="2025-05-17T00:05:59.595433058Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 597.300915ms" May 17 00:05:59.599309 containerd[2020]: time="2025-05-17T00:05:59.599177154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:05:59.605209 containerd[2020]: time="2025-05-17T00:05:59.605130294Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 627.698008ms" May 17 00:05:59.645129 containerd[2020]: time="2025-05-17T00:05:59.643555506Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 636.667083ms" May 17 00:05:59.758128 update_engine[2004]: I20250517 00:05:59.758033 2004 update_attempter.cc:509] Updating boot flags... 
May 17 00:05:59.858646 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (2941) May 17 00:05:59.887955 kubelet[2871]: W0517 00:05:59.887510 2871 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.47:6443: connect: connection refused May 17 00:05:59.887955 kubelet[2871]: E0517 00:05:59.887590 2871 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.24.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.47:6443: connect: connection refused" logger="UnhandledError" May 17 00:05:59.893088 kubelet[2871]: E0517 00:05:59.892460 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-47?timeout=10s\": dial tcp 172.31.24.47:6443: connect: connection refused" interval="1.6s" May 17 00:05:59.897389 containerd[2020]: time="2025-05-17T00:05:59.897218815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:05:59.897389 containerd[2020]: time="2025-05-17T00:05:59.897349351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:05:59.903197 containerd[2020]: time="2025-05-17T00:05:59.900366715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:59.903387 containerd[2020]: time="2025-05-17T00:05:59.902909851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:59.912505 containerd[2020]: time="2025-05-17T00:05:59.912304519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:05:59.912505 containerd[2020]: time="2025-05-17T00:05:59.912437047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:05:59.915047 containerd[2020]: time="2025-05-17T00:05:59.914849431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:59.917105 containerd[2020]: time="2025-05-17T00:05:59.916009507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:05:59.917105 containerd[2020]: time="2025-05-17T00:05:59.916119823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:05:59.917105 containerd[2020]: time="2025-05-17T00:05:59.916158439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:59.917105 containerd[2020]: time="2025-05-17T00:05:59.916338679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:59.918956 containerd[2020]: time="2025-05-17T00:05:59.918556411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:05:59.963155 systemd[1]: Started cri-containerd-7507eaf027a5d80c0c45a448bae0ee0adc8967a01aa8f5c714dc3c44a045d838.scope - libcontainer container 7507eaf027a5d80c0c45a448bae0ee0adc8967a01aa8f5c714dc3c44a045d838. 
May 17 00:06:00.022869 systemd[1]: Started cri-containerd-181b11e75e10838f17d2ec420fa6f8a0bd663cbd7c9e2404a4fb0cc6ff68af70.scope - libcontainer container 181b11e75e10838f17d2ec420fa6f8a0bd663cbd7c9e2404a4fb0cc6ff68af70. May 17 00:06:00.038630 systemd[1]: Started cri-containerd-96a999c5305e674f672b66d358429ef2f0d849cbc1aa681936acd186b43df642.scope - libcontainer container 96a999c5305e674f672b66d358429ef2f0d849cbc1aa681936acd186b43df642. May 17 00:06:00.114290 kubelet[2871]: I0517 00:06:00.113914 2871 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-47" May 17 00:06:00.114556 kubelet[2871]: E0517 00:06:00.114459 2871 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.47:6443/api/v1/nodes\": dial tcp 172.31.24.47:6443: connect: connection refused" node="ip-172-31-24-47" May 17 00:06:00.294281 containerd[2020]: time="2025-05-17T00:06:00.294064793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-47,Uid:739792d28ee93c335a757a3699d7a655,Namespace:kube-system,Attempt:0,} returns sandbox id \"7507eaf027a5d80c0c45a448bae0ee0adc8967a01aa8f5c714dc3c44a045d838\"" May 17 00:06:00.305919 containerd[2020]: time="2025-05-17T00:06:00.305864957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-47,Uid:8de569161e547d54abedb751c4103d95,Namespace:kube-system,Attempt:0,} returns sandbox id \"96a999c5305e674f672b66d358429ef2f0d849cbc1aa681936acd186b43df642\"" May 17 00:06:00.309470 containerd[2020]: time="2025-05-17T00:06:00.309406421Z" level=info msg="CreateContainer within sandbox \"7507eaf027a5d80c0c45a448bae0ee0adc8967a01aa8f5c714dc3c44a045d838\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:06:00.321038 containerd[2020]: time="2025-05-17T00:06:00.320969741Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-47,Uid:f026bf96ec566b6163363ff6c4705c42,Namespace:kube-system,Attempt:0,} returns sandbox id \"181b11e75e10838f17d2ec420fa6f8a0bd663cbd7c9e2404a4fb0cc6ff68af70\"" May 17 00:06:00.330394 containerd[2020]: time="2025-05-17T00:06:00.330341489Z" level=info msg="CreateContainer within sandbox \"96a999c5305e674f672b66d358429ef2f0d849cbc1aa681936acd186b43df642\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:06:00.333003 containerd[2020]: time="2025-05-17T00:06:00.332925269Z" level=info msg="CreateContainer within sandbox \"181b11e75e10838f17d2ec420fa6f8a0bd663cbd7c9e2404a4fb0cc6ff68af70\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:06:00.396418 containerd[2020]: time="2025-05-17T00:06:00.396090377Z" level=info msg="CreateContainer within sandbox \"7507eaf027a5d80c0c45a448bae0ee0adc8967a01aa8f5c714dc3c44a045d838\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c635c532940ea9bf26efaefd085139384ed1542b8e850311cd665615a3660c10\"" May 17 00:06:00.397103 containerd[2020]: time="2025-05-17T00:06:00.397055117Z" level=info msg="StartContainer for \"c635c532940ea9bf26efaefd085139384ed1542b8e850311cd665615a3660c10\"" May 17 00:06:00.415735 containerd[2020]: time="2025-05-17T00:06:00.414530478Z" level=info msg="CreateContainer within sandbox \"96a999c5305e674f672b66d358429ef2f0d849cbc1aa681936acd186b43df642\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3c02a2f74620eefc52847ade3e689670cda72bfc9b2d9a0d1043fcfb0b418a8c\"" May 17 00:06:00.415735 containerd[2020]: time="2025-05-17T00:06:00.415404354Z" level=info msg="StartContainer for \"3c02a2f74620eefc52847ade3e689670cda72bfc9b2d9a0d1043fcfb0b418a8c\"" May 17 00:06:00.435311 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (2942) May 17 00:06:00.458122 containerd[2020]: 
time="2025-05-17T00:06:00.458053122Z" level=info msg="CreateContainer within sandbox \"181b11e75e10838f17d2ec420fa6f8a0bd663cbd7c9e2404a4fb0cc6ff68af70\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"94123a6bc28e8ca7b65f931c9e3d8f27f989a2dc6c5e4e93f6fe0b10d7546dbb\"" May 17 00:06:00.458985 containerd[2020]: time="2025-05-17T00:06:00.458938710Z" level=info msg="StartContainer for \"94123a6bc28e8ca7b65f931c9e3d8f27f989a2dc6c5e4e93f6fe0b10d7546dbb\"" May 17 00:06:00.525674 systemd[1]: Started cri-containerd-c635c532940ea9bf26efaefd085139384ed1542b8e850311cd665615a3660c10.scope - libcontainer container c635c532940ea9bf26efaefd085139384ed1542b8e850311cd665615a3660c10. May 17 00:06:00.621048 kubelet[2871]: E0517 00:06:00.620012 2871 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.24.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.47:6443: connect: connection refused" logger="UnhandledError" May 17 00:06:00.655633 systemd[1]: Started cri-containerd-3c02a2f74620eefc52847ade3e689670cda72bfc9b2d9a0d1043fcfb0b418a8c.scope - libcontainer container 3c02a2f74620eefc52847ade3e689670cda72bfc9b2d9a0d1043fcfb0b418a8c. May 17 00:06:00.775654 systemd[1]: Started cri-containerd-94123a6bc28e8ca7b65f931c9e3d8f27f989a2dc6c5e4e93f6fe0b10d7546dbb.scope - libcontainer container 94123a6bc28e8ca7b65f931c9e3d8f27f989a2dc6c5e4e93f6fe0b10d7546dbb. 
May 17 00:06:00.809566 containerd[2020]: time="2025-05-17T00:06:00.809184164Z" level=info msg="StartContainer for \"c635c532940ea9bf26efaefd085139384ed1542b8e850311cd665615a3660c10\" returns successfully" May 17 00:06:01.026969 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (2942) May 17 00:06:01.036979 containerd[2020]: time="2025-05-17T00:06:01.036791873Z" level=info msg="StartContainer for \"3c02a2f74620eefc52847ade3e689670cda72bfc9b2d9a0d1043fcfb0b418a8c\" returns successfully" May 17 00:06:01.105902 containerd[2020]: time="2025-05-17T00:06:01.105827021Z" level=info msg="StartContainer for \"94123a6bc28e8ca7b65f931c9e3d8f27f989a2dc6c5e4e93f6fe0b10d7546dbb\" returns successfully" May 17 00:06:01.658498 kubelet[2871]: E0517 00:06:01.658233 2871 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-47\" not found" node="ip-172-31-24-47" May 17 00:06:01.664970 kubelet[2871]: E0517 00:06:01.664915 2871 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-47\" not found" node="ip-172-31-24-47" May 17 00:06:01.668542 kubelet[2871]: E0517 00:06:01.668492 2871 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-47\" not found" node="ip-172-31-24-47" May 17 00:06:01.717613 kubelet[2871]: I0517 00:06:01.717567 2871 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-47" May 17 00:06:02.672287 kubelet[2871]: E0517 00:06:02.671241 2871 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-47\" not found" node="ip-172-31-24-47" May 17 00:06:02.672287 kubelet[2871]: E0517 00:06:02.671717 2871 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-47\" not 
found" node="ip-172-31-24-47" May 17 00:06:02.672287 kubelet[2871]: E0517 00:06:02.671783 2871 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-47\" not found" node="ip-172-31-24-47" May 17 00:06:03.675695 kubelet[2871]: E0517 00:06:03.675640 2871 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-47\" not found" node="ip-172-31-24-47" May 17 00:06:03.676304 kubelet[2871]: E0517 00:06:03.676183 2871 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-47\" not found" node="ip-172-31-24-47" May 17 00:06:04.947346 kubelet[2871]: E0517 00:06:04.947269 2871 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-47\" not found" node="ip-172-31-24-47" May 17 00:06:05.089428 kubelet[2871]: I0517 00:06:05.089109 2871 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-47" May 17 00:06:05.185023 kubelet[2871]: I0517 00:06:05.184414 2871 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-47" May 17 00:06:05.200048 kubelet[2871]: E0517 00:06:05.199605 2871 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-47\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-47" May 17 00:06:05.200048 kubelet[2871]: I0517 00:06:05.199655 2871 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-47" May 17 00:06:05.209905 kubelet[2871]: E0517 00:06:05.209596 2871 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-47\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-47" 
May 17 00:06:05.209905 kubelet[2871]: I0517 00:06:05.209643 2871 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-47" May 17 00:06:05.214164 kubelet[2871]: E0517 00:06:05.214083 2871 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-47\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-47" May 17 00:06:05.463726 kubelet[2871]: I0517 00:06:05.463670 2871 apiserver.go:52] "Watching apiserver" May 17 00:06:05.486413 kubelet[2871]: I0517 00:06:05.486322 2871 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:06:07.369135 kubelet[2871]: I0517 00:06:07.368767 2871 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-47" May 17 00:06:07.537995 systemd[1]: Reloading requested from client PID 3412 ('systemctl') (unit session-9.scope)... May 17 00:06:07.538029 systemd[1]: Reloading... May 17 00:06:07.724555 zram_generator::config[3461]: No configuration found. May 17 00:06:07.938734 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:06:08.164729 systemd[1]: Reloading finished in 626 ms. May 17 00:06:08.252300 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:06:08.272026 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:06:08.273756 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:06:08.273842 systemd[1]: kubelet.service: Consumed 2.195s CPU time, 130.3M memory peak, 0B memory swap peak. May 17 00:06:08.288662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
May 17 00:06:08.634543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:06:08.652179 (kubelet)[3512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:06:08.747329 kubelet[3512]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:06:08.747329 kubelet[3512]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 17 00:06:08.747329 kubelet[3512]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:06:08.748007 kubelet[3512]: I0517 00:06:08.747345 3512 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:06:08.763982 kubelet[3512]: I0517 00:06:08.763923 3512 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 17 00:06:08.763982 kubelet[3512]: I0517 00:06:08.763971 3512 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:06:08.765569 kubelet[3512]: I0517 00:06:08.764497 3512 server.go:954] "Client rotation is on, will bootstrap in background" May 17 00:06:08.775593 kubelet[3512]: I0517 00:06:08.775531 3512 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 17 00:06:08.783324 kubelet[3512]: I0517 00:06:08.782018 3512 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:06:08.791419 kubelet[3512]: E0517 00:06:08.790374 3512 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:06:08.791419 kubelet[3512]: I0517 00:06:08.790430 3512 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:06:08.796585 kubelet[3512]: I0517 00:06:08.796546 3512 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:06:08.798133 kubelet[3512]: I0517 00:06:08.798073 3512 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:06:08.798637 kubelet[3512]: I0517 00:06:08.798348 3512 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-24-47","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:06:08.799271 kubelet[3512]: I0517 00:06:08.798857 3512 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:06:08.799271 kubelet[3512]: I0517 00:06:08.798886 3512 container_manager_linux.go:304] "Creating device plugin manager" May 17 00:06:08.799271 kubelet[3512]: I0517 00:06:08.798956 3512 state_mem.go:36] "Initialized new in-memory state store" May 17 00:06:08.799518 kubelet[3512]: I0517 00:06:08.799244 3512 kubelet.go:446] 
"Attempting to sync node with API server" May 17 00:06:08.799636 kubelet[3512]: I0517 00:06:08.799615 3512 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:06:08.799752 kubelet[3512]: I0517 00:06:08.799732 3512 kubelet.go:352] "Adding apiserver pod source" May 17 00:06:08.800068 kubelet[3512]: I0517 00:06:08.799845 3512 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:06:08.803389 kubelet[3512]: I0517 00:06:08.803215 3512 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:06:08.804908 kubelet[3512]: I0517 00:06:08.804838 3512 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:06:08.806233 kubelet[3512]: I0517 00:06:08.806201 3512 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:06:08.809317 kubelet[3512]: I0517 00:06:08.806463 3512 server.go:1287] "Started kubelet" May 17 00:06:08.814962 sudo[3527]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:06:08.817760 sudo[3527]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 17 00:06:08.824210 kubelet[3512]: I0517 00:06:08.819444 3512 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:06:08.826135 kubelet[3512]: I0517 00:06:08.826070 3512 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:06:08.831731 kubelet[3512]: I0517 00:06:08.831673 3512 server.go:479] "Adding debug handlers to kubelet server" May 17 00:06:08.837457 kubelet[3512]: I0517 00:06:08.837411 3512 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:06:08.837927 kubelet[3512]: E0517 00:06:08.837874 3512 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-47\" not found" May 17 00:06:08.838720 kubelet[3512]: I0517 00:06:08.838462 3512 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:06:08.838817 kubelet[3512]: I0517 00:06:08.838734 3512 reconciler.go:26] "Reconciler: start to sync state" May 17 00:06:08.864467 kubelet[3512]: I0517 00:06:08.864208 3512 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:06:08.864974 kubelet[3512]: I0517 00:06:08.864622 3512 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:06:08.865075 kubelet[3512]: I0517 00:06:08.865009 3512 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:06:08.910271 kubelet[3512]: I0517 00:06:08.910023 3512 factory.go:221] Registration of the systemd container factory successfully May 17 00:06:08.913343 kubelet[3512]: I0517 00:06:08.910226 3512 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:06:08.943348 kubelet[3512]: I0517 00:06:08.943278 3512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:06:08.948652 kubelet[3512]: E0517 00:06:08.948583 3512 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-47\" not found" May 17 00:06:08.949982 kubelet[3512]: I0517 00:06:08.949544 3512 factory.go:221] Registration of the containerd container factory successfully May 17 00:06:08.952650 kubelet[3512]: I0517 00:06:08.952578 3512 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:06:08.952650 kubelet[3512]: I0517 00:06:08.952634 3512 status_manager.go:227] "Starting to sync pod status with apiserver" May 17 00:06:08.952864 kubelet[3512]: I0517 00:06:08.952681 3512 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 17 00:06:08.952864 kubelet[3512]: I0517 00:06:08.952698 3512 kubelet.go:2382] "Starting kubelet main sync loop" May 17 00:06:08.952864 kubelet[3512]: E0517 00:06:08.952778 3512 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:06:08.993542 kubelet[3512]: E0517 00:06:08.993359 3512 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:06:09.053208 kubelet[3512]: E0517 00:06:09.053066 3512 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 17 00:06:09.107341 kubelet[3512]: I0517 00:06:09.106630 3512 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:06:09.107341 kubelet[3512]: I0517 00:06:09.106658 3512 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:06:09.107341 kubelet[3512]: I0517 00:06:09.106691 3512 state_mem.go:36] "Initialized new in-memory state store" May 17 00:06:09.107341 kubelet[3512]: I0517 00:06:09.106995 3512 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:06:09.107341 kubelet[3512]: I0517 00:06:09.107018 3512 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:06:09.107341 kubelet[3512]: I0517 00:06:09.107050 3512 policy_none.go:49] "None policy: Start" May 17 00:06:09.107341 kubelet[3512]: I0517 00:06:09.107067 3512 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:06:09.107341 kubelet[3512]: I0517 00:06:09.107088 3512 state_mem.go:35] 
"Initializing new in-memory state store"
May 17 00:06:09.108590 kubelet[3512]: I0517 00:06:09.108054 3512 state_mem.go:75] "Updated machine memory state"
May 17 00:06:09.117629 kubelet[3512]: I0517 00:06:09.117587 3512 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 00:06:09.122593 kubelet[3512]: I0517 00:06:09.120539 3512 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:06:09.122593 kubelet[3512]: I0517 00:06:09.120577 3512 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:06:09.125937 kubelet[3512]: I0517 00:06:09.125898 3512 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:06:09.138519 kubelet[3512]: E0517 00:06:09.138478 3512 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 17 00:06:09.254292 kubelet[3512]: I0517 00:06:09.253974 3512 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-47"
May 17 00:06:09.257502 kubelet[3512]: I0517 00:06:09.256875 3512 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-47"
May 17 00:06:09.257502 kubelet[3512]: I0517 00:06:09.257272 3512 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-47"
May 17 00:06:09.263338 kubelet[3512]: I0517 00:06:09.261972 3512 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-47"
May 17 00:06:09.285433 kubelet[3512]: E0517 00:06:09.285367 3512 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-47\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-24-47"
May 17 00:06:09.292565 kubelet[3512]: I0517 00:06:09.292497 3512 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-47"
May 17 00:06:09.292721 kubelet[3512]: I0517 00:06:09.292699 3512 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-47"
May 17 00:06:09.354014 kubelet[3512]: I0517 00:06:09.353533 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f026bf96ec566b6163363ff6c4705c42-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-47\" (UID: \"f026bf96ec566b6163363ff6c4705c42\") " pod="kube-system/kube-controller-manager-ip-172-31-24-47"
May 17 00:06:09.354014 kubelet[3512]: I0517 00:06:09.353613 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f026bf96ec566b6163363ff6c4705c42-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-47\" (UID: \"f026bf96ec566b6163363ff6c4705c42\") " pod="kube-system/kube-controller-manager-ip-172-31-24-47"
May 17 00:06:09.354014 kubelet[3512]: I0517 00:06:09.353662 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de569161e547d54abedb751c4103d95-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-47\" (UID: \"8de569161e547d54abedb751c4103d95\") " pod="kube-system/kube-scheduler-ip-172-31-24-47"
May 17 00:06:09.354014 kubelet[3512]: I0517 00:06:09.353699 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/739792d28ee93c335a757a3699d7a655-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-47\" (UID: \"739792d28ee93c335a757a3699d7a655\") " pod="kube-system/kube-apiserver-ip-172-31-24-47"
May 17 00:06:09.354014 kubelet[3512]: I0517 00:06:09.353738 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/739792d28ee93c335a757a3699d7a655-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-47\" (UID: \"739792d28ee93c335a757a3699d7a655\") " pod="kube-system/kube-apiserver-ip-172-31-24-47"
May 17 00:06:09.354432 kubelet[3512]: I0517 00:06:09.353772 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f026bf96ec566b6163363ff6c4705c42-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-47\" (UID: \"f026bf96ec566b6163363ff6c4705c42\") " pod="kube-system/kube-controller-manager-ip-172-31-24-47"
May 17 00:06:09.354432 kubelet[3512]: I0517 00:06:09.353806 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f026bf96ec566b6163363ff6c4705c42-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-47\" (UID: \"f026bf96ec566b6163363ff6c4705c42\") " pod="kube-system/kube-controller-manager-ip-172-31-24-47"
May 17 00:06:09.354432 kubelet[3512]: I0517 00:06:09.353841 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/739792d28ee93c335a757a3699d7a655-ca-certs\") pod \"kube-apiserver-ip-172-31-24-47\" (UID: \"739792d28ee93c335a757a3699d7a655\") " pod="kube-system/kube-apiserver-ip-172-31-24-47"
May 17 00:06:09.354432 kubelet[3512]: I0517 00:06:09.353881 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f026bf96ec566b6163363ff6c4705c42-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-47\" (UID: \"f026bf96ec566b6163363ff6c4705c42\") " pod="kube-system/kube-controller-manager-ip-172-31-24-47"
May 17 00:06:09.753535 sudo[3527]: pam_unix(sudo:session): session closed for user root
May 17 00:06:09.802123 kubelet[3512]: I0517 00:06:09.801767 3512 apiserver.go:52] "Watching apiserver"
May 17 00:06:09.839088 kubelet[3512]: I0517 00:06:09.839005 3512 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 17 00:06:10.001839 kubelet[3512]: I0517 00:06:10.001186 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-47" podStartSLOduration=1.001164001 podStartE2EDuration="1.001164001s" podCreationTimestamp="2025-05-17 00:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:06:10.000110785 +0000 UTC m=+1.340633791" watchObservedRunningTime="2025-05-17 00:06:10.001164001 +0000 UTC m=+1.341686995"
May 17 00:06:10.001839 kubelet[3512]: I0517 00:06:10.001368 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-47" podStartSLOduration=3.001358461 podStartE2EDuration="3.001358461s" podCreationTimestamp="2025-05-17 00:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:06:09.985772801 +0000 UTC m=+1.326295795" watchObservedRunningTime="2025-05-17 00:06:10.001358461 +0000 UTC m=+1.341881443"
May 17 00:06:10.027431 kubelet[3512]: I0517 00:06:10.025988 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-47" podStartSLOduration=1.025966237 podStartE2EDuration="1.025966237s" podCreationTimestamp="2025-05-17 00:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:06:10.022053097 +0000 UTC m=+1.362576115" watchObservedRunningTime="2025-05-17 00:06:10.025966237 +0000 UTC m=+1.366489219"
May 17 00:06:11.925025 sudo[2370]: pam_unix(sudo:session): session closed for user root
May 17 00:06:11.951317 sshd[2367]: pam_unix(sshd:session): session closed for user core
May 17 00:06:11.957904 systemd[1]: sshd@8-172.31.24.47:22-139.178.89.65:58944.service: Deactivated successfully.
May 17 00:06:11.961579 systemd[1]: session-9.scope: Deactivated successfully.
May 17 00:06:11.961930 systemd[1]: session-9.scope: Consumed 10.107s CPU time, 151.0M memory peak, 0B memory swap peak.
May 17 00:06:11.964379 systemd-logind[2003]: Session 9 logged out. Waiting for processes to exit.
May 17 00:06:11.966961 systemd-logind[2003]: Removed session 9.
May 17 00:06:12.691779 kubelet[3512]: I0517 00:06:12.691723 3512 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 17 00:06:12.692435 containerd[2020]: time="2025-05-17T00:06:12.692224387Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 17 00:06:12.692900 kubelet[3512]: I0517 00:06:12.692726 3512 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 17 00:06:13.634652 systemd[1]: Created slice kubepods-besteffort-podb908e30e_22d1_43ad_baf3_d1e59b573b1d.slice - libcontainer container kubepods-besteffort-podb908e30e_22d1_43ad_baf3_d1e59b573b1d.slice.
May 17 00:06:13.666486 systemd[1]: Created slice kubepods-burstable-pode7a15507_293b_4b64_9a85_6b7691d993b0.slice - libcontainer container kubepods-burstable-pode7a15507_293b_4b64_9a85_6b7691d993b0.slice.
May 17 00:06:13.669961 kubelet[3512]: W0517 00:06:13.668163 3512 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-24-47" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-47' and this object
May 17 00:06:13.669961 kubelet[3512]: E0517 00:06:13.668236 3512 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-24-47\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-24-47' and this object" logger="UnhandledError"
May 17 00:06:13.684305 kubelet[3512]: I0517 00:06:13.684156 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-lib-modules\") pod \"cilium-fhdjf\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") " pod="kube-system/cilium-fhdjf"
May 17 00:06:13.684507 kubelet[3512]: I0517 00:06:13.684335 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-host-proc-sys-net\") pod \"cilium-fhdjf\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") " pod="kube-system/cilium-fhdjf"
May 17 00:06:13.684507 kubelet[3512]: I0517 00:06:13.684423 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmsws\" (UniqueName: \"kubernetes.io/projected/e7a15507-293b-4b64-9a85-6b7691d993b0-kube-api-access-vmsws\") pod \"cilium-fhdjf\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") " pod="kube-system/cilium-fhdjf"
May 17 00:06:13.684627 kubelet[3512]: I0517 00:06:13.684548 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-host-proc-sys-kernel\") pod \"cilium-fhdjf\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") " pod="kube-system/cilium-fhdjf"
May 17 00:06:13.684688 kubelet[3512]: I0517 00:06:13.684630 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e7a15507-293b-4b64-9a85-6b7691d993b0-hubble-tls\") pod \"cilium-fhdjf\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") " pod="kube-system/cilium-fhdjf"
May 17 00:06:13.684748 kubelet[3512]: I0517 00:06:13.684669 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-cilium-cgroup\") pod \"cilium-fhdjf\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") " pod="kube-system/cilium-fhdjf"
May 17 00:06:13.684842 kubelet[3512]: I0517 00:06:13.684750 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-cilium-run\") pod \"cilium-fhdjf\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") " pod="kube-system/cilium-fhdjf"
May 17 00:06:13.684905 kubelet[3512]: I0517 00:06:13.684834 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-cni-path\") pod \"cilium-fhdjf\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") " pod="kube-system/cilium-fhdjf"
May 17 00:06:13.684964 kubelet[3512]: I0517 00:06:13.684916 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b908e30e-22d1-43ad-baf3-d1e59b573b1d-lib-modules\") pod \"kube-proxy-xjmqf\" (UID: \"b908e30e-22d1-43ad-baf3-d1e59b573b1d\") " pod="kube-system/kube-proxy-xjmqf"
May 17 00:06:13.685016 kubelet[3512]: I0517 00:06:13.684992 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-hostproc\") pod \"cilium-fhdjf\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") " pod="kube-system/cilium-fhdjf"
May 17 00:06:13.685157 kubelet[3512]: I0517 00:06:13.685064 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7a15507-293b-4b64-9a85-6b7691d993b0-cilium-config-path\") pod \"cilium-fhdjf\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") " pod="kube-system/cilium-fhdjf"
May 17 00:06:13.685286 kubelet[3512]: I0517 00:06:13.685189 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-bpf-maps\") pod \"cilium-fhdjf\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") " pod="kube-system/cilium-fhdjf"
May 17 00:06:13.685390 kubelet[3512]: I0517 00:06:13.685328 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-etc-cni-netd\") pod \"cilium-fhdjf\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") " pod="kube-system/cilium-fhdjf"
May 17 00:06:13.687365 kubelet[3512]: I0517 00:06:13.687289 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-xtables-lock\") pod \"cilium-fhdjf\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") " pod="kube-system/cilium-fhdjf"
May 17 00:06:13.687653 kubelet[3512]: I0517 00:06:13.687418 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b908e30e-22d1-43ad-baf3-d1e59b573b1d-kube-proxy\") pod \"kube-proxy-xjmqf\" (UID: \"b908e30e-22d1-43ad-baf3-d1e59b573b1d\") " pod="kube-system/kube-proxy-xjmqf"
May 17 00:06:13.687653 kubelet[3512]: I0517 00:06:13.687464 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e7a15507-293b-4b64-9a85-6b7691d993b0-clustermesh-secrets\") pod \"cilium-fhdjf\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") " pod="kube-system/cilium-fhdjf"
May 17 00:06:13.687653 kubelet[3512]: I0517 00:06:13.687520 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b908e30e-22d1-43ad-baf3-d1e59b573b1d-xtables-lock\") pod \"kube-proxy-xjmqf\" (UID: \"b908e30e-22d1-43ad-baf3-d1e59b573b1d\") " pod="kube-system/kube-proxy-xjmqf"
May 17 00:06:13.687653 kubelet[3512]: I0517 00:06:13.687564 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv5fh\" (UniqueName: \"kubernetes.io/projected/b908e30e-22d1-43ad-baf3-d1e59b573b1d-kube-api-access-sv5fh\") pod \"kube-proxy-xjmqf\" (UID: \"b908e30e-22d1-43ad-baf3-d1e59b573b1d\") " pod="kube-system/kube-proxy-xjmqf"
May 17 00:06:13.843409 systemd[1]: Created slice kubepods-besteffort-pod673ed8d9_444e_46ba_b658_ec110b324f30.slice - libcontainer container kubepods-besteffort-pod673ed8d9_444e_46ba_b658_ec110b324f30.slice.
May 17 00:06:13.890378 kubelet[3512]: I0517 00:06:13.889534 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkdth\" (UniqueName: \"kubernetes.io/projected/673ed8d9-444e-46ba-b658-ec110b324f30-kube-api-access-jkdth\") pod \"cilium-operator-6c4d7847fc-lj5g4\" (UID: \"673ed8d9-444e-46ba-b658-ec110b324f30\") " pod="kube-system/cilium-operator-6c4d7847fc-lj5g4"
May 17 00:06:13.890378 kubelet[3512]: I0517 00:06:13.889614 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/673ed8d9-444e-46ba-b658-ec110b324f30-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-lj5g4\" (UID: \"673ed8d9-444e-46ba-b658-ec110b324f30\") " pod="kube-system/cilium-operator-6c4d7847fc-lj5g4"
May 17 00:06:13.958148 containerd[2020]: time="2025-05-17T00:06:13.958002201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xjmqf,Uid:b908e30e-22d1-43ad-baf3-d1e59b573b1d,Namespace:kube-system,Attempt:0,}"
May 17 00:06:14.011644 containerd[2020]: time="2025-05-17T00:06:14.009866237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:06:14.011644 containerd[2020]: time="2025-05-17T00:06:14.011108837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:06:14.012420 containerd[2020]: time="2025-05-17T00:06:14.011192513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:06:14.012420 containerd[2020]: time="2025-05-17T00:06:14.011413241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:06:14.048605 systemd[1]: Started cri-containerd-07a821cd0d264315940b2224d59d08e827980f14d684fd427c4604e07993f39a.scope - libcontainer container 07a821cd0d264315940b2224d59d08e827980f14d684fd427c4604e07993f39a.
May 17 00:06:14.090730 containerd[2020]: time="2025-05-17T00:06:14.090667446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xjmqf,Uid:b908e30e-22d1-43ad-baf3-d1e59b573b1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"07a821cd0d264315940b2224d59d08e827980f14d684fd427c4604e07993f39a\""
May 17 00:06:14.099639 containerd[2020]: time="2025-05-17T00:06:14.099465294Z" level=info msg="CreateContainer within sandbox \"07a821cd0d264315940b2224d59d08e827980f14d684fd427c4604e07993f39a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:06:14.133698 containerd[2020]: time="2025-05-17T00:06:14.133625082Z" level=info msg="CreateContainer within sandbox \"07a821cd0d264315940b2224d59d08e827980f14d684fd427c4604e07993f39a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c6baf8c5da438ff80725dd106434670cb8a59a053725aab5314793892b85e94e\""
May 17 00:06:14.136136 containerd[2020]: time="2025-05-17T00:06:14.134589354Z" level=info msg="StartContainer for \"c6baf8c5da438ff80725dd106434670cb8a59a053725aab5314793892b85e94e\""
May 17 00:06:14.154727 containerd[2020]: time="2025-05-17T00:06:14.154558926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lj5g4,Uid:673ed8d9-444e-46ba-b658-ec110b324f30,Namespace:kube-system,Attempt:0,}"
May 17 00:06:14.181063 systemd[1]: Started cri-containerd-c6baf8c5da438ff80725dd106434670cb8a59a053725aab5314793892b85e94e.scope - libcontainer container c6baf8c5da438ff80725dd106434670cb8a59a053725aab5314793892b85e94e.
May 17 00:06:14.221052 containerd[2020]: time="2025-05-17T00:06:14.220796454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:06:14.223045 containerd[2020]: time="2025-05-17T00:06:14.222870666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:06:14.223193 containerd[2020]: time="2025-05-17T00:06:14.223042158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:06:14.223795 containerd[2020]: time="2025-05-17T00:06:14.223710282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:06:14.250430 containerd[2020]: time="2025-05-17T00:06:14.250374042Z" level=info msg="StartContainer for \"c6baf8c5da438ff80725dd106434670cb8a59a053725aab5314793892b85e94e\" returns successfully"
May 17 00:06:14.269619 systemd[1]: Started cri-containerd-8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7.scope - libcontainer container 8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7.
May 17 00:06:14.353406 containerd[2020]: time="2025-05-17T00:06:14.353179147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lj5g4,Uid:673ed8d9-444e-46ba-b658-ec110b324f30,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7\""
May 17 00:06:14.359162 containerd[2020]: time="2025-05-17T00:06:14.359082355Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 17 00:06:14.792282 kubelet[3512]: E0517 00:06:14.792196 3512 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
May 17 00:06:14.792437 kubelet[3512]: E0517 00:06:14.792351 3512 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7a15507-293b-4b64-9a85-6b7691d993b0-clustermesh-secrets podName:e7a15507-293b-4b64-9a85-6b7691d993b0 nodeName:}" failed. No retries permitted until 2025-05-17 00:06:15.292318405 +0000 UTC m=+6.632841387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/e7a15507-293b-4b64-9a85-6b7691d993b0-clustermesh-secrets") pod "cilium-fhdjf" (UID: "e7a15507-293b-4b64-9a85-6b7691d993b0") : failed to sync secret cache: timed out waiting for the condition
May 17 00:06:15.087151 kubelet[3512]: I0517 00:06:15.086965 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xjmqf" podStartSLOduration=2.086889246 podStartE2EDuration="2.086889246s" podCreationTimestamp="2025-05-17 00:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:06:15.086864262 +0000 UTC m=+6.427387256" watchObservedRunningTime="2025-05-17 00:06:15.086889246 +0000 UTC m=+6.427412228"
May 17 00:06:15.477436 containerd[2020]: time="2025-05-17T00:06:15.477367676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fhdjf,Uid:e7a15507-293b-4b64-9a85-6b7691d993b0,Namespace:kube-system,Attempt:0,}"
May 17 00:06:15.526171 containerd[2020]: time="2025-05-17T00:06:15.525428529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:06:15.526171 containerd[2020]: time="2025-05-17T00:06:15.525856377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:06:15.526171 containerd[2020]: time="2025-05-17T00:06:15.525946797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:06:15.526784 containerd[2020]: time="2025-05-17T00:06:15.526618257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:06:15.572591 systemd[1]: Started cri-containerd-760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5.scope - libcontainer container 760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5.
May 17 00:06:15.617986 containerd[2020]: time="2025-05-17T00:06:15.617781081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fhdjf,Uid:e7a15507-293b-4b64-9a85-6b7691d993b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\""
May 17 00:06:17.182152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240881952.mount: Deactivated successfully.
May 17 00:06:18.778083 containerd[2020]: time="2025-05-17T00:06:18.777997945Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:06:18.781118 containerd[2020]: time="2025-05-17T00:06:18.781061581Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17136657"
May 17 00:06:18.783413 containerd[2020]: time="2025-05-17T00:06:18.783328705Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:06:18.786205 containerd[2020]: time="2025-05-17T00:06:18.786011857Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.426851358s"
May 17 00:06:18.786205 containerd[2020]: time="2025-05-17T00:06:18.786069781Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 17 00:06:18.788895 containerd[2020]: time="2025-05-17T00:06:18.788790265Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 17 00:06:18.791997 containerd[2020]: time="2025-05-17T00:06:18.791909257Z" level=info msg="CreateContainer within sandbox \"8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 17 00:06:18.823206 containerd[2020]: time="2025-05-17T00:06:18.823085521Z" level=info msg="CreateContainer within sandbox \"8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3\""
May 17 00:06:18.824552 containerd[2020]: time="2025-05-17T00:06:18.824436037Z" level=info msg="StartContainer for \"03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3\""
May 17 00:06:18.878574 systemd[1]: Started cri-containerd-03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3.scope - libcontainer container 03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3.
May 17 00:06:18.929482 containerd[2020]: time="2025-05-17T00:06:18.929383406Z" level=info msg="StartContainer for \"03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3\" returns successfully"
May 17 00:06:24.277070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762297991.mount: Deactivated successfully.
May 17 00:06:26.912722 containerd[2020]: time="2025-05-17T00:06:26.912647769Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:06:26.916393 containerd[2020]: time="2025-05-17T00:06:26.916315065Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 17 00:06:26.919551 containerd[2020]: time="2025-05-17T00:06:26.919464477Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:06:26.924956 containerd[2020]: time="2025-05-17T00:06:26.924892101Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.135998624s"
May 17 00:06:26.925123 containerd[2020]: time="2025-05-17T00:06:26.924959925Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 17 00:06:26.933205 containerd[2020]: time="2025-05-17T00:06:26.932975961Z" level=info msg="CreateContainer within sandbox \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:06:26.961730 containerd[2020]: time="2025-05-17T00:06:26.961584717Z" level=info msg="CreateContainer within sandbox \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247\""
May 17 00:06:26.962808 containerd[2020]: time="2025-05-17T00:06:26.962513577Z" level=info msg="StartContainer for \"8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247\""
May 17 00:06:27.021594 systemd[1]: Started cri-containerd-8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247.scope - libcontainer container 8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247.
May 17 00:06:27.065264 containerd[2020]: time="2025-05-17T00:06:27.064655814Z" level=info msg="StartContainer for \"8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247\" returns successfully"
May 17 00:06:27.097525 systemd[1]: cri-containerd-8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247.scope: Deactivated successfully.
May 17 00:06:27.168519 kubelet[3512]: I0517 00:06:27.166153 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-lj5g4" podStartSLOduration=9.734352844 podStartE2EDuration="14.166132686s" podCreationTimestamp="2025-05-17 00:06:13 +0000 UTC" firstStartedPulling="2025-05-17 00:06:14.356097883 +0000 UTC m=+5.696620865" lastFinishedPulling="2025-05-17 00:06:18.787877725 +0000 UTC m=+10.128400707" observedRunningTime="2025-05-17 00:06:19.137765435 +0000 UTC m=+10.478288441" watchObservedRunningTime="2025-05-17 00:06:27.166132686 +0000 UTC m=+18.506655668"
May 17 00:06:27.953756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247-rootfs.mount: Deactivated successfully.
May 17 00:06:28.105013 containerd[2020]: time="2025-05-17T00:06:28.104901247Z" level=info msg="shim disconnected" id=8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247 namespace=k8s.io
May 17 00:06:28.105631 containerd[2020]: time="2025-05-17T00:06:28.105037075Z" level=warning msg="cleaning up after shim disconnected" id=8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247 namespace=k8s.io
May 17 00:06:28.105631 containerd[2020]: time="2025-05-17T00:06:28.105061123Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:06:28.143695 containerd[2020]: time="2025-05-17T00:06:28.143613031Z" level=info msg="CreateContainer within sandbox \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:06:28.176704 containerd[2020]: time="2025-05-17T00:06:28.176142307Z" level=info msg="CreateContainer within sandbox \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0\""
May 17 00:06:28.179556 containerd[2020]: time="2025-05-17T00:06:28.177907267Z" level=info msg="StartContainer for \"2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0\""
May 17 00:06:28.245600 systemd[1]: Started cri-containerd-2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0.scope - libcontainer container 2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0.
May 17 00:06:28.294114 containerd[2020]: time="2025-05-17T00:06:28.294040436Z" level=info msg="StartContainer for \"2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0\" returns successfully"
May 17 00:06:28.316110 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:06:28.316625 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:06:28.316751 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 17 00:06:28.327885 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:06:28.328522 systemd[1]: cri-containerd-2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0.scope: Deactivated successfully. May 17 00:06:28.368658 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:06:28.378161 containerd[2020]: time="2025-05-17T00:06:28.378086144Z" level=info msg="shim disconnected" id=2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0 namespace=k8s.io May 17 00:06:28.378952 containerd[2020]: time="2025-05-17T00:06:28.378632480Z" level=warning msg="cleaning up after shim disconnected" id=2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0 namespace=k8s.io May 17 00:06:28.378952 containerd[2020]: time="2025-05-17T00:06:28.378667748Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:06:28.952075 systemd[1]: run-containerd-runc-k8s.io-2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0-runc.iwLZAF.mount: Deactivated successfully. May 17 00:06:28.952285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0-rootfs.mount: Deactivated successfully. 
May 17 00:06:29.148667 containerd[2020]: time="2025-05-17T00:06:29.147741308Z" level=info msg="CreateContainer within sandbox \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 17 00:06:29.199155 containerd[2020]: time="2025-05-17T00:06:29.198996153Z" level=info msg="CreateContainer within sandbox \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7\"" May 17 00:06:29.201547 containerd[2020]: time="2025-05-17T00:06:29.201494025Z" level=info msg="StartContainer for \"7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7\"" May 17 00:06:29.262588 systemd[1]: Started cri-containerd-7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7.scope - libcontainer container 7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7. May 17 00:06:29.328952 containerd[2020]: time="2025-05-17T00:06:29.328370985Z" level=info msg="StartContainer for \"7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7\" returns successfully" May 17 00:06:29.341119 systemd[1]: cri-containerd-7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7.scope: Deactivated successfully. 
May 17 00:06:29.355757 kubelet[3512]: E0517 00:06:29.355652 3512 cadvisor_stats_provider.go:522] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7a15507_293b_4b64_9a85_6b7691d993b0.slice/cri-containerd-7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7.scope\": RecentStats: unable to find data in memory cache]" May 17 00:06:29.390806 containerd[2020]: time="2025-05-17T00:06:29.390695542Z" level=info msg="shim disconnected" id=7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7 namespace=k8s.io May 17 00:06:29.391429 containerd[2020]: time="2025-05-17T00:06:29.391074982Z" level=warning msg="cleaning up after shim disconnected" id=7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7 namespace=k8s.io May 17 00:06:29.391429 containerd[2020]: time="2025-05-17T00:06:29.391102390Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:06:29.951900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7-rootfs.mount: Deactivated successfully. 
May 17 00:06:30.154675 containerd[2020]: time="2025-05-17T00:06:30.153930165Z" level=info msg="CreateContainer within sandbox \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 17 00:06:30.201589 containerd[2020]: time="2025-05-17T00:06:30.201511990Z" level=info msg="CreateContainer within sandbox \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a\"" May 17 00:06:30.204091 containerd[2020]: time="2025-05-17T00:06:30.203568910Z" level=info msg="StartContainer for \"d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a\"" May 17 00:06:30.260564 systemd[1]: Started cri-containerd-d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a.scope - libcontainer container d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a. May 17 00:06:30.304588 systemd[1]: cri-containerd-d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a.scope: Deactivated successfully. 
May 17 00:06:30.308364 containerd[2020]: time="2025-05-17T00:06:30.308182174Z" level=info msg="StartContainer for \"d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a\" returns successfully" May 17 00:06:30.352782 containerd[2020]: time="2025-05-17T00:06:30.352651822Z" level=info msg="shim disconnected" id=d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a namespace=k8s.io May 17 00:06:30.352782 containerd[2020]: time="2025-05-17T00:06:30.352749982Z" level=warning msg="cleaning up after shim disconnected" id=d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a namespace=k8s.io May 17 00:06:30.353101 containerd[2020]: time="2025-05-17T00:06:30.352801330Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:06:30.952770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a-rootfs.mount: Deactivated successfully. May 17 00:06:31.159086 containerd[2020]: time="2025-05-17T00:06:31.158728822Z" level=info msg="CreateContainer within sandbox \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 17 00:06:31.195569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1792368230.mount: Deactivated successfully. 
May 17 00:06:31.200614 containerd[2020]: time="2025-05-17T00:06:31.199217446Z" level=info msg="CreateContainer within sandbox \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d\"" May 17 00:06:31.202301 containerd[2020]: time="2025-05-17T00:06:31.200965955Z" level=info msg="StartContainer for \"9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d\"" May 17 00:06:31.260576 systemd[1]: Started cri-containerd-9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d.scope - libcontainer container 9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d. May 17 00:06:31.316381 containerd[2020]: time="2025-05-17T00:06:31.316296647Z" level=info msg="StartContainer for \"9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d\" returns successfully" May 17 00:06:31.492632 kubelet[3512]: I0517 00:06:31.491500 3512 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 17 00:06:31.564057 systemd[1]: Created slice kubepods-burstable-pod71cf7dda_6e53_4939_aaee_3ac3aa5a2138.slice - libcontainer container kubepods-burstable-pod71cf7dda_6e53_4939_aaee_3ac3aa5a2138.slice. May 17 00:06:31.586148 systemd[1]: Created slice kubepods-burstable-pod4fe572ba_6ee5_416d_81db_788b76c2956e.slice - libcontainer container kubepods-burstable-pod4fe572ba_6ee5_416d_81db_788b76c2956e.slice. 
May 17 00:06:31.635333 kubelet[3512]: I0517 00:06:31.635218 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71cf7dda-6e53-4939-aaee-3ac3aa5a2138-config-volume\") pod \"coredns-668d6bf9bc-ml4t4\" (UID: \"71cf7dda-6e53-4939-aaee-3ac3aa5a2138\") " pod="kube-system/coredns-668d6bf9bc-ml4t4" May 17 00:06:31.635333 kubelet[3512]: I0517 00:06:31.635322 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fe572ba-6ee5-416d-81db-788b76c2956e-config-volume\") pod \"coredns-668d6bf9bc-l5gst\" (UID: \"4fe572ba-6ee5-416d-81db-788b76c2956e\") " pod="kube-system/coredns-668d6bf9bc-l5gst" May 17 00:06:31.635583 kubelet[3512]: I0517 00:06:31.635367 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvwg2\" (UniqueName: \"kubernetes.io/projected/71cf7dda-6e53-4939-aaee-3ac3aa5a2138-kube-api-access-pvwg2\") pod \"coredns-668d6bf9bc-ml4t4\" (UID: \"71cf7dda-6e53-4939-aaee-3ac3aa5a2138\") " pod="kube-system/coredns-668d6bf9bc-ml4t4" May 17 00:06:31.635583 kubelet[3512]: I0517 00:06:31.635421 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfqps\" (UniqueName: \"kubernetes.io/projected/4fe572ba-6ee5-416d-81db-788b76c2956e-kube-api-access-qfqps\") pod \"coredns-668d6bf9bc-l5gst\" (UID: \"4fe572ba-6ee5-416d-81db-788b76c2956e\") " pod="kube-system/coredns-668d6bf9bc-l5gst" May 17 00:06:31.877711 containerd[2020]: time="2025-05-17T00:06:31.876051518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ml4t4,Uid:71cf7dda-6e53-4939-aaee-3ac3aa5a2138,Namespace:kube-system,Attempt:0,}" May 17 00:06:31.897790 containerd[2020]: time="2025-05-17T00:06:31.897326582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l5gst,Uid:4fe572ba-6ee5-416d-81db-788b76c2956e,Namespace:kube-system,Attempt:0,}"
May 17 00:06:34.181046 (udev-worker)[4307]: Network interface NamePolicy= disabled on kernel command line. May 17 00:06:34.183021 systemd-networkd[1941]: cilium_host: Link UP May 17 00:06:34.183367 systemd-networkd[1941]: cilium_net: Link UP May 17 00:06:34.183683 systemd-networkd[1941]: cilium_net: Gained carrier May 17 00:06:34.183990 systemd-networkd[1941]: cilium_host: Gained carrier May 17 00:06:34.188002 (udev-worker)[4347]: Network interface NamePolicy= disabled on kernel command line. May 17 00:06:34.356165 (udev-worker)[4353]: Network interface NamePolicy= disabled on kernel command line. May 17 00:06:34.367684 systemd-networkd[1941]: cilium_vxlan: Link UP May 17 00:06:34.367702 systemd-networkd[1941]: cilium_vxlan: Gained carrier May 17 00:06:34.603479 systemd-networkd[1941]: cilium_net: Gained IPv6LL May 17 00:06:34.845452 kernel: NET: Registered PF_ALG protocol family May 17 00:06:35.043512 systemd-networkd[1941]: cilium_host: Gained IPv6LL May 17 00:06:36.168217 systemd-networkd[1941]: lxc_health: Link UP May 17 00:06:36.185056 systemd-networkd[1941]: lxc_health: Gained carrier May 17 00:06:36.259495 systemd-networkd[1941]: cilium_vxlan: Gained IPv6LL May 17 00:06:36.514881 systemd-networkd[1941]: lxca0fc81bd8d89: Link UP May 17 00:06:36.520321 kernel: eth0: renamed from tmpfbe53 May 17 00:06:36.526114 systemd-networkd[1941]: lxca0fc81bd8d89: Gained carrier May 17 00:06:36.563273 (udev-worker)[4354]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:06:36.577310 kernel: eth0: renamed from tmp8a09e May 17 00:06:36.583466 systemd-networkd[1941]: lxc0dbeccbfbc94: Link UP May 17 00:06:36.591091 systemd-networkd[1941]: lxc0dbeccbfbc94: Gained carrier May 17 00:06:37.517932 kubelet[3512]: I0517 00:06:37.516395 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fhdjf" podStartSLOduration=13.210412182 podStartE2EDuration="24.516373218s" podCreationTimestamp="2025-05-17 00:06:13 +0000 UTC" firstStartedPulling="2025-05-17 00:06:15.620834193 +0000 UTC m=+6.961357163" lastFinishedPulling="2025-05-17 00:06:26.926795217 +0000 UTC m=+18.267318199" observedRunningTime="2025-05-17 00:06:32.218695668 +0000 UTC m=+23.559218662" watchObservedRunningTime="2025-05-17 00:06:37.516373218 +0000 UTC m=+28.856896200" May 17 00:06:37.539564 systemd-networkd[1941]: lxc_health: Gained IPv6LL May 17 00:06:37.731491 systemd-networkd[1941]: lxc0dbeccbfbc94: Gained IPv6LL May 17 00:06:37.987475 systemd-networkd[1941]: lxca0fc81bd8d89: Gained IPv6LL May 17 00:06:40.339704 ntpd[1995]: Listen normally on 8 cilium_host 192.168.0.143:123 May 17 00:06:40.340767 ntpd[1995]: 17 May 00:06:40 ntpd[1995]: Listen normally on 8 cilium_host 192.168.0.143:123 May 17 00:06:40.340767 ntpd[1995]: 17 May 00:06:40 ntpd[1995]: Listen normally on 9 cilium_net [fe80::8c91:76ff:feea:ddd4%4]:123 May 17 00:06:40.340767 ntpd[1995]: 17 May 00:06:40 ntpd[1995]: Listen normally on 10 cilium_host [fe80::54ce:92ff:fe86:88fd%5]:123 May 17 00:06:40.340767 ntpd[1995]: 17 May 00:06:40 ntpd[1995]: Listen normally on 11 cilium_vxlan [fe80::fc5d:4dff:fe79:b451%6]:123 May 17 00:06:40.340767 ntpd[1995]: 17 May 00:06:40 ntpd[1995]: Listen normally on 12 lxc_health [fe80::c023:31ff:fe78:5a19%8]:123 May 17 00:06:40.340767 ntpd[1995]: 17 May 00:06:40 ntpd[1995]: Listen normally on 13 lxca0fc81bd8d89 [fe80::7071:93ff:fe25:581a%10]:123 May 17 00:06:40.340767 ntpd[1995]: 17 May 00:06:40 ntpd[1995]: Listen normally on 14 lxc0dbeccbfbc94 [fe80::e087:cff:fed6:222b%12]:123
May 17 00:06:40.339836 ntpd[1995]: Listen normally on 9 cilium_net [fe80::8c91:76ff:feea:ddd4%4]:123 May 17 00:06:40.339917 ntpd[1995]: Listen normally on 10 cilium_host [fe80::54ce:92ff:fe86:88fd%5]:123 May 17 00:06:40.339984 ntpd[1995]: Listen normally on 11 cilium_vxlan [fe80::fc5d:4dff:fe79:b451%6]:123 May 17 00:06:40.340050 ntpd[1995]: Listen normally on 12 lxc_health [fe80::c023:31ff:fe78:5a19%8]:123 May 17 00:06:40.340120 ntpd[1995]: Listen normally on 13 lxca0fc81bd8d89 [fe80::7071:93ff:fe25:581a%10]:123 May 17 00:06:40.340191 ntpd[1995]: Listen normally on 14 lxc0dbeccbfbc94 [fe80::e087:cff:fed6:222b%12]:123 May 17 00:06:44.904149 containerd[2020]: time="2025-05-17T00:06:44.903909783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:06:44.904149 containerd[2020]: time="2025-05-17T00:06:44.904006515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:06:44.906287 containerd[2020]: time="2025-05-17T00:06:44.904047879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:44.906694 containerd[2020]: time="2025-05-17T00:06:44.906562551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:44.978456 containerd[2020]: time="2025-05-17T00:06:44.976703619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:06:44.978456 containerd[2020]: time="2025-05-17T00:06:44.976810527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:06:44.978456 containerd[2020]: time="2025-05-17T00:06:44.976874235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:44.978456 containerd[2020]: time="2025-05-17T00:06:44.978029607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:06:44.981662 systemd[1]: Started cri-containerd-fbe530ad13a0bad1a2a4df7bbfa93f7e39cb80ae35e1023a4d76718946e09756.scope - libcontainer container fbe530ad13a0bad1a2a4df7bbfa93f7e39cb80ae35e1023a4d76718946e09756.scope. May 17 00:06:45.051589 systemd[1]: Started cri-containerd-8a09ec25a129c0c836ab560fe6e281181cc930e3bc972652deedcc509f3aa040.scope - libcontainer container 8a09ec25a129c0c836ab560fe6e281181cc930e3bc972652deedcc509f3aa040.scope. May 17 00:06:45.174523 containerd[2020]: time="2025-05-17T00:06:45.173861484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l5gst,Uid:4fe572ba-6ee5-416d-81db-788b76c2956e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a09ec25a129c0c836ab560fe6e281181cc930e3bc972652deedcc509f3aa040\"" May 17 00:06:45.184887 containerd[2020]: time="2025-05-17T00:06:45.184646952Z" level=info msg="CreateContainer within sandbox \"8a09ec25a129c0c836ab560fe6e281181cc930e3bc972652deedcc509f3aa040\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:06:45.191917 containerd[2020]: time="2025-05-17T00:06:45.191679072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ml4t4,Uid:71cf7dda-6e53-4939-aaee-3ac3aa5a2138,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbe530ad13a0bad1a2a4df7bbfa93f7e39cb80ae35e1023a4d76718946e09756\"" May 17 00:06:45.201862 containerd[2020]: time="2025-05-17T00:06:45.200769528Z" level=info msg="CreateContainer within sandbox \"fbe530ad13a0bad1a2a4df7bbfa93f7e39cb80ae35e1023a4d76718946e09756\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 00:06:45.251985 containerd[2020]: time="2025-05-17T00:06:45.251586732Z" level=info msg="CreateContainer within sandbox \"8a09ec25a129c0c836ab560fe6e281181cc930e3bc972652deedcc509f3aa040\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"016cc89f42b6a05a5e379862a480552adbc6567568cc230f1af0319e12685348\"" May 17 00:06:45.253369 containerd[2020]: time="2025-05-17T00:06:45.252912108Z" level=info msg="StartContainer for \"016cc89f42b6a05a5e379862a480552adbc6567568cc230f1af0319e12685348\"" May 17 00:06:45.269646 containerd[2020]: time="2025-05-17T00:06:45.269566812Z" level=info msg="CreateContainer within sandbox \"fbe530ad13a0bad1a2a4df7bbfa93f7e39cb80ae35e1023a4d76718946e09756\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"230abc76224e7cbd4840fc9c2cf3f6a72f20f9892aa97ed930f7a5a9eabd7e89\"" May 17 00:06:45.273721 containerd[2020]: time="2025-05-17T00:06:45.271429332Z" level=info msg="StartContainer for \"230abc76224e7cbd4840fc9c2cf3f6a72f20f9892aa97ed930f7a5a9eabd7e89\"" May 17 00:06:45.351643 systemd[1]: Started cri-containerd-230abc76224e7cbd4840fc9c2cf3f6a72f20f9892aa97ed930f7a5a9eabd7e89.scope - libcontainer container 230abc76224e7cbd4840fc9c2cf3f6a72f20f9892aa97ed930f7a5a9eabd7e89.scope. May 17 00:06:45.373586 systemd[1]: Started cri-containerd-016cc89f42b6a05a5e379862a480552adbc6567568cc230f1af0319e12685348.scope - libcontainer container 016cc89f42b6a05a5e379862a480552adbc6567568cc230f1af0319e12685348.
May 17 00:06:45.451465 containerd[2020]: time="2025-05-17T00:06:45.451296301Z" level=info msg="StartContainer for \"230abc76224e7cbd4840fc9c2cf3f6a72f20f9892aa97ed930f7a5a9eabd7e89\" returns successfully" May 17 00:06:45.460494 containerd[2020]: time="2025-05-17T00:06:45.460214557Z" level=info msg="StartContainer for \"016cc89f42b6a05a5e379862a480552adbc6567568cc230f1af0319e12685348\" returns successfully" May 17 00:06:45.918317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2206031290.mount: Deactivated successfully. May 17 00:06:46.272008 kubelet[3512]: I0517 00:06:46.271886 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-l5gst" podStartSLOduration=33.271862617 podStartE2EDuration="33.271862617s" podCreationTimestamp="2025-05-17 00:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:06:46.269455033 +0000 UTC m=+37.609978051" watchObservedRunningTime="2025-05-17 00:06:46.271862617 +0000 UTC m=+37.612385599" May 17 00:06:46.336923 kubelet[3512]: I0517 00:06:46.336822 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ml4t4" podStartSLOduration=33.336796358 podStartE2EDuration="33.336796358s" podCreationTimestamp="2025-05-17 00:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:06:46.299969066 +0000 UTC m=+37.640492084" watchObservedRunningTime="2025-05-17 00:06:46.336796358 +0000 UTC m=+37.677319364" May 17 00:06:56.317797 systemd[1]: Started sshd@9-172.31.24.47:22-139.178.89.65:41068.service - OpenSSH per-connection server daemon (139.178.89.65:41068). 
May 17 00:06:56.500130 sshd[4887]: Accepted publickey for core from 139.178.89.65 port 41068 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:06:56.502799 sshd[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:06:56.510148 systemd-logind[2003]: New session 10 of user core. May 17 00:06:56.520702 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:06:56.790734 sshd[4887]: pam_unix(sshd:session): session closed for user core May 17 00:06:56.800782 systemd[1]: sshd@9-172.31.24.47:22-139.178.89.65:41068.service: Deactivated successfully. May 17 00:06:56.806228 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:06:56.808889 systemd-logind[2003]: Session 10 logged out. Waiting for processes to exit. May 17 00:06:56.810990 systemd-logind[2003]: Removed session 10. May 17 00:07:01.829807 systemd[1]: Started sshd@10-172.31.24.47:22-139.178.89.65:53572.service - OpenSSH per-connection server daemon (139.178.89.65:53572). May 17 00:07:02.011320 sshd[4901]: Accepted publickey for core from 139.178.89.65 port 53572 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:02.013873 sshd[4901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:02.022387 systemd-logind[2003]: New session 11 of user core. May 17 00:07:02.026540 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:07:02.261504 sshd[4901]: pam_unix(sshd:session): session closed for user core May 17 00:07:02.269029 systemd[1]: sshd@10-172.31.24.47:22-139.178.89.65:53572.service: Deactivated successfully. May 17 00:07:02.273205 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:07:02.275871 systemd-logind[2003]: Session 11 logged out. Waiting for processes to exit. May 17 00:07:02.278080 systemd-logind[2003]: Removed session 11. 
May 17 00:07:07.296693 systemd[1]: Started sshd@11-172.31.24.47:22-139.178.89.65:52234.service - OpenSSH per-connection server daemon (139.178.89.65:52234). May 17 00:07:07.478069 sshd[4915]: Accepted publickey for core from 139.178.89.65 port 52234 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:07.482006 sshd[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:07.490800 systemd-logind[2003]: New session 12 of user core. May 17 00:07:07.499577 systemd[1]: Started session-12.scope - Session 12 of User core. May 17 00:07:07.736843 sshd[4915]: pam_unix(sshd:session): session closed for user core May 17 00:07:07.743916 systemd[1]: sshd@11-172.31.24.47:22-139.178.89.65:52234.service: Deactivated successfully. May 17 00:07:07.747991 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:07:07.750502 systemd-logind[2003]: Session 12 logged out. Waiting for processes to exit. May 17 00:07:07.752779 systemd-logind[2003]: Removed session 12. May 17 00:07:12.781800 systemd[1]: Started sshd@12-172.31.24.47:22-139.178.89.65:52236.service - OpenSSH per-connection server daemon (139.178.89.65:52236). May 17 00:07:12.967278 sshd[4930]: Accepted publickey for core from 139.178.89.65 port 52236 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:12.969966 sshd[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:12.977551 systemd-logind[2003]: New session 13 of user core. May 17 00:07:12.988660 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:07:13.231143 sshd[4930]: pam_unix(sshd:session): session closed for user core May 17 00:07:13.238471 systemd[1]: sshd@12-172.31.24.47:22-139.178.89.65:52236.service: Deactivated successfully. May 17 00:07:13.241983 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:07:13.244416 systemd-logind[2003]: Session 13 logged out. Waiting for processes to exit. 
May 17 00:07:13.246913 systemd-logind[2003]: Removed session 13. May 17 00:07:18.269796 systemd[1]: Started sshd@13-172.31.24.47:22-139.178.89.65:52154.service - OpenSSH per-connection server daemon (139.178.89.65:52154). May 17 00:07:18.444497 sshd[4946]: Accepted publickey for core from 139.178.89.65 port 52154 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:18.447463 sshd[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:18.456525 systemd-logind[2003]: New session 14 of user core. May 17 00:07:18.461671 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:07:18.696572 sshd[4946]: pam_unix(sshd:session): session closed for user core May 17 00:07:18.703394 systemd[1]: sshd@13-172.31.24.47:22-139.178.89.65:52154.service: Deactivated successfully. May 17 00:07:18.706613 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:07:18.708152 systemd-logind[2003]: Session 14 logged out. Waiting for processes to exit. May 17 00:07:18.710614 systemd-logind[2003]: Removed session 14. May 17 00:07:18.733793 systemd[1]: Started sshd@14-172.31.24.47:22-139.178.89.65:52156.service - OpenSSH per-connection server daemon (139.178.89.65:52156). May 17 00:07:18.915350 sshd[4960]: Accepted publickey for core from 139.178.89.65 port 52156 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:18.917403 sshd[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:18.928451 systemd-logind[2003]: New session 15 of user core. May 17 00:07:18.941543 systemd[1]: Started session-15.scope - Session 15 of User core. May 17 00:07:19.263596 sshd[4960]: pam_unix(sshd:session): session closed for user core May 17 00:07:19.278066 systemd[1]: sshd@14-172.31.24.47:22-139.178.89.65:52156.service: Deactivated successfully. May 17 00:07:19.289423 systemd[1]: session-15.scope: Deactivated successfully. 
May 17 00:07:19.294681 systemd-logind[2003]: Session 15 logged out. Waiting for processes to exit. May 17 00:07:19.321789 systemd[1]: Started sshd@15-172.31.24.47:22-139.178.89.65:52168.service - OpenSSH per-connection server daemon (139.178.89.65:52168). May 17 00:07:19.324282 systemd-logind[2003]: Removed session 15. May 17 00:07:19.507781 sshd[4971]: Accepted publickey for core from 139.178.89.65 port 52168 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:19.510926 sshd[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:19.525648 systemd-logind[2003]: New session 16 of user core. May 17 00:07:19.533323 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:07:19.778432 sshd[4971]: pam_unix(sshd:session): session closed for user core May 17 00:07:19.786233 systemd[1]: sshd@15-172.31.24.47:22-139.178.89.65:52168.service: Deactivated successfully. May 17 00:07:19.786567 systemd-logind[2003]: Session 16 logged out. Waiting for processes to exit. May 17 00:07:19.790628 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:07:19.794015 systemd-logind[2003]: Removed session 16. May 17 00:07:24.818769 systemd[1]: Started sshd@16-172.31.24.47:22-139.178.89.65:52176.service - OpenSSH per-connection server daemon (139.178.89.65:52176). May 17 00:07:25.003601 sshd[4986]: Accepted publickey for core from 139.178.89.65 port 52176 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:25.006509 sshd[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:25.014181 systemd-logind[2003]: New session 17 of user core. May 17 00:07:25.024513 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:07:25.272021 sshd[4986]: pam_unix(sshd:session): session closed for user core May 17 00:07:25.279368 systemd[1]: sshd@16-172.31.24.47:22-139.178.89.65:52176.service: Deactivated successfully. 
May 17 00:07:25.283878 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:07:25.285624 systemd-logind[2003]: Session 17 logged out. Waiting for processes to exit. May 17 00:07:25.287658 systemd-logind[2003]: Removed session 17. May 17 00:07:30.310787 systemd[1]: Started sshd@17-172.31.24.47:22-139.178.89.65:39300.service - OpenSSH per-connection server daemon (139.178.89.65:39300). May 17 00:07:30.486925 sshd[4999]: Accepted publickey for core from 139.178.89.65 port 39300 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:30.489570 sshd[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:30.497120 systemd-logind[2003]: New session 18 of user core. May 17 00:07:30.509540 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:07:30.747615 sshd[4999]: pam_unix(sshd:session): session closed for user core May 17 00:07:30.754536 systemd[1]: sshd@17-172.31.24.47:22-139.178.89.65:39300.service: Deactivated successfully. May 17 00:07:30.758685 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:07:30.760514 systemd-logind[2003]: Session 18 logged out. Waiting for processes to exit. May 17 00:07:30.763018 systemd-logind[2003]: Removed session 18. May 17 00:07:35.787778 systemd[1]: Started sshd@18-172.31.24.47:22-139.178.89.65:39306.service - OpenSSH per-connection server daemon (139.178.89.65:39306). May 17 00:07:35.965971 sshd[5012]: Accepted publickey for core from 139.178.89.65 port 39306 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:35.968851 sshd[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:35.978458 systemd-logind[2003]: New session 19 of user core. May 17 00:07:35.985563 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 17 00:07:36.229758 sshd[5012]: pam_unix(sshd:session): session closed for user core May 17 00:07:36.236029 systemd[1]: sshd@18-172.31.24.47:22-139.178.89.65:39306.service: Deactivated successfully. May 17 00:07:36.239999 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:07:36.242405 systemd-logind[2003]: Session 19 logged out. Waiting for processes to exit. May 17 00:07:36.244583 systemd-logind[2003]: Removed session 19. May 17 00:07:36.269769 systemd[1]: Started sshd@19-172.31.24.47:22-139.178.89.65:39322.service - OpenSSH per-connection server daemon (139.178.89.65:39322). May 17 00:07:36.447162 sshd[5025]: Accepted publickey for core from 139.178.89.65 port 39322 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:36.449916 sshd[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:36.458111 systemd-logind[2003]: New session 20 of user core. May 17 00:07:36.464534 systemd[1]: Started session-20.scope - Session 20 of User core. May 17 00:07:36.752963 sshd[5025]: pam_unix(sshd:session): session closed for user core May 17 00:07:36.759895 systemd-logind[2003]: Session 20 logged out. Waiting for processes to exit. May 17 00:07:36.761550 systemd[1]: sshd@19-172.31.24.47:22-139.178.89.65:39322.service: Deactivated successfully. May 17 00:07:36.766025 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:07:36.768307 systemd-logind[2003]: Removed session 20. May 17 00:07:36.793898 systemd[1]: Started sshd@20-172.31.24.47:22-139.178.89.65:59926.service - OpenSSH per-connection server daemon (139.178.89.65:59926). May 17 00:07:36.971878 sshd[5035]: Accepted publickey for core from 139.178.89.65 port 59926 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:07:36.974623 sshd[5035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:07:36.982507 systemd-logind[2003]: New session 21 of user core. 
May 17 00:07:36.997548 systemd[1]: Started session-21.scope - Session 21 of User core.
May 17 00:07:38.293352 sshd[5035]: pam_unix(sshd:session): session closed for user core
May 17 00:07:38.302686 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:07:38.305368 systemd[1]: sshd@20-172.31.24.47:22-139.178.89.65:59926.service: Deactivated successfully.
May 17 00:07:38.318001 systemd-logind[2003]: Session 21 logged out. Waiting for processes to exit.
May 17 00:07:38.337822 systemd[1]: Started sshd@21-172.31.24.47:22-139.178.89.65:59936.service - OpenSSH per-connection server daemon (139.178.89.65:59936).
May 17 00:07:38.342389 systemd-logind[2003]: Removed session 21.
May 17 00:07:38.539295 sshd[5053]: Accepted publickey for core from 139.178.89.65 port 59936 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM
May 17 00:07:38.542089 sshd[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:07:38.551050 systemd-logind[2003]: New session 22 of user core.
May 17 00:07:38.556518 systemd[1]: Started session-22.scope - Session 22 of User core.
May 17 00:07:39.050175 sshd[5053]: pam_unix(sshd:session): session closed for user core
May 17 00:07:39.059199 systemd[1]: sshd@21-172.31.24.47:22-139.178.89.65:59936.service: Deactivated successfully.
May 17 00:07:39.059925 systemd-logind[2003]: Session 22 logged out. Waiting for processes to exit.
May 17 00:07:39.065882 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:07:39.071321 systemd-logind[2003]: Removed session 22.
May 17 00:07:39.093787 systemd[1]: Started sshd@22-172.31.24.47:22-139.178.89.65:59942.service - OpenSSH per-connection server daemon (139.178.89.65:59942).
May 17 00:07:39.280554 sshd[5064]: Accepted publickey for core from 139.178.89.65 port 59942 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM
May 17 00:07:39.283340 sshd[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:07:39.291589 systemd-logind[2003]: New session 23 of user core.
May 17 00:07:39.301589 systemd[1]: Started session-23.scope - Session 23 of User core.
May 17 00:07:39.545932 sshd[5064]: pam_unix(sshd:session): session closed for user core
May 17 00:07:39.552307 systemd[1]: sshd@22-172.31.24.47:22-139.178.89.65:59942.service: Deactivated successfully.
May 17 00:07:39.557038 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:07:39.559491 systemd-logind[2003]: Session 23 logged out. Waiting for processes to exit.
May 17 00:07:39.562294 systemd-logind[2003]: Removed session 23.
May 17 00:07:44.584779 systemd[1]: Started sshd@23-172.31.24.47:22-139.178.89.65:59948.service - OpenSSH per-connection server daemon (139.178.89.65:59948).
May 17 00:07:44.765019 sshd[5079]: Accepted publickey for core from 139.178.89.65 port 59948 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM
May 17 00:07:44.767895 sshd[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:07:44.776645 systemd-logind[2003]: New session 24 of user core.
May 17 00:07:44.781541 systemd[1]: Started session-24.scope - Session 24 of User core.
May 17 00:07:45.022700 sshd[5079]: pam_unix(sshd:session): session closed for user core
May 17 00:07:45.029672 systemd[1]: sshd@23-172.31.24.47:22-139.178.89.65:59948.service: Deactivated successfully.
May 17 00:07:45.033845 systemd[1]: session-24.scope: Deactivated successfully.
May 17 00:07:45.035432 systemd-logind[2003]: Session 24 logged out. Waiting for processes to exit.
May 17 00:07:45.038094 systemd-logind[2003]: Removed session 24.
May 17 00:07:50.063843 systemd[1]: Started sshd@24-172.31.24.47:22-139.178.89.65:59656.service - OpenSSH per-connection server daemon (139.178.89.65:59656).
May 17 00:07:50.240938 sshd[5094]: Accepted publickey for core from 139.178.89.65 port 59656 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM
May 17 00:07:50.243710 sshd[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:07:50.251221 systemd-logind[2003]: New session 25 of user core.
May 17 00:07:50.259510 systemd[1]: Started session-25.scope - Session 25 of User core.
May 17 00:07:50.491704 sshd[5094]: pam_unix(sshd:session): session closed for user core
May 17 00:07:50.498330 systemd[1]: sshd@24-172.31.24.47:22-139.178.89.65:59656.service: Deactivated successfully.
May 17 00:07:50.502192 systemd[1]: session-25.scope: Deactivated successfully.
May 17 00:07:50.503745 systemd-logind[2003]: Session 25 logged out. Waiting for processes to exit.
May 17 00:07:50.505855 systemd-logind[2003]: Removed session 25.
May 17 00:07:55.535933 systemd[1]: Started sshd@25-172.31.24.47:22-139.178.89.65:59664.service - OpenSSH per-connection server daemon (139.178.89.65:59664).
May 17 00:07:55.713889 sshd[5107]: Accepted publickey for core from 139.178.89.65 port 59664 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM
May 17 00:07:55.716561 sshd[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:07:55.724355 systemd-logind[2003]: New session 26 of user core.
May 17 00:07:55.731565 systemd[1]: Started session-26.scope - Session 26 of User core.
May 17 00:07:55.970713 sshd[5107]: pam_unix(sshd:session): session closed for user core
May 17 00:07:55.976865 systemd[1]: sshd@25-172.31.24.47:22-139.178.89.65:59664.service: Deactivated successfully.
May 17 00:07:55.982367 systemd[1]: session-26.scope: Deactivated successfully.
May 17 00:07:55.984117 systemd-logind[2003]: Session 26 logged out. Waiting for processes to exit.
May 17 00:07:55.985916 systemd-logind[2003]: Removed session 26.
May 17 00:08:01.017793 systemd[1]: Started sshd@26-172.31.24.47:22-139.178.89.65:50390.service - OpenSSH per-connection server daemon (139.178.89.65:50390).
May 17 00:08:01.191637 sshd[5120]: Accepted publickey for core from 139.178.89.65 port 50390 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM
May 17 00:08:01.194554 sshd[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:08:01.202659 systemd-logind[2003]: New session 27 of user core.
May 17 00:08:01.211546 systemd[1]: Started session-27.scope - Session 27 of User core.
May 17 00:08:01.445018 sshd[5120]: pam_unix(sshd:session): session closed for user core
May 17 00:08:01.452167 systemd[1]: sshd@26-172.31.24.47:22-139.178.89.65:50390.service: Deactivated successfully.
May 17 00:08:01.456757 systemd[1]: session-27.scope: Deactivated successfully.
May 17 00:08:01.458184 systemd-logind[2003]: Session 27 logged out. Waiting for processes to exit.
May 17 00:08:01.460237 systemd-logind[2003]: Removed session 27.
May 17 00:08:01.479601 systemd[1]: Started sshd@27-172.31.24.47:22-139.178.89.65:50398.service - OpenSSH per-connection server daemon (139.178.89.65:50398).
May 17 00:08:01.665168 sshd[5132]: Accepted publickey for core from 139.178.89.65 port 50398 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM
May 17 00:08:01.668046 sshd[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:08:01.676674 systemd-logind[2003]: New session 28 of user core.
May 17 00:08:01.686596 systemd[1]: Started session-28.scope - Session 28 of User core.
May 17 00:08:03.910143 containerd[2020]: time="2025-05-17T00:08:03.908604967Z" level=info msg="StopContainer for \"03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3\" with timeout 30 (s)"
May 17 00:08:03.916810 containerd[2020]: time="2025-05-17T00:08:03.913662883Z" level=info msg="Stop container \"03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3\" with signal terminated"
May 17 00:08:03.944767 containerd[2020]: time="2025-05-17T00:08:03.944504299Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:08:03.954177 systemd[1]: cri-containerd-03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3.scope: Deactivated successfully.
May 17 00:08:03.977276 containerd[2020]: time="2025-05-17T00:08:03.976502647Z" level=info msg="StopContainer for \"9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d\" with timeout 2 (s)"
May 17 00:08:03.980297 containerd[2020]: time="2025-05-17T00:08:03.979428859Z" level=info msg="Stop container \"9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d\" with signal terminated"
May 17 00:08:04.000811 systemd-networkd[1941]: lxc_health: Link DOWN
May 17 00:08:04.000829 systemd-networkd[1941]: lxc_health: Lost carrier
May 17 00:08:04.026174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3-rootfs.mount: Deactivated successfully.
May 17 00:08:04.039792 systemd[1]: cri-containerd-9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d.scope: Deactivated successfully.
May 17 00:08:04.040877 systemd[1]: cri-containerd-9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d.scope: Consumed 14.247s CPU time.
May 17 00:08:04.053064 containerd[2020]: time="2025-05-17T00:08:04.052962364Z" level=info msg="shim disconnected" id=03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3 namespace=k8s.io
May 17 00:08:04.053064 containerd[2020]: time="2025-05-17T00:08:04.053043940Z" level=warning msg="cleaning up after shim disconnected" id=03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3 namespace=k8s.io
May 17 00:08:04.053064 containerd[2020]: time="2025-05-17T00:08:04.053069860Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:08:04.098150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d-rootfs.mount: Deactivated successfully.
May 17 00:08:04.102554 containerd[2020]: time="2025-05-17T00:08:04.102448432Z" level=info msg="shim disconnected" id=9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d namespace=k8s.io
May 17 00:08:04.102724 containerd[2020]: time="2025-05-17T00:08:04.102551776Z" level=warning msg="cleaning up after shim disconnected" id=9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d namespace=k8s.io
May 17 00:08:04.102724 containerd[2020]: time="2025-05-17T00:08:04.102577192Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:08:04.107961 containerd[2020]: time="2025-05-17T00:08:04.107774392Z" level=info msg="StopContainer for \"03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3\" returns successfully"
May 17 00:08:04.109935 containerd[2020]: time="2025-05-17T00:08:04.109630588Z" level=info msg="StopPodSandbox for \"8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7\""
May 17 00:08:04.109935 containerd[2020]: time="2025-05-17T00:08:04.109700884Z" level=info msg="Container to stop \"03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:08:04.114680 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7-shm.mount: Deactivated successfully.
May 17 00:08:04.132826 systemd[1]: cri-containerd-8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7.scope: Deactivated successfully.
May 17 00:08:04.141357 containerd[2020]: time="2025-05-17T00:08:04.141102040Z" level=info msg="StopContainer for \"9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d\" returns successfully"
May 17 00:08:04.142109 containerd[2020]: time="2025-05-17T00:08:04.141761584Z" level=info msg="StopPodSandbox for \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\""
May 17 00:08:04.142109 containerd[2020]: time="2025-05-17T00:08:04.141833440Z" level=info msg="Container to stop \"8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:08:04.142109 containerd[2020]: time="2025-05-17T00:08:04.141859456Z" level=info msg="Container to stop \"7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:08:04.142109 containerd[2020]: time="2025-05-17T00:08:04.141883660Z" level=info msg="Container to stop \"2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:08:04.142109 containerd[2020]: time="2025-05-17T00:08:04.141907792Z" level=info msg="Container to stop \"d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:08:04.142109 containerd[2020]: time="2025-05-17T00:08:04.141930988Z" level=info msg="Container to stop \"9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:08:04.150067 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5-shm.mount: Deactivated successfully.
May 17 00:08:04.161176 systemd[1]: cri-containerd-760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5.scope: Deactivated successfully.
May 17 00:08:04.181948 kubelet[3512]: E0517 00:08:04.181888 3512 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:08:04.205485 containerd[2020]: time="2025-05-17T00:08:04.205360924Z" level=info msg="shim disconnected" id=8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7 namespace=k8s.io
May 17 00:08:04.206073 containerd[2020]: time="2025-05-17T00:08:04.205459144Z" level=warning msg="cleaning up after shim disconnected" id=8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7 namespace=k8s.io
May 17 00:08:04.206073 containerd[2020]: time="2025-05-17T00:08:04.205851340Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:08:04.215753 containerd[2020]: time="2025-05-17T00:08:04.215506001Z" level=info msg="shim disconnected" id=760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5 namespace=k8s.io
May 17 00:08:04.215753 containerd[2020]: time="2025-05-17T00:08:04.215699345Z" level=warning msg="cleaning up after shim disconnected" id=760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5 namespace=k8s.io
May 17 00:08:04.216091 containerd[2020]: time="2025-05-17T00:08:04.215722181Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:08:04.244026 containerd[2020]: time="2025-05-17T00:08:04.242095925Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:08:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 17 00:08:04.244026 containerd[2020]: time="2025-05-17T00:08:04.242845877Z" level=info msg="TearDown network for sandbox \"8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7\" successfully"
May 17 00:08:04.244026 containerd[2020]: time="2025-05-17T00:08:04.242907233Z" level=info msg="StopPodSandbox for \"8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7\" returns successfully"
May 17 00:08:04.244026 containerd[2020]: time="2025-05-17T00:08:04.243892313Z" level=info msg="TearDown network for sandbox \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" successfully"
May 17 00:08:04.244026 containerd[2020]: time="2025-05-17T00:08:04.243964133Z" level=info msg="StopPodSandbox for \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" returns successfully"
May 17 00:08:04.367901 kubelet[3512]: I0517 00:08:04.367839 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-cni-path\") pod \"e7a15507-293b-4b64-9a85-6b7691d993b0\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") "
May 17 00:08:04.368224 kubelet[3512]: I0517 00:08:04.367976 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-cni-path" (OuterVolumeSpecName: "cni-path") pod "e7a15507-293b-4b64-9a85-6b7691d993b0" (UID: "e7a15507-293b-4b64-9a85-6b7691d993b0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:08:04.369712 kubelet[3512]: I0517 00:08:04.369633 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkdth\" (UniqueName: \"kubernetes.io/projected/673ed8d9-444e-46ba-b658-ec110b324f30-kube-api-access-jkdth\") pod \"673ed8d9-444e-46ba-b658-ec110b324f30\" (UID: \"673ed8d9-444e-46ba-b658-ec110b324f30\") "
May 17 00:08:04.370453 kubelet[3512]: I0517 00:08:04.370398 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-lib-modules\") pod \"e7a15507-293b-4b64-9a85-6b7691d993b0\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") "
May 17 00:08:04.370594 kubelet[3512]: I0517 00:08:04.370469 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmsws\" (UniqueName: \"kubernetes.io/projected/e7a15507-293b-4b64-9a85-6b7691d993b0-kube-api-access-vmsws\") pod \"e7a15507-293b-4b64-9a85-6b7691d993b0\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") "
May 17 00:08:04.370594 kubelet[3512]: I0517 00:08:04.370506 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-bpf-maps\") pod \"e7a15507-293b-4b64-9a85-6b7691d993b0\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") "
May 17 00:08:04.370594 kubelet[3512]: I0517 00:08:04.370543 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-xtables-lock\") pod \"e7a15507-293b-4b64-9a85-6b7691d993b0\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") "
May 17 00:08:04.370594 kubelet[3512]: I0517 00:08:04.370579 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-host-proc-sys-kernel\") pod \"e7a15507-293b-4b64-9a85-6b7691d993b0\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") "
May 17 00:08:04.370812 kubelet[3512]: I0517 00:08:04.370618 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e7a15507-293b-4b64-9a85-6b7691d993b0-hubble-tls\") pod \"e7a15507-293b-4b64-9a85-6b7691d993b0\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") "
May 17 00:08:04.370812 kubelet[3512]: I0517 00:08:04.370653 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-hostproc\") pod \"e7a15507-293b-4b64-9a85-6b7691d993b0\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") "
May 17 00:08:04.370812 kubelet[3512]: I0517 00:08:04.370685 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-etc-cni-netd\") pod \"e7a15507-293b-4b64-9a85-6b7691d993b0\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") "
May 17 00:08:04.370812 kubelet[3512]: I0517 00:08:04.370723 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/673ed8d9-444e-46ba-b658-ec110b324f30-cilium-config-path\") pod \"673ed8d9-444e-46ba-b658-ec110b324f30\" (UID: \"673ed8d9-444e-46ba-b658-ec110b324f30\") "
May 17 00:08:04.370812 kubelet[3512]: I0517 00:08:04.370761 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-cilium-cgroup\") pod \"e7a15507-293b-4b64-9a85-6b7691d993b0\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") "
May 17 00:08:04.370812 kubelet[3512]: I0517 00:08:04.370804 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7a15507-293b-4b64-9a85-6b7691d993b0-cilium-config-path\") pod \"e7a15507-293b-4b64-9a85-6b7691d993b0\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") "
May 17 00:08:04.371202 kubelet[3512]: I0517 00:08:04.370841 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-host-proc-sys-net\") pod \"e7a15507-293b-4b64-9a85-6b7691d993b0\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") "
May 17 00:08:04.371202 kubelet[3512]: I0517 00:08:04.370873 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-cilium-run\") pod \"e7a15507-293b-4b64-9a85-6b7691d993b0\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") "
May 17 00:08:04.371202 kubelet[3512]: I0517 00:08:04.370914 3512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e7a15507-293b-4b64-9a85-6b7691d993b0-clustermesh-secrets\") pod \"e7a15507-293b-4b64-9a85-6b7691d993b0\" (UID: \"e7a15507-293b-4b64-9a85-6b7691d993b0\") "
May 17 00:08:04.371202 kubelet[3512]: I0517 00:08:04.370986 3512 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-cni-path\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.372681 kubelet[3512]: I0517 00:08:04.371352 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-hostproc" (OuterVolumeSpecName: "hostproc") pod "e7a15507-293b-4b64-9a85-6b7691d993b0" (UID: "e7a15507-293b-4b64-9a85-6b7691d993b0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:08:04.372681 kubelet[3512]: I0517 00:08:04.371423 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e7a15507-293b-4b64-9a85-6b7691d993b0" (UID: "e7a15507-293b-4b64-9a85-6b7691d993b0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:08:04.376278 kubelet[3512]: I0517 00:08:04.375895 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e7a15507-293b-4b64-9a85-6b7691d993b0" (UID: "e7a15507-293b-4b64-9a85-6b7691d993b0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:08:04.376987 kubelet[3512]: I0517 00:08:04.376931 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e7a15507-293b-4b64-9a85-6b7691d993b0" (UID: "e7a15507-293b-4b64-9a85-6b7691d993b0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:08:04.377246 kubelet[3512]: I0517 00:08:04.377006 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e7a15507-293b-4b64-9a85-6b7691d993b0" (UID: "e7a15507-293b-4b64-9a85-6b7691d993b0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:08:04.377246 kubelet[3512]: I0517 00:08:04.377048 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e7a15507-293b-4b64-9a85-6b7691d993b0" (UID: "e7a15507-293b-4b64-9a85-6b7691d993b0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:08:04.378577 kubelet[3512]: I0517 00:08:04.378482 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e7a15507-293b-4b64-9a85-6b7691d993b0" (UID: "e7a15507-293b-4b64-9a85-6b7691d993b0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:08:04.379415 kubelet[3512]: I0517 00:08:04.379338 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e7a15507-293b-4b64-9a85-6b7691d993b0" (UID: "e7a15507-293b-4b64-9a85-6b7691d993b0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:08:04.384070 kubelet[3512]: I0517 00:08:04.383406 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e7a15507-293b-4b64-9a85-6b7691d993b0" (UID: "e7a15507-293b-4b64-9a85-6b7691d993b0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:08:04.388460 kubelet[3512]: I0517 00:08:04.388228 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a15507-293b-4b64-9a85-6b7691d993b0-kube-api-access-vmsws" (OuterVolumeSpecName: "kube-api-access-vmsws") pod "e7a15507-293b-4b64-9a85-6b7691d993b0" (UID: "e7a15507-293b-4b64-9a85-6b7691d993b0"). InnerVolumeSpecName "kube-api-access-vmsws". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:08:04.388997 kubelet[3512]: I0517 00:08:04.388865 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/673ed8d9-444e-46ba-b658-ec110b324f30-kube-api-access-jkdth" (OuterVolumeSpecName: "kube-api-access-jkdth") pod "673ed8d9-444e-46ba-b658-ec110b324f30" (UID: "673ed8d9-444e-46ba-b658-ec110b324f30"). InnerVolumeSpecName "kube-api-access-jkdth". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:08:04.389435 kubelet[3512]: I0517 00:08:04.389160 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a15507-293b-4b64-9a85-6b7691d993b0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e7a15507-293b-4b64-9a85-6b7691d993b0" (UID: "e7a15507-293b-4b64-9a85-6b7691d993b0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 17 00:08:04.396421 kubelet[3512]: I0517 00:08:04.396334 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a15507-293b-4b64-9a85-6b7691d993b0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e7a15507-293b-4b64-9a85-6b7691d993b0" (UID: "e7a15507-293b-4b64-9a85-6b7691d993b0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:08:04.396421 kubelet[3512]: I0517 00:08:04.396344 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/673ed8d9-444e-46ba-b658-ec110b324f30-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "673ed8d9-444e-46ba-b658-ec110b324f30" (UID: "673ed8d9-444e-46ba-b658-ec110b324f30"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 17 00:08:04.401878 kubelet[3512]: I0517 00:08:04.401754 3512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a15507-293b-4b64-9a85-6b7691d993b0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e7a15507-293b-4b64-9a85-6b7691d993b0" (UID: "e7a15507-293b-4b64-9a85-6b7691d993b0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 17 00:08:04.447340 kubelet[3512]: I0517 00:08:04.447158 3512 scope.go:117] "RemoveContainer" containerID="9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d"
May 17 00:08:04.456719 containerd[2020]: time="2025-05-17T00:08:04.456605382Z" level=info msg="RemoveContainer for \"9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d\""
May 17 00:08:04.463414 systemd[1]: Removed slice kubepods-burstable-pode7a15507_293b_4b64_9a85_6b7691d993b0.slice - libcontainer container kubepods-burstable-pode7a15507_293b_4b64_9a85_6b7691d993b0.slice.
May 17 00:08:04.463636 systemd[1]: kubepods-burstable-pode7a15507_293b_4b64_9a85_6b7691d993b0.slice: Consumed 14.389s CPU time.
May 17 00:08:04.469232 containerd[2020]: time="2025-05-17T00:08:04.469067490Z" level=info msg="RemoveContainer for \"9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d\" returns successfully"
May 17 00:08:04.470537 kubelet[3512]: I0517 00:08:04.470317 3512 scope.go:117] "RemoveContainer" containerID="d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a"
May 17 00:08:04.471359 kubelet[3512]: I0517 00:08:04.471319 3512 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-lib-modules\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.471359 kubelet[3512]: I0517 00:08:04.471359 3512 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jkdth\" (UniqueName: \"kubernetes.io/projected/673ed8d9-444e-46ba-b658-ec110b324f30-kube-api-access-jkdth\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.471544 kubelet[3512]: I0517 00:08:04.471384 3512 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vmsws\" (UniqueName: \"kubernetes.io/projected/e7a15507-293b-4b64-9a85-6b7691d993b0-kube-api-access-vmsws\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.471544 kubelet[3512]: I0517 00:08:04.471409 3512 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-bpf-maps\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.471544 kubelet[3512]: I0517 00:08:04.471432 3512 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-xtables-lock\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.471544 kubelet[3512]: I0517 00:08:04.471453 3512 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-host-proc-sys-kernel\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.471544 kubelet[3512]: I0517 00:08:04.471474 3512 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e7a15507-293b-4b64-9a85-6b7691d993b0-hubble-tls\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.471544 kubelet[3512]: I0517 00:08:04.471494 3512 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-hostproc\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.471544 kubelet[3512]: I0517 00:08:04.471514 3512 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-etc-cni-netd\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.471544 kubelet[3512]: I0517 00:08:04.471533 3512 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/673ed8d9-444e-46ba-b658-ec110b324f30-cilium-config-path\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.472000 kubelet[3512]: I0517 00:08:04.471552 3512 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-cilium-cgroup\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.472000 kubelet[3512]: I0517 00:08:04.471573 3512 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7a15507-293b-4b64-9a85-6b7691d993b0-cilium-config-path\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.472000 kubelet[3512]: I0517 00:08:04.471592 3512 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-host-proc-sys-net\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.472000 kubelet[3512]: I0517 00:08:04.471611 3512 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e7a15507-293b-4b64-9a85-6b7691d993b0-cilium-run\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.472000 kubelet[3512]: I0517 00:08:04.471636 3512 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e7a15507-293b-4b64-9a85-6b7691d993b0-clustermesh-secrets\") on node \"ip-172-31-24-47\" DevicePath \"\""
May 17 00:08:04.474142 containerd[2020]: time="2025-05-17T00:08:04.474080274Z" level=info msg="RemoveContainer for \"d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a\""
May 17 00:08:04.476113 systemd[1]: Removed slice kubepods-besteffort-pod673ed8d9_444e_46ba_b658_ec110b324f30.slice - libcontainer container kubepods-besteffort-pod673ed8d9_444e_46ba_b658_ec110b324f30.slice.
May 17 00:08:04.483348 containerd[2020]: time="2025-05-17T00:08:04.483282414Z" level=info msg="RemoveContainer for \"d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a\" returns successfully" May 17 00:08:04.483663 kubelet[3512]: I0517 00:08:04.483607 3512 scope.go:117] "RemoveContainer" containerID="7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7" May 17 00:08:04.486371 containerd[2020]: time="2025-05-17T00:08:04.486178554Z" level=info msg="RemoveContainer for \"7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7\"" May 17 00:08:04.494059 containerd[2020]: time="2025-05-17T00:08:04.493985250Z" level=info msg="RemoveContainer for \"7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7\" returns successfully" May 17 00:08:04.495376 kubelet[3512]: I0517 00:08:04.494680 3512 scope.go:117] "RemoveContainer" containerID="2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0" May 17 00:08:04.501289 containerd[2020]: time="2025-05-17T00:08:04.499586670Z" level=info msg="RemoveContainer for \"2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0\"" May 17 00:08:04.509835 containerd[2020]: time="2025-05-17T00:08:04.509478558Z" level=info msg="RemoveContainer for \"2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0\" returns successfully" May 17 00:08:04.509996 kubelet[3512]: I0517 00:08:04.509910 3512 scope.go:117] "RemoveContainer" containerID="8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247" May 17 00:08:04.514624 containerd[2020]: time="2025-05-17T00:08:04.514364250Z" level=info msg="RemoveContainer for \"8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247\"" May 17 00:08:04.523600 containerd[2020]: time="2025-05-17T00:08:04.523525182Z" level=info msg="RemoveContainer for \"8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247\" returns successfully" May 17 00:08:04.525190 kubelet[3512]: I0517 00:08:04.525022 3512 scope.go:117] 
"RemoveContainer" containerID="9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d" May 17 00:08:04.526068 containerd[2020]: time="2025-05-17T00:08:04.525994218Z" level=error msg="ContainerStatus for \"9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d\": not found" May 17 00:08:04.527812 kubelet[3512]: E0517 00:08:04.526950 3512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d\": not found" containerID="9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d" May 17 00:08:04.527812 kubelet[3512]: I0517 00:08:04.527004 3512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d"} err="failed to get container status \"9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d\": rpc error: code = NotFound desc = an error occurred when try to find container \"9377c9e0f8344540302c3b87f3e7b1ce78c7e67b9cfc609418bbd4086ceffa8d\": not found" May 17 00:08:04.527812 kubelet[3512]: I0517 00:08:04.527121 3512 scope.go:117] "RemoveContainer" containerID="d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a" May 17 00:08:04.528069 containerd[2020]: time="2025-05-17T00:08:04.527677242Z" level=error msg="ContainerStatus for \"d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a\": not found" May 17 00:08:04.528153 kubelet[3512]: E0517 00:08:04.527931 3512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a\": not found" containerID="d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a" May 17 00:08:04.528153 kubelet[3512]: I0517 00:08:04.527978 3512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a"} err="failed to get container status \"d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d63e6f7fd8fc7497c24518f81ccb13133c2588f5c703d54342908161236c6b1a\": not found" May 17 00:08:04.528153 kubelet[3512]: I0517 00:08:04.528012 3512 scope.go:117] "RemoveContainer" containerID="7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7" May 17 00:08:04.528895 containerd[2020]: time="2025-05-17T00:08:04.528740658Z" level=error msg="ContainerStatus for \"7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7\": not found" May 17 00:08:04.529129 kubelet[3512]: E0517 00:08:04.529030 3512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7\": not found" containerID="7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7" May 17 00:08:04.529129 kubelet[3512]: I0517 00:08:04.529074 3512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7"} err="failed to get container status \"7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"7463971aaddb29df7a21e8c8f3499e9df21f6c7a2188ad210101c51327b672f7\": not found" May 17 00:08:04.529129 kubelet[3512]: I0517 00:08:04.529109 3512 scope.go:117] "RemoveContainer" containerID="2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0" May 17 00:08:04.529954 containerd[2020]: time="2025-05-17T00:08:04.529888446Z" level=error msg="ContainerStatus for \"2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0\": not found" May 17 00:08:04.530844 kubelet[3512]: E0517 00:08:04.530596 3512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0\": not found" containerID="2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0" May 17 00:08:04.530844 kubelet[3512]: I0517 00:08:04.530651 3512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0"} err="failed to get container status \"2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0\": rpc error: code = NotFound desc = an error occurred when try to find container \"2f9961ed93a85db0fd4a3371520250868790b06a3e1ec25185d9f2342df3f7f0\": not found" May 17 00:08:04.530844 kubelet[3512]: I0517 00:08:04.530698 3512 scope.go:117] "RemoveContainer" containerID="8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247" May 17 00:08:04.531743 kubelet[3512]: E0517 00:08:04.531700 3512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247\": not found" 
containerID="8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247" May 17 00:08:04.531824 containerd[2020]: time="2025-05-17T00:08:04.531437286Z" level=error msg="ContainerStatus for \"8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247\": not found" May 17 00:08:04.531895 kubelet[3512]: I0517 00:08:04.531746 3512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247"} err="failed to get container status \"8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247\": rpc error: code = NotFound desc = an error occurred when try to find container \"8959c2ea90df3fa31a9889d3eb397d78426d762f7d3b386e8e6fcca6bbffa247\": not found" May 17 00:08:04.531895 kubelet[3512]: I0517 00:08:04.531783 3512 scope.go:117] "RemoveContainer" containerID="03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3" May 17 00:08:04.535223 containerd[2020]: time="2025-05-17T00:08:04.535175754Z" level=info msg="RemoveContainer for \"03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3\"" May 17 00:08:04.541076 containerd[2020]: time="2025-05-17T00:08:04.541025382Z" level=info msg="RemoveContainer for \"03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3\" returns successfully" May 17 00:08:04.541545 kubelet[3512]: I0517 00:08:04.541513 3512 scope.go:117] "RemoveContainer" containerID="03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3" May 17 00:08:04.542110 containerd[2020]: time="2025-05-17T00:08:04.542051910Z" level=error msg="ContainerStatus for \"03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3\": not found" May 17 00:08:04.542445 kubelet[3512]: E0517 00:08:04.542368 3512 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3\": not found" containerID="03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3" May 17 00:08:04.542540 kubelet[3512]: I0517 00:08:04.542440 3512 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3"} err="failed to get container status \"03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3\": rpc error: code = NotFound desc = an error occurred when try to find container \"03783ce6fca6392d1fab1fe986ec582349a3669e04cb2d8620f032f371ab1bf3\": not found" May 17 00:08:04.909286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5-rootfs.mount: Deactivated successfully. May 17 00:08:04.909476 systemd[1]: var-lib-kubelet-pods-e7a15507\x2d293b\x2d4b64\x2d9a85\x2d6b7691d993b0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:08:04.909627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7-rootfs.mount: Deactivated successfully. May 17 00:08:04.909764 systemd[1]: var-lib-kubelet-pods-673ed8d9\x2d444e\x2d46ba\x2db658\x2dec110b324f30-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djkdth.mount: Deactivated successfully. May 17 00:08:04.909897 systemd[1]: var-lib-kubelet-pods-e7a15507\x2d293b\x2d4b64\x2d9a85\x2d6b7691d993b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvmsws.mount: Deactivated successfully. 
May 17 00:08:04.910032 systemd[1]: var-lib-kubelet-pods-e7a15507\x2d293b\x2d4b64\x2d9a85\x2d6b7691d993b0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 17 00:08:04.957828 kubelet[3512]: I0517 00:08:04.957759 3512 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="673ed8d9-444e-46ba-b658-ec110b324f30" path="/var/lib/kubelet/pods/673ed8d9-444e-46ba-b658-ec110b324f30/volumes" May 17 00:08:04.958800 kubelet[3512]: I0517 00:08:04.958752 3512 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7a15507-293b-4b64-9a85-6b7691d993b0" path="/var/lib/kubelet/pods/e7a15507-293b-4b64-9a85-6b7691d993b0/volumes" May 17 00:08:05.826929 sshd[5132]: pam_unix(sshd:session): session closed for user core May 17 00:08:05.831896 systemd-logind[2003]: Session 28 logged out. Waiting for processes to exit. May 17 00:08:05.832735 systemd[1]: sshd@27-172.31.24.47:22-139.178.89.65:50398.service: Deactivated successfully. May 17 00:08:05.837682 systemd[1]: session-28.scope: Deactivated successfully. May 17 00:08:05.838268 systemd[1]: session-28.scope: Consumed 1.464s CPU time. May 17 00:08:05.842510 systemd-logind[2003]: Removed session 28. May 17 00:08:05.871972 systemd[1]: Started sshd@28-172.31.24.47:22-139.178.89.65:50406.service - OpenSSH per-connection server daemon (139.178.89.65:50406). May 17 00:08:06.041811 sshd[5296]: Accepted publickey for core from 139.178.89.65 port 50406 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:08:06.044458 sshd[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:06.052369 systemd-logind[2003]: New session 29 of user core. May 17 00:08:06.064537 systemd[1]: Started session-29.scope - Session 29 of User core. 
May 17 00:08:06.339546 ntpd[1995]: Deleting interface #12 lxc_health, fe80::c023:31ff:fe78:5a19%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs May 17 00:08:06.340051 ntpd[1995]: 17 May 00:08:06 ntpd[1995]: Deleting interface #12 lxc_health, fe80::c023:31ff:fe78:5a19%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs May 17 00:08:08.193989 kubelet[3512]: I0517 00:08:08.193573 3512 memory_manager.go:355] "RemoveStaleState removing state" podUID="673ed8d9-444e-46ba-b658-ec110b324f30" containerName="cilium-operator" May 17 00:08:08.193989 kubelet[3512]: I0517 00:08:08.193624 3512 memory_manager.go:355] "RemoveStaleState removing state" podUID="e7a15507-293b-4b64-9a85-6b7691d993b0" containerName="cilium-agent" May 17 00:08:08.194547 sshd[5296]: pam_unix(sshd:session): session closed for user core May 17 00:08:08.208184 systemd[1]: sshd@28-172.31.24.47:22-139.178.89.65:50406.service: Deactivated successfully. May 17 00:08:08.217025 systemd[1]: session-29.scope: Deactivated successfully. May 17 00:08:08.218459 systemd[1]: session-29.scope: Consumed 1.902s CPU time. May 17 00:08:08.242016 systemd-logind[2003]: Session 29 logged out. Waiting for processes to exit. May 17 00:08:08.253807 systemd[1]: Started sshd@29-172.31.24.47:22-139.178.89.65:49514.service - OpenSSH per-connection server daemon (139.178.89.65:49514). May 17 00:08:08.258583 systemd-logind[2003]: Removed session 29. May 17 00:08:08.267854 systemd[1]: Created slice kubepods-burstable-pod445e2a26_e6c8_44ae_86fa_8c657b5259d3.slice - libcontainer container kubepods-burstable-pod445e2a26_e6c8_44ae_86fa_8c657b5259d3.slice. 
May 17 00:08:08.297318 kubelet[3512]: I0517 00:08:08.297239 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/445e2a26-e6c8-44ae-86fa-8c657b5259d3-xtables-lock\") pod \"cilium-mxqmq\" (UID: \"445e2a26-e6c8-44ae-86fa-8c657b5259d3\") " pod="kube-system/cilium-mxqmq" May 17 00:08:08.297933 kubelet[3512]: I0517 00:08:08.297892 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/445e2a26-e6c8-44ae-86fa-8c657b5259d3-cilium-config-path\") pod \"cilium-mxqmq\" (UID: \"445e2a26-e6c8-44ae-86fa-8c657b5259d3\") " pod="kube-system/cilium-mxqmq" May 17 00:08:08.298172 kubelet[3512]: I0517 00:08:08.298126 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/445e2a26-e6c8-44ae-86fa-8c657b5259d3-cni-path\") pod \"cilium-mxqmq\" (UID: \"445e2a26-e6c8-44ae-86fa-8c657b5259d3\") " pod="kube-system/cilium-mxqmq" May 17 00:08:08.298514 kubelet[3512]: I0517 00:08:08.298474 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/445e2a26-e6c8-44ae-86fa-8c657b5259d3-cilium-run\") pod \"cilium-mxqmq\" (UID: \"445e2a26-e6c8-44ae-86fa-8c657b5259d3\") " pod="kube-system/cilium-mxqmq" May 17 00:08:08.298756 kubelet[3512]: I0517 00:08:08.298551 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/445e2a26-e6c8-44ae-86fa-8c657b5259d3-bpf-maps\") pod \"cilium-mxqmq\" (UID: \"445e2a26-e6c8-44ae-86fa-8c657b5259d3\") " pod="kube-system/cilium-mxqmq" May 17 00:08:08.298756 kubelet[3512]: I0517 00:08:08.298602 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/445e2a26-e6c8-44ae-86fa-8c657b5259d3-hostproc\") pod \"cilium-mxqmq\" (UID: \"445e2a26-e6c8-44ae-86fa-8c657b5259d3\") " pod="kube-system/cilium-mxqmq" May 17 00:08:08.298756 kubelet[3512]: I0517 00:08:08.298683 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/445e2a26-e6c8-44ae-86fa-8c657b5259d3-host-proc-sys-net\") pod \"cilium-mxqmq\" (UID: \"445e2a26-e6c8-44ae-86fa-8c657b5259d3\") " pod="kube-system/cilium-mxqmq" May 17 00:08:08.299003 kubelet[3512]: I0517 00:08:08.298764 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/445e2a26-e6c8-44ae-86fa-8c657b5259d3-lib-modules\") pod \"cilium-mxqmq\" (UID: \"445e2a26-e6c8-44ae-86fa-8c657b5259d3\") " pod="kube-system/cilium-mxqmq" May 17 00:08:08.299003 kubelet[3512]: I0517 00:08:08.298803 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/445e2a26-e6c8-44ae-86fa-8c657b5259d3-cilium-ipsec-secrets\") pod \"cilium-mxqmq\" (UID: \"445e2a26-e6c8-44ae-86fa-8c657b5259d3\") " pod="kube-system/cilium-mxqmq" May 17 00:08:08.299003 kubelet[3512]: I0517 00:08:08.298847 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/445e2a26-e6c8-44ae-86fa-8c657b5259d3-host-proc-sys-kernel\") pod \"cilium-mxqmq\" (UID: \"445e2a26-e6c8-44ae-86fa-8c657b5259d3\") " pod="kube-system/cilium-mxqmq" May 17 00:08:08.299003 kubelet[3512]: I0517 00:08:08.298886 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/445e2a26-e6c8-44ae-86fa-8c657b5259d3-cilium-cgroup\") pod \"cilium-mxqmq\" (UID: \"445e2a26-e6c8-44ae-86fa-8c657b5259d3\") " pod="kube-system/cilium-mxqmq" May 17 00:08:08.299003 kubelet[3512]: I0517 00:08:08.298923 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/445e2a26-e6c8-44ae-86fa-8c657b5259d3-clustermesh-secrets\") pod \"cilium-mxqmq\" (UID: \"445e2a26-e6c8-44ae-86fa-8c657b5259d3\") " pod="kube-system/cilium-mxqmq" May 17 00:08:08.299311 kubelet[3512]: I0517 00:08:08.298961 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9vwh\" (UniqueName: \"kubernetes.io/projected/445e2a26-e6c8-44ae-86fa-8c657b5259d3-kube-api-access-j9vwh\") pod \"cilium-mxqmq\" (UID: \"445e2a26-e6c8-44ae-86fa-8c657b5259d3\") " pod="kube-system/cilium-mxqmq" May 17 00:08:08.299311 kubelet[3512]: I0517 00:08:08.299003 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/445e2a26-e6c8-44ae-86fa-8c657b5259d3-etc-cni-netd\") pod \"cilium-mxqmq\" (UID: \"445e2a26-e6c8-44ae-86fa-8c657b5259d3\") " pod="kube-system/cilium-mxqmq" May 17 00:08:08.299311 kubelet[3512]: I0517 00:08:08.299038 3512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/445e2a26-e6c8-44ae-86fa-8c657b5259d3-hubble-tls\") pod \"cilium-mxqmq\" (UID: \"445e2a26-e6c8-44ae-86fa-8c657b5259d3\") " pod="kube-system/cilium-mxqmq" May 17 00:08:08.460358 sshd[5308]: Accepted publickey for core from 139.178.89.65 port 49514 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:08:08.463610 sshd[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:08.471595 
systemd-logind[2003]: New session 30 of user core. May 17 00:08:08.478531 systemd[1]: Started session-30.scope - Session 30 of User core. May 17 00:08:08.579699 containerd[2020]: time="2025-05-17T00:08:08.578687278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mxqmq,Uid:445e2a26-e6c8-44ae-86fa-8c657b5259d3,Namespace:kube-system,Attempt:0,}" May 17 00:08:08.606681 sshd[5308]: pam_unix(sshd:session): session closed for user core May 17 00:08:08.617736 systemd[1]: sshd@29-172.31.24.47:22-139.178.89.65:49514.service: Deactivated successfully. May 17 00:08:08.624722 systemd[1]: session-30.scope: Deactivated successfully. May 17 00:08:08.627010 systemd-logind[2003]: Session 30 logged out. Waiting for processes to exit. May 17 00:08:08.634428 containerd[2020]: time="2025-05-17T00:08:08.634229686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:08:08.635657 containerd[2020]: time="2025-05-17T00:08:08.635212114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:08:08.637344 containerd[2020]: time="2025-05-17T00:08:08.635849026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:08:08.637344 containerd[2020]: time="2025-05-17T00:08:08.636039082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:08:08.648416 systemd[1]: Started sshd@30-172.31.24.47:22-139.178.89.65:49524.service - OpenSSH per-connection server daemon (139.178.89.65:49524). May 17 00:08:08.650672 systemd-logind[2003]: Removed session 30. 
May 17 00:08:08.678243 systemd[1]: Started cri-containerd-ea8a070e2067123e56dd5d72c20e1d1a0e47e4dc8748aca4deb6cdf650a73197.scope - libcontainer container ea8a070e2067123e56dd5d72c20e1d1a0e47e4dc8748aca4deb6cdf650a73197. May 17 00:08:08.727995 containerd[2020]: time="2025-05-17T00:08:08.727783799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mxqmq,Uid:445e2a26-e6c8-44ae-86fa-8c657b5259d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea8a070e2067123e56dd5d72c20e1d1a0e47e4dc8748aca4deb6cdf650a73197\"" May 17 00:08:08.735199 containerd[2020]: time="2025-05-17T00:08:08.734787731Z" level=info msg="CreateContainer within sandbox \"ea8a070e2067123e56dd5d72c20e1d1a0e47e4dc8748aca4deb6cdf650a73197\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 17 00:08:08.760746 containerd[2020]: time="2025-05-17T00:08:08.760665647Z" level=info msg="CreateContainer within sandbox \"ea8a070e2067123e56dd5d72c20e1d1a0e47e4dc8748aca4deb6cdf650a73197\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1b0e70ccd6798a8e402d85089d2996be284cf14d26f4f418ecee33e02af5520a\"" May 17 00:08:08.763336 containerd[2020]: time="2025-05-17T00:08:08.761879315Z" level=info msg="StartContainer for \"1b0e70ccd6798a8e402d85089d2996be284cf14d26f4f418ecee33e02af5520a\"" May 17 00:08:08.808892 systemd[1]: Started cri-containerd-1b0e70ccd6798a8e402d85089d2996be284cf14d26f4f418ecee33e02af5520a.scope - libcontainer container 1b0e70ccd6798a8e402d85089d2996be284cf14d26f4f418ecee33e02af5520a. 
May 17 00:08:08.849611 sshd[5340]: Accepted publickey for core from 139.178.89.65 port 49524 ssh2: RSA SHA256:eHxhS8dryg5cUeSGvw1rEy5lVBwPN/UG03VmUnZDeRM May 17 00:08:08.854388 sshd[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:08:08.863206 containerd[2020]: time="2025-05-17T00:08:08.863008680Z" level=info msg="StartContainer for \"1b0e70ccd6798a8e402d85089d2996be284cf14d26f4f418ecee33e02af5520a\" returns successfully" May 17 00:08:08.867313 systemd-logind[2003]: New session 31 of user core. May 17 00:08:08.876585 systemd[1]: Started session-31.scope - Session 31 of User core. May 17 00:08:08.884799 systemd[1]: cri-containerd-1b0e70ccd6798a8e402d85089d2996be284cf14d26f4f418ecee33e02af5520a.scope: Deactivated successfully. May 17 00:08:08.915556 containerd[2020]: time="2025-05-17T00:08:08.914875644Z" level=info msg="StopPodSandbox for \"8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7\"" May 17 00:08:08.915709 containerd[2020]: time="2025-05-17T00:08:08.915643284Z" level=info msg="TearDown network for sandbox \"8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7\" successfully" May 17 00:08:08.915709 containerd[2020]: time="2025-05-17T00:08:08.915671220Z" level=info msg="StopPodSandbox for \"8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7\" returns successfully" May 17 00:08:08.917634 containerd[2020]: time="2025-05-17T00:08:08.916679772Z" level=info msg="RemovePodSandbox for \"8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7\"" May 17 00:08:08.917634 containerd[2020]: time="2025-05-17T00:08:08.916763700Z" level=info msg="Forcibly stopping sandbox \"8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7\"" May 17 00:08:08.917634 containerd[2020]: time="2025-05-17T00:08:08.916931196Z" level=info msg="TearDown network for sandbox \"8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7\" successfully" May 17 00:08:08.939007 
containerd[2020]: time="2025-05-17T00:08:08.938846868Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 17 00:08:08.939007 containerd[2020]: time="2025-05-17T00:08:08.938988852Z" level=info msg="RemovePodSandbox \"8fc98ee84d16a5ed467bbdb1f6e68c1fef9ea1c1601349afe76c10e7138b14e7\" returns successfully" May 17 00:08:08.940323 containerd[2020]: time="2025-05-17T00:08:08.939984504Z" level=info msg="StopPodSandbox for \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\"" May 17 00:08:08.940323 containerd[2020]: time="2025-05-17T00:08:08.940118196Z" level=info msg="TearDown network for sandbox \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" successfully" May 17 00:08:08.940323 containerd[2020]: time="2025-05-17T00:08:08.940141872Z" level=info msg="StopPodSandbox for \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" returns successfully" May 17 00:08:08.942587 containerd[2020]: time="2025-05-17T00:08:08.941080572Z" level=info msg="RemovePodSandbox for \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\"" May 17 00:08:08.942587 containerd[2020]: time="2025-05-17T00:08:08.941149368Z" level=info msg="Forcibly stopping sandbox \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\"" May 17 00:08:08.942587 containerd[2020]: time="2025-05-17T00:08:08.941301960Z" level=info msg="TearDown network for sandbox \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" successfully" May 17 00:08:08.952297 containerd[2020]: time="2025-05-17T00:08:08.952198944Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus."
May 17 00:08:08.952564 containerd[2020]: time="2025-05-17T00:08:08.952531704Z" level=info msg="RemovePodSandbox \"760288ee65d6103eba124d82563853aa5a809c0512a43f44d51fba3b7bb7d2f5\" returns successfully"
May 17 00:08:08.967888 containerd[2020]: time="2025-05-17T00:08:08.967795116Z" level=info msg="shim disconnected" id=1b0e70ccd6798a8e402d85089d2996be284cf14d26f4f418ecee33e02af5520a namespace=k8s.io
May 17 00:08:08.968238 containerd[2020]: time="2025-05-17T00:08:08.968189460Z" level=warning msg="cleaning up after shim disconnected" id=1b0e70ccd6798a8e402d85089d2996be284cf14d26f4f418ecee33e02af5520a namespace=k8s.io
May 17 00:08:08.968622 containerd[2020]: time="2025-05-17T00:08:08.968445720Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:08:09.183576 kubelet[3512]: E0517 00:08:09.183421 3512 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:08:09.481487 containerd[2020]: time="2025-05-17T00:08:09.481413311Z" level=info msg="CreateContainer within sandbox \"ea8a070e2067123e56dd5d72c20e1d1a0e47e4dc8748aca4deb6cdf650a73197\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:08:09.514708 containerd[2020]: time="2025-05-17T00:08:09.513469715Z" level=info msg="CreateContainer within sandbox \"ea8a070e2067123e56dd5d72c20e1d1a0e47e4dc8748aca4deb6cdf650a73197\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"76b14d5f6fd5ad015175f8a52dfbe21215d71b31649cb248020d61be8ec4e93a\""
May 17 00:08:09.515759 containerd[2020]: time="2025-05-17T00:08:09.515696279Z" level=info msg="StartContainer for \"76b14d5f6fd5ad015175f8a52dfbe21215d71b31649cb248020d61be8ec4e93a\""
May 17 00:08:09.576558 systemd[1]: Started cri-containerd-76b14d5f6fd5ad015175f8a52dfbe21215d71b31649cb248020d61be8ec4e93a.scope - libcontainer container 76b14d5f6fd5ad015175f8a52dfbe21215d71b31649cb248020d61be8ec4e93a.
May 17 00:08:09.626488 containerd[2020]: time="2025-05-17T00:08:09.626354519Z" level=info msg="StartContainer for \"76b14d5f6fd5ad015175f8a52dfbe21215d71b31649cb248020d61be8ec4e93a\" returns successfully"
May 17 00:08:09.640514 systemd[1]: cri-containerd-76b14d5f6fd5ad015175f8a52dfbe21215d71b31649cb248020d61be8ec4e93a.scope: Deactivated successfully.
May 17 00:08:09.687721 containerd[2020]: time="2025-05-17T00:08:09.687349224Z" level=info msg="shim disconnected" id=76b14d5f6fd5ad015175f8a52dfbe21215d71b31649cb248020d61be8ec4e93a namespace=k8s.io
May 17 00:08:09.687721 containerd[2020]: time="2025-05-17T00:08:09.687428928Z" level=warning msg="cleaning up after shim disconnected" id=76b14d5f6fd5ad015175f8a52dfbe21215d71b31649cb248020d61be8ec4e93a namespace=k8s.io
May 17 00:08:09.687721 containerd[2020]: time="2025-05-17T00:08:09.687456468Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:08:10.414394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76b14d5f6fd5ad015175f8a52dfbe21215d71b31649cb248020d61be8ec4e93a-rootfs.mount: Deactivated successfully.
May 17 00:08:10.487303 containerd[2020]: time="2025-05-17T00:08:10.487216908Z" level=info msg="CreateContainer within sandbox \"ea8a070e2067123e56dd5d72c20e1d1a0e47e4dc8748aca4deb6cdf650a73197\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:08:10.532483 containerd[2020]: time="2025-05-17T00:08:10.531541572Z" level=info msg="CreateContainer within sandbox \"ea8a070e2067123e56dd5d72c20e1d1a0e47e4dc8748aca4deb6cdf650a73197\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fa92c65c24d0fbd382f30bb8c97a62b8a13a9a6895c17c7e554d46c4b6bc81cf\""
May 17 00:08:10.536927 containerd[2020]: time="2025-05-17T00:08:10.535889508Z" level=info msg="StartContainer for \"fa92c65c24d0fbd382f30bb8c97a62b8a13a9a6895c17c7e554d46c4b6bc81cf\""
May 17 00:08:10.596724 systemd[1]: Started cri-containerd-fa92c65c24d0fbd382f30bb8c97a62b8a13a9a6895c17c7e554d46c4b6bc81cf.scope - libcontainer container fa92c65c24d0fbd382f30bb8c97a62b8a13a9a6895c17c7e554d46c4b6bc81cf.
May 17 00:08:10.649382 containerd[2020]: time="2025-05-17T00:08:10.649312740Z" level=info msg="StartContainer for \"fa92c65c24d0fbd382f30bb8c97a62b8a13a9a6895c17c7e554d46c4b6bc81cf\" returns successfully"
May 17 00:08:10.652545 systemd[1]: cri-containerd-fa92c65c24d0fbd382f30bb8c97a62b8a13a9a6895c17c7e554d46c4b6bc81cf.scope: Deactivated successfully.
May 17 00:08:10.699949 containerd[2020]: time="2025-05-17T00:08:10.699727417Z" level=info msg="shim disconnected" id=fa92c65c24d0fbd382f30bb8c97a62b8a13a9a6895c17c7e554d46c4b6bc81cf namespace=k8s.io
May 17 00:08:10.699949 containerd[2020]: time="2025-05-17T00:08:10.699802837Z" level=warning msg="cleaning up after shim disconnected" id=fa92c65c24d0fbd382f30bb8c97a62b8a13a9a6895c17c7e554d46c4b6bc81cf namespace=k8s.io
May 17 00:08:10.699949 containerd[2020]: time="2025-05-17T00:08:10.699823105Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:08:11.414323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa92c65c24d0fbd382f30bb8c97a62b8a13a9a6895c17c7e554d46c4b6bc81cf-rootfs.mount: Deactivated successfully.
May 17 00:08:11.493550 containerd[2020]: time="2025-05-17T00:08:11.493085233Z" level=info msg="CreateContainer within sandbox \"ea8a070e2067123e56dd5d72c20e1d1a0e47e4dc8748aca4deb6cdf650a73197\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:08:11.524812 containerd[2020]: time="2025-05-17T00:08:11.524567365Z" level=info msg="CreateContainer within sandbox \"ea8a070e2067123e56dd5d72c20e1d1a0e47e4dc8748aca4deb6cdf650a73197\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dd8673b81059a370f91be90d732d5f150882c575c12aba5ef10409cd8f5794a7\""
May 17 00:08:11.526792 containerd[2020]: time="2025-05-17T00:08:11.526726765Z" level=info msg="StartContainer for \"dd8673b81059a370f91be90d732d5f150882c575c12aba5ef10409cd8f5794a7\""
May 17 00:08:11.586584 systemd[1]: Started cri-containerd-dd8673b81059a370f91be90d732d5f150882c575c12aba5ef10409cd8f5794a7.scope - libcontainer container dd8673b81059a370f91be90d732d5f150882c575c12aba5ef10409cd8f5794a7.
May 17 00:08:11.626562 systemd[1]: cri-containerd-dd8673b81059a370f91be90d732d5f150882c575c12aba5ef10409cd8f5794a7.scope: Deactivated successfully.
May 17 00:08:11.642567 containerd[2020]: time="2025-05-17T00:08:11.642427909Z" level=info msg="StartContainer for \"dd8673b81059a370f91be90d732d5f150882c575c12aba5ef10409cd8f5794a7\" returns successfully"
May 17 00:08:11.675839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd8673b81059a370f91be90d732d5f150882c575c12aba5ef10409cd8f5794a7-rootfs.mount: Deactivated successfully.
May 17 00:08:11.686670 containerd[2020]: time="2025-05-17T00:08:11.686579402Z" level=info msg="shim disconnected" id=dd8673b81059a370f91be90d732d5f150882c575c12aba5ef10409cd8f5794a7 namespace=k8s.io
May 17 00:08:11.686670 containerd[2020]: time="2025-05-17T00:08:11.686660450Z" level=warning msg="cleaning up after shim disconnected" id=dd8673b81059a370f91be90d732d5f150882c575c12aba5ef10409cd8f5794a7 namespace=k8s.io
May 17 00:08:11.686670 containerd[2020]: time="2025-05-17T00:08:11.686682650Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:08:11.692185 kubelet[3512]: I0517 00:08:11.692019 3512 setters.go:602] "Node became not ready" node="ip-172-31-24-47" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:08:11Z","lastTransitionTime":"2025-05-17T00:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 17 00:08:12.510968 containerd[2020]: time="2025-05-17T00:08:12.510905858Z" level=info msg="CreateContainer within sandbox \"ea8a070e2067123e56dd5d72c20e1d1a0e47e4dc8748aca4deb6cdf650a73197\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:08:12.549538 containerd[2020]: time="2025-05-17T00:08:12.549459002Z" level=info msg="CreateContainer within sandbox \"ea8a070e2067123e56dd5d72c20e1d1a0e47e4dc8748aca4deb6cdf650a73197\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8aa866a04fd15202ff76d0216879e29123a4f97785a55ecb7e7b99afc15db6a6\""
May 17 00:08:12.550704 containerd[2020]: time="2025-05-17T00:08:12.550476506Z" level=info msg="StartContainer for \"8aa866a04fd15202ff76d0216879e29123a4f97785a55ecb7e7b99afc15db6a6\""
May 17 00:08:12.606564 systemd[1]: Started cri-containerd-8aa866a04fd15202ff76d0216879e29123a4f97785a55ecb7e7b99afc15db6a6.scope - libcontainer container 8aa866a04fd15202ff76d0216879e29123a4f97785a55ecb7e7b99afc15db6a6.
May 17 00:08:12.663990 containerd[2020]: time="2025-05-17T00:08:12.663886430Z" level=info msg="StartContainer for \"8aa866a04fd15202ff76d0216879e29123a4f97785a55ecb7e7b99afc15db6a6\" returns successfully"
May 17 00:08:13.477298 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 17 00:08:13.548298 kubelet[3512]: I0517 00:08:13.547583 3512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mxqmq" podStartSLOduration=5.547559991 podStartE2EDuration="5.547559991s" podCreationTimestamp="2025-05-17 00:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:08:13.545890395 +0000 UTC m=+124.886413449" watchObservedRunningTime="2025-05-17 00:08:13.547559991 +0000 UTC m=+124.888082973"
May 17 00:08:15.373750 systemd[1]: run-containerd-runc-k8s.io-8aa866a04fd15202ff76d0216879e29123a4f97785a55ecb7e7b99afc15db6a6-runc.kXf5Qn.mount: Deactivated successfully.
May 17 00:08:17.807380 systemd-networkd[1941]: lxc_health: Link UP
May 17 00:08:17.820916 (udev-worker)[6158]: Network interface NamePolicy= disabled on kernel command line.
May 17 00:08:17.821129 systemd-networkd[1941]: lxc_health: Gained carrier
May 17 00:08:19.428496 systemd-networkd[1941]: lxc_health: Gained IPv6LL
May 17 00:08:19.944424 systemd[1]: run-containerd-runc-k8s.io-8aa866a04fd15202ff76d0216879e29123a4f97785a55ecb7e7b99afc15db6a6-runc.RGd0VB.mount: Deactivated successfully.
May 17 00:08:22.339581 ntpd[1995]: Listen normally on 15 lxc_health [fe80::9c03:4eff:fe53:219f%14]:123
May 17 00:08:22.341174 ntpd[1995]: 17 May 00:08:22 ntpd[1995]: Listen normally on 15 lxc_health [fe80::9c03:4eff:fe53:219f%14]:123
May 17 00:08:22.360506 kubelet[3512]: E0517 00:08:22.360294 3512 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50066->127.0.0.1:41271: write tcp 127.0.0.1:50066->127.0.0.1:41271: write: broken pipe
May 17 00:08:24.689601 sshd[5340]: pam_unix(sshd:session): session closed for user core
May 17 00:08:24.696649 systemd[1]: sshd@30-172.31.24.47:22-139.178.89.65:49524.service: Deactivated successfully.
May 17 00:08:24.703618 systemd[1]: session-31.scope: Deactivated successfully.
May 17 00:08:24.710133 systemd-logind[2003]: Session 31 logged out. Waiting for processes to exit.
May 17 00:08:24.712762 systemd-logind[2003]: Removed session 31.