May 8 23:53:00.167255 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
May 8 23:53:00.167301 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu May 8 22:24:27 -00 2025
May 8 23:53:00.167325 kernel: KASLR disabled due to lack of seed
May 8 23:53:00.167341 kernel: efi: EFI v2.7 by EDK II
May 8 23:53:00.167357 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b000a98 MEMRESERVE=0x78503d98
May 8 23:53:00.167372 kernel: secureboot: Secure boot disabled
May 8 23:53:00.167389 kernel: ACPI: Early table checksum verification disabled
May 8 23:53:00.167404 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
May 8 23:53:00.167420 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
May 8 23:53:00.167435 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 8 23:53:00.167455 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
May 8 23:53:00.167470 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 8 23:53:00.167485 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
May 8 23:53:00.169609 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
May 8 23:53:00.169868 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
May 8 23:53:00.169895 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 8 23:53:00.169912 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
May 8 23:53:00.169929 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
May 8 23:53:00.169945 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
May 8 23:53:00.169961 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
May 8 23:53:00.169977 kernel: printk: bootconsole [uart0] enabled
May 8 23:53:00.169993 kernel: NUMA: Failed to initialise from firmware
May 8 23:53:00.170010 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
May 8 23:53:00.170026 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
May 8 23:53:00.170042 kernel: Zone ranges:
May 8 23:53:00.170059 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 8 23:53:00.170080 kernel: DMA32 empty
May 8 23:53:00.170097 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
May 8 23:53:00.170113 kernel: Movable zone start for each node
May 8 23:53:00.172534 kernel: Early memory node ranges
May 8 23:53:00.172888 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
May 8 23:53:00.173035 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
May 8 23:53:00.173080 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
May 8 23:53:00.173101 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
May 8 23:53:00.173117 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
May 8 23:53:00.173173 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
May 8 23:53:00.173190 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
May 8 23:53:00.173206 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
May 8 23:53:00.173231 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
May 8 23:53:00.173247 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
May 8 23:53:00.173270 kernel: psci: probing for conduit method from ACPI.
May 8 23:53:00.173287 kernel: psci: PSCIv1.0 detected in firmware.
May 8 23:53:00.173304 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 23:53:00.173325 kernel: psci: Trusted OS migration not required
May 8 23:53:00.173343 kernel: psci: SMC Calling Convention v1.1
May 8 23:53:00.173360 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 8 23:53:00.173376 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 8 23:53:00.173393 kernel: pcpu-alloc: [0] 0 [0] 1
May 8 23:53:00.173410 kernel: Detected PIPT I-cache on CPU0
May 8 23:53:00.173426 kernel: CPU features: detected: GIC system register CPU interface
May 8 23:53:00.173443 kernel: CPU features: detected: Spectre-v2
May 8 23:53:00.173460 kernel: CPU features: detected: Spectre-v3a
May 8 23:53:00.173476 kernel: CPU features: detected: Spectre-BHB
May 8 23:53:00.173493 kernel: CPU features: detected: ARM erratum 1742098
May 8 23:53:00.173509 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
May 8 23:53:00.173530 kernel: alternatives: applying boot alternatives
May 8 23:53:00.173549 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c64a0b436b1966f9e1b9e71c914f0e311fc31b586ad91dbeab7146e426399a98
May 8 23:53:00.173568 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 23:53:00.173585 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 23:53:00.173601 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 23:53:00.173618 kernel: Fallback order for Node 0: 0
May 8 23:53:00.173635 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
May 8 23:53:00.173651 kernel: Policy zone: Normal
May 8 23:53:00.173668 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 23:53:00.173684 kernel: software IO TLB: area num 2.
May 8 23:53:00.173705 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
May 8 23:53:00.173723 kernel: Memory: 3819896K/4030464K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 210568K reserved, 0K cma-reserved)
May 8 23:53:00.173740 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 8 23:53:00.173757 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 23:53:00.173775 kernel: rcu: RCU event tracing is enabled.
May 8 23:53:00.173792 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 8 23:53:00.173809 kernel: Trampoline variant of Tasks RCU enabled.
May 8 23:53:00.173826 kernel: Tracing variant of Tasks RCU enabled.
May 8 23:53:00.173843 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 23:53:00.173860 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 8 23:53:00.173876 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 23:53:00.173897 kernel: GICv3: 96 SPIs implemented
May 8 23:53:00.173914 kernel: GICv3: 0 Extended SPIs implemented
May 8 23:53:00.173931 kernel: Root IRQ handler: gic_handle_irq
May 8 23:53:00.173947 kernel: GICv3: GICv3 features: 16 PPIs
May 8 23:53:00.173964 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
May 8 23:53:00.173980 kernel: ITS [mem 0x10080000-0x1009ffff]
May 8 23:53:00.173997 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
May 8 23:53:00.174014 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
May 8 23:53:00.174031 kernel: GICv3: using LPI property table @0x00000004000d0000
May 8 23:53:00.174047 kernel: ITS: Using hypervisor restricted LPI range [128]
May 8 23:53:00.174064 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
May 8 23:53:00.174081 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 8 23:53:00.174102 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
May 8 23:53:00.174119 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
May 8 23:53:00.174177 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
May 8 23:53:00.174195 kernel: Console: colour dummy device 80x25
May 8 23:53:00.174212 kernel: printk: console [tty1] enabled
May 8 23:53:00.174229 kernel: ACPI: Core revision 20230628
May 8 23:53:00.174247 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
May 8 23:53:00.174265 kernel: pid_max: default: 32768 minimum: 301
May 8 23:53:00.174282 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 8 23:53:00.174299 kernel: landlock: Up and running.
May 8 23:53:00.174323 kernel: SELinux: Initializing.
May 8 23:53:00.174340 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 23:53:00.174357 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 23:53:00.174375 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 23:53:00.174392 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 8 23:53:00.174409 kernel: rcu: Hierarchical SRCU implementation.
May 8 23:53:00.174427 kernel: rcu: Max phase no-delay instances is 400.
May 8 23:53:00.174444 kernel: Platform MSI: ITS@0x10080000 domain created
May 8 23:53:00.174466 kernel: PCI/MSI: ITS@0x10080000 domain created
May 8 23:53:00.174483 kernel: Remapping and enabling EFI services.
May 8 23:53:00.174501 kernel: smp: Bringing up secondary CPUs ...
May 8 23:53:00.174518 kernel: Detected PIPT I-cache on CPU1
May 8 23:53:00.174535 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
May 8 23:53:00.174552 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
May 8 23:53:00.174570 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
May 8 23:53:00.174587 kernel: smp: Brought up 1 node, 2 CPUs
May 8 23:53:00.174604 kernel: SMP: Total of 2 processors activated.
May 8 23:53:00.174621 kernel: CPU features: detected: 32-bit EL0 Support
May 8 23:53:00.174642 kernel: CPU features: detected: 32-bit EL1 Support
May 8 23:53:00.174659 kernel: CPU features: detected: CRC32 instructions
May 8 23:53:00.174687 kernel: CPU: All CPU(s) started at EL1
May 8 23:53:00.174709 kernel: alternatives: applying system-wide alternatives
May 8 23:53:00.174727 kernel: devtmpfs: initialized
May 8 23:53:00.174745 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 23:53:00.174763 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 8 23:53:00.174781 kernel: pinctrl core: initialized pinctrl subsystem
May 8 23:53:00.174799 kernel: SMBIOS 3.0.0 present.
May 8 23:53:00.174821 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
May 8 23:53:00.174839 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 23:53:00.174856 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 8 23:53:00.174874 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 8 23:53:00.174892 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 8 23:53:00.174910 kernel: audit: initializing netlink subsys (disabled)
May 8 23:53:00.174928 kernel: audit: type=2000 audit(0.218:1): state=initialized audit_enabled=0 res=1
May 8 23:53:00.174950 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 23:53:00.174968 kernel: cpuidle: using governor menu
May 8 23:53:00.174986 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 8 23:53:00.175003 kernel: ASID allocator initialised with 65536 entries
May 8 23:53:00.175021 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 23:53:00.175039 kernel: Serial: AMBA PL011 UART driver
May 8 23:53:00.175057 kernel: Modules: 17424 pages in range for non-PLT usage
May 8 23:53:00.175075 kernel: Modules: 508944 pages in range for PLT usage
May 8 23:53:00.175093 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 8 23:53:00.175114 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 8 23:53:00.176218 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 8 23:53:00.176244 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 8 23:53:00.176263 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 8 23:53:00.176282 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 8 23:53:00.176300 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 8 23:53:00.176320 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 8 23:53:00.176338 kernel: ACPI: Added _OSI(Module Device)
May 8 23:53:00.176356 kernel: ACPI: Added _OSI(Processor Device)
May 8 23:53:00.176383 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 23:53:00.176401 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 23:53:00.176419 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 23:53:00.176437 kernel: ACPI: Interpreter enabled
May 8 23:53:00.176455 kernel: ACPI: Using GIC for interrupt routing
May 8 23:53:00.176473 kernel: ACPI: MCFG table detected, 1 entries
May 8 23:53:00.176491 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
May 8 23:53:00.176782 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 23:53:00.176991 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 8 23:53:00.177220 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 8 23:53:00.177421 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
May 8 23:53:00.177617 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
May 8 23:53:00.177642 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
May 8 23:53:00.177661 kernel: acpiphp: Slot [1] registered
May 8 23:53:00.177680 kernel: acpiphp: Slot [2] registered
May 8 23:53:00.177698 kernel: acpiphp: Slot [3] registered
May 8 23:53:00.177722 kernel: acpiphp: Slot [4] registered
May 8 23:53:00.177740 kernel: acpiphp: Slot [5] registered
May 8 23:53:00.177757 kernel: acpiphp: Slot [6] registered
May 8 23:53:00.177775 kernel: acpiphp: Slot [7] registered
May 8 23:53:00.177793 kernel: acpiphp: Slot [8] registered
May 8 23:53:00.177811 kernel: acpiphp: Slot [9] registered
May 8 23:53:00.177830 kernel: acpiphp: Slot [10] registered
May 8 23:53:00.177848 kernel: acpiphp: Slot [11] registered
May 8 23:53:00.177867 kernel: acpiphp: Slot [12] registered
May 8 23:53:00.177885 kernel: acpiphp: Slot [13] registered
May 8 23:53:00.177907 kernel: acpiphp: Slot [14] registered
May 8 23:53:00.177926 kernel: acpiphp: Slot [15] registered
May 8 23:53:00.177943 kernel: acpiphp: Slot [16] registered
May 8 23:53:00.177962 kernel: acpiphp: Slot [17] registered
May 8 23:53:00.177980 kernel: acpiphp: Slot [18] registered
May 8 23:53:00.177998 kernel: acpiphp: Slot [19] registered
May 8 23:53:00.178016 kernel: acpiphp: Slot [20] registered
May 8 23:53:00.178033 kernel: acpiphp: Slot [21] registered
May 8 23:53:00.178051 kernel: acpiphp: Slot [22] registered
May 8 23:53:00.178072 kernel: acpiphp: Slot [23] registered
May 8 23:53:00.178093 kernel: acpiphp: Slot [24] registered
May 8 23:53:00.178111 kernel: acpiphp: Slot [25] registered
May 8 23:53:00.180252 kernel: acpiphp: Slot [26] registered
May 8 23:53:00.180285 kernel: acpiphp: Slot [27] registered
May 8 23:53:00.180303 kernel: acpiphp: Slot [28] registered
May 8 23:53:00.180321 kernel: acpiphp: Slot [29] registered
May 8 23:53:00.180339 kernel: acpiphp: Slot [30] registered
May 8 23:53:00.180357 kernel: acpiphp: Slot [31] registered
May 8 23:53:00.180375 kernel: PCI host bridge to bus 0000:00
May 8 23:53:00.180665 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
May 8 23:53:00.180851 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 8 23:53:00.181038 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
May 8 23:53:00.181282 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
May 8 23:53:00.181603 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
May 8 23:53:00.182664 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
May 8 23:53:00.182918 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
May 8 23:53:00.185291 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 8 23:53:00.185580 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
May 8 23:53:00.185807 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
May 8 23:53:00.186058 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 8 23:53:00.189825 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
May 8 23:53:00.190047 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
May 8 23:53:00.190296 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
May 8 23:53:00.190500 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
May 8 23:53:00.190700 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
May 8 23:53:00.190900 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
May 8 23:53:00.191118 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
May 8 23:53:00.193445 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
May 8 23:53:00.193664 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
May 8 23:53:00.193862 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
May 8 23:53:00.194044 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 8 23:53:00.194267 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
May 8 23:53:00.194294 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 8 23:53:00.194314 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 8 23:53:00.194334 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 8 23:53:00.194353 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 8 23:53:00.194372 kernel: iommu: Default domain type: Translated
May 8 23:53:00.194397 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 23:53:00.194416 kernel: efivars: Registered efivars operations
May 8 23:53:00.194434 kernel: vgaarb: loaded
May 8 23:53:00.194452 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 23:53:00.194470 kernel: VFS: Disk quotas dquot_6.6.0
May 8 23:53:00.194488 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 23:53:00.194506 kernel: pnp: PnP ACPI init
May 8 23:53:00.194725 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
May 8 23:53:00.194760 kernel: pnp: PnP ACPI: found 1 devices
May 8 23:53:00.194779 kernel: NET: Registered PF_INET protocol family
May 8 23:53:00.194798 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 23:53:00.194816 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 23:53:00.194835 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 23:53:00.194854 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 23:53:00.194872 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 8 23:53:00.194890 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 23:53:00.194908 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 23:53:00.194931 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 23:53:00.194950 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 23:53:00.194967 kernel: PCI: CLS 0 bytes, default 64
May 8 23:53:00.194986 kernel: kvm [1]: HYP mode not available
May 8 23:53:00.195004 kernel: Initialise system trusted keyrings
May 8 23:53:00.195022 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 23:53:00.195041 kernel: Key type asymmetric registered
May 8 23:53:00.195058 kernel: Asymmetric key parser 'x509' registered
May 8 23:53:00.195076 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 8 23:53:00.195099 kernel: io scheduler mq-deadline registered
May 8 23:53:00.195117 kernel: io scheduler kyber registered
May 8 23:53:00.197267 kernel: io scheduler bfq registered
May 8 23:53:00.197528 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
May 8 23:53:00.197557 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 23:53:00.197576 kernel: ACPI: button: Power Button [PWRB]
May 8 23:53:00.197595 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
May 8 23:53:00.197613 kernel: ACPI: button: Sleep Button [SLPB]
May 8 23:53:00.197640 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 23:53:00.197660 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 8 23:53:00.197867 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
May 8 23:53:00.197894 kernel: printk: console [ttyS0] disabled
May 8 23:53:00.197912 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
May 8 23:53:00.197930 kernel: printk: console [ttyS0] enabled
May 8 23:53:00.197948 kernel: printk: bootconsole [uart0] disabled
May 8 23:53:00.197966 kernel: thunder_xcv, ver 1.0
May 8 23:53:00.197984 kernel: thunder_bgx, ver 1.0
May 8 23:53:00.198002 kernel: nicpf, ver 1.0
May 8 23:53:00.198026 kernel: nicvf, ver 1.0
May 8 23:53:00.198278 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 8 23:53:00.198475 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T23:52:59 UTC (1746748379)
May 8 23:53:00.198503 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 23:53:00.198522 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
May 8 23:53:00.198541 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 8 23:53:00.198559 kernel: watchdog: Hard watchdog permanently disabled
May 8 23:53:00.198584 kernel: NET: Registered PF_INET6 protocol family
May 8 23:53:00.198603 kernel: Segment Routing with IPv6
May 8 23:53:00.198621 kernel: In-situ OAM (IOAM) with IPv6
May 8 23:53:00.198639 kernel: NET: Registered PF_PACKET protocol family
May 8 23:53:00.198657 kernel: Key type dns_resolver registered
May 8 23:53:00.198675 kernel: registered taskstats version 1
May 8 23:53:00.198693 kernel: Loading compiled-in X.509 certificates
May 8 23:53:00.198713 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: c12e278d643ef0ddd9117a97de150d7afa727d1b'
May 8 23:53:00.198731 kernel: Key type .fscrypt registered
May 8 23:53:00.198750 kernel: Key type fscrypt-provisioning registered
May 8 23:53:00.198773 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 23:53:00.198792 kernel: ima: Allocated hash algorithm: sha1
May 8 23:53:00.198811 kernel: ima: No architecture policies found
May 8 23:53:00.198829 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 23:53:00.198849 kernel: clk: Disabling unused clocks
May 8 23:53:00.198869 kernel: Freeing unused kernel memory: 39744K
May 8 23:53:00.198887 kernel: Run /init as init process
May 8 23:53:00.198905 kernel: with arguments:
May 8 23:53:00.198924 kernel: /init
May 8 23:53:00.198948 kernel: with environment:
May 8 23:53:00.198966 kernel: HOME=/
May 8 23:53:00.198985 kernel: TERM=linux
May 8 23:53:00.199004 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 23:53:00.199027 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 23:53:00.199053 systemd[1]: Detected virtualization amazon.
May 8 23:53:00.199074 systemd[1]: Detected architecture arm64.
May 8 23:53:00.199101 systemd[1]: Running in initrd.
May 8 23:53:00.201167 systemd[1]: No hostname configured, using default hostname.
May 8 23:53:00.201219 systemd[1]: Hostname set to .
May 8 23:53:00.201242 systemd[1]: Initializing machine ID from VM UUID.
May 8 23:53:00.201262 systemd[1]: Queued start job for default target initrd.target.
May 8 23:53:00.201282 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 23:53:00.201302 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 23:53:00.201324 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 8 23:53:00.201354 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 23:53:00.201375 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 8 23:53:00.201395 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 8 23:53:00.201419 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 8 23:53:00.201440 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 8 23:53:00.201460 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 23:53:00.201479 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 23:53:00.201503 systemd[1]: Reached target paths.target - Path Units.
May 8 23:53:00.201524 systemd[1]: Reached target slices.target - Slice Units.
May 8 23:53:00.201543 systemd[1]: Reached target swap.target - Swaps.
May 8 23:53:00.201562 systemd[1]: Reached target timers.target - Timer Units.
May 8 23:53:00.201582 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 8 23:53:00.201602 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 23:53:00.201621 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 8 23:53:00.201641 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 8 23:53:00.201661 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 23:53:00.201685 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 23:53:00.201705 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 23:53:00.201726 systemd[1]: Reached target sockets.target - Socket Units.
May 8 23:53:00.201745 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 8 23:53:00.201765 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 23:53:00.201785 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 8 23:53:00.201804 systemd[1]: Starting systemd-fsck-usr.service...
May 8 23:53:00.201824 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 23:53:00.201848 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 23:53:00.201868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 23:53:00.201887 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 8 23:53:00.201965 systemd-journald[252]: Collecting audit messages is disabled.
May 8 23:53:00.202013 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 23:53:00.202033 systemd[1]: Finished systemd-fsck-usr.service.
May 8 23:53:00.202058 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 23:53:00.202078 systemd-journald[252]: Journal started
May 8 23:53:00.202119 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2accff2f1963401fdb7a07370c1cdf) is 8.0M, max 75.3M, 67.3M free.
May 8 23:53:00.184555 systemd-modules-load[253]: Inserted module 'overlay'
May 8 23:53:00.211235 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 23:53:00.212265 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 23:53:00.215464 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 23:53:00.231295 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 23:53:00.233474 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 8 23:53:00.236402 kernel: Bridge firewalling registered
May 8 23:53:00.234102 systemd-modules-load[253]: Inserted module 'br_netfilter'
May 8 23:53:00.247422 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 23:53:00.267423 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 23:53:00.268707 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 23:53:00.285354 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 23:53:00.295180 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 23:53:00.301024 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 23:53:00.311411 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 8 23:53:00.330107 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 23:53:00.339195 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 23:53:00.358441 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 23:53:00.368460 dracut-cmdline[283]: dracut-dracut-053
May 8 23:53:00.378200 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c64a0b436b1966f9e1b9e71c914f0e311fc31b586ad91dbeab7146e426399a98
May 8 23:53:00.435495 systemd-resolved[291]: Positive Trust Anchors:
May 8 23:53:00.437556 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 23:53:00.440334 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 23:53:00.501176 kernel: SCSI subsystem initialized
May 8 23:53:00.509158 kernel: Loading iSCSI transport class v2.0-870.
May 8 23:53:00.522169 kernel: iscsi: registered transport (tcp)
May 8 23:53:00.543635 kernel: iscsi: registered transport (qla4xxx)
May 8 23:53:00.543727 kernel: QLogic iSCSI HBA Driver
May 8 23:53:00.650165 kernel: random: crng init done
May 8 23:53:00.650617 systemd-resolved[291]: Defaulting to hostname 'linux'.
May 8 23:53:00.654648 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 8 23:53:00.657032 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 8 23:53:00.679894 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 8 23:53:00.689446 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 8 23:53:00.739968 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 23:53:00.740041 kernel: device-mapper: uevent: version 1.0.3 May 8 23:53:00.742145 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 8 23:53:00.806174 kernel: raid6: neonx8 gen() 6752 MB/s May 8 23:53:00.823156 kernel: raid6: neonx4 gen() 6568 MB/s May 8 23:53:00.840155 kernel: raid6: neonx2 gen() 5485 MB/s May 8 23:53:00.857155 kernel: raid6: neonx1 gen() 3952 MB/s May 8 23:53:00.874154 kernel: raid6: int64x8 gen() 3829 MB/s May 8 23:53:00.891154 kernel: raid6: int64x4 gen() 3735 MB/s May 8 23:53:00.908155 kernel: raid6: int64x2 gen() 3619 MB/s May 8 23:53:00.925972 kernel: raid6: int64x1 gen() 2764 MB/s May 8 23:53:00.926003 kernel: raid6: using algorithm neonx8 gen() 6752 MB/s May 8 23:53:00.943974 kernel: raid6: .... xor() 4826 MB/s, rmw enabled May 8 23:53:00.944015 kernel: raid6: using neon recovery algorithm May 8 23:53:00.952381 kernel: xor: measuring software checksum speed May 8 23:53:00.952437 kernel: 8regs : 10956 MB/sec May 8 23:53:00.953503 kernel: 32regs : 11942 MB/sec May 8 23:53:00.954715 kernel: arm64_neon : 9284 MB/sec May 8 23:53:00.954754 kernel: xor: using function: 32regs (11942 MB/sec) May 8 23:53:01.038197 kernel: Btrfs loaded, zoned=no, fsverity=no May 8 23:53:01.056485 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 8 23:53:01.066445 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 8 23:53:01.106147 systemd-udevd[470]: Using default interface naming scheme 'v255'. May 8 23:53:01.114839 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 8 23:53:01.135456 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 8 23:53:01.161793 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation May 8 23:53:01.216436 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 8 23:53:01.226430 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 8 23:53:01.343791 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 8 23:53:01.372418 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 8 23:53:01.417000 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 8 23:53:01.426519 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 8 23:53:01.429616 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:53:01.434003 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 8 23:53:01.457398 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 8 23:53:01.495084 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 8 23:53:01.550060 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 8 23:53:01.550163 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) May 8 23:53:01.553416 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 8 23:53:01.553683 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 8 23:53:01.554781 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 23:53:01.556800 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:53:01.571936 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:b6:92:a8:f5:6d May 8 23:53:01.559464 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 23:53:01.561861 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 23:53:01.562170 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:53:01.566354 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
May 8 23:53:01.581568 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 8 23:53:01.585027 (udev-worker)[518]: Network interface NamePolicy= disabled on kernel command line. May 8 23:53:01.611823 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 8 23:53:01.611887 kernel: nvme nvme0: pci function 0000:00:04.0 May 8 23:53:01.620196 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 8 23:53:01.629074 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 23:53:01.629258 kernel: GPT:9289727 != 16777215 May 8 23:53:01.629286 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 23:53:01.630643 kernel: GPT:9289727 != 16777215 May 8 23:53:01.630679 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 23:53:01.630704 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 8 23:53:01.642357 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 8 23:53:01.650379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 8 23:53:01.702554 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 8 23:53:01.754211 kernel: BTRFS: device fsid 3ce8b70c-40bf-43bf-a983-bb6fd2e43017 devid 1 transid 43 /dev/nvme0n1p3 scanned by (udev-worker) (518) May 8 23:53:01.767170 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by (udev-worker) (524) May 8 23:53:01.847173 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 8 23:53:01.865201 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 8 23:53:01.882343 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 8 23:53:01.897514 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. 
May 8 23:53:01.899966 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 8 23:53:01.915517 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 8 23:53:01.931192 disk-uuid[662]: Primary Header is updated. May 8 23:53:01.931192 disk-uuid[662]: Secondary Entries is updated. May 8 23:53:01.931192 disk-uuid[662]: Secondary Header is updated. May 8 23:53:01.942177 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 8 23:53:02.959253 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 8 23:53:02.961403 disk-uuid[663]: The operation has completed successfully. May 8 23:53:03.131694 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 23:53:03.133782 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 8 23:53:03.177437 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 8 23:53:03.199464 sh[924]: Success May 8 23:53:03.217161 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 8 23:53:03.336725 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 8 23:53:03.342218 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 8 23:53:03.347233 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 8 23:53:03.395380 kernel: BTRFS info (device dm-0): first mount of filesystem 3ce8b70c-40bf-43bf-a983-bb6fd2e43017 May 8 23:53:03.395439 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 8 23:53:03.397231 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 8 23:53:03.398532 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 8 23:53:03.399638 kernel: BTRFS info (device dm-0): using free space tree May 8 23:53:03.509144 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 8 23:53:03.537673 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
May 8 23:53:03.541043 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 8 23:53:03.557465 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 8 23:53:03.564427 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 8 23:53:03.596309 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:53:03.596382 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 8 23:53:03.597683 kernel: BTRFS info (device nvme0n1p6): using free space tree May 8 23:53:03.605658 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 8 23:53:03.621099 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 23:53:03.624251 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:53:03.634556 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 8 23:53:03.648574 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 8 23:53:03.743248 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 8 23:53:03.755548 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 8 23:53:03.808293 systemd-networkd[1117]: lo: Link UP May 8 23:53:03.808315 systemd-networkd[1117]: lo: Gained carrier May 8 23:53:03.813248 systemd-networkd[1117]: Enumeration completed May 8 23:53:03.815231 systemd[1]: Started systemd-networkd.service - Network Configuration. May 8 23:53:03.815281 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:53:03.815288 systemd-networkd[1117]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 8 23:53:03.824537 systemd[1]: Reached target network.target - Network. May 8 23:53:03.831691 systemd-networkd[1117]: eth0: Link UP May 8 23:53:03.831703 systemd-networkd[1117]: eth0: Gained carrier May 8 23:53:03.831721 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:53:03.849197 systemd-networkd[1117]: eth0: DHCPv4 address 172.31.31.246/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 8 23:53:04.028289 ignition[1029]: Ignition 2.20.0 May 8 23:53:04.028315 ignition[1029]: Stage: fetch-offline May 8 23:53:04.028741 ignition[1029]: no configs at "/usr/lib/ignition/base.d" May 8 23:53:04.028765 ignition[1029]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 8 23:53:04.031503 ignition[1029]: Ignition finished successfully May 8 23:53:04.038507 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 8 23:53:04.059546 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 8 23:53:04.084661 ignition[1128]: Ignition 2.20.0 May 8 23:53:04.084706 ignition[1128]: Stage: fetch May 8 23:53:04.087447 ignition[1128]: no configs at "/usr/lib/ignition/base.d" May 8 23:53:04.087497 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 8 23:53:04.088069 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 8 23:53:04.098972 ignition[1128]: PUT result: OK May 8 23:53:04.101977 ignition[1128]: parsed url from cmdline: "" May 8 23:53:04.102019 ignition[1128]: no config URL provided May 8 23:53:04.102036 ignition[1128]: reading system config file "/usr/lib/ignition/user.ign" May 8 23:53:04.102101 ignition[1128]: no config at "/usr/lib/ignition/user.ign" May 8 23:53:04.102158 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 8 23:53:04.105633 ignition[1128]: PUT result: OK May 8 23:53:04.105708 ignition[1128]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 8 23:53:04.119810 unknown[1128]: fetched base config from "system" May 8 23:53:04.108942 ignition[1128]: GET result: OK May 8 23:53:04.119826 unknown[1128]: fetched base config from "system" May 8 23:53:04.111150 ignition[1128]: parsing config with SHA512: 4f0109b5f75cbfd4618342b98834215a162d13602624abedef818ba8f1ae78f048a1b7573083fc092d5986c6c327023e0893a3577a8848ff5e3656735fd12668 May 8 23:53:04.119840 unknown[1128]: fetched user config from "aws" May 8 23:53:04.126102 ignition[1128]: fetch: fetch complete May 8 23:53:04.130963 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 8 23:53:04.126157 ignition[1128]: fetch: fetch passed May 8 23:53:04.126270 ignition[1128]: Ignition finished successfully May 8 23:53:04.154430 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 8 23:53:04.178239 ignition[1134]: Ignition 2.20.0 May 8 23:53:04.178267 ignition[1134]: Stage: kargs May 8 23:53:04.179870 ignition[1134]: no configs at "/usr/lib/ignition/base.d" May 8 23:53:04.179897 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 8 23:53:04.181005 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 8 23:53:04.183496 ignition[1134]: PUT result: OK May 8 23:53:04.191945 ignition[1134]: kargs: kargs passed May 8 23:53:04.192054 ignition[1134]: Ignition finished successfully May 8 23:53:04.197205 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 8 23:53:04.208425 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 8 23:53:04.233352 ignition[1140]: Ignition 2.20.0 May 8 23:53:04.234786 ignition[1140]: Stage: disks May 8 23:53:04.236176 ignition[1140]: no configs at "/usr/lib/ignition/base.d" May 8 23:53:04.236205 ignition[1140]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 8 23:53:04.236433 ignition[1140]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 8 23:53:04.242448 ignition[1140]: PUT result: OK May 8 23:53:04.247059 ignition[1140]: disks: disks passed May 8 23:53:04.248491 ignition[1140]: Ignition finished successfully May 8 23:53:04.251411 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 8 23:53:04.257629 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 8 23:53:04.261550 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 8 23:53:04.265494 systemd[1]: Reached target local-fs.target - Local File Systems. May 8 23:53:04.267437 systemd[1]: Reached target sysinit.target - System Initialization. May 8 23:53:04.269414 systemd[1]: Reached target basic.target - Basic System. May 8 23:53:04.286484 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 8 23:53:04.330529 systemd-fsck[1149]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 8 23:53:04.337947 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 8 23:53:04.348385 systemd[1]: Mounting sysroot.mount - /sysroot... May 8 23:53:04.429625 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ad4e3afa-b242-4ca7-a808-1f37a4d41793 r/w with ordered data mode. Quota mode: none. May 8 23:53:04.430500 systemd[1]: Mounted sysroot.mount - /sysroot. May 8 23:53:04.434202 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 8 23:53:04.450308 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 8 23:53:04.457224 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 8 23:53:04.467589 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 8 23:53:04.467691 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 23:53:04.468081 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 8 23:53:04.494019 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 8 23:53:04.506408 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 8 23:53:04.514887 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1168) May 8 23:53:04.519068 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:53:04.519116 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 8 23:53:04.520352 kernel: BTRFS info (device nvme0n1p6): using free space tree May 8 23:53:04.534439 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 8 23:53:04.536222 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 8 23:53:04.950956 initrd-setup-root[1192]: cut: /sysroot/etc/passwd: No such file or directory May 8 23:53:04.960260 initrd-setup-root[1199]: cut: /sysroot/etc/group: No such file or directory May 8 23:53:04.969462 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory May 8 23:53:04.977187 initrd-setup-root[1213]: cut: /sysroot/etc/gshadow: No such file or directory May 8 23:53:05.032255 systemd-networkd[1117]: eth0: Gained IPv6LL May 8 23:53:05.271505 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 8 23:53:05.285812 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 8 23:53:05.290384 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 8 23:53:05.313768 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 8 23:53:05.317971 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:53:05.348835 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 8 23:53:05.361718 ignition[1281]: INFO : Ignition 2.20.0 May 8 23:53:05.361718 ignition[1281]: INFO : Stage: mount May 8 23:53:05.365027 ignition[1281]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 23:53:05.365027 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 8 23:53:05.365027 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 8 23:53:05.371813 ignition[1281]: INFO : PUT result: OK May 8 23:53:05.375872 ignition[1281]: INFO : mount: mount passed May 8 23:53:05.378289 ignition[1281]: INFO : Ignition finished successfully May 8 23:53:05.380795 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 8 23:53:05.392347 systemd[1]: Starting ignition-files.service - Ignition (files)... May 8 23:53:05.443102 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 8 23:53:05.467925 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1293) May 8 23:53:05.467987 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem da95c317-1ae8-41cf-b66e-cb2b095046e4 May 8 23:53:05.468013 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 8 23:53:05.470622 kernel: BTRFS info (device nvme0n1p6): using free space tree May 8 23:53:05.476166 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 8 23:53:05.479284 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 8 23:53:05.517959 ignition[1310]: INFO : Ignition 2.20.0 May 8 23:53:05.517959 ignition[1310]: INFO : Stage: files May 8 23:53:05.521281 ignition[1310]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 23:53:05.521281 ignition[1310]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 8 23:53:05.521281 ignition[1310]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 8 23:53:05.528139 ignition[1310]: INFO : PUT result: OK May 8 23:53:05.532694 ignition[1310]: DEBUG : files: compiled without relabeling support, skipping May 8 23:53:05.536677 ignition[1310]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 23:53:05.536677 ignition[1310]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 23:53:05.554692 ignition[1310]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 23:53:05.557564 ignition[1310]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 23:53:05.560465 unknown[1310]: wrote ssh authorized keys file for user: core May 8 23:53:05.562595 ignition[1310]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 23:53:05.567591 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 8 23:53:05.571387 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 8 23:53:05.774941 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 8 23:53:05.935738 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 8 23:53:05.935738 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 23:53:05.935738 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 8 23:53:06.397244 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 8 23:53:06.518844 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 8 23:53:06.518844 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 8 23:53:06.531108 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 8 23:53:06.531108 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 8 23:53:06.531108 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 8 23:53:06.531108 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 8 23:53:06.531108 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 23:53:06.531108 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 23:53:06.531108 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 8 23:53:06.531108 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 23:53:06.531108 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 23:53:06.531108 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 8 23:53:06.531108 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 8 23:53:06.531108 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 8 23:53:06.531108 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 8 23:53:06.954737 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 8 23:53:07.275760 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 8 23:53:07.280599 ignition[1310]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 8 23:53:07.280599 ignition[1310]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 23:53:07.280599 ignition[1310]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 8 23:53:07.280599 ignition[1310]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 8 23:53:07.280599 ignition[1310]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 8 23:53:07.280599 ignition[1310]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 8 23:53:07.280599 ignition[1310]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 23:53:07.280599 ignition[1310]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 23:53:07.280599 ignition[1310]: INFO : files: files passed May 8 23:53:07.280599 ignition[1310]: INFO : Ignition finished successfully May 8 23:53:07.309393 systemd[1]: Finished ignition-files.service - Ignition (files). May 8 23:53:07.317539 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 8 23:53:07.327435 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 8 23:53:07.341699 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 23:53:07.341899 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 8 23:53:07.360497 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 23:53:07.360497 initrd-setup-root-after-ignition[1339]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 8 23:53:07.367396 initrd-setup-root-after-ignition[1343]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 23:53:07.374186 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
May 8 23:53:07.378978 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 8 23:53:07.389434 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 8 23:53:07.439686 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 23:53:07.440098 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 8 23:53:07.448359 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 8 23:53:07.452269 systemd[1]: Reached target initrd.target - Initrd Default Target. May 8 23:53:07.456115 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 8 23:53:07.473500 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 8 23:53:07.501197 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 23:53:07.514551 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 8 23:53:07.538593 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 8 23:53:07.543065 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 8 23:53:07.547490 systemd[1]: Stopped target timers.target - Timer Units. May 8 23:53:07.549486 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 23:53:07.550483 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 8 23:53:07.566042 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 8 23:53:07.581432 systemd[1]: Stopped target basic.target - Basic System. May 8 23:53:07.585065 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 8 23:53:07.589175 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 8 23:53:07.591887 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
May 8 23:53:07.596655 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 8 23:53:07.598777 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 8 23:53:07.601517 systemd[1]: Stopped target sysinit.target - System Initialization. May 8 23:53:07.604642 systemd[1]: Stopped target local-fs.target - Local File Systems. May 8 23:53:07.611597 systemd[1]: Stopped target swap.target - Swaps. May 8 23:53:07.613574 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 23:53:07.614028 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 8 23:53:07.625189 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 8 23:53:07.627514 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 8 23:53:07.631791 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 8 23:53:07.635789 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 8 23:53:07.639110 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 23:53:07.639417 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 8 23:53:07.647694 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 23:53:07.648110 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 8 23:53:07.654958 systemd[1]: ignition-files.service: Deactivated successfully. May 8 23:53:07.655185 systemd[1]: Stopped ignition-files.service - Ignition (files). May 8 23:53:07.670567 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 8 23:53:07.678512 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 8 23:53:07.682347 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 23:53:07.682649 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 8 23:53:07.685994 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 23:53:07.696352 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 8 23:53:07.711951 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 23:53:07.712231 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 8 23:53:07.724838 ignition[1363]: INFO : Ignition 2.20.0 May 8 23:53:07.727094 ignition[1363]: INFO : Stage: umount May 8 23:53:07.727094 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 23:53:07.727094 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 8 23:53:07.727094 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 8 23:53:07.735637 ignition[1363]: INFO : PUT result: OK May 8 23:53:07.741807 ignition[1363]: INFO : umount: umount passed May 8 23:53:07.741807 ignition[1363]: INFO : Ignition finished successfully May 8 23:53:07.746294 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 23:53:07.746524 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 8 23:53:07.752866 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 23:53:07.752962 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 8 23:53:07.758477 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 23:53:07.758577 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 8 23:53:07.760697 systemd[1]: ignition-fetch.service: Deactivated successfully. May 8 23:53:07.760773 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 8 23:53:07.762738 systemd[1]: Stopped target network.target - Network. May 8 23:53:07.764381 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 23:53:07.764460 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
May 8 23:53:07.766801 systemd[1]: Stopped target paths.target - Path Units.
May 8 23:53:07.773774 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 8 23:53:07.773892 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 23:53:07.774021 systemd[1]: Stopped target slices.target - Slice Units.
May 8 23:53:07.774077 systemd[1]: Stopped target sockets.target - Socket Units.
May 8 23:53:07.774220 systemd[1]: iscsid.socket: Deactivated successfully.
May 8 23:53:07.774295 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 8 23:53:07.774430 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 8 23:53:07.774496 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 8 23:53:07.774583 systemd[1]: ignition-setup.service: Deactivated successfully.
May 8 23:53:07.774665 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 8 23:53:07.774767 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 8 23:53:07.774835 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 8 23:53:07.775164 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 8 23:53:07.775397 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 8 23:53:07.777425 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 8 23:53:07.824182 systemd-networkd[1117]: eth0: DHCPv6 lease lost
May 8 23:53:07.833021 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 8 23:53:07.835581 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 8 23:53:07.841102 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 8 23:53:07.841425 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 8 23:53:07.860493 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 8 23:53:07.860623 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 8 23:53:07.877252 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 8 23:53:07.879756 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 8 23:53:07.879861 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 8 23:53:07.884545 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 8 23:53:07.884636 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 8 23:53:07.890838 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 8 23:53:07.891593 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 8 23:53:07.893870 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 8 23:53:07.893957 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 23:53:07.896552 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 23:53:07.928918 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 8 23:53:07.931315 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 8 23:53:07.941785 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 8 23:53:07.942065 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 23:53:07.947367 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 8 23:53:07.947495 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 8 23:53:07.952199 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 8 23:53:07.952502 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 23:53:07.958588 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 8 23:53:07.958690 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 8 23:53:07.960951 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 8 23:53:07.961031 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 8 23:53:07.963253 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 8 23:53:07.963329 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 8 23:53:07.965868 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 8 23:53:07.965947 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 8 23:53:07.981000 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 8 23:53:08.002156 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 8 23:53:08.002274 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 23:53:08.004809 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 8 23:53:08.004887 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 23:53:08.007246 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 8 23:53:08.007322 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 23:53:08.009667 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 8 23:53:08.009742 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 8 23:53:08.013259 systemd[1]: network-cleanup.service: Deactivated successfully.
May 8 23:53:08.013444 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 8 23:53:08.015987 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 8 23:53:08.016254 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 8 23:53:08.029561 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 8 23:53:08.055519 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 8 23:53:08.073896 systemd[1]: Switching root.
May 8 23:53:08.114290 systemd-journald[252]: Journal stopped
May 8 23:53:10.557538 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
May 8 23:53:10.557658 kernel: SELinux: policy capability network_peer_controls=1
May 8 23:53:10.557709 kernel: SELinux: policy capability open_perms=1
May 8 23:53:10.557739 kernel: SELinux: policy capability extended_socket_class=1
May 8 23:53:10.557769 kernel: SELinux: policy capability always_check_network=0
May 8 23:53:10.557797 kernel: SELinux: policy capability cgroup_seclabel=1
May 8 23:53:10.557834 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 8 23:53:10.557871 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 8 23:53:10.557901 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 8 23:53:10.557930 kernel: audit: type=1403 audit(1746748388.669:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 8 23:53:10.557970 systemd[1]: Successfully loaded SELinux policy in 57.984ms.
May 8 23:53:10.558020 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.519ms.
May 8 23:53:10.558055 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 8 23:53:10.558087 systemd[1]: Detected virtualization amazon.
May 8 23:53:10.558117 systemd[1]: Detected architecture arm64.
May 8 23:53:10.558147 systemd[1]: Detected first boot.
May 8 23:53:10.558210 systemd[1]: Initializing machine ID from VM UUID.
May 8 23:53:10.558243 zram_generator::config[1406]: No configuration found.
May 8 23:53:10.558277 systemd[1]: Populated /etc with preset unit settings.
May 8 23:53:10.558310 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 8 23:53:10.558341 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 8 23:53:10.558374 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 8 23:53:10.558407 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 8 23:53:10.558443 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 8 23:53:10.558475 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 8 23:53:10.558504 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 8 23:53:10.558532 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 8 23:53:10.558563 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 8 23:53:10.558595 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 8 23:53:10.558629 systemd[1]: Created slice user.slice - User and Session Slice.
May 8 23:53:10.558657 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 8 23:53:10.558686 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 8 23:53:10.558719 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 8 23:53:10.558750 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 8 23:53:10.558779 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 8 23:53:10.558810 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 8 23:53:10.558841 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 8 23:53:10.558872 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 8 23:53:10.558900 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 8 23:53:10.558931 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 8 23:53:10.558961 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 8 23:53:10.558995 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 8 23:53:10.559026 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 8 23:53:10.559058 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 8 23:53:10.559089 systemd[1]: Reached target slices.target - Slice Units.
May 8 23:53:10.559119 systemd[1]: Reached target swap.target - Swaps.
May 8 23:53:10.562211 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 8 23:53:10.562247 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 8 23:53:10.562277 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 8 23:53:10.562313 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 8 23:53:10.562355 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 8 23:53:10.562384 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 8 23:53:10.562414 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 8 23:53:10.562443 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 8 23:53:10.562474 systemd[1]: Mounting media.mount - External Media Directory...
May 8 23:53:10.562505 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 8 23:53:10.562534 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 8 23:53:10.562562 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 8 23:53:10.562598 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 8 23:53:10.562628 systemd[1]: Reached target machines.target - Containers.
May 8 23:53:10.562656 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 8 23:53:10.562727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 23:53:10.562761 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 8 23:53:10.562792 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 8 23:53:10.562823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 23:53:10.562852 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 23:53:10.562887 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 23:53:10.562918 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 8 23:53:10.562948 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 23:53:10.562980 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 8 23:53:10.563009 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 8 23:53:10.563038 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 8 23:53:10.563066 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 8 23:53:10.563097 systemd[1]: Stopped systemd-fsck-usr.service.
May 8 23:53:10.563143 systemd[1]: Starting systemd-journald.service - Journal Service...
May 8 23:53:10.563211 kernel: fuse: init (API version 7.39)
May 8 23:53:10.563243 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 8 23:53:10.563272 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 8 23:53:10.563302 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 8 23:53:10.565257 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 8 23:53:10.565293 systemd[1]: verity-setup.service: Deactivated successfully.
May 8 23:53:10.565324 systemd[1]: Stopped verity-setup.service.
May 8 23:53:10.565354 kernel: loop: module loaded
May 8 23:53:10.565386 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 8 23:53:10.565421 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 8 23:53:10.565451 systemd[1]: Mounted media.mount - External Media Directory.
May 8 23:53:10.565479 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 8 23:53:10.565508 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 8 23:53:10.565537 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 8 23:53:10.565566 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 8 23:53:10.565599 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 8 23:53:10.565630 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 8 23:53:10.565659 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 23:53:10.565687 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 23:53:10.565715 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 23:53:10.565743 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 23:53:10.565813 systemd-journald[1484]: Collecting audit messages is disabled.
May 8 23:53:10.565870 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 8 23:53:10.565902 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 8 23:53:10.565932 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 23:53:10.565961 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 23:53:10.565996 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 8 23:53:10.566024 systemd-journald[1484]: Journal started
May 8 23:53:10.566071 systemd-journald[1484]: Runtime Journal (/run/log/journal/ec2accff2f1963401fdb7a07370c1cdf) is 8.0M, max 75.3M, 67.3M free.
May 8 23:53:09.984555 systemd[1]: Queued start job for default target multi-user.target.
May 8 23:53:10.060836 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 8 23:53:10.061619 systemd[1]: systemd-journald.service: Deactivated successfully.
May 8 23:53:10.576888 systemd[1]: Started systemd-journald.service - Journal Service.
May 8 23:53:10.586189 kernel: ACPI: bus type drm_connector registered
May 8 23:53:10.591395 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 23:53:10.594429 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 23:53:10.599737 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 8 23:53:10.623227 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 8 23:53:10.635302 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 8 23:53:10.649470 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 8 23:53:10.664447 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 8 23:53:10.666847 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 8 23:53:10.666920 systemd[1]: Reached target local-fs.target - Local File Systems.
May 8 23:53:10.670804 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 8 23:53:10.679494 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 8 23:53:10.686491 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 8 23:53:10.689595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 23:53:10.695538 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 8 23:53:10.706469 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 8 23:53:10.709935 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 23:53:10.713593 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 8 23:53:10.715808 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 23:53:10.719420 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 8 23:53:10.726423 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 8 23:53:10.732751 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 8 23:53:10.740079 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 8 23:53:10.742768 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 8 23:53:10.745631 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 8 23:53:10.756239 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 8 23:53:10.790280 systemd-journald[1484]: Time spent on flushing to /var/log/journal/ec2accff2f1963401fdb7a07370c1cdf is 122.731ms for 911 entries.
May 8 23:53:10.790280 systemd-journald[1484]: System Journal (/var/log/journal/ec2accff2f1963401fdb7a07370c1cdf) is 8.0M, max 195.6M, 187.6M free.
May 8 23:53:10.927468 systemd-journald[1484]: Received client request to flush runtime journal.
May 8 23:53:10.927550 kernel: loop0: detected capacity change from 0 to 53784
May 8 23:53:10.825782 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 8 23:53:10.828415 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 8 23:53:10.839455 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 8 23:53:10.900694 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 8 23:53:10.935527 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 8 23:53:10.939795 systemd-tmpfiles[1535]: ACLs are not supported, ignoring.
May 8 23:53:10.941578 systemd-tmpfiles[1535]: ACLs are not supported, ignoring.
May 8 23:53:10.968726 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 8 23:53:10.970161 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 8 23:53:10.976626 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 8 23:53:10.995446 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 8 23:53:10.999243 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 8 23:53:11.014514 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 8 23:53:11.024481 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 8 23:53:11.067997 kernel: loop1: detected capacity change from 0 to 113536
May 8 23:53:11.083824 udevadm[1554]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 8 23:53:11.122244 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 8 23:53:11.137561 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 8 23:53:11.173070 systemd-tmpfiles[1557]: ACLs are not supported, ignoring.
May 8 23:53:11.173111 systemd-tmpfiles[1557]: ACLs are not supported, ignoring.
May 8 23:53:11.179506 kernel: loop2: detected capacity change from 0 to 201592
May 8 23:53:11.181499 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 8 23:53:11.460156 kernel: loop3: detected capacity change from 0 to 116808
May 8 23:53:11.564871 kernel: loop4: detected capacity change from 0 to 53784
May 8 23:53:11.576821 kernel: loop5: detected capacity change from 0 to 113536
May 8 23:53:11.591752 kernel: loop6: detected capacity change from 0 to 201592
May 8 23:53:11.622185 kernel: loop7: detected capacity change from 0 to 116808
May 8 23:53:11.640269 (sd-merge)[1563]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 8 23:53:11.643352 (sd-merge)[1563]: Merged extensions into '/usr'.
May 8 23:53:11.656856 systemd[1]: Reloading requested from client PID 1534 ('systemd-sysext') (unit systemd-sysext.service)...
May 8 23:53:11.656892 systemd[1]: Reloading...
May 8 23:53:11.845180 zram_generator::config[1589]: No configuration found.
May 8 23:53:12.116266 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 23:53:12.223878 systemd[1]: Reloading finished in 565 ms.
May 8 23:53:12.276191 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 8 23:53:12.279090 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 8 23:53:12.292442 systemd[1]: Starting ensure-sysext.service...
May 8 23:53:12.297212 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 8 23:53:12.305923 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 8 23:53:12.326808 systemd[1]: Reloading requested from client PID 1641 ('systemctl') (unit ensure-sysext.service)...
May 8 23:53:12.326839 systemd[1]: Reloading...
May 8 23:53:12.367741 ldconfig[1529]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 8 23:53:12.369705 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 8 23:53:12.370902 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 8 23:53:12.372844 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 8 23:53:12.373593 systemd-tmpfiles[1642]: ACLs are not supported, ignoring.
May 8 23:53:12.373831 systemd-tmpfiles[1642]: ACLs are not supported, ignoring.
May 8 23:53:12.381439 systemd-tmpfiles[1642]: Detected autofs mount point /boot during canonicalization of boot.
May 8 23:53:12.381641 systemd-tmpfiles[1642]: Skipping /boot
May 8 23:53:12.408901 systemd-tmpfiles[1642]: Detected autofs mount point /boot during canonicalization of boot.
May 8 23:53:12.409097 systemd-tmpfiles[1642]: Skipping /boot
May 8 23:53:12.431022 systemd-udevd[1643]: Using default interface naming scheme 'v255'.
May 8 23:53:12.537158 zram_generator::config[1670]: No configuration found.
May 8 23:53:12.696347 (udev-worker)[1677]: Network interface NamePolicy= disabled on kernel command line.
May 8 23:53:12.886179 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (1681)
May 8 23:53:12.943799 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 23:53:13.121686 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 8 23:53:13.122561 systemd[1]: Reloading finished in 794 ms.
May 8 23:53:13.148827 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 8 23:53:13.152036 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 8 23:53:13.163514 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 8 23:53:13.208363 systemd[1]: Finished ensure-sysext.service.
May 8 23:53:13.224930 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 8 23:53:13.258986 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 8 23:53:13.272403 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 8 23:53:13.279438 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 8 23:53:13.283810 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 8 23:53:13.300513 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 8 23:53:13.305971 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 8 23:53:13.320055 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 8 23:53:13.333465 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 8 23:53:13.338182 lvm[1842]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 23:53:13.348606 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 8 23:53:13.350841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 8 23:53:13.353449 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 8 23:53:13.360435 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 8 23:53:13.368473 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 8 23:53:13.378450 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 8 23:53:13.380578 systemd[1]: Reached target time-set.target - System Time Set.
May 8 23:53:13.397449 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 8 23:53:13.404404 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 8 23:53:13.408832 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 8 23:53:13.412211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 8 23:53:13.415140 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 8 23:53:13.415502 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 8 23:53:13.434425 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 8 23:53:13.441932 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 8 23:53:13.442480 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 8 23:53:13.445167 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 8 23:53:13.460602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 8 23:53:13.460967 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 8 23:53:13.463579 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 8 23:53:13.495063 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 8 23:53:13.534030 augenrules[1879]: No rules
May 8 23:53:13.536550 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 8 23:53:13.543795 systemd[1]: audit-rules.service: Deactivated successfully.
May 8 23:53:13.544997 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 8 23:53:13.559108 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 8 23:53:13.562089 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 8 23:53:13.564796 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 8 23:53:13.568031 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 8 23:53:13.579727 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 8 23:53:13.602181 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 8 23:53:13.615612 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 8 23:53:13.621175 lvm[1889]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 8 23:53:13.624268 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 8 23:53:13.667371 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 8 23:53:13.677748 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 8 23:53:13.735209 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 8 23:53:13.776938 systemd-networkd[1862]: lo: Link UP
May 8 23:53:13.776967 systemd-networkd[1862]: lo: Gained carrier
May 8 23:53:13.779038 systemd-resolved[1864]: Positive Trust Anchors:
May 8 23:53:13.779599 systemd-resolved[1864]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 23:53:13.779687 systemd-resolved[1864]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 8 23:53:13.779751 systemd-networkd[1862]: Enumeration completed
May 8 23:53:13.779913 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 8 23:53:13.784950 systemd-networkd[1862]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 8 23:53:13.784970 systemd-networkd[1862]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 23:53:13.789620 systemd-resolved[1864]: Defaulting to hostname 'linux'.
May 8 23:53:13.791437 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 8 23:53:13.794851 systemd-networkd[1862]: eth0: Link UP May 8 23:53:13.795284 systemd-networkd[1862]: eth0: Gained carrier May 8 23:53:13.795328 systemd-networkd[1862]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 8 23:53:13.798992 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 8 23:53:13.801347 systemd[1]: Reached target network.target - Network. May 8 23:53:13.803201 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 8 23:53:13.805568 systemd[1]: Reached target sysinit.target - System Initialization. May 8 23:53:13.807769 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 8 23:53:13.810304 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 8 23:53:13.817563 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 8 23:53:13.820999 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 8 23:53:13.823652 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 8 23:53:13.826810 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 23:53:13.826864 systemd[1]: Reached target paths.target - Path Units. May 8 23:53:13.828725 systemd-networkd[1862]: eth0: DHCPv4 address 172.31.31.246/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 8 23:53:13.829587 systemd[1]: Reached target timers.target - Timer Units. May 8 23:53:13.832755 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 8 23:53:13.837510 systemd[1]: Starting docker.socket - Docker Socket for the API... May 8 23:53:13.849593 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
May 8 23:53:13.852720 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 8 23:53:13.854961 systemd[1]: Reached target sockets.target - Socket Units. May 8 23:53:13.856815 systemd[1]: Reached target basic.target - Basic System. May 8 23:53:13.858786 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 8 23:53:13.858840 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 8 23:53:13.865331 systemd[1]: Starting containerd.service - containerd container runtime... May 8 23:53:13.870904 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 8 23:53:13.878956 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 8 23:53:13.887396 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 8 23:53:13.894536 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 8 23:53:13.897371 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 8 23:53:13.905574 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 8 23:53:13.920444 systemd[1]: Started ntpd.service - Network Time Service. May 8 23:53:13.929751 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 8 23:53:13.936416 systemd[1]: Starting setup-oem.service - Setup OEM... May 8 23:53:13.941613 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 8 23:53:13.963115 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 8 23:53:13.986603 systemd[1]: Starting systemd-logind.service - User Login Management... May 8 23:53:13.989479 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 8 23:53:13.991985 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 23:53:14.010501 jq[1910]: false May 8 23:53:14.012432 systemd[1]: Starting update-engine.service - Update Engine... May 8 23:53:14.019776 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 8 23:53:14.029085 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 23:53:14.029493 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 8 23:53:14.041173 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 23:53:14.043423 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 8 23:53:14.047776 jq[1924]: true May 8 23:53:14.094069 jq[1931]: true May 8 23:53:14.118539 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: ntpd 4.2.8p17@1.4004-o Thu May 8 21:46:52 UTC 2025 (1): Starting May 8 23:53:14.118539 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 8 23:53:14.118539 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: ---------------------------------------------------- May 8 23:53:14.118539 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: ntp-4 is maintained by Network Time Foundation, May 8 23:53:14.118539 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 8 23:53:14.118539 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: corporation. 
Support and training for ntp-4 are May 8 23:53:14.118539 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: available at https://www.nwtime.org/support May 8 23:53:14.118539 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: ---------------------------------------------------- May 8 23:53:14.118539 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: proto: precision = 0.096 usec (-23) May 8 23:53:14.118539 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: basedate set to 2025-04-26 May 8 23:53:14.118539 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: gps base set to 2025-04-27 (week 2364) May 8 23:53:14.118539 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: Listen and drop on 0 v6wildcard [::]:123 May 8 23:53:14.118539 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 8 23:53:14.105448 ntpd[1913]: ntpd 4.2.8p17@1.4004-o Thu May 8 21:46:52 UTC 2025 (1): Starting May 8 23:53:14.105498 ntpd[1913]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 8 23:53:14.105518 ntpd[1913]: ---------------------------------------------------- May 8 23:53:14.105539 ntpd[1913]: ntp-4 is maintained by Network Time Foundation, May 8 23:53:14.105557 ntpd[1913]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 8 23:53:14.105575 ntpd[1913]: corporation. 
Support and training for ntp-4 are May 8 23:53:14.135354 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: Listen normally on 2 lo 127.0.0.1:123 May 8 23:53:14.135354 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: Listen normally on 3 eth0 172.31.31.246:123 May 8 23:53:14.135354 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: Listen normally on 4 lo [::1]:123 May 8 23:53:14.135354 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: bind(21) AF_INET6 fe80::4b6:92ff:fea8:f56d%2#123 flags 0x11 failed: Cannot assign requested address May 8 23:53:14.135354 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: unable to create socket on eth0 (5) for fe80::4b6:92ff:fea8:f56d%2#123 May 8 23:53:14.135354 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: failed to init interface for address fe80::4b6:92ff:fea8:f56d%2 May 8 23:53:14.135354 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: Listening on routing socket on fd #21 for interface updates May 8 23:53:14.105594 ntpd[1913]: available at https://www.nwtime.org/support May 8 23:53:14.105611 ntpd[1913]: ---------------------------------------------------- May 8 23:53:14.111948 ntpd[1913]: proto: precision = 0.096 usec (-23) May 8 23:53:14.112393 ntpd[1913]: basedate set to 2025-04-26 May 8 23:53:14.112419 ntpd[1913]: gps base set to 2025-04-27 (week 2364) May 8 23:53:14.115001 ntpd[1913]: Listen and drop on 0 v6wildcard [::]:123 May 8 23:53:14.115074 ntpd[1913]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 8 23:53:14.122205 ntpd[1913]: Listen normally on 2 lo 127.0.0.1:123 May 8 23:53:14.122306 ntpd[1913]: Listen normally on 3 eth0 172.31.31.246:123 May 8 23:53:14.122376 ntpd[1913]: Listen normally on 4 lo [::1]:123 May 8 23:53:14.122453 ntpd[1913]: bind(21) AF_INET6 fe80::4b6:92ff:fea8:f56d%2#123 flags 0x11 failed: Cannot assign requested address May 8 23:53:14.122491 ntpd[1913]: unable to create socket on eth0 (5) for fe80::4b6:92ff:fea8:f56d%2#123 May 8 23:53:14.122524 ntpd[1913]: failed to init interface for address fe80::4b6:92ff:fea8:f56d%2 May 8 23:53:14.122580 ntpd[1913]: Listening on 
routing socket on fd #21 for interface updates May 8 23:53:14.163341 extend-filesystems[1911]: Found loop4 May 8 23:53:14.163341 extend-filesystems[1911]: Found loop5 May 8 23:53:14.163341 extend-filesystems[1911]: Found loop6 May 8 23:53:14.163341 extend-filesystems[1911]: Found loop7 May 8 23:53:14.163341 extend-filesystems[1911]: Found nvme0n1 May 8 23:53:14.163341 extend-filesystems[1911]: Found nvme0n1p1 May 8 23:53:14.163341 extend-filesystems[1911]: Found nvme0n1p2 May 8 23:53:14.163341 extend-filesystems[1911]: Found nvme0n1p3 May 8 23:53:14.163341 extend-filesystems[1911]: Found usr May 8 23:53:14.163341 extend-filesystems[1911]: Found nvme0n1p4 May 8 23:53:14.163341 extend-filesystems[1911]: Found nvme0n1p6 May 8 23:53:14.163341 extend-filesystems[1911]: Found nvme0n1p7 May 8 23:53:14.163341 extend-filesystems[1911]: Found nvme0n1p9 May 8 23:53:14.163341 extend-filesystems[1911]: Checking size of /dev/nvme0n1p9 May 8 23:53:14.215575 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 8 23:53:14.215575 ntpd[1913]: 8 May 23:53:14 ntpd[1913]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 8 23:53:14.142468 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 8 23:53:14.138023 dbus-daemon[1909]: [system] SELinux support is enabled May 8 23:53:14.149228 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 23:53:14.142848 ntpd[1913]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 8 23:53:14.149277 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
May 8 23:53:14.142897 ntpd[1913]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 8 23:53:14.153189 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 23:53:14.167345 dbus-daemon[1909]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1862 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 8 23:53:14.153254 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 8 23:53:14.195907 dbus-daemon[1909]: [system] Successfully activated service 'org.freedesktop.systemd1' May 8 23:53:14.198680 (ntainerd)[1943]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 8 23:53:14.212765 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... May 8 23:53:14.261825 update_engine[1923]: I20250508 23:53:14.255607 1923 main.cc:92] Flatcar Update Engine starting May 8 23:53:14.261825 update_engine[1923]: I20250508 23:53:14.258144 1923 update_check_scheduler.cc:74] Next update check in 10m4s May 8 23:53:14.256401 systemd[1]: motdgen.service: Deactivated successfully. May 8 23:53:14.258230 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 8 23:53:14.260782 systemd[1]: Started update-engine.service - Update Engine. May 8 23:53:14.281750 tar[1934]: linux-arm64/LICENSE May 8 23:53:14.281750 tar[1934]: linux-arm64/helm May 8 23:53:14.277916 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 8 23:53:14.282997 systemd[1]: Finished setup-oem.service - Setup OEM. 
May 8 23:53:14.319716 extend-filesystems[1911]: Resized partition /dev/nvme0n1p9 May 8 23:53:14.323853 coreos-metadata[1908]: May 08 23:53:14.316 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 8 23:53:14.323853 coreos-metadata[1908]: May 08 23:53:14.316 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 8 23:53:14.323853 coreos-metadata[1908]: May 08 23:53:14.316 INFO Fetch successful May 8 23:53:14.323853 coreos-metadata[1908]: May 08 23:53:14.316 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 8 23:53:14.323853 coreos-metadata[1908]: May 08 23:53:14.316 INFO Fetch successful May 8 23:53:14.323853 coreos-metadata[1908]: May 08 23:53:14.316 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 8 23:53:14.323853 coreos-metadata[1908]: May 08 23:53:14.316 INFO Fetch successful May 8 23:53:14.323853 coreos-metadata[1908]: May 08 23:53:14.316 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 8 23:53:14.323853 coreos-metadata[1908]: May 08 23:53:14.319 INFO Fetch successful May 8 23:53:14.323853 coreos-metadata[1908]: May 08 23:53:14.319 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 8 23:53:14.323853 coreos-metadata[1908]: May 08 23:53:14.319 INFO Fetch failed with 404: resource not found May 8 23:53:14.323853 coreos-metadata[1908]: May 08 23:53:14.319 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 8 23:53:14.330651 coreos-metadata[1908]: May 08 23:53:14.328 INFO Fetch successful May 8 23:53:14.330651 coreos-metadata[1908]: May 08 23:53:14.328 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 8 23:53:14.330651 coreos-metadata[1908]: May 08 23:53:14.328 INFO Fetch successful May 8 23:53:14.330651 coreos-metadata[1908]: May 08 23:53:14.328 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 8 23:53:14.330651 coreos-metadata[1908]: May 08 23:53:14.328 INFO Fetch successful May 8 23:53:14.330651 coreos-metadata[1908]: May 08 23:53:14.328 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 8 23:53:14.330651 coreos-metadata[1908]: May 08 23:53:14.328 INFO Fetch successful May 8 23:53:14.330651 coreos-metadata[1908]: May 08 23:53:14.328 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 8 23:53:14.331087 extend-filesystems[1970]: resize2fs 1.47.1 (20-May-2024) May 8 23:53:14.344337 coreos-metadata[1908]: May 08 23:53:14.331 INFO Fetch successful May 8 23:53:14.371155 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 8 23:53:14.467308 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 8 23:53:14.477185 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 8 23:53:14.481899 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 8 23:53:14.485655 systemd-logind[1921]: Watching system buttons on /dev/input/event0 (Power Button) May 8 23:53:14.485713 systemd-logind[1921]: Watching system buttons on /dev/input/event1 (Sleep Button) May 8 23:53:14.486397 systemd-logind[1921]: New seat seat0. May 8 23:53:14.490085 systemd[1]: Started systemd-logind.service - User Login Management. May 8 23:53:14.499164 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 8 23:53:14.523542 extend-filesystems[1970]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 8 23:53:14.523542 extend-filesystems[1970]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 23:53:14.523542 extend-filesystems[1970]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
May 8 23:53:14.530998 extend-filesystems[1911]: Resized filesystem in /dev/nvme0n1p9 May 8 23:53:14.530426 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 23:53:14.530792 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 8 23:53:14.538154 bash[1980]: Updated "/home/core/.ssh/authorized_keys" May 8 23:53:14.550801 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 8 23:53:14.581738 systemd[1]: Starting sshkeys.service... May 8 23:53:14.617284 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (1683) May 8 23:53:14.636313 locksmithd[1960]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 23:53:14.648376 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 8 23:53:14.693432 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 8 23:53:14.833777 dbus-daemon[1909]: [system] Successfully activated service 'org.freedesktop.hostname1' May 8 23:53:14.834067 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 8 23:53:14.847368 dbus-daemon[1909]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1954 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 8 23:53:14.863709 systemd[1]: Starting polkit.service - Authorization Manager... 
May 8 23:53:15.017656 coreos-metadata[2017]: May 08 23:53:15.017 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 8 23:53:15.021576 coreos-metadata[2017]: May 08 23:53:15.019 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 8 23:53:15.023926 coreos-metadata[2017]: May 08 23:53:15.023 INFO Fetch successful May 8 23:53:15.023926 coreos-metadata[2017]: May 08 23:53:15.023 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 8 23:53:15.023926 coreos-metadata[2017]: May 08 23:53:15.023 INFO Fetch successful May 8 23:53:15.027560 unknown[2017]: wrote ssh authorized keys file for user: core May 8 23:53:15.054659 polkitd[2070]: Started polkitd version 121 May 8 23:53:15.094087 polkitd[2070]: Loading rules from directory /etc/polkit-1/rules.d May 8 23:53:15.094262 polkitd[2070]: Loading rules from directory /usr/share/polkit-1/rules.d May 8 23:53:15.098119 containerd[1943]: time="2025-05-08T23:53:15.097967217Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 8 23:53:15.099165 polkitd[2070]: Finished loading, compiling and executing 2 rules May 8 23:53:15.101194 dbus-daemon[1909]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 8 23:53:15.101518 systemd[1]: Started polkit.service - Authorization Manager. 
May 8 23:53:15.107179 update-ssh-keys[2095]: Updated "/home/core/.ssh/authorized_keys" May 8 23:53:15.107599 ntpd[1913]: 8 May 23:53:15 ntpd[1913]: bind(24) AF_INET6 fe80::4b6:92ff:fea8:f56d%2#123 flags 0x11 failed: Cannot assign requested address May 8 23:53:15.107599 ntpd[1913]: 8 May 23:53:15 ntpd[1913]: unable to create socket on eth0 (6) for fe80::4b6:92ff:fea8:f56d%2#123 May 8 23:53:15.107599 ntpd[1913]: 8 May 23:53:15 ntpd[1913]: failed to init interface for address fe80::4b6:92ff:fea8:f56d%2 May 8 23:53:15.106903 ntpd[1913]: bind(24) AF_INET6 fe80::4b6:92ff:fea8:f56d%2#123 flags 0x11 failed: Cannot assign requested address May 8 23:53:15.106955 ntpd[1913]: unable to create socket on eth0 (6) for fe80::4b6:92ff:fea8:f56d%2#123 May 8 23:53:15.106982 ntpd[1913]: failed to init interface for address fe80::4b6:92ff:fea8:f56d%2 May 8 23:53:15.107277 polkitd[2070]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 8 23:53:15.112811 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 8 23:53:15.119746 systemd[1]: Finished sshkeys.service. May 8 23:53:15.157777 systemd-resolved[1864]: System hostname changed to 'ip-172-31-31-246'. May 8 23:53:15.157783 systemd-hostnamed[1954]: Hostname set to (transient) May 8 23:53:15.232081 containerd[1943]: time="2025-05-08T23:53:15.231965806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 23:53:15.237670 containerd[1943]: time="2025-05-08T23:53:15.236849434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 23:53:15.237670 containerd[1943]: time="2025-05-08T23:53:15.236916514Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 May 8 23:53:15.237670 containerd[1943]: time="2025-05-08T23:53:15.236955790Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 23:53:15.238484 containerd[1943]: time="2025-05-08T23:53:15.238450222Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 8 23:53:15.239162 containerd[1943]: time="2025-05-08T23:53:15.238959598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 8 23:53:15.239162 containerd[1943]: time="2025-05-08T23:53:15.239094286Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:53:15.241150 containerd[1943]: time="2025-05-08T23:53:15.239139166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 23:53:15.241150 containerd[1943]: time="2025-05-08T23:53:15.240502222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:53:15.241150 containerd[1943]: time="2025-05-08T23:53:15.240532474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 23:53:15.241150 containerd[1943]: time="2025-05-08T23:53:15.240562774Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:53:15.241150 containerd[1943]: time="2025-05-08T23:53:15.240586906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 May 8 23:53:15.241150 containerd[1943]: time="2025-05-08T23:53:15.240744058Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 23:53:15.242328 containerd[1943]: time="2025-05-08T23:53:15.242290978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 23:53:15.242815 containerd[1943]: time="2025-05-08T23:53:15.242778838Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 23:53:15.243955 containerd[1943]: time="2025-05-08T23:53:15.243425866Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 23:53:15.243955 containerd[1943]: time="2025-05-08T23:53:15.243618850Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 23:53:15.243955 containerd[1943]: time="2025-05-08T23:53:15.243734158Z" level=info msg="metadata content store policy set" policy=shared May 8 23:53:15.248901 containerd[1943]: time="2025-05-08T23:53:15.248814766Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 23:53:15.249174 containerd[1943]: time="2025-05-08T23:53:15.249041422Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 23:53:15.250168 containerd[1943]: time="2025-05-08T23:53:15.249377050Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 8 23:53:15.250168 containerd[1943]: time="2025-05-08T23:53:15.249425410Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 May 8 23:53:15.250168 containerd[1943]: time="2025-05-08T23:53:15.249467230Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 23:53:15.250168 containerd[1943]: time="2025-05-08T23:53:15.249722770Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 23:53:15.251451 containerd[1943]: time="2025-05-08T23:53:15.251416486Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 23:53:15.252453 containerd[1943]: time="2025-05-08T23:53:15.251996278Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 8 23:53:15.252453 containerd[1943]: time="2025-05-08T23:53:15.252036682Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 8 23:53:15.252453 containerd[1943]: time="2025-05-08T23:53:15.252082618Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 8 23:53:15.253008 containerd[1943]: time="2025-05-08T23:53:15.252115054Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 23:53:15.253171 containerd[1943]: time="2025-05-08T23:53:15.253141846Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 23:53:15.253287 containerd[1943]: time="2025-05-08T23:53:15.253261234Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 23:53:15.253793 containerd[1943]: time="2025-05-08T23:53:15.253463098Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 May 8 23:53:15.253921 containerd[1943]: time="2025-05-08T23:53:15.253505206Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 23:53:15.254161 containerd[1943]: time="2025-05-08T23:53:15.254018638Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 23:53:15.254161 containerd[1943]: time="2025-05-08T23:53:15.254055286Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 23:53:15.254161 containerd[1943]: time="2025-05-08T23:53:15.254083294Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 23:53:15.255406 containerd[1943]: time="2025-05-08T23:53:15.254418142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 23:53:15.255406 containerd[1943]: time="2025-05-08T23:53:15.254467102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 23:53:15.255406 containerd[1943]: time="2025-05-08T23:53:15.255175498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 23:53:15.255406 containerd[1943]: time="2025-05-08T23:53:15.255209482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 23:53:15.255406 containerd[1943]: time="2025-05-08T23:53:15.255267202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 23:53:15.255406 containerd[1943]: time="2025-05-08T23:53:15.255329470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 23:53:15.255406 containerd[1943]: time="2025-05-08T23:53:15.255364558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 8 23:53:15.255916 containerd[1943]: time="2025-05-08T23:53:15.255756046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 23:53:15.255916 containerd[1943]: time="2025-05-08T23:53:15.255805006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 8 23:53:15.255916 containerd[1943]: time="2025-05-08T23:53:15.255866290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 8 23:53:15.256095 containerd[1943]: time="2025-05-08T23:53:15.255895510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 23:53:15.256448 containerd[1943]: time="2025-05-08T23:53:15.256293622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 8 23:53:15.256807 containerd[1943]: time="2025-05-08T23:53:15.256332394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 23:53:15.256807 containerd[1943]: time="2025-05-08T23:53:15.256661002Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 8 23:53:15.257096 containerd[1943]: time="2025-05-08T23:53:15.256919710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 8 23:53:15.257877 containerd[1943]: time="2025-05-08T23:53:15.256956346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 23:53:15.257877 containerd[1943]: time="2025-05-08T23:53:15.257637658Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 23:53:15.257877 containerd[1943]: time="2025-05-08T23:53:15.257820790Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 May 8 23:53:15.258717 containerd[1943]: time="2025-05-08T23:53:15.258210538Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 8 23:53:15.258717 containerd[1943]: time="2025-05-08T23:53:15.258247474Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 8 23:53:15.258888 containerd[1943]: time="2025-05-08T23:53:15.258857470Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 8 23:53:15.259006 containerd[1943]: time="2025-05-08T23:53:15.258980050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 23:53:15.259173 containerd[1943]: time="2025-05-08T23:53:15.259078522Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 8 23:53:15.259173 containerd[1943]: time="2025-05-08T23:53:15.259105606Z" level=info msg="NRI interface is disabled by configuration." May 8 23:53:15.259386 containerd[1943]: time="2025-05-08T23:53:15.259246618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 8 23:53:15.261159 containerd[1943]: time="2025-05-08T23:53:15.259956334Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 23:53:15.261159 containerd[1943]: time="2025-05-08T23:53:15.260051266Z" level=info msg="Connect containerd service" May 8 23:53:15.261540 containerd[1943]: time="2025-05-08T23:53:15.260112274Z" level=info msg="using legacy CRI server" May 8 23:53:15.261540 containerd[1943]: time="2025-05-08T23:53:15.261473206Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 8 23:53:15.262482 containerd[1943]: time="2025-05-08T23:53:15.262422478Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 23:53:15.266958 containerd[1943]: time="2025-05-08T23:53:15.266887114Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 23:53:15.267677 containerd[1943]: time="2025-05-08T23:53:15.267212290Z" level=info msg="Start subscribing containerd event" May 8 23:53:15.267677 containerd[1943]: time="2025-05-08T23:53:15.267278782Z" level=info msg="Start recovering state" May 8 23:53:15.267677 containerd[1943]: time="2025-05-08T23:53:15.267414166Z" level=info msg="Start event monitor" May 8 23:53:15.267677 containerd[1943]: time="2025-05-08T23:53:15.267438070Z" level=info msg="Start snapshots 
syncer" May 8 23:53:15.267677 containerd[1943]: time="2025-05-08T23:53:15.267458470Z" level=info msg="Start cni network conf syncer for default" May 8 23:53:15.267677 containerd[1943]: time="2025-05-08T23:53:15.267476722Z" level=info msg="Start streaming server" May 8 23:53:15.271533 containerd[1943]: time="2025-05-08T23:53:15.268445830Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 23:53:15.271533 containerd[1943]: time="2025-05-08T23:53:15.268542022Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 23:53:15.268757 systemd[1]: Started containerd.service - containerd container runtime. May 8 23:53:15.277146 containerd[1943]: time="2025-05-08T23:53:15.275268346Z" level=info msg="containerd successfully booted in 0.184642s" May 8 23:53:15.400341 systemd-networkd[1862]: eth0: Gained IPv6LL May 8 23:53:15.406944 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 8 23:53:15.410337 systemd[1]: Reached target network-online.target - Network is Online. May 8 23:53:15.422638 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 8 23:53:15.436858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:53:15.443628 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 8 23:53:15.564221 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 8 23:53:15.588020 amazon-ssm-agent[2115]: Initializing new seelog logger May 8 23:53:15.592298 amazon-ssm-agent[2115]: New Seelog Logger Creation Complete May 8 23:53:15.592408 amazon-ssm-agent[2115]: 2025/05/08 23:53:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 8 23:53:15.592408 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
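Editor's note: the `failed to load cni during init` error above is emitted when containerd's CRI plugin finds no network config under `/etc/cni/net.d` (the `NetworkPluginConfDir` shown in the config dump). It is benign at this stage of boot: the CNI conf syncer started just below keeps watching the directory, and the error clears once cluster bootstrap installs a network plugin. A minimal sketch of the check containerd is effectively performing — the helper name is an illustrative assumption, not containerd's actual code:

```shell
# Hypothetical helper mirroring the CRI plugin's CNI-config check.
# The default directory matches the NetworkPluginConfDir logged above.
cni_config_present() {
  local dir="${1:-/etc/cni/net.d}" f
  # containerd accepts *.conf, *.conflist, or *.json files in this directory.
  for f in "$dir"/*.conf "$dir"/*.conflist "$dir"/*.json; do
    if [ -e "$f" ]; then
      echo "cni config found"
      return 0
    fi
  done
  echo "no network config found in $dir"
  return 1
}
```

Once a network plugin drops a `.conflist` into the directory, the conf syncer picks it up without a containerd restart.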
May 8 23:53:15.593019 amazon-ssm-agent[2115]: 2025/05/08 23:53:15 processing appconfig overrides May 8 23:53:15.595368 amazon-ssm-agent[2115]: 2025/05/08 23:53:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 8 23:53:15.595368 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 8 23:53:15.595496 amazon-ssm-agent[2115]: 2025/05/08 23:53:15 processing appconfig overrides May 8 23:53:15.595816 amazon-ssm-agent[2115]: 2025/05/08 23:53:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 8 23:53:15.595816 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 8 23:53:15.595978 amazon-ssm-agent[2115]: 2025/05/08 23:53:15 processing appconfig overrides May 8 23:53:15.599220 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO Proxy environment variables: May 8 23:53:15.602148 amazon-ssm-agent[2115]: 2025/05/08 23:53:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 8 23:53:15.602148 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 8 23:53:15.602148 amazon-ssm-agent[2115]: 2025/05/08 23:53:15 processing appconfig overrides May 8 23:53:15.698956 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO https_proxy: May 8 23:53:15.798885 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO http_proxy: May 8 23:53:15.899693 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO no_proxy: May 8 23:53:15.996281 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO Checking if agent identity type OnPrem can be assumed May 8 23:53:16.096229 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO Checking if agent identity type EC2 can be assumed May 8 23:53:16.097649 tar[1934]: linux-arm64/README.md May 8 23:53:16.134210 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 8 23:53:16.196225 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO Agent will take identity from EC2 May 8 23:53:16.294884 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO [amazon-ssm-agent] using named pipe channel for IPC May 8 23:53:16.348593 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO [amazon-ssm-agent] using named pipe channel for IPC May 8 23:53:16.348593 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO [amazon-ssm-agent] using named pipe channel for IPC May 8 23:53:16.348593 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 8 23:53:16.348593 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 May 8 23:53:16.348593 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO [amazon-ssm-agent] Starting Core Agent May 8 23:53:16.348593 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO [amazon-ssm-agent] registrar detected. Attempting registration May 8 23:53:16.348593 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO [Registrar] Starting registrar module May 8 23:53:16.348593 amazon-ssm-agent[2115]: 2025-05-08 23:53:15 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 8 23:53:16.348593 amazon-ssm-agent[2115]: 2025-05-08 23:53:16 INFO [EC2Identity] EC2 registration was successful. 
May 8 23:53:16.348593 amazon-ssm-agent[2115]: 2025-05-08 23:53:16 INFO [CredentialRefresher] credentialRefresher has started May 8 23:53:16.348593 amazon-ssm-agent[2115]: 2025-05-08 23:53:16 INFO [CredentialRefresher] Starting credentials refresher loop May 8 23:53:16.348593 amazon-ssm-agent[2115]: 2025-05-08 23:53:16 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 8 23:53:16.394424 amazon-ssm-agent[2115]: 2025-05-08 23:53:16 INFO [CredentialRefresher] Next credential rotation will be in 30.133285833733332 minutes May 8 23:53:16.669822 sshd_keygen[1940]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 23:53:16.713597 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 8 23:53:16.723835 systemd[1]: Starting issuegen.service - Generate /run/issue... May 8 23:53:16.732432 systemd[1]: Started sshd@0-172.31.31.246:22-139.178.68.195:45608.service - OpenSSH per-connection server daemon (139.178.68.195:45608). May 8 23:53:16.746347 systemd[1]: issuegen.service: Deactivated successfully. May 8 23:53:16.746956 systemd[1]: Finished issuegen.service - Generate /run/issue. May 8 23:53:16.754408 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 8 23:53:16.791039 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 8 23:53:16.805925 systemd[1]: Started getty@tty1.service - Getty on tty1. May 8 23:53:16.822942 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 8 23:53:16.825565 systemd[1]: Reached target getty.target - Login Prompts. May 8 23:53:16.996194 sshd[2145]: Accepted publickey for core from 139.178.68.195 port 45608 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:53:16.998761 sshd-session[2145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:17.019638 systemd-logind[1921]: New session 1 of user core. 
May 8 23:53:17.021186 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 8 23:53:17.028618 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 8 23:53:17.070193 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 8 23:53:17.082641 systemd[1]: Starting user@500.service - User Manager for UID 500... May 8 23:53:17.103689 (systemd)[2156]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 23:53:17.318209 systemd[2156]: Queued start job for default target default.target. May 8 23:53:17.323030 systemd[2156]: Created slice app.slice - User Application Slice. May 8 23:53:17.323087 systemd[2156]: Reached target paths.target - Paths. May 8 23:53:17.323120 systemd[2156]: Reached target timers.target - Timers. May 8 23:53:17.328361 systemd[2156]: Starting dbus.socket - D-Bus User Message Bus Socket... May 8 23:53:17.349002 systemd[2156]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 8 23:53:17.349253 systemd[2156]: Reached target sockets.target - Sockets. May 8 23:53:17.349287 systemd[2156]: Reached target basic.target - Basic System. May 8 23:53:17.349367 systemd[2156]: Reached target default.target - Main User Target. May 8 23:53:17.349430 systemd[2156]: Startup finished in 234ms. May 8 23:53:17.349657 systemd[1]: Started user@500.service - User Manager for UID 500. May 8 23:53:17.360676 systemd[1]: Started session-1.scope - Session 1 of User core. 
May 8 23:53:17.388278 amazon-ssm-agent[2115]: 2025-05-08 23:53:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 8 23:53:17.489605 amazon-ssm-agent[2115]: 2025-05-08 23:53:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2166) started May 8 23:53:17.539683 systemd[1]: Started sshd@1-172.31.31.246:22-139.178.68.195:50560.service - OpenSSH per-connection server daemon (139.178.68.195:50560). May 8 23:53:17.590424 amazon-ssm-agent[2115]: 2025-05-08 23:53:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 8 23:53:17.752892 sshd[2174]: Accepted publickey for core from 139.178.68.195 port 50560 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:53:17.755834 sshd-session[2174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:17.764387 systemd-logind[1921]: New session 2 of user core. May 8 23:53:17.770398 systemd[1]: Started session-2.scope - Session 2 of User core. May 8 23:53:17.901804 sshd[2180]: Connection closed by 139.178.68.195 port 50560 May 8 23:53:17.903020 sshd-session[2174]: pam_unix(sshd:session): session closed for user core May 8 23:53:17.908384 systemd[1]: sshd@1-172.31.31.246:22-139.178.68.195:50560.service: Deactivated successfully. May 8 23:53:17.911734 systemd[1]: session-2.scope: Deactivated successfully. May 8 23:53:17.914510 systemd-logind[1921]: Session 2 logged out. Waiting for processes to exit. May 8 23:53:17.917410 systemd-logind[1921]: Removed session 2. May 8 23:53:17.945607 systemd[1]: Started sshd@2-172.31.31.246:22-139.178.68.195:50574.service - OpenSSH per-connection server daemon (139.178.68.195:50574). 
May 8 23:53:18.106198 ntpd[1913]: Listen normally on 7 eth0 [fe80::4b6:92ff:fea8:f56d%2]:123 May 8 23:53:18.107313 ntpd[1913]: 8 May 23:53:18 ntpd[1913]: Listen normally on 7 eth0 [fe80::4b6:92ff:fea8:f56d%2]:123 May 8 23:53:18.132356 sshd[2185]: Accepted publickey for core from 139.178.68.195 port 50574 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:53:18.134816 sshd-session[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:18.143866 systemd-logind[1921]: New session 3 of user core. May 8 23:53:18.161394 systemd[1]: Started session-3.scope - Session 3 of User core. May 8 23:53:18.291471 sshd[2187]: Connection closed by 139.178.68.195 port 50574 May 8 23:53:18.292324 sshd-session[2185]: pam_unix(sshd:session): session closed for user core May 8 23:53:18.297972 systemd-logind[1921]: Session 3 logged out. Waiting for processes to exit. May 8 23:53:18.298412 systemd[1]: sshd@2-172.31.31.246:22-139.178.68.195:50574.service: Deactivated successfully. May 8 23:53:18.303864 systemd[1]: session-3.scope: Deactivated successfully. May 8 23:53:18.307689 systemd-logind[1921]: Removed session 3. May 8 23:53:19.128196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:53:19.131228 systemd[1]: Reached target multi-user.target - Multi-User System. May 8 23:53:19.134171 systemd[1]: Startup finished in 1.086s (kernel) + 8.861s (initrd) + 10.520s (userspace) = 20.469s. 
May 8 23:53:19.155328 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:53:20.565294 kubelet[2196]: E0508 23:53:20.565201 2196 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:53:20.569765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:53:20.570103 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 23:53:20.570845 systemd[1]: kubelet.service: Consumed 1.297s CPU time. May 8 23:53:28.330633 systemd[1]: Started sshd@3-172.31.31.246:22-139.178.68.195:33728.service - OpenSSH per-connection server daemon (139.178.68.195:33728). May 8 23:53:28.509476 sshd[2209]: Accepted publickey for core from 139.178.68.195 port 33728 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:53:28.511813 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:28.519797 systemd-logind[1921]: New session 4 of user core. May 8 23:53:28.530433 systemd[1]: Started session-4.scope - Session 4 of User core. May 8 23:53:28.654162 sshd[2211]: Connection closed by 139.178.68.195 port 33728 May 8 23:53:28.654933 sshd-session[2209]: pam_unix(sshd:session): session closed for user core May 8 23:53:28.660873 systemd[1]: sshd@3-172.31.31.246:22-139.178.68.195:33728.service: Deactivated successfully. May 8 23:53:28.665861 systemd[1]: session-4.scope: Deactivated successfully. May 8 23:53:28.667097 systemd-logind[1921]: Session 4 logged out. Waiting for processes to exit. May 8 23:53:28.668756 systemd-logind[1921]: Removed session 4. 
May 8 23:53:28.701604 systemd[1]: Started sshd@4-172.31.31.246:22-139.178.68.195:33732.service - OpenSSH per-connection server daemon (139.178.68.195:33732). May 8 23:53:28.879031 sshd[2216]: Accepted publickey for core from 139.178.68.195 port 33732 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:53:28.881379 sshd-session[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:28.888453 systemd-logind[1921]: New session 5 of user core. May 8 23:53:28.896369 systemd[1]: Started session-5.scope - Session 5 of User core. May 8 23:53:29.013672 sshd[2218]: Connection closed by 139.178.68.195 port 33732 May 8 23:53:29.015241 sshd-session[2216]: pam_unix(sshd:session): session closed for user core May 8 23:53:29.021107 systemd-logind[1921]: Session 5 logged out. Waiting for processes to exit. May 8 23:53:29.022296 systemd[1]: sshd@4-172.31.31.246:22-139.178.68.195:33732.service: Deactivated successfully. May 8 23:53:29.025110 systemd[1]: session-5.scope: Deactivated successfully. May 8 23:53:29.027469 systemd-logind[1921]: Removed session 5. May 8 23:53:29.052186 systemd[1]: Started sshd@5-172.31.31.246:22-139.178.68.195:33742.service - OpenSSH per-connection server daemon (139.178.68.195:33742). May 8 23:53:29.249164 sshd[2223]: Accepted publickey for core from 139.178.68.195 port 33742 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:53:29.251517 sshd-session[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:29.260477 systemd-logind[1921]: New session 6 of user core. May 8 23:53:29.270388 systemd[1]: Started session-6.scope - Session 6 of User core. May 8 23:53:29.398318 sshd[2225]: Connection closed by 139.178.68.195 port 33742 May 8 23:53:29.398206 sshd-session[2223]: pam_unix(sshd:session): session closed for user core May 8 23:53:29.403283 systemd-logind[1921]: Session 6 logged out. Waiting for processes to exit. 
May 8 23:53:29.403667 systemd[1]: sshd@5-172.31.31.246:22-139.178.68.195:33742.service: Deactivated successfully. May 8 23:53:29.407732 systemd[1]: session-6.scope: Deactivated successfully. May 8 23:53:29.411968 systemd-logind[1921]: Removed session 6. May 8 23:53:29.435613 systemd[1]: Started sshd@6-172.31.31.246:22-139.178.68.195:33756.service - OpenSSH per-connection server daemon (139.178.68.195:33756). May 8 23:53:29.623541 sshd[2230]: Accepted publickey for core from 139.178.68.195 port 33756 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:53:29.625432 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:29.633594 systemd-logind[1921]: New session 7 of user core. May 8 23:53:29.640402 systemd[1]: Started session-7.scope - Session 7 of User core. May 8 23:53:29.783372 sudo[2233]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 8 23:53:29.784001 sudo[2233]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:53:29.799787 sudo[2233]: pam_unix(sudo:session): session closed for user root May 8 23:53:29.823072 sshd[2232]: Connection closed by 139.178.68.195 port 33756 May 8 23:53:29.824224 sshd-session[2230]: pam_unix(sshd:session): session closed for user core May 8 23:53:29.830638 systemd-logind[1921]: Session 7 logged out. Waiting for processes to exit. May 8 23:53:29.832060 systemd[1]: sshd@6-172.31.31.246:22-139.178.68.195:33756.service: Deactivated successfully. May 8 23:53:29.836882 systemd[1]: session-7.scope: Deactivated successfully. May 8 23:53:29.840002 systemd-logind[1921]: Removed session 7. May 8 23:53:29.856427 systemd[1]: Started sshd@7-172.31.31.246:22-139.178.68.195:33772.service - OpenSSH per-connection server daemon (139.178.68.195:33772). 
May 8 23:53:30.051816 sshd[2238]: Accepted publickey for core from 139.178.68.195 port 33772 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:53:30.054623 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:30.063432 systemd-logind[1921]: New session 8 of user core. May 8 23:53:30.069398 systemd[1]: Started session-8.scope - Session 8 of User core. May 8 23:53:30.173909 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 8 23:53:30.174560 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:53:30.180577 sudo[2242]: pam_unix(sudo:session): session closed for user root May 8 23:53:30.190322 sudo[2241]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 8 23:53:30.190954 sudo[2241]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:53:30.220694 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 8 23:53:30.267101 augenrules[2264]: No rules May 8 23:53:30.269866 systemd[1]: audit-rules.service: Deactivated successfully. May 8 23:53:30.270260 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 8 23:53:30.272407 sudo[2241]: pam_unix(sudo:session): session closed for user root May 8 23:53:30.295781 sshd[2240]: Connection closed by 139.178.68.195 port 33772 May 8 23:53:30.296560 sshd-session[2238]: pam_unix(sshd:session): session closed for user core May 8 23:53:30.302444 systemd[1]: sshd@7-172.31.31.246:22-139.178.68.195:33772.service: Deactivated successfully. May 8 23:53:30.305877 systemd[1]: session-8.scope: Deactivated successfully. May 8 23:53:30.307176 systemd-logind[1921]: Session 8 logged out. Waiting for processes to exit. May 8 23:53:30.308858 systemd-logind[1921]: Removed session 8. 
May 8 23:53:30.333026 systemd[1]: Started sshd@8-172.31.31.246:22-139.178.68.195:33778.service - OpenSSH per-connection server daemon (139.178.68.195:33778). May 8 23:53:30.519287 sshd[2272]: Accepted publickey for core from 139.178.68.195 port 33778 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:53:30.521617 sshd-session[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:53:30.529441 systemd-logind[1921]: New session 9 of user core. May 8 23:53:30.539396 systemd[1]: Started session-9.scope - Session 9 of User core. May 8 23:53:30.640426 sudo[2275]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 23:53:30.641019 sudo[2275]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 8 23:53:30.643537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 8 23:53:30.652504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:53:31.172514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:53:31.181702 (kubelet)[2299]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 8 23:53:31.259213 kubelet[2299]: E0508 23:53:31.258972 2299 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 23:53:31.267035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 23:53:31.267411 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 23:53:31.395870 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 8 23:53:31.397755 (dockerd)[2307]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 8 23:53:31.871319 dockerd[2307]: time="2025-05-08T23:53:31.868787721Z" level=info msg="Starting up" May 8 23:53:32.088449 dockerd[2307]: time="2025-05-08T23:53:32.088370887Z" level=info msg="Loading containers: start." May 8 23:53:32.367417 kernel: Initializing XFRM netlink socket May 8 23:53:32.443055 (udev-worker)[2330]: Network interface NamePolicy= disabled on kernel command line. May 8 23:53:32.540310 systemd-networkd[1862]: docker0: Link UP May 8 23:53:32.581448 dockerd[2307]: time="2025-05-08T23:53:32.581377758Z" level=info msg="Loading containers: done." May 8 23:53:32.605199 dockerd[2307]: time="2025-05-08T23:53:32.604552325Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 23:53:32.605199 dockerd[2307]: time="2025-05-08T23:53:32.604684199Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 8 23:53:32.605199 dockerd[2307]: time="2025-05-08T23:53:32.604872745Z" level=info msg="Daemon has completed initialization" May 8 23:53:32.656624 dockerd[2307]: time="2025-05-08T23:53:32.656538267Z" level=info msg="API listen on /run/docker.sock" May 8 23:53:32.656846 systemd[1]: Started docker.service - Docker Application Container Engine. May 8 23:53:34.118689 containerd[1943]: time="2025-05-08T23:53:34.118619069Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 8 23:53:34.724118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3737166337.mount: Deactivated successfully. 
May 8 23:53:36.006882 containerd[1943]: time="2025-05-08T23:53:36.006814333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:36.008981 containerd[1943]: time="2025-05-08T23:53:36.008858398Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233118" May 8 23:53:36.010156 containerd[1943]: time="2025-05-08T23:53:36.009428389Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:36.015106 containerd[1943]: time="2025-05-08T23:53:36.015054835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:36.017673 containerd[1943]: time="2025-05-08T23:53:36.017425221Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.898743123s" May 8 23:53:36.017673 containerd[1943]: time="2025-05-08T23:53:36.017482132Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 8 23:53:36.018840 containerd[1943]: time="2025-05-08T23:53:36.018688021Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 8 23:53:37.409459 containerd[1943]: time="2025-05-08T23:53:37.409097741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:37.411231 containerd[1943]: time="2025-05-08T23:53:37.411160937Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529571" May 8 23:53:37.412161 containerd[1943]: time="2025-05-08T23:53:37.411684978Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:37.422635 containerd[1943]: time="2025-05-08T23:53:37.422547812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:37.425039 containerd[1943]: time="2025-05-08T23:53:37.424853095Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.406102873s" May 8 23:53:37.425039 containerd[1943]: time="2025-05-08T23:53:37.424905857Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 8 23:53:37.426021 containerd[1943]: time="2025-05-08T23:53:37.425964399Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 8 23:53:38.693261 containerd[1943]: time="2025-05-08T23:53:38.693184848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:38.695344 containerd[1943]: time="2025-05-08T23:53:38.695271924Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482173" May 8 23:53:38.696433 containerd[1943]: time="2025-05-08T23:53:38.696345303Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:38.702070 containerd[1943]: time="2025-05-08T23:53:38.701955472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:53:38.704447 containerd[1943]: time="2025-05-08T23:53:38.704237259Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.278213009s" May 8 23:53:38.704447 containerd[1943]: time="2025-05-08T23:53:38.704292395Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 8 23:53:38.705287 containerd[1943]: time="2025-05-08T23:53:38.704876143Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 8 23:53:40.403036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3598333094.mount: Deactivated successfully. 
May 8 23:53:40.950948 containerd[1943]: time="2025-05-08T23:53:40.950887930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 23:53:40.953842 containerd[1943]: time="2025-05-08T23:53:40.953783366Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370351"
May 8 23:53:40.955371 containerd[1943]: time="2025-05-08T23:53:40.955324439Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 23:53:40.958683 containerd[1943]: time="2025-05-08T23:53:40.958620054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 23:53:40.960376 containerd[1943]: time="2025-05-08T23:53:40.960320756Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 2.255398496s"
May 8 23:53:40.960485 containerd[1943]: time="2025-05-08T23:53:40.960374093Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\""
May 8 23:53:40.961175 containerd[1943]: time="2025-05-08T23:53:40.961079784Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 8 23:53:41.450597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 8 23:53:41.461066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 23:53:41.492924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount143336203.mount: Deactivated successfully.
May 8 23:53:41.887534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 23:53:41.899074 (kubelet)[2584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 23:53:41.995946 kubelet[2584]: E0508 23:53:41.995710 2584 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 23:53:42.006715 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 23:53:42.007393 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 23:53:42.887965 containerd[1943]: time="2025-05-08T23:53:42.887901713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 23:53:42.891489 containerd[1943]: time="2025-05-08T23:53:42.891395109Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
May 8 23:53:42.895061 containerd[1943]: time="2025-05-08T23:53:42.893745969Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 23:53:42.905243 containerd[1943]: time="2025-05-08T23:53:42.905183987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 23:53:42.910475 containerd[1943]: time="2025-05-08T23:53:42.910401869Z"
level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.949234457s"
May 8 23:53:42.910475 containerd[1943]: time="2025-05-08T23:53:42.910466733Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 8 23:53:42.911501 containerd[1943]: time="2025-05-08T23:53:42.911455794Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 8 23:53:43.404301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2420786186.mount: Deactivated successfully.
May 8 23:53:43.415175 containerd[1943]: time="2025-05-08T23:53:43.414305888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 23:53:43.415642 containerd[1943]: time="2025-05-08T23:53:43.415570416Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
May 8 23:53:43.416584 containerd[1943]: time="2025-05-08T23:53:43.416543069Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 23:53:43.422658 containerd[1943]: time="2025-05-08T23:53:43.422593670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 23:53:43.424627 containerd[1943]: time="2025-05-08T23:53:43.424483589Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id
\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 512.972383ms"
May 8 23:53:43.424627 containerd[1943]: time="2025-05-08T23:53:43.424536087Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 8 23:53:43.425752 containerd[1943]: time="2025-05-08T23:53:43.425442569Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 8 23:53:44.036351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1336200100.mount: Deactivated successfully.
May 8 23:53:45.193014 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
May 8 23:53:46.486529 containerd[1943]: time="2025-05-08T23:53:46.486467912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 23:53:46.500165 containerd[1943]: time="2025-05-08T23:53:46.498753450Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 23:53:46.500165 containerd[1943]: time="2025-05-08T23:53:46.498790692Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812469"
May 8 23:53:46.504698 containerd[1943]: time="2025-05-08T23:53:46.504631626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 8 23:53:46.507570 containerd[1943]: time="2025-05-08T23:53:46.507506708Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id
\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.08200917s"
May 8 23:53:46.507570 containerd[1943]: time="2025-05-08T23:53:46.507565394Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
May 8 23:53:52.256212 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 8 23:53:52.265286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 23:53:52.611477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 23:53:52.614990 (kubelet)[2722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 8 23:53:52.693695 kubelet[2722]: E0508 23:53:52.693573 2722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 8 23:53:52.700350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 8 23:53:52.700665 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 8 23:53:54.700836 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 23:53:54.715942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 23:53:54.770407 systemd[1]: Reloading requested from client PID 2736 ('systemctl') (unit session-9.scope)...
May 8 23:53:54.770436 systemd[1]: Reloading...
May 8 23:53:54.999208 zram_generator::config[2779]: No configuration found.
May 8 23:53:55.246501 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 8 23:53:55.414319 systemd[1]: Reloading finished in 643 ms.
May 8 23:53:55.520528 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 23:53:55.523250 systemd[1]: kubelet.service: Deactivated successfully.
May 8 23:53:55.523884 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 23:53:55.535748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 8 23:53:55.899405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 8 23:53:55.916709 (kubelet)[2842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 8 23:53:55.988755 kubelet[2842]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 23:53:55.988755 kubelet[2842]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 8 23:53:55.988755 kubelet[2842]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 8 23:53:55.989386 kubelet[2842]: I0508 23:53:55.988679 2842 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 8 23:53:57.618892 kubelet[2842]: I0508 23:53:57.618782 2842 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 8 23:53:57.618892 kubelet[2842]: I0508 23:53:57.618870 2842 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 8 23:53:57.619656 kubelet[2842]: I0508 23:53:57.619369 2842 server.go:954] "Client rotation is on, will bootstrap in background"
May 8 23:53:57.667736 kubelet[2842]: E0508 23:53:57.667684 2842 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.246:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.246:6443: connect: connection refused" logger="UnhandledError"
May 8 23:53:57.672994 kubelet[2842]: I0508 23:53:57.672705 2842 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 8 23:53:57.686788 kubelet[2842]: E0508 23:53:57.686714 2842 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 8 23:53:57.686788 kubelet[2842]: I0508 23:53:57.686762 2842 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 8 23:53:57.691602 kubelet[2842]: I0508 23:53:57.691514 2842 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /"
May 8 23:53:57.693568 kubelet[2842]: I0508 23:53:57.693495 2842 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 8 23:53:57.693867 kubelet[2842]: I0508 23:53:57.693569 2842 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-246","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 8 23:53:57.694038 kubelet[2842]: I0508 23:53:57.693896 2842 topology_manager.go:138] "Creating topology manager with none
policy"
May 8 23:53:57.694038 kubelet[2842]: I0508 23:53:57.693920 2842 container_manager_linux.go:304] "Creating device plugin manager"
May 8 23:53:57.694195 kubelet[2842]: I0508 23:53:57.694163 2842 state_mem.go:36] "Initialized new in-memory state store"
May 8 23:53:57.699896 kubelet[2842]: I0508 23:53:57.699717 2842 kubelet.go:446] "Attempting to sync node with API server"
May 8 23:53:57.699896 kubelet[2842]: I0508 23:53:57.699760 2842 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 8 23:53:57.699896 kubelet[2842]: I0508 23:53:57.699797 2842 kubelet.go:352] "Adding apiserver pod source"
May 8 23:53:57.699896 kubelet[2842]: I0508 23:53:57.699817 2842 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 8 23:53:57.708554 kubelet[2842]: W0508 23:53:57.708304 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-246&limit=500&resourceVersion=0": dial tcp 172.31.31.246:6443: connect: connection refused
May 8 23:53:57.708554 kubelet[2842]: E0508 23:53:57.708400 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-246&limit=500&resourceVersion=0\": dial tcp 172.31.31.246:6443: connect: connection refused" logger="UnhandledError"
May 8 23:53:57.711795 kubelet[2842]: W0508 23:53:57.711719 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.246:6443: connect: connection refused
May 8 23:53:57.714036 kubelet[2842]: E0508 23:53:57.711999 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch
*v1.Service: failed to list *v1.Service: Get \"https://172.31.31.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.246:6443: connect: connection refused" logger="UnhandledError"
May 8 23:53:57.714862 kubelet[2842]: I0508 23:53:57.714250 2842 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 8 23:53:57.715617 kubelet[2842]: I0508 23:53:57.715569 2842 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 8 23:53:57.715745 kubelet[2842]: W0508 23:53:57.715731 2842 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 8 23:53:57.718265 kubelet[2842]: I0508 23:53:57.718216 2842 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 8 23:53:57.718413 kubelet[2842]: I0508 23:53:57.718283 2842 server.go:1287] "Started kubelet"
May 8 23:53:57.721891 kubelet[2842]: I0508 23:53:57.721838 2842 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 8 23:53:57.725398 kubelet[2842]: I0508 23:53:57.725358 2842 server.go:490] "Adding debug handlers to kubelet server"
May 8 23:53:57.726972 kubelet[2842]: I0508 23:53:57.726877 2842 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 8 23:53:57.727534 kubelet[2842]: I0508 23:53:57.727487 2842 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 8 23:53:57.728930 kubelet[2842]: E0508 23:53:57.728700 2842 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.246:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.246:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-246.183db2730e2a3415 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-246,UID:ip-172-31-31-246,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-246,},FirstTimestamp:2025-05-08 23:53:57.718250517 +0000 UTC m=+1.794503198,LastTimestamp:2025-05-08 23:53:57.718250517 +0000 UTC m=+1.794503198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-246,}"
May 8 23:53:57.731234 kubelet[2842]: I0508 23:53:57.731183 2842 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 8 23:53:57.733039 kubelet[2842]: I0508 23:53:57.732997 2842 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 8 23:53:57.738634 kubelet[2842]: E0508 23:53:57.738586 2842 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-31-246\" not found"
May 8 23:53:57.738868 kubelet[2842]: I0508 23:53:57.738842 2842 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 8 23:53:57.739383 kubelet[2842]: I0508 23:53:57.739350 2842 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 8 23:53:57.739634 kubelet[2842]: I0508 23:53:57.739611 2842 reconciler.go:26] "Reconciler: start to sync state"
May 8 23:53:57.740987 kubelet[2842]: W0508 23:53:57.740910 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.246:6443: connect: connection refused
May 8 23:53:57.741311 kubelet[2842]: E0508 23:53:57.741276 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get
\"https://172.31.31.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.246:6443: connect: connection refused" logger="UnhandledError"
May 8 23:53:57.741769 kubelet[2842]: I0508 23:53:57.741737 2842 factory.go:221] Registration of the systemd container factory successfully
May 8 23:53:57.742198 kubelet[2842]: I0508 23:53:57.742163 2842 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 8 23:53:57.742842 kubelet[2842]: E0508 23:53:57.742808 2842 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 8 23:53:57.745700 kubelet[2842]: E0508 23:53:57.745643 2842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-246?timeout=10s\": dial tcp 172.31.31.246:6443: connect: connection refused" interval="200ms"
May 8 23:53:57.746182 kubelet[2842]: I0508 23:53:57.746117 2842 factory.go:221] Registration of the containerd container factory successfully
May 8 23:53:57.776749 kubelet[2842]: I0508 23:53:57.776519 2842 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 8 23:53:57.779235 kubelet[2842]: I0508 23:53:57.779201 2842 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 8 23:53:57.779852 kubelet[2842]: I0508 23:53:57.779299 2842 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 8 23:53:57.779852 kubelet[2842]: I0508 23:53:57.779331 2842 state_mem.go:36] "Initialized new in-memory state store"
May 8 23:53:57.781762 kubelet[2842]: I0508 23:53:57.781443 2842 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6"
May 8 23:53:57.781762 kubelet[2842]: I0508 23:53:57.781546 2842 status_manager.go:227] "Starting to sync pod status with apiserver"
May 8 23:53:57.781762 kubelet[2842]: I0508 23:53:57.781612 2842 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 8 23:53:57.781762 kubelet[2842]: I0508 23:53:57.781633 2842 kubelet.go:2388] "Starting kubelet main sync loop"
May 8 23:53:57.782616 kubelet[2842]: E0508 23:53:57.781722 2842 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 8 23:53:57.785738 kubelet[2842]: W0508 23:53:57.785667 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.246:6443: connect: connection refused
May 8 23:53:57.786273 kubelet[2842]: E0508 23:53:57.785887 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.246:6443: connect: connection refused" logger="UnhandledError"
May 8 23:53:57.786742 kubelet[2842]: I0508 23:53:57.786706 2842 policy_none.go:49] "None policy: Start"
May 8 23:53:57.786955 kubelet[2842]: I0508 23:53:57.786745 2842 memory_manager.go:186] "Starting memorymanager" policy="None"
May 8 23:53:57.786955 kubelet[2842]: I0508 23:53:57.786770 2842 state_mem.go:35] "Initializing new in-memory state store"
May 8 23:53:57.798487 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 8 23:53:57.814033 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 8 23:53:57.820897 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 8 23:53:57.833172 kubelet[2842]: I0508 23:53:57.832931 2842 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 8 23:53:57.833317 kubelet[2842]: I0508 23:53:57.833269 2842 eviction_manager.go:189] "Eviction manager: starting control loop"
May 8 23:53:57.833375 kubelet[2842]: I0508 23:53:57.833292 2842 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 8 23:53:57.834059 kubelet[2842]: I0508 23:53:57.833916 2842 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 8 23:53:57.837227 kubelet[2842]: E0508 23:53:57.837120 2842 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 8 23:53:57.837227 kubelet[2842]: E0508 23:53:57.837230 2842 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-246\" not found"
May 8 23:53:57.901471 systemd[1]: Created slice kubepods-burstable-podb7c60f0a3f70d7e88300966dab410d63.slice - libcontainer container kubepods-burstable-podb7c60f0a3f70d7e88300966dab410d63.slice.
May 8 23:53:57.917184 kubelet[2842]: E0508 23:53:57.916835 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-246\" not found" node="ip-172-31-31-246"
May 8 23:53:57.920840 systemd[1]: Created slice kubepods-burstable-pod9a8f31e53fa40198eea417b26700d3c6.slice - libcontainer container kubepods-burstable-pod9a8f31e53fa40198eea417b26700d3c6.slice.
May 8 23:53:57.926607 kubelet[2842]: E0508 23:53:57.926553 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-246\" not found" node="ip-172-31-31-246"
May 8 23:53:57.930206 systemd[1]: Created slice kubepods-burstable-podcd2b4ba0fde621df2bd32c65dd3b818a.slice - libcontainer container kubepods-burstable-podcd2b4ba0fde621df2bd32c65dd3b818a.slice.
May 8 23:53:57.935217 kubelet[2842]: E0508 23:53:57.935150 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-246\" not found" node="ip-172-31-31-246"
May 8 23:53:57.937314 kubelet[2842]: I0508 23:53:57.936713 2842 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-31-246"
May 8 23:53:57.937314 kubelet[2842]: E0508 23:53:57.937260 2842 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.31.246:6443/api/v1/nodes\": dial tcp 172.31.31.246:6443: connect: connection refused" node="ip-172-31-31-246"
May 8 23:53:57.947011 kubelet[2842]: E0508 23:53:57.946945 2842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-246?timeout=10s\": dial tcp 172.31.31.246:6443: connect: connection refused" interval="400ms"
May 8 23:53:58.040709 kubelet[2842]: I0508 23:53:58.040667 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd2b4ba0fde621df2bd32c65dd3b818a-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-246\" (UID: \"cd2b4ba0fde621df2bd32c65dd3b818a\") " pod="kube-system/kube-controller-manager-ip-172-31-31-246"
May 8 23:53:58.040709 kubelet[2842]: I0508 23:53:58.040726 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName:
\"kubernetes.io/host-path/cd2b4ba0fde621df2bd32c65dd3b818a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-246\" (UID: \"cd2b4ba0fde621df2bd32c65dd3b818a\") " pod="kube-system/kube-controller-manager-ip-172-31-31-246"
May 8 23:53:58.040709 kubelet[2842]: I0508 23:53:58.040773 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd2b4ba0fde621df2bd32c65dd3b818a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-246\" (UID: \"cd2b4ba0fde621df2bd32c65dd3b818a\") " pod="kube-system/kube-controller-manager-ip-172-31-31-246"
May 8 23:53:58.041031 kubelet[2842]: I0508 23:53:58.040807 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a8f31e53fa40198eea417b26700d3c6-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-246\" (UID: \"9a8f31e53fa40198eea417b26700d3c6\") " pod="kube-system/kube-apiserver-ip-172-31-31-246"
May 8 23:53:58.041031 kubelet[2842]: I0508 23:53:58.040840 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a8f31e53fa40198eea417b26700d3c6-ca-certs\") pod \"kube-apiserver-ip-172-31-31-246\" (UID: \"9a8f31e53fa40198eea417b26700d3c6\") " pod="kube-system/kube-apiserver-ip-172-31-31-246"
May 8 23:53:58.041031 kubelet[2842]: I0508 23:53:58.040880 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a8f31e53fa40198eea417b26700d3c6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-246\" (UID: \"9a8f31e53fa40198eea417b26700d3c6\") " pod="kube-system/kube-apiserver-ip-172-31-31-246"
May 8 23:53:58.041031 kubelet[2842]: I0508 23:53:58.040915 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cd2b4ba0fde621df2bd32c65dd3b818a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-246\" (UID: \"cd2b4ba0fde621df2bd32c65dd3b818a\") " pod="kube-system/kube-controller-manager-ip-172-31-31-246"
May 8 23:53:58.041031 kubelet[2842]: I0508 23:53:58.040951 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd2b4ba0fde621df2bd32c65dd3b818a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-246\" (UID: \"cd2b4ba0fde621df2bd32c65dd3b818a\") " pod="kube-system/kube-controller-manager-ip-172-31-31-246"
May 8 23:53:58.041410 kubelet[2842]: I0508 23:53:58.040990 2842 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7c60f0a3f70d7e88300966dab410d63-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-246\" (UID: \"b7c60f0a3f70d7e88300966dab410d63\") " pod="kube-system/kube-scheduler-ip-172-31-31-246"
May 8 23:53:58.140039 kubelet[2842]: I0508 23:53:58.139966 2842 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-31-246"
May 8 23:53:58.140554 kubelet[2842]: E0508 23:53:58.140507 2842 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.31.246:6443/api/v1/nodes\": dial tcp 172.31.31.246:6443: connect: connection refused" node="ip-172-31-31-246"
May 8 23:53:58.219954 containerd[1943]: time="2025-05-08T23:53:58.219808882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-246,Uid:b7c60f0a3f70d7e88300966dab410d63,Namespace:kube-system,Attempt:0,}"
May 8 23:53:58.229116 containerd[1943]: time="2025-05-08T23:53:58.228742421Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-246,Uid:9a8f31e53fa40198eea417b26700d3c6,Namespace:kube-system,Attempt:0,}" May 8 23:53:58.237658 containerd[1943]: time="2025-05-08T23:53:58.237580931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-246,Uid:cd2b4ba0fde621df2bd32c65dd3b818a,Namespace:kube-system,Attempt:0,}" May 8 23:53:58.348166 kubelet[2842]: E0508 23:53:58.348059 2842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-246?timeout=10s\": dial tcp 172.31.31.246:6443: connect: connection refused" interval="800ms" May 8 23:53:58.544041 kubelet[2842]: I0508 23:53:58.543893 2842 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-31-246" May 8 23:53:58.544802 kubelet[2842]: E0508 23:53:58.544724 2842 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.31.246:6443/api/v1/nodes\": dial tcp 172.31.31.246:6443: connect: connection refused" node="ip-172-31-31-246" May 8 23:53:58.549718 kubelet[2842]: W0508 23:53:58.549615 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-246&limit=500&resourceVersion=0": dial tcp 172.31.31.246:6443: connect: connection refused May 8 23:53:58.549876 kubelet[2842]: E0508 23:53:58.549721 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-246&limit=500&resourceVersion=0\": dial tcp 172.31.31.246:6443: connect: connection refused" logger="UnhandledError" May 8 23:53:58.691424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3567769346.mount: Deactivated successfully. 
May 8 23:53:58.699714 containerd[1943]: time="2025-05-08T23:53:58.699631096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:53:58.701818 containerd[1943]: time="2025-05-08T23:53:58.701749140Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:53:58.703677 containerd[1943]: time="2025-05-08T23:53:58.703604648Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 8 23:53:58.704872 containerd[1943]: time="2025-05-08T23:53:58.704550291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 23:53:58.707437 containerd[1943]: time="2025-05-08T23:53:58.707389115Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:53:58.709654 containerd[1943]: time="2025-05-08T23:53:58.709568821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 8 23:53:58.716181 containerd[1943]: time="2025-05-08T23:53:58.715951884Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:53:58.718968 containerd[1943]: time="2025-05-08T23:53:58.718907325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 8 23:53:58.721723 
containerd[1943]: time="2025-05-08T23:53:58.721638635Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 483.941421ms" May 8 23:53:58.727701 containerd[1943]: time="2025-05-08T23:53:58.727642759Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 507.721685ms" May 8 23:53:58.729281 containerd[1943]: time="2025-05-08T23:53:58.729223113Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 500.368895ms" May 8 23:53:58.913415 containerd[1943]: time="2025-05-08T23:53:58.912262384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:53:58.913415 containerd[1943]: time="2025-05-08T23:53:58.913151595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:53:58.913415 containerd[1943]: time="2025-05-08T23:53:58.913181940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:53:58.914271 containerd[1943]: time="2025-05-08T23:53:58.913725232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:53:58.921287 containerd[1943]: time="2025-05-08T23:53:58.920671197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:53:58.921287 containerd[1943]: time="2025-05-08T23:53:58.921049896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:53:58.921287 containerd[1943]: time="2025-05-08T23:53:58.921087425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:53:58.922511 containerd[1943]: time="2025-05-08T23:53:58.922270010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:53:58.927933 containerd[1943]: time="2025-05-08T23:53:58.925971802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:53:58.927933 containerd[1943]: time="2025-05-08T23:53:58.926627706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:53:58.927933 containerd[1943]: time="2025-05-08T23:53:58.926690290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:53:58.927933 containerd[1943]: time="2025-05-08T23:53:58.926888503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:53:58.975462 systemd[1]: Started cri-containerd-c728bed030f3bc1e844c96643ea244bea1fa86dc115d9bace5782bde7be2f79f.scope - libcontainer container c728bed030f3bc1e844c96643ea244bea1fa86dc115d9bace5782bde7be2f79f. 
May 8 23:53:58.988953 systemd[1]: Started cri-containerd-be02131bd700b20c87c3b8a6540bbc6d4a1567d35dc1318ad9240b452e115889.scope - libcontainer container be02131bd700b20c87c3b8a6540bbc6d4a1567d35dc1318ad9240b452e115889. May 8 23:53:59.008495 systemd[1]: Started cri-containerd-ad17dfe7ee9453fc961421930a169ac29534889c16425d885542d0b022b069d5.scope - libcontainer container ad17dfe7ee9453fc961421930a169ac29534889c16425d885542d0b022b069d5. May 8 23:53:59.018091 kubelet[2842]: W0508 23:53:59.017776 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.246:6443: connect: connection refused May 8 23:53:59.018091 kubelet[2842]: E0508 23:53:59.018032 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.246:6443: connect: connection refused" logger="UnhandledError" May 8 23:53:59.136806 containerd[1943]: time="2025-05-08T23:53:59.135564925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-246,Uid:cd2b4ba0fde621df2bd32c65dd3b818a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c728bed030f3bc1e844c96643ea244bea1fa86dc115d9bace5782bde7be2f79f\"" May 8 23:53:59.140773 containerd[1943]: time="2025-05-08T23:53:59.140445200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-246,Uid:9a8f31e53fa40198eea417b26700d3c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"be02131bd700b20c87c3b8a6540bbc6d4a1567d35dc1318ad9240b452e115889\"" May 8 23:53:59.151874 kubelet[2842]: E0508 23:53:59.150794 2842 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.31.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-246?timeout=10s\": dial tcp 172.31.31.246:6443: connect: connection refused" interval="1.6s" May 8 23:53:59.159846 containerd[1943]: time="2025-05-08T23:53:59.159611408Z" level=info msg="CreateContainer within sandbox \"be02131bd700b20c87c3b8a6540bbc6d4a1567d35dc1318ad9240b452e115889\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 23:53:59.161295 containerd[1943]: time="2025-05-08T23:53:59.161066172Z" level=info msg="CreateContainer within sandbox \"c728bed030f3bc1e844c96643ea244bea1fa86dc115d9bace5782bde7be2f79f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 23:53:59.177353 containerd[1943]: time="2025-05-08T23:53:59.176660795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-246,Uid:b7c60f0a3f70d7e88300966dab410d63,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad17dfe7ee9453fc961421930a169ac29534889c16425d885542d0b022b069d5\"" May 8 23:53:59.184600 containerd[1943]: time="2025-05-08T23:53:59.184499714Z" level=info msg="CreateContainer within sandbox \"ad17dfe7ee9453fc961421930a169ac29534889c16425d885542d0b022b069d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 23:53:59.205770 containerd[1943]: time="2025-05-08T23:53:59.205557712Z" level=info msg="CreateContainer within sandbox \"be02131bd700b20c87c3b8a6540bbc6d4a1567d35dc1318ad9240b452e115889\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0eab2a4cce17becf4a526e805d931520d8c9ab4877a46bba4fd8e491536182f9\"" May 8 23:53:59.206777 containerd[1943]: time="2025-05-08T23:53:59.206716644Z" level=info msg="StartContainer for \"0eab2a4cce17becf4a526e805d931520d8c9ab4877a46bba4fd8e491536182f9\"" May 8 23:53:59.208749 containerd[1943]: time="2025-05-08T23:53:59.208567810Z" level=info msg="CreateContainer within sandbox 
\"c728bed030f3bc1e844c96643ea244bea1fa86dc115d9bace5782bde7be2f79f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"09271303eae6883860c28cb6bc81467cd729526ddf455a4252dd3f3589097868\"" May 8 23:53:59.212208 containerd[1943]: time="2025-05-08T23:53:59.210753465Z" level=info msg="StartContainer for \"09271303eae6883860c28cb6bc81467cd729526ddf455a4252dd3f3589097868\"" May 8 23:53:59.215050 containerd[1943]: time="2025-05-08T23:53:59.214991041Z" level=info msg="CreateContainer within sandbox \"ad17dfe7ee9453fc961421930a169ac29534889c16425d885542d0b022b069d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"61229df5358bb672314e805ee1ae519e6c46ae91bf5a50e8fb74c568e43aa657\"" May 8 23:53:59.216050 containerd[1943]: time="2025-05-08T23:53:59.216006033Z" level=info msg="StartContainer for \"61229df5358bb672314e805ee1ae519e6c46ae91bf5a50e8fb74c568e43aa657\"" May 8 23:53:59.269484 systemd[1]: Started cri-containerd-0eab2a4cce17becf4a526e805d931520d8c9ab4877a46bba4fd8e491536182f9.scope - libcontainer container 0eab2a4cce17becf4a526e805d931520d8c9ab4877a46bba4fd8e491536182f9. May 8 23:53:59.303525 systemd[1]: Started cri-containerd-61229df5358bb672314e805ee1ae519e6c46ae91bf5a50e8fb74c568e43aa657.scope - libcontainer container 61229df5358bb672314e805ee1ae519e6c46ae91bf5a50e8fb74c568e43aa657. May 8 23:53:59.318648 systemd[1]: Started cri-containerd-09271303eae6883860c28cb6bc81467cd729526ddf455a4252dd3f3589097868.scope - libcontainer container 09271303eae6883860c28cb6bc81467cd729526ddf455a4252dd3f3589097868. 
May 8 23:53:59.326005 kubelet[2842]: W0508 23:53:59.325877 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.246:6443: connect: connection refused May 8 23:53:59.326005 kubelet[2842]: E0508 23:53:59.325978 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.246:6443: connect: connection refused" logger="UnhandledError" May 8 23:53:59.348030 kubelet[2842]: I0508 23:53:59.347796 2842 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-31-246" May 8 23:53:59.351537 kubelet[2842]: E0508 23:53:59.351452 2842 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.31.31.246:6443/api/v1/nodes\": dial tcp 172.31.31.246:6443: connect: connection refused" node="ip-172-31-31-246" May 8 23:53:59.381633 kubelet[2842]: W0508 23:53:59.380393 2842 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.246:6443: connect: connection refused May 8 23:53:59.381633 kubelet[2842]: E0508 23:53:59.380491 2842 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.246:6443: connect: connection refused" logger="UnhandledError" May 8 23:53:59.440551 containerd[1943]: time="2025-05-08T23:53:59.440410463Z" level=info msg="StartContainer for 
\"0eab2a4cce17becf4a526e805d931520d8c9ab4877a46bba4fd8e491536182f9\" returns successfully" May 8 23:53:59.451325 containerd[1943]: time="2025-05-08T23:53:59.451217645Z" level=info msg="StartContainer for \"09271303eae6883860c28cb6bc81467cd729526ddf455a4252dd3f3589097868\" returns successfully" May 8 23:53:59.464053 containerd[1943]: time="2025-05-08T23:53:59.463982631Z" level=info msg="StartContainer for \"61229df5358bb672314e805ee1ae519e6c46ae91bf5a50e8fb74c568e43aa657\" returns successfully" May 8 23:53:59.666326 update_engine[1923]: I20250508 23:53:59.666237 1923 update_attempter.cc:509] Updating boot flags... May 8 23:53:59.817176 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 43 scanned by (udev-worker) (3136) May 8 23:53:59.817790 kubelet[2842]: E0508 23:53:59.817757 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-246\" not found" node="ip-172-31-31-246" May 8 23:53:59.831148 kubelet[2842]: E0508 23:53:59.828887 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-246\" not found" node="ip-172-31-31-246" May 8 23:53:59.837467 kubelet[2842]: E0508 23:53:59.837208 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-246\" not found" node="ip-172-31-31-246" May 8 23:54:00.837338 kubelet[2842]: E0508 23:54:00.837289 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-246\" not found" node="ip-172-31-31-246" May 8 23:54:00.837845 kubelet[2842]: E0508 23:54:00.837810 2842 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-31-246\" not found" node="ip-172-31-31-246" May 8 23:54:00.954063 kubelet[2842]: I0508 23:54:00.954017 2842 kubelet_node_status.go:76] "Attempting to 
register node" node="ip-172-31-31-246" May 8 23:54:03.720150 kubelet[2842]: I0508 23:54:03.718354 2842 apiserver.go:52] "Watching apiserver" May 8 23:54:03.740212 kubelet[2842]: I0508 23:54:03.740144 2842 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 23:54:03.748741 kubelet[2842]: E0508 23:54:03.748681 2842 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-246\" not found" node="ip-172-31-31-246" May 8 23:54:03.841234 kubelet[2842]: E0508 23:54:03.840987 2842 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-246.183db2730e2a3415 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-246,UID:ip-172-31-31-246,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-246,},FirstTimestamp:2025-05-08 23:53:57.718250517 +0000 UTC m=+1.794503198,LastTimestamp:2025-05-08 23:53:57.718250517 +0000 UTC m=+1.794503198,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-246,}" May 8 23:54:03.893930 kubelet[2842]: I0508 23:54:03.892496 2842 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-31-246" May 8 23:54:03.893930 kubelet[2842]: E0508 23:54:03.892557 2842 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ip-172-31-31-246\": node \"ip-172-31-31-246\" not found" May 8 23:54:03.901705 kubelet[2842]: E0508 23:54:03.901488 2842 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-246.183db2730fa09ace default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-246,UID:ip-172-31-31-246,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-31-246,},FirstTimestamp:2025-05-08 23:53:57.742787278 +0000 UTC m=+1.819039923,LastTimestamp:2025-05-08 23:53:57.742787278 +0000 UTC m=+1.819039923,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-246,}" May 8 23:54:03.945071 kubelet[2842]: I0508 23:54:03.944979 2842 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-246" May 8 23:54:03.998639 kubelet[2842]: E0508 23:54:03.997867 2842 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-31-246\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-31-246" May 8 23:54:03.998639 kubelet[2842]: I0508 23:54:03.997918 2842 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-246" May 8 23:54:04.018325 kubelet[2842]: E0508 23:54:04.018043 2842 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-246\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-246" May 8 23:54:04.018325 kubelet[2842]: I0508 23:54:04.018094 2842 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-246" May 8 23:54:04.025659 kubelet[2842]: E0508 23:54:04.024656 2842 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-31-246\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-31-246" May 8 23:54:04.051160 kubelet[2842]: I0508 
23:54:04.050835 2842 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-246" May 8 23:54:04.060590 kubelet[2842]: E0508 23:54:04.060546 2842 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-246\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-31-246" May 8 23:54:05.893316 systemd[1]: Reloading requested from client PID 3222 ('systemctl') (unit session-9.scope)... May 8 23:54:05.893370 systemd[1]: Reloading... May 8 23:54:06.169173 zram_generator::config[3263]: No configuration found. May 8 23:54:06.440379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 23:54:06.644576 systemd[1]: Reloading finished in 749 ms. May 8 23:54:06.737223 kubelet[2842]: I0508 23:54:06.736289 2842 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 23:54:06.736573 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:54:06.753249 systemd[1]: kubelet.service: Deactivated successfully. May 8 23:54:06.753828 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 8 23:54:06.754022 systemd[1]: kubelet.service: Consumed 2.546s CPU time, 124.0M memory peak, 0B memory swap peak. May 8 23:54:06.771513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 8 23:54:07.158534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 8 23:54:07.174715 (kubelet)[3324]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 8 23:54:07.274512 kubelet[3324]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:54:07.274512 kubelet[3324]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 23:54:07.274512 kubelet[3324]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 23:54:07.274512 kubelet[3324]: I0508 23:54:07.274333 3324 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 23:54:07.295179 kubelet[3324]: I0508 23:54:07.294239 3324 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 23:54:07.295179 kubelet[3324]: I0508 23:54:07.294285 3324 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 23:54:07.295179 kubelet[3324]: I0508 23:54:07.294769 3324 server.go:954] "Client rotation is on, will bootstrap in background" May 8 23:54:07.297678 kubelet[3324]: I0508 23:54:07.297178 3324 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 8 23:54:07.301846 kubelet[3324]: I0508 23:54:07.301585 3324 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 23:54:07.310113 kubelet[3324]: E0508 23:54:07.310056 3324 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 23:54:07.310113 kubelet[3324]: I0508 23:54:07.310112 3324 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 23:54:07.319097 sudo[3338]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 23:54:07.319760 sudo[3338]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 8 23:54:07.324285 kubelet[3324]: I0508 23:54:07.323992 3324 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 23:54:07.324420 kubelet[3324]: I0508 23:54:07.324379 3324 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 23:54:07.325097 kubelet[3324]: I0508 23:54:07.324425 3324 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-246","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 23:54:07.325097 kubelet[3324]: I0508 23:54:07.324746 3324 topology_manager.go:138] "Creating topology manager with none 
policy" May 8 23:54:07.325097 kubelet[3324]: I0508 23:54:07.324767 3324 container_manager_linux.go:304] "Creating device plugin manager" May 8 23:54:07.325097 kubelet[3324]: I0508 23:54:07.324850 3324 state_mem.go:36] "Initialized new in-memory state store" May 8 23:54:07.325097 kubelet[3324]: I0508 23:54:07.325067 3324 kubelet.go:446] "Attempting to sync node with API server" May 8 23:54:07.325711 kubelet[3324]: I0508 23:54:07.325092 3324 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 23:54:07.325711 kubelet[3324]: I0508 23:54:07.325403 3324 kubelet.go:352] "Adding apiserver pod source" May 8 23:54:07.325711 kubelet[3324]: I0508 23:54:07.325436 3324 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 23:54:07.331351 kubelet[3324]: I0508 23:54:07.330721 3324 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 8 23:54:07.332509 kubelet[3324]: I0508 23:54:07.332432 3324 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 23:54:07.334355 kubelet[3324]: I0508 23:54:07.334102 3324 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 23:54:07.334355 kubelet[3324]: I0508 23:54:07.334295 3324 server.go:1287] "Started kubelet" May 8 23:54:07.348012 kubelet[3324]: I0508 23:54:07.347883 3324 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 23:54:07.353086 kubelet[3324]: I0508 23:54:07.352310 3324 server.go:490] "Adding debug handlers to kubelet server" May 8 23:54:07.362378 kubelet[3324]: I0508 23:54:07.362266 3324 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 23:54:07.363160 kubelet[3324]: I0508 23:54:07.362855 3324 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 23:54:07.365983 kubelet[3324]: I0508 23:54:07.364623 3324 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 23:54:07.391174 kubelet[3324]: I0508 23:54:07.391094 3324 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 23:54:07.394794 kubelet[3324]: I0508 23:54:07.393977 3324 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 23:54:07.396239 kubelet[3324]: E0508 23:54:07.395765 3324 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ip-172-31-31-246\" not found" May 8 23:54:07.397335 kubelet[3324]: I0508 23:54:07.396862 3324 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 23:54:07.398296 kubelet[3324]: I0508 23:54:07.397884 3324 reconciler.go:26] "Reconciler: start to sync state" May 8 23:54:07.421696 kubelet[3324]: E0508 23:54:07.421552 3324 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 23:54:07.424895 kubelet[3324]: I0508 23:54:07.422477 3324 factory.go:221] Registration of the containerd container factory successfully May 8 23:54:07.424895 kubelet[3324]: I0508 23:54:07.422538 3324 factory.go:221] Registration of the systemd container factory successfully May 8 23:54:07.426974 kubelet[3324]: I0508 23:54:07.426798 3324 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 23:54:07.504955 kubelet[3324]: I0508 23:54:07.504862 3324 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 23:54:07.514995 kubelet[3324]: I0508 23:54:07.514801 3324 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 23:54:07.514995 kubelet[3324]: I0508 23:54:07.514846 3324 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 23:54:07.514995 kubelet[3324]: I0508 23:54:07.514896 3324 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 8 23:54:07.514995 kubelet[3324]: I0508 23:54:07.514913 3324 kubelet.go:2388] "Starting kubelet main sync loop" May 8 23:54:07.521705 kubelet[3324]: E0508 23:54:07.519238 3324 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 23:54:07.630697 kubelet[3324]: E0508 23:54:07.630647 3324 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 8 23:54:07.641171 kubelet[3324]: I0508 23:54:07.640226 3324 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 23:54:07.641171 kubelet[3324]: I0508 23:54:07.640284 3324 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 23:54:07.641171 kubelet[3324]: I0508 23:54:07.640323 3324 state_mem.go:36] "Initialized new in-memory state store" May 8 23:54:07.641171 kubelet[3324]: I0508 23:54:07.640658 3324 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 23:54:07.641171 kubelet[3324]: I0508 23:54:07.640681 3324 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 23:54:07.641171 kubelet[3324]: I0508 23:54:07.640719 3324 policy_none.go:49] "None policy: Start" May 8 23:54:07.641171 kubelet[3324]: I0508 23:54:07.640738 3324 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 23:54:07.641171 kubelet[3324]: I0508 23:54:07.640767 3324 state_mem.go:35] "Initializing new in-memory state store" May 8 23:54:07.641171 kubelet[3324]: I0508 23:54:07.641005 3324 state_mem.go:75] "Updated machine memory state" May 8 23:54:07.652419 kubelet[3324]: I0508 23:54:07.652256 3324 
manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 23:54:07.654275 kubelet[3324]: I0508 23:54:07.654244 3324 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 23:54:07.654485 kubelet[3324]: I0508 23:54:07.654429 3324 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 23:54:07.655102 kubelet[3324]: I0508 23:54:07.655034 3324 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 23:54:07.668419 kubelet[3324]: E0508 23:54:07.667928 3324 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 8 23:54:07.788152 kubelet[3324]: I0508 23:54:07.788080 3324 kubelet_node_status.go:76] "Attempting to register node" node="ip-172-31-31-246" May 8 23:54:07.802448 kubelet[3324]: I0508 23:54:07.802391 3324 kubelet_node_status.go:125] "Node was previously registered" node="ip-172-31-31-246" May 8 23:54:07.802644 kubelet[3324]: I0508 23:54:07.802511 3324 kubelet_node_status.go:79] "Successfully registered node" node="ip-172-31-31-246" May 8 23:54:07.831996 kubelet[3324]: I0508 23:54:07.831943 3324 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-31-246" May 8 23:54:07.834024 kubelet[3324]: I0508 23:54:07.833975 3324 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-246" May 8 23:54:07.834461 kubelet[3324]: I0508 23:54:07.834417 3324 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-31-246" May 8 23:54:07.902138 kubelet[3324]: I0508 23:54:07.902062 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd2b4ba0fde621df2bd32c65dd3b818a-k8s-certs\") pod 
\"kube-controller-manager-ip-172-31-31-246\" (UID: \"cd2b4ba0fde621df2bd32c65dd3b818a\") " pod="kube-system/kube-controller-manager-ip-172-31-31-246" May 8 23:54:07.902290 kubelet[3324]: I0508 23:54:07.902155 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd2b4ba0fde621df2bd32c65dd3b818a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-246\" (UID: \"cd2b4ba0fde621df2bd32c65dd3b818a\") " pod="kube-system/kube-controller-manager-ip-172-31-31-246" May 8 23:54:07.902290 kubelet[3324]: I0508 23:54:07.902203 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a8f31e53fa40198eea417b26700d3c6-ca-certs\") pod \"kube-apiserver-ip-172-31-31-246\" (UID: \"9a8f31e53fa40198eea417b26700d3c6\") " pod="kube-system/kube-apiserver-ip-172-31-31-246" May 8 23:54:07.902290 kubelet[3324]: I0508 23:54:07.902241 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a8f31e53fa40198eea417b26700d3c6-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-246\" (UID: \"9a8f31e53fa40198eea417b26700d3c6\") " pod="kube-system/kube-apiserver-ip-172-31-31-246" May 8 23:54:07.902290 kubelet[3324]: I0508 23:54:07.902279 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd2b4ba0fde621df2bd32c65dd3b818a-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-246\" (UID: \"cd2b4ba0fde621df2bd32c65dd3b818a\") " pod="kube-system/kube-controller-manager-ip-172-31-31-246" May 8 23:54:07.902499 kubelet[3324]: I0508 23:54:07.902315 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/cd2b4ba0fde621df2bd32c65dd3b818a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-246\" (UID: \"cd2b4ba0fde621df2bd32c65dd3b818a\") " pod="kube-system/kube-controller-manager-ip-172-31-31-246" May 8 23:54:07.902499 kubelet[3324]: I0508 23:54:07.902351 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd2b4ba0fde621df2bd32c65dd3b818a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-246\" (UID: \"cd2b4ba0fde621df2bd32c65dd3b818a\") " pod="kube-system/kube-controller-manager-ip-172-31-31-246" May 8 23:54:07.902499 kubelet[3324]: I0508 23:54:07.902386 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b7c60f0a3f70d7e88300966dab410d63-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-246\" (UID: \"b7c60f0a3f70d7e88300966dab410d63\") " pod="kube-system/kube-scheduler-ip-172-31-31-246" May 8 23:54:07.902499 kubelet[3324]: I0508 23:54:07.902424 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a8f31e53fa40198eea417b26700d3c6-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-246\" (UID: \"9a8f31e53fa40198eea417b26700d3c6\") " pod="kube-system/kube-apiserver-ip-172-31-31-246" May 8 23:54:08.285611 sudo[3338]: pam_unix(sudo:session): session closed for user root May 8 23:54:08.354160 kubelet[3324]: I0508 23:54:08.352084 3324 apiserver.go:52] "Watching apiserver" May 8 23:54:08.398332 kubelet[3324]: I0508 23:54:08.398265 3324 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 23:54:08.569715 kubelet[3324]: I0508 23:54:08.568510 3324 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-31-246" May 8 
23:54:08.588860 kubelet[3324]: E0508 23:54:08.588818 3324 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-31-246\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-246" May 8 23:54:08.666086 kubelet[3324]: I0508 23:54:08.665844 3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-246" podStartSLOduration=1.665822181 podStartE2EDuration="1.665822181s" podCreationTimestamp="2025-05-08 23:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:54:08.638624467 +0000 UTC m=+1.453526910" watchObservedRunningTime="2025-05-08 23:54:08.665822181 +0000 UTC m=+1.480724636" May 8 23:54:08.689806 kubelet[3324]: I0508 23:54:08.689739 3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-246" podStartSLOduration=1.689716676 podStartE2EDuration="1.689716676s" podCreationTimestamp="2025-05-08 23:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:54:08.667922942 +0000 UTC m=+1.482825397" watchObservedRunningTime="2025-05-08 23:54:08.689716676 +0000 UTC m=+1.504619119" May 8 23:54:10.683695 kubelet[3324]: I0508 23:54:10.683609 3324 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 23:54:10.684851 containerd[1943]: time="2025-05-08T23:54:10.684632165Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 8 23:54:10.685794 kubelet[3324]: I0508 23:54:10.685051 3324 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 23:54:11.162653 sudo[2275]: pam_unix(sudo:session): session closed for user root May 8 23:54:11.185231 sshd[2274]: Connection closed by 139.178.68.195 port 33778 May 8 23:54:11.185079 sshd-session[2272]: pam_unix(sshd:session): session closed for user core May 8 23:54:11.191555 systemd[1]: sshd@8-172.31.31.246:22-139.178.68.195:33778.service: Deactivated successfully. May 8 23:54:11.195118 systemd[1]: session-9.scope: Deactivated successfully. May 8 23:54:11.195737 systemd[1]: session-9.scope: Consumed 11.890s CPU time, 152.1M memory peak, 0B memory swap peak. May 8 23:54:11.197038 systemd-logind[1921]: Session 9 logged out. Waiting for processes to exit. May 8 23:54:11.199346 systemd-logind[1921]: Removed session 9. May 8 23:54:11.549704 kubelet[3324]: I0508 23:54:11.549614 3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-246" podStartSLOduration=4.549566189 podStartE2EDuration="4.549566189s" podCreationTimestamp="2025-05-08 23:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:54:08.692056562 +0000 UTC m=+1.506959053" watchObservedRunningTime="2025-05-08 23:54:11.549566189 +0000 UTC m=+4.364468632" May 8 23:54:11.560678 kubelet[3324]: I0508 23:54:11.560506 3324 status_manager.go:890] "Failed to get status for pod" podUID="1333ded9-ef2e-4dc5-8439-2513d29aa198" pod="kube-system/kube-proxy-kqhw5" err="pods \"kube-proxy-kqhw5\" is forbidden: User \"system:node:ip-172-31-31-246\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-246' and this object" May 8 23:54:11.560678 kubelet[3324]: W0508 23:54:11.560572 3324 reflector.go:569] 
object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-31-246" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-246' and this object May 8 23:54:11.560678 kubelet[3324]: W0508 23:54:11.560622 3324 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-31-246" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-246' and this object May 8 23:54:11.560678 kubelet[3324]: E0508 23:54:11.560633 3324 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-31-246\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-246' and this object" logger="UnhandledError" May 8 23:54:11.560678 kubelet[3324]: E0508 23:54:11.560654 3324 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-31-246\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-246' and this object" logger="UnhandledError" May 8 23:54:11.567731 systemd[1]: Created slice kubepods-besteffort-pod1333ded9_ef2e_4dc5_8439_2513d29aa198.slice - libcontainer container kubepods-besteffort-pod1333ded9_ef2e_4dc5_8439_2513d29aa198.slice. 
May 8 23:54:11.596080 systemd[1]: Created slice kubepods-burstable-podb4eae93c_c1d1_4fb5_94a5_790665ce2bea.slice - libcontainer container kubepods-burstable-podb4eae93c_c1d1_4fb5_94a5_790665ce2bea.slice. May 8 23:54:11.625954 kubelet[3324]: I0508 23:54:11.625885 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctvmc\" (UniqueName: \"kubernetes.io/projected/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-kube-api-access-ctvmc\") pod \"cilium-r9pdp\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") " pod="kube-system/cilium-r9pdp" May 8 23:54:11.625954 kubelet[3324]: I0508 23:54:11.625955 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msvjk\" (UniqueName: \"kubernetes.io/projected/1333ded9-ef2e-4dc5-8439-2513d29aa198-kube-api-access-msvjk\") pod \"kube-proxy-kqhw5\" (UID: \"1333ded9-ef2e-4dc5-8439-2513d29aa198\") " pod="kube-system/kube-proxy-kqhw5" May 8 23:54:11.626270 kubelet[3324]: I0508 23:54:11.626003 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1333ded9-ef2e-4dc5-8439-2513d29aa198-lib-modules\") pod \"kube-proxy-kqhw5\" (UID: \"1333ded9-ef2e-4dc5-8439-2513d29aa198\") " pod="kube-system/kube-proxy-kqhw5" May 8 23:54:11.626270 kubelet[3324]: I0508 23:54:11.626043 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-hostproc\") pod \"cilium-r9pdp\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") " pod="kube-system/cilium-r9pdp" May 8 23:54:11.626270 kubelet[3324]: I0508 23:54:11.626080 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1333ded9-ef2e-4dc5-8439-2513d29aa198-kube-proxy\") pod 
\"kube-proxy-kqhw5\" (UID: \"1333ded9-ef2e-4dc5-8439-2513d29aa198\") " pod="kube-system/kube-proxy-kqhw5" May 8 23:54:11.626270 kubelet[3324]: I0508 23:54:11.626168 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-bpf-maps\") pod \"cilium-r9pdp\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") " pod="kube-system/cilium-r9pdp" May 8 23:54:11.626270 kubelet[3324]: I0508 23:54:11.626209 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-lib-modules\") pod \"cilium-r9pdp\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") " pod="kube-system/cilium-r9pdp" May 8 23:54:11.626270 kubelet[3324]: I0508 23:54:11.626252 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cilium-config-path\") pod \"cilium-r9pdp\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") " pod="kube-system/cilium-r9pdp" May 8 23:54:11.627715 kubelet[3324]: I0508 23:54:11.626287 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-hubble-tls\") pod \"cilium-r9pdp\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") " pod="kube-system/cilium-r9pdp" May 8 23:54:11.627715 kubelet[3324]: I0508 23:54:11.626324 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cilium-cgroup\") pod \"cilium-r9pdp\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") " pod="kube-system/cilium-r9pdp" May 8 23:54:11.627715 kubelet[3324]: I0508 
23:54:11.626364 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cni-path\") pod \"cilium-r9pdp\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") " pod="kube-system/cilium-r9pdp" May 8 23:54:11.627715 kubelet[3324]: I0508 23:54:11.626399 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-xtables-lock\") pod \"cilium-r9pdp\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") " pod="kube-system/cilium-r9pdp" May 8 23:54:11.627715 kubelet[3324]: I0508 23:54:11.626437 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-etc-cni-netd\") pod \"cilium-r9pdp\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") " pod="kube-system/cilium-r9pdp" May 8 23:54:11.627715 kubelet[3324]: I0508 23:54:11.626472 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-host-proc-sys-kernel\") pod \"cilium-r9pdp\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") " pod="kube-system/cilium-r9pdp" May 8 23:54:11.628294 kubelet[3324]: I0508 23:54:11.626536 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1333ded9-ef2e-4dc5-8439-2513d29aa198-xtables-lock\") pod \"kube-proxy-kqhw5\" (UID: \"1333ded9-ef2e-4dc5-8439-2513d29aa198\") " pod="kube-system/kube-proxy-kqhw5" May 8 23:54:11.628294 kubelet[3324]: I0508 23:54:11.626575 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cilium-run\") pod \"cilium-r9pdp\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") " pod="kube-system/cilium-r9pdp" May 8 23:54:11.628294 kubelet[3324]: I0508 23:54:11.626636 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-clustermesh-secrets\") pod \"cilium-r9pdp\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") " pod="kube-system/cilium-r9pdp" May 8 23:54:11.628550 kubelet[3324]: I0508 23:54:11.628398 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-host-proc-sys-net\") pod \"cilium-r9pdp\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") " pod="kube-system/cilium-r9pdp" May 8 23:54:11.756823 kubelet[3324]: I0508 23:54:11.756490 3324 status_manager.go:890] "Failed to get status for pod" podUID="10482c1b-76cb-4e93-afaf-1ac2020938ed" pod="kube-system/cilium-operator-6c4d7847fc-784xh" err="pods \"cilium-operator-6c4d7847fc-784xh\" is forbidden: User \"system:node:ip-172-31-31-246\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-31-246' and this object" May 8 23:54:11.773555 systemd[1]: Created slice kubepods-besteffort-pod10482c1b_76cb_4e93_afaf_1ac2020938ed.slice - libcontainer container kubepods-besteffort-pod10482c1b_76cb_4e93_afaf_1ac2020938ed.slice. 
May 8 23:54:11.830332 kubelet[3324]: I0508 23:54:11.830103 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10482c1b-76cb-4e93-afaf-1ac2020938ed-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-784xh\" (UID: \"10482c1b-76cb-4e93-afaf-1ac2020938ed\") " pod="kube-system/cilium-operator-6c4d7847fc-784xh" May 8 23:54:11.830332 kubelet[3324]: I0508 23:54:11.830275 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj528\" (UniqueName: \"kubernetes.io/projected/10482c1b-76cb-4e93-afaf-1ac2020938ed-kube-api-access-sj528\") pod \"cilium-operator-6c4d7847fc-784xh\" (UID: \"10482c1b-76cb-4e93-afaf-1ac2020938ed\") " pod="kube-system/cilium-operator-6c4d7847fc-784xh" May 8 23:54:12.684573 containerd[1943]: time="2025-05-08T23:54:12.684510160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-784xh,Uid:10482c1b-76cb-4e93-afaf-1ac2020938ed,Namespace:kube-system,Attempt:0,}" May 8 23:54:12.725240 containerd[1943]: time="2025-05-08T23:54:12.724085599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:54:12.725240 containerd[1943]: time="2025-05-08T23:54:12.725216225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:54:12.725240 containerd[1943]: time="2025-05-08T23:54:12.725259955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:12.726099 containerd[1943]: time="2025-05-08T23:54:12.725503901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:12.759428 systemd[1]: Started cri-containerd-8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129.scope - libcontainer container 8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129. May 8 23:54:12.787604 containerd[1943]: time="2025-05-08T23:54:12.787090153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kqhw5,Uid:1333ded9-ef2e-4dc5-8439-2513d29aa198,Namespace:kube-system,Attempt:0,}" May 8 23:54:12.814283 containerd[1943]: time="2025-05-08T23:54:12.813853073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r9pdp,Uid:b4eae93c-c1d1-4fb5-94a5-790665ce2bea,Namespace:kube-system,Attempt:0,}" May 8 23:54:12.855588 containerd[1943]: time="2025-05-08T23:54:12.855493614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-784xh,Uid:10482c1b-76cb-4e93-afaf-1ac2020938ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129\"" May 8 23:54:12.859392 containerd[1943]: time="2025-05-08T23:54:12.859164978Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 23:54:12.869267 containerd[1943]: time="2025-05-08T23:54:12.868786552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:54:12.869267 containerd[1943]: time="2025-05-08T23:54:12.868876471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:54:12.869267 containerd[1943]: time="2025-05-08T23:54:12.868914360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:12.872437 containerd[1943]: time="2025-05-08T23:54:12.871072849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:12.903197 containerd[1943]: time="2025-05-08T23:54:12.900930281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:54:12.903197 containerd[1943]: time="2025-05-08T23:54:12.901119054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:54:12.903197 containerd[1943]: time="2025-05-08T23:54:12.901190179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:12.903197 containerd[1943]: time="2025-05-08T23:54:12.901544050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:12.927484 systemd[1]: Started cri-containerd-00aa0c67b18cbb1429a88d881adbc5f346fad7943707703a5b5b60f703cc8eb8.scope - libcontainer container 00aa0c67b18cbb1429a88d881adbc5f346fad7943707703a5b5b60f703cc8eb8. May 8 23:54:12.968442 systemd[1]: Started cri-containerd-f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b.scope - libcontainer container f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b. 
May 8 23:54:13.016627 containerd[1943]: time="2025-05-08T23:54:13.016455865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kqhw5,Uid:1333ded9-ef2e-4dc5-8439-2513d29aa198,Namespace:kube-system,Attempt:0,} returns sandbox id \"00aa0c67b18cbb1429a88d881adbc5f346fad7943707703a5b5b60f703cc8eb8\"" May 8 23:54:13.025303 containerd[1943]: time="2025-05-08T23:54:13.025092625Z" level=info msg="CreateContainer within sandbox \"00aa0c67b18cbb1429a88d881adbc5f346fad7943707703a5b5b60f703cc8eb8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 23:54:13.034706 containerd[1943]: time="2025-05-08T23:54:13.034651591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r9pdp,Uid:b4eae93c-c1d1-4fb5-94a5-790665ce2bea,Namespace:kube-system,Attempt:0,} returns sandbox id \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\"" May 8 23:54:13.053626 containerd[1943]: time="2025-05-08T23:54:13.053570723Z" level=info msg="CreateContainer within sandbox \"00aa0c67b18cbb1429a88d881adbc5f346fad7943707703a5b5b60f703cc8eb8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ec5b6fd8fd819d2d34069093b4fae63eb8f125c96112aedaba9aa22a0f5a1b16\"" May 8 23:54:13.056230 containerd[1943]: time="2025-05-08T23:54:13.056113822Z" level=info msg="StartContainer for \"ec5b6fd8fd819d2d34069093b4fae63eb8f125c96112aedaba9aa22a0f5a1b16\"" May 8 23:54:13.102482 systemd[1]: Started cri-containerd-ec5b6fd8fd819d2d34069093b4fae63eb8f125c96112aedaba9aa22a0f5a1b16.scope - libcontainer container ec5b6fd8fd819d2d34069093b4fae63eb8f125c96112aedaba9aa22a0f5a1b16. 
May 8 23:54:13.175019 containerd[1943]: time="2025-05-08T23:54:13.174936606Z" level=info msg="StartContainer for \"ec5b6fd8fd819d2d34069093b4fae63eb8f125c96112aedaba9aa22a0f5a1b16\" returns successfully" May 8 23:54:13.669996 kubelet[3324]: I0508 23:54:13.669858 3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kqhw5" podStartSLOduration=2.6698331189999998 podStartE2EDuration="2.669833119s" podCreationTimestamp="2025-05-08 23:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:54:13.626491109 +0000 UTC m=+6.441393636" watchObservedRunningTime="2025-05-08 23:54:13.669833119 +0000 UTC m=+6.484735550" May 8 23:54:13.766710 systemd[1]: run-containerd-runc-k8s.io-00aa0c67b18cbb1429a88d881adbc5f346fad7943707703a5b5b60f703cc8eb8-runc.F6CThe.mount: Deactivated successfully. May 8 23:54:13.998803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3866866181.mount: Deactivated successfully. 
May 8 23:54:14.565140 containerd[1943]: time="2025-05-08T23:54:14.564890085Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:54:14.566690 containerd[1943]: time="2025-05-08T23:54:14.566457233Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 8 23:54:14.567808 containerd[1943]: time="2025-05-08T23:54:14.567720825Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:54:14.570807 containerd[1943]: time="2025-05-08T23:54:14.570739919Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.71149517s" May 8 23:54:14.570807 containerd[1943]: time="2025-05-08T23:54:14.570800741Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 8 23:54:14.576413 containerd[1943]: time="2025-05-08T23:54:14.575238856Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 23:54:14.576963 containerd[1943]: time="2025-05-08T23:54:14.576911468Z" level=info msg="CreateContainer within sandbox 
\"8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 23:54:14.602250 containerd[1943]: time="2025-05-08T23:54:14.602195635Z" level=info msg="CreateContainer within sandbox \"8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c\"" May 8 23:54:14.603192 containerd[1943]: time="2025-05-08T23:54:14.603096601Z" level=info msg="StartContainer for \"30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c\"" May 8 23:54:14.647492 systemd[1]: Started cri-containerd-30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c.scope - libcontainer container 30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c. May 8 23:54:14.693112 containerd[1943]: time="2025-05-08T23:54:14.692922078Z" level=info msg="StartContainer for \"30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c\" returns successfully" May 8 23:54:16.553959 kubelet[3324]: I0508 23:54:16.552585 3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-784xh" podStartSLOduration=3.837536542 podStartE2EDuration="5.552564041s" podCreationTimestamp="2025-05-08 23:54:11 +0000 UTC" firstStartedPulling="2025-05-08 23:54:12.858441368 +0000 UTC m=+5.673343798" lastFinishedPulling="2025-05-08 23:54:14.573468878 +0000 UTC m=+7.388371297" observedRunningTime="2025-05-08 23:54:15.648703731 +0000 UTC m=+8.463606186" watchObservedRunningTime="2025-05-08 23:54:16.552564041 +0000 UTC m=+9.367466472" May 8 23:54:19.951765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3694501922.mount: Deactivated successfully. 
May 8 23:54:22.466633 containerd[1943]: time="2025-05-08T23:54:22.466561585Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:54:22.468312 containerd[1943]: time="2025-05-08T23:54:22.468230022Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 8 23:54:22.469236 containerd[1943]: time="2025-05-08T23:54:22.469118418Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 8 23:54:22.474101 containerd[1943]: time="2025-05-08T23:54:22.474033379Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.898700118s" May 8 23:54:22.474301 containerd[1943]: time="2025-05-08T23:54:22.474102165Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 8 23:54:22.478692 containerd[1943]: time="2025-05-08T23:54:22.478619549Z" level=info msg="CreateContainer within sandbox \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 23:54:22.498090 containerd[1943]: time="2025-05-08T23:54:22.498034512Z" level=info msg="CreateContainer within sandbox 
\"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a\"" May 8 23:54:22.500363 containerd[1943]: time="2025-05-08T23:54:22.499978392Z" level=info msg="StartContainer for \"4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a\"" May 8 23:54:22.553455 systemd[1]: Started cri-containerd-4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a.scope - libcontainer container 4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a. May 8 23:54:22.599934 containerd[1943]: time="2025-05-08T23:54:22.599749018Z" level=info msg="StartContainer for \"4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a\" returns successfully" May 8 23:54:22.619072 systemd[1]: cri-containerd-4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a.scope: Deactivated successfully. May 8 23:54:23.491904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a-rootfs.mount: Deactivated successfully. 
May 8 23:54:23.580313 containerd[1943]: time="2025-05-08T23:54:23.580219672Z" level=info msg="shim disconnected" id=4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a namespace=k8s.io May 8 23:54:23.580313 containerd[1943]: time="2025-05-08T23:54:23.580298065Z" level=warning msg="cleaning up after shim disconnected" id=4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a namespace=k8s.io May 8 23:54:23.581877 containerd[1943]: time="2025-05-08T23:54:23.580319510Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:54:23.679391 containerd[1943]: time="2025-05-08T23:54:23.676412572Z" level=info msg="CreateContainer within sandbox \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 23:54:23.702687 containerd[1943]: time="2025-05-08T23:54:23.702610023Z" level=info msg="CreateContainer within sandbox \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1\"" May 8 23:54:23.704463 containerd[1943]: time="2025-05-08T23:54:23.704400403Z" level=info msg="StartContainer for \"20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1\"" May 8 23:54:23.772490 systemd[1]: Started cri-containerd-20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1.scope - libcontainer container 20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1. May 8 23:54:23.820794 containerd[1943]: time="2025-05-08T23:54:23.820714797Z" level=info msg="StartContainer for \"20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1\" returns successfully" May 8 23:54:23.843290 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 23:54:23.844618 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 8 23:54:23.844941 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 8 23:54:23.855099 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 8 23:54:23.855998 systemd[1]: cri-containerd-20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1.scope: Deactivated successfully. May 8 23:54:23.900508 containerd[1943]: time="2025-05-08T23:54:23.900416727Z" level=info msg="shim disconnected" id=20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1 namespace=k8s.io May 8 23:54:23.903263 containerd[1943]: time="2025-05-08T23:54:23.901346154Z" level=warning msg="cleaning up after shim disconnected" id=20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1 namespace=k8s.io May 8 23:54:23.903263 containerd[1943]: time="2025-05-08T23:54:23.901384906Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:54:23.905042 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 8 23:54:24.492713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1-rootfs.mount: Deactivated successfully. 
May 8 23:54:24.685949 containerd[1943]: time="2025-05-08T23:54:24.685365966Z" level=info msg="CreateContainer within sandbox \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 23:54:24.714156 containerd[1943]: time="2025-05-08T23:54:24.714043788Z" level=info msg="CreateContainer within sandbox \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db\"" May 8 23:54:24.715160 containerd[1943]: time="2025-05-08T23:54:24.715085359Z" level=info msg="StartContainer for \"35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db\"" May 8 23:54:24.718710 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2872050543.mount: Deactivated successfully. May 8 23:54:24.789442 systemd[1]: Started cri-containerd-35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db.scope - libcontainer container 35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db. May 8 23:54:24.846516 containerd[1943]: time="2025-05-08T23:54:24.846422730Z" level=info msg="StartContainer for \"35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db\" returns successfully" May 8 23:54:24.852998 systemd[1]: cri-containerd-35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db.scope: Deactivated successfully. 
May 8 23:54:24.895346 containerd[1943]: time="2025-05-08T23:54:24.895111090Z" level=info msg="shim disconnected" id=35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db namespace=k8s.io May 8 23:54:24.895346 containerd[1943]: time="2025-05-08T23:54:24.895275695Z" level=warning msg="cleaning up after shim disconnected" id=35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db namespace=k8s.io May 8 23:54:24.895346 containerd[1943]: time="2025-05-08T23:54:24.895311773Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:54:25.492695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db-rootfs.mount: Deactivated successfully. May 8 23:54:25.692303 containerd[1943]: time="2025-05-08T23:54:25.692066215Z" level=info msg="CreateContainer within sandbox \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 23:54:25.723960 containerd[1943]: time="2025-05-08T23:54:25.723779119Z" level=info msg="CreateContainer within sandbox \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377\"" May 8 23:54:25.728100 containerd[1943]: time="2025-05-08T23:54:25.728037960Z" level=info msg="StartContainer for \"ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377\"" May 8 23:54:25.799441 systemd[1]: Started cri-containerd-ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377.scope - libcontainer container ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377. May 8 23:54:25.842554 systemd[1]: cri-containerd-ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377.scope: Deactivated successfully. 
May 8 23:54:25.843775 containerd[1943]: time="2025-05-08T23:54:25.843616511Z" level=info msg="StartContainer for \"ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377\" returns successfully" May 8 23:54:25.885522 containerd[1943]: time="2025-05-08T23:54:25.885213466Z" level=info msg="shim disconnected" id=ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377 namespace=k8s.io May 8 23:54:25.885522 containerd[1943]: time="2025-05-08T23:54:25.885295145Z" level=warning msg="cleaning up after shim disconnected" id=ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377 namespace=k8s.io May 8 23:54:25.885522 containerd[1943]: time="2025-05-08T23:54:25.885314191Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 8 23:54:26.492707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377-rootfs.mount: Deactivated successfully. May 8 23:54:26.698318 containerd[1943]: time="2025-05-08T23:54:26.698238392Z" level=info msg="CreateContainer within sandbox \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 23:54:26.730304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622472195.mount: Deactivated successfully. 
May 8 23:54:26.737775 containerd[1943]: time="2025-05-08T23:54:26.737701687Z" level=info msg="CreateContainer within sandbox \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1\"" May 8 23:54:26.740064 containerd[1943]: time="2025-05-08T23:54:26.740001513Z" level=info msg="StartContainer for \"3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1\"" May 8 23:54:26.804722 systemd[1]: Started cri-containerd-3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1.scope - libcontainer container 3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1. May 8 23:54:26.861744 containerd[1943]: time="2025-05-08T23:54:26.861646640Z" level=info msg="StartContainer for \"3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1\" returns successfully" May 8 23:54:27.032237 kubelet[3324]: I0508 23:54:27.031337 3324 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 8 23:54:27.103869 systemd[1]: Created slice kubepods-burstable-pod7569e50d_3ca4_486d_9eee_1efc1bb361a5.slice - libcontainer container kubepods-burstable-pod7569e50d_3ca4_486d_9eee_1efc1bb361a5.slice. May 8 23:54:27.124908 systemd[1]: Created slice kubepods-burstable-pod2a6df9ce_3e24_4558_9847_f75a50af3965.slice - libcontainer container kubepods-burstable-pod2a6df9ce_3e24_4558_9847_f75a50af3965.slice. 
May 8 23:54:27.151923 kubelet[3324]: I0508 23:54:27.151424 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsmrp\" (UniqueName: \"kubernetes.io/projected/2a6df9ce-3e24-4558-9847-f75a50af3965-kube-api-access-vsmrp\") pod \"coredns-668d6bf9bc-c5lgt\" (UID: \"2a6df9ce-3e24-4558-9847-f75a50af3965\") " pod="kube-system/coredns-668d6bf9bc-c5lgt" May 8 23:54:27.151923 kubelet[3324]: I0508 23:54:27.151494 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a6df9ce-3e24-4558-9847-f75a50af3965-config-volume\") pod \"coredns-668d6bf9bc-c5lgt\" (UID: \"2a6df9ce-3e24-4558-9847-f75a50af3965\") " pod="kube-system/coredns-668d6bf9bc-c5lgt" May 8 23:54:27.151923 kubelet[3324]: I0508 23:54:27.151536 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7569e50d-3ca4-486d-9eee-1efc1bb361a5-config-volume\") pod \"coredns-668d6bf9bc-vd85q\" (UID: \"7569e50d-3ca4-486d-9eee-1efc1bb361a5\") " pod="kube-system/coredns-668d6bf9bc-vd85q" May 8 23:54:27.151923 kubelet[3324]: I0508 23:54:27.151582 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7kd2\" (UniqueName: \"kubernetes.io/projected/7569e50d-3ca4-486d-9eee-1efc1bb361a5-kube-api-access-b7kd2\") pod \"coredns-668d6bf9bc-vd85q\" (UID: \"7569e50d-3ca4-486d-9eee-1efc1bb361a5\") " pod="kube-system/coredns-668d6bf9bc-vd85q" May 8 23:54:27.417368 containerd[1943]: time="2025-05-08T23:54:27.416435085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vd85q,Uid:7569e50d-3ca4-486d-9eee-1efc1bb361a5,Namespace:kube-system,Attempt:0,}" May 8 23:54:27.435205 containerd[1943]: time="2025-05-08T23:54:27.434929581Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-c5lgt,Uid:2a6df9ce-3e24-4558-9847-f75a50af3965,Namespace:kube-system,Attempt:0,}" May 8 23:54:27.736915 kubelet[3324]: I0508 23:54:27.736488 3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r9pdp" podStartSLOduration=7.298905643 podStartE2EDuration="16.73646216s" podCreationTimestamp="2025-05-08 23:54:11 +0000 UTC" firstStartedPulling="2025-05-08 23:54:13.037709341 +0000 UTC m=+5.852611760" lastFinishedPulling="2025-05-08 23:54:22.475265858 +0000 UTC m=+15.290168277" observedRunningTime="2025-05-08 23:54:27.734958736 +0000 UTC m=+20.549861191" watchObservedRunningTime="2025-05-08 23:54:27.73646216 +0000 UTC m=+20.551364579" May 8 23:54:29.676034 systemd-networkd[1862]: cilium_host: Link UP May 8 23:54:29.676979 (udev-worker)[4114]: Network interface NamePolicy= disabled on kernel command line. May 8 23:54:29.679443 systemd-networkd[1862]: cilium_net: Link UP May 8 23:54:29.679806 systemd-networkd[1862]: cilium_net: Gained carrier May 8 23:54:29.681567 (udev-worker)[4158]: Network interface NamePolicy= disabled on kernel command line. May 8 23:54:29.682733 systemd-networkd[1862]: cilium_host: Gained carrier May 8 23:54:29.849377 (udev-worker)[4171]: Network interface NamePolicy= disabled on kernel command line. 
May 8 23:54:29.858479 systemd-networkd[1862]: cilium_vxlan: Link UP May 8 23:54:29.858502 systemd-networkd[1862]: cilium_vxlan: Gained carrier May 8 23:54:30.337354 kernel: NET: Registered PF_ALG protocol family May 8 23:54:30.472533 systemd-networkd[1862]: cilium_host: Gained IPv6LL May 8 23:54:30.536769 systemd-networkd[1862]: cilium_net: Gained IPv6LL May 8 23:54:31.177525 systemd-networkd[1862]: cilium_vxlan: Gained IPv6LL May 8 23:54:31.664778 systemd-networkd[1862]: lxc_health: Link UP May 8 23:54:31.668554 systemd-networkd[1862]: lxc_health: Gained carrier May 8 23:54:31.990215 systemd-networkd[1862]: lxc8715ce03b439: Link UP May 8 23:54:31.995177 kernel: eth0: renamed from tmp8cc8a May 8 23:54:32.002489 systemd-networkd[1862]: lxc8715ce03b439: Gained carrier May 8 23:54:32.081098 systemd-networkd[1862]: lxcf1788b4160ac: Link UP May 8 23:54:32.087315 kernel: eth0: renamed from tmp47329 May 8 23:54:32.094414 systemd-networkd[1862]: lxcf1788b4160ac: Gained carrier May 8 23:54:33.480428 systemd-networkd[1862]: lxcf1788b4160ac: Gained IPv6LL May 8 23:54:33.608344 systemd-networkd[1862]: lxc_health: Gained IPv6LL May 8 23:54:33.992447 systemd-networkd[1862]: lxc8715ce03b439: Gained IPv6LL May 8 23:54:34.240454 kubelet[3324]: I0508 23:54:34.240392 3324 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 23:54:36.106908 ntpd[1913]: Listen normally on 8 cilium_host 192.168.0.228:123 May 8 23:54:36.107047 ntpd[1913]: Listen normally on 9 cilium_net [fe80::c4aa:31ff:fe8c:6096%4]:123 May 8 23:54:36.107892 ntpd[1913]: 8 May 23:54:36 ntpd[1913]: Listen normally on 8 cilium_host 192.168.0.228:123 May 8 23:54:36.107892 ntpd[1913]: 8 May 23:54:36 ntpd[1913]: Listen normally on 9 cilium_net [fe80::c4aa:31ff:fe8c:6096%4]:123 May 8 23:54:36.107892 ntpd[1913]: 8 May 23:54:36 ntpd[1913]: Listen normally on 10 cilium_host [fe80::387d:44ff:fe4a:d9e%5]:123 May 8 23:54:36.107892 ntpd[1913]: 8 May 23:54:36 ntpd[1913]: Listen normally on 11 cilium_vxlan 
[fe80::d068:a1ff:fef2:be50%6]:123 May 8 23:54:36.107892 ntpd[1913]: 8 May 23:54:36 ntpd[1913]: Listen normally on 12 lxc_health [fe80::cd9:e9ff:fe6d:74dc%8]:123 May 8 23:54:36.107892 ntpd[1913]: 8 May 23:54:36 ntpd[1913]: Listen normally on 13 lxc8715ce03b439 [fe80::7440:a2ff:fe41:34c3%10]:123 May 8 23:54:36.107892 ntpd[1913]: 8 May 23:54:36 ntpd[1913]: Listen normally on 14 lxcf1788b4160ac [fe80::ac82:1dff:fe1f:d046%12]:123 May 8 23:54:36.107150 ntpd[1913]: Listen normally on 10 cilium_host [fe80::387d:44ff:fe4a:d9e%5]:123 May 8 23:54:36.107226 ntpd[1913]: Listen normally on 11 cilium_vxlan [fe80::d068:a1ff:fef2:be50%6]:123 May 8 23:54:36.107294 ntpd[1913]: Listen normally on 12 lxc_health [fe80::cd9:e9ff:fe6d:74dc%8]:123 May 8 23:54:36.107362 ntpd[1913]: Listen normally on 13 lxc8715ce03b439 [fe80::7440:a2ff:fe41:34c3%10]:123 May 8 23:54:36.107452 ntpd[1913]: Listen normally on 14 lxcf1788b4160ac [fe80::ac82:1dff:fe1f:d046%12]:123 May 8 23:54:40.205256 containerd[1943]: time="2025-05-08T23:54:40.202700256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:54:40.205256 containerd[1943]: time="2025-05-08T23:54:40.202811176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:54:40.207071 containerd[1943]: time="2025-05-08T23:54:40.206050876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:40.207071 containerd[1943]: time="2025-05-08T23:54:40.206300351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:40.273897 systemd[1]: Started cri-containerd-8cc8af25b090dee7314f00f783cf4fc1e71f649e3fe7fac5b8855e1df8684ed2.scope - libcontainer container 8cc8af25b090dee7314f00f783cf4fc1e71f649e3fe7fac5b8855e1df8684ed2. 
May 8 23:54:40.301302 containerd[1943]: time="2025-05-08T23:54:40.301084411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:54:40.301302 containerd[1943]: time="2025-05-08T23:54:40.301242229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:54:40.301757 containerd[1943]: time="2025-05-08T23:54:40.301281353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:40.304658 containerd[1943]: time="2025-05-08T23:54:40.304361616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:54:40.373456 systemd[1]: Started cri-containerd-47329f1b509ebe777af6544100aecf40676425aff007bba396062c0ec5e91814.scope - libcontainer container 47329f1b509ebe777af6544100aecf40676425aff007bba396062c0ec5e91814. 
May 8 23:54:40.478928 containerd[1943]: time="2025-05-08T23:54:40.478873825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vd85q,Uid:7569e50d-3ca4-486d-9eee-1efc1bb361a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cc8af25b090dee7314f00f783cf4fc1e71f649e3fe7fac5b8855e1df8684ed2\"" May 8 23:54:40.490188 containerd[1943]: time="2025-05-08T23:54:40.489817270Z" level=info msg="CreateContainer within sandbox \"8cc8af25b090dee7314f00f783cf4fc1e71f649e3fe7fac5b8855e1df8684ed2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 23:54:40.527335 containerd[1943]: time="2025-05-08T23:54:40.527095906Z" level=info msg="CreateContainer within sandbox \"8cc8af25b090dee7314f00f783cf4fc1e71f649e3fe7fac5b8855e1df8684ed2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"10331b41e425b09a2259396ed3b845fdaca31d417cfc5ac1d54ad7e0acbf0c62\"" May 8 23:54:40.529923 containerd[1943]: time="2025-05-08T23:54:40.529534382Z" level=info msg="StartContainer for \"10331b41e425b09a2259396ed3b845fdaca31d417cfc5ac1d54ad7e0acbf0c62\"" May 8 23:54:40.545253 containerd[1943]: time="2025-05-08T23:54:40.544421227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c5lgt,Uid:2a6df9ce-3e24-4558-9847-f75a50af3965,Namespace:kube-system,Attempt:0,} returns sandbox id \"47329f1b509ebe777af6544100aecf40676425aff007bba396062c0ec5e91814\"" May 8 23:54:40.557768 containerd[1943]: time="2025-05-08T23:54:40.557503670Z" level=info msg="CreateContainer within sandbox \"47329f1b509ebe777af6544100aecf40676425aff007bba396062c0ec5e91814\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 23:54:40.588062 containerd[1943]: time="2025-05-08T23:54:40.587936634Z" level=info msg="CreateContainer within sandbox \"47329f1b509ebe777af6544100aecf40676425aff007bba396062c0ec5e91814\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"9e4142cafb254dc764bb0bf5cdf9727ea123a87840af458e3d075b3459e58f7b\"" May 8 23:54:40.590607 containerd[1943]: time="2025-05-08T23:54:40.590545629Z" level=info msg="StartContainer for \"9e4142cafb254dc764bb0bf5cdf9727ea123a87840af458e3d075b3459e58f7b\"" May 8 23:54:40.623508 systemd[1]: Started cri-containerd-10331b41e425b09a2259396ed3b845fdaca31d417cfc5ac1d54ad7e0acbf0c62.scope - libcontainer container 10331b41e425b09a2259396ed3b845fdaca31d417cfc5ac1d54ad7e0acbf0c62. May 8 23:54:40.697797 systemd[1]: Started cri-containerd-9e4142cafb254dc764bb0bf5cdf9727ea123a87840af458e3d075b3459e58f7b.scope - libcontainer container 9e4142cafb254dc764bb0bf5cdf9727ea123a87840af458e3d075b3459e58f7b. May 8 23:54:40.776461 containerd[1943]: time="2025-05-08T23:54:40.775567461Z" level=info msg="StartContainer for \"10331b41e425b09a2259396ed3b845fdaca31d417cfc5ac1d54ad7e0acbf0c62\" returns successfully" May 8 23:54:40.840740 containerd[1943]: time="2025-05-08T23:54:40.840661382Z" level=info msg="StartContainer for \"9e4142cafb254dc764bb0bf5cdf9727ea123a87840af458e3d075b3459e58f7b\" returns successfully" May 8 23:54:41.814314 kubelet[3324]: I0508 23:54:41.814186 3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vd85q" podStartSLOduration=30.814118688 podStartE2EDuration="30.814118688s" podCreationTimestamp="2025-05-08 23:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:54:40.843491894 +0000 UTC m=+33.658394373" watchObservedRunningTime="2025-05-08 23:54:41.814118688 +0000 UTC m=+34.629021143" May 8 23:54:41.848227 kubelet[3324]: I0508 23:54:41.847652 3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-c5lgt" podStartSLOduration=30.847628401 podStartE2EDuration="30.847628401s" podCreationTimestamp="2025-05-08 23:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:54:41.846883106 +0000 UTC m=+34.661785573" watchObservedRunningTime="2025-05-08 23:54:41.847628401 +0000 UTC m=+34.662530832" May 8 23:54:44.931659 systemd[1]: Started sshd@9-172.31.31.246:22-139.178.68.195:36994.service - OpenSSH per-connection server daemon (139.178.68.195:36994). May 8 23:54:45.123374 sshd[4700]: Accepted publickey for core from 139.178.68.195 port 36994 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:54:45.125907 sshd-session[4700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:54:45.135235 systemd-logind[1921]: New session 10 of user core. May 8 23:54:45.141448 systemd[1]: Started session-10.scope - Session 10 of User core. May 8 23:54:45.419793 sshd[4702]: Connection closed by 139.178.68.195 port 36994 May 8 23:54:45.420713 sshd-session[4700]: pam_unix(sshd:session): session closed for user core May 8 23:54:45.427423 systemd[1]: sshd@9-172.31.31.246:22-139.178.68.195:36994.service: Deactivated successfully. May 8 23:54:45.432322 systemd[1]: session-10.scope: Deactivated successfully. May 8 23:54:45.435307 systemd-logind[1921]: Session 10 logged out. Waiting for processes to exit. May 8 23:54:45.437845 systemd-logind[1921]: Removed session 10. May 8 23:54:50.458640 systemd[1]: Started sshd@10-172.31.31.246:22-139.178.68.195:44040.service - OpenSSH per-connection server daemon (139.178.68.195:44040). May 8 23:54:50.638953 sshd[4718]: Accepted publickey for core from 139.178.68.195 port 44040 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:54:50.641490 sshd-session[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:54:50.648669 systemd-logind[1921]: New session 11 of user core. May 8 23:54:50.660452 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 8 23:54:50.904628 sshd[4720]: Connection closed by 139.178.68.195 port 44040 May 8 23:54:50.905522 sshd-session[4718]: pam_unix(sshd:session): session closed for user core May 8 23:54:50.911928 systemd[1]: sshd@10-172.31.31.246:22-139.178.68.195:44040.service: Deactivated successfully. May 8 23:54:50.915584 systemd[1]: session-11.scope: Deactivated successfully. May 8 23:54:50.917057 systemd-logind[1921]: Session 11 logged out. Waiting for processes to exit. May 8 23:54:50.920609 systemd-logind[1921]: Removed session 11. May 8 23:54:55.946718 systemd[1]: Started sshd@11-172.31.31.246:22-139.178.68.195:40420.service - OpenSSH per-connection server daemon (139.178.68.195:40420). May 8 23:54:56.150882 sshd[4732]: Accepted publickey for core from 139.178.68.195 port 40420 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:54:56.153557 sshd-session[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:54:56.162465 systemd-logind[1921]: New session 12 of user core. May 8 23:54:56.175451 systemd[1]: Started session-12.scope - Session 12 of User core. May 8 23:54:56.434407 sshd[4734]: Connection closed by 139.178.68.195 port 40420 May 8 23:54:56.435383 sshd-session[4732]: pam_unix(sshd:session): session closed for user core May 8 23:54:56.442308 systemd[1]: sshd@11-172.31.31.246:22-139.178.68.195:40420.service: Deactivated successfully. May 8 23:54:56.447221 systemd[1]: session-12.scope: Deactivated successfully. May 8 23:54:56.450091 systemd-logind[1921]: Session 12 logged out. Waiting for processes to exit. May 8 23:54:56.452782 systemd-logind[1921]: Removed session 12. May 8 23:55:01.473826 systemd[1]: Started sshd@12-172.31.31.246:22-139.178.68.195:40428.service - OpenSSH per-connection server daemon (139.178.68.195:40428). 
May 8 23:55:01.673338 sshd[4746]: Accepted publickey for core from 139.178.68.195 port 40428 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:55:01.675961 sshd-session[4746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:01.684218 systemd-logind[1921]: New session 13 of user core. May 8 23:55:01.693420 systemd[1]: Started session-13.scope - Session 13 of User core. May 8 23:55:01.952216 sshd[4748]: Connection closed by 139.178.68.195 port 40428 May 8 23:55:01.951935 sshd-session[4746]: pam_unix(sshd:session): session closed for user core May 8 23:55:01.958439 systemd[1]: sshd@12-172.31.31.246:22-139.178.68.195:40428.service: Deactivated successfully. May 8 23:55:01.962963 systemd[1]: session-13.scope: Deactivated successfully. May 8 23:55:01.965436 systemd-logind[1921]: Session 13 logged out. Waiting for processes to exit. May 8 23:55:01.968374 systemd-logind[1921]: Removed session 13. May 8 23:55:06.989285 systemd[1]: Started sshd@13-172.31.31.246:22-139.178.68.195:36488.service - OpenSSH per-connection server daemon (139.178.68.195:36488). May 8 23:55:07.187265 sshd[4759]: Accepted publickey for core from 139.178.68.195 port 36488 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:55:07.190427 sshd-session[4759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:07.199798 systemd-logind[1921]: New session 14 of user core. May 8 23:55:07.213485 systemd[1]: Started session-14.scope - Session 14 of User core. May 8 23:55:07.485162 sshd[4761]: Connection closed by 139.178.68.195 port 36488 May 8 23:55:07.486543 sshd-session[4759]: pam_unix(sshd:session): session closed for user core May 8 23:55:07.492728 systemd-logind[1921]: Session 14 logged out. Waiting for processes to exit. May 8 23:55:07.494377 systemd[1]: sshd@13-172.31.31.246:22-139.178.68.195:36488.service: Deactivated successfully. 
May 8 23:55:07.499879 systemd[1]: session-14.scope: Deactivated successfully. May 8 23:55:07.503366 systemd-logind[1921]: Removed session 14. May 8 23:55:07.533841 systemd[1]: Started sshd@14-172.31.31.246:22-139.178.68.195:36498.service - OpenSSH per-connection server daemon (139.178.68.195:36498). May 8 23:55:07.730166 sshd[4772]: Accepted publickey for core from 139.178.68.195 port 36498 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:55:07.732478 sshd-session[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:07.742250 systemd-logind[1921]: New session 15 of user core. May 8 23:55:07.752480 systemd[1]: Started session-15.scope - Session 15 of User core. May 8 23:55:08.105444 sshd[4776]: Connection closed by 139.178.68.195 port 36498 May 8 23:55:08.105203 sshd-session[4772]: pam_unix(sshd:session): session closed for user core May 8 23:55:08.119785 systemd-logind[1921]: Session 15 logged out. Waiting for processes to exit. May 8 23:55:08.122464 systemd[1]: sshd@14-172.31.31.246:22-139.178.68.195:36498.service: Deactivated successfully. May 8 23:55:08.127777 systemd[1]: session-15.scope: Deactivated successfully. May 8 23:55:08.143907 systemd-logind[1921]: Removed session 15. May 8 23:55:08.160246 systemd[1]: Started sshd@15-172.31.31.246:22-139.178.68.195:36500.service - OpenSSH per-connection server daemon (139.178.68.195:36500). May 8 23:55:08.352987 sshd[4785]: Accepted publickey for core from 139.178.68.195 port 36500 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:55:08.355878 sshd-session[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:08.366222 systemd-logind[1921]: New session 16 of user core. May 8 23:55:08.372451 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 8 23:55:08.625977 sshd[4787]: Connection closed by 139.178.68.195 port 36500 May 8 23:55:08.626862 sshd-session[4785]: pam_unix(sshd:session): session closed for user core May 8 23:55:08.633588 systemd[1]: sshd@15-172.31.31.246:22-139.178.68.195:36500.service: Deactivated successfully. May 8 23:55:08.638523 systemd[1]: session-16.scope: Deactivated successfully. May 8 23:55:08.640354 systemd-logind[1921]: Session 16 logged out. Waiting for processes to exit. May 8 23:55:08.642210 systemd-logind[1921]: Removed session 16. May 8 23:55:13.662706 systemd[1]: Started sshd@16-172.31.31.246:22-139.178.68.195:36516.service - OpenSSH per-connection server daemon (139.178.68.195:36516). May 8 23:55:13.856424 sshd[4801]: Accepted publickey for core from 139.178.68.195 port 36516 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:55:13.858931 sshd-session[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:13.868242 systemd-logind[1921]: New session 17 of user core. May 8 23:55:13.877539 systemd[1]: Started session-17.scope - Session 17 of User core. May 8 23:55:14.132520 sshd[4803]: Connection closed by 139.178.68.195 port 36516 May 8 23:55:14.133479 sshd-session[4801]: pam_unix(sshd:session): session closed for user core May 8 23:55:14.141726 systemd[1]: sshd@16-172.31.31.246:22-139.178.68.195:36516.service: Deactivated successfully. May 8 23:55:14.145717 systemd[1]: session-17.scope: Deactivated successfully. May 8 23:55:14.147860 systemd-logind[1921]: Session 17 logged out. Waiting for processes to exit. May 8 23:55:14.150972 systemd-logind[1921]: Removed session 17. May 8 23:55:19.170701 systemd[1]: Started sshd@17-172.31.31.246:22-139.178.68.195:54906.service - OpenSSH per-connection server daemon (139.178.68.195:54906). 
May 8 23:55:19.370588 sshd[4815]: Accepted publickey for core from 139.178.68.195 port 54906 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8
May 8 23:55:19.373757 sshd-session[4815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:55:19.381874 systemd-logind[1921]: New session 18 of user core.
May 8 23:55:19.391415 systemd[1]: Started session-18.scope - Session 18 of User core.
May 8 23:55:19.641181 sshd[4817]: Connection closed by 139.178.68.195 port 54906
May 8 23:55:19.642035 sshd-session[4815]: pam_unix(sshd:session): session closed for user core
May 8 23:55:19.647539 systemd[1]: sshd@17-172.31.31.246:22-139.178.68.195:54906.service: Deactivated successfully.
May 8 23:55:19.651739 systemd[1]: session-18.scope: Deactivated successfully.
May 8 23:55:19.655108 systemd-logind[1921]: Session 18 logged out. Waiting for processes to exit.
May 8 23:55:19.659071 systemd-logind[1921]: Removed session 18.
May 8 23:55:24.685818 systemd[1]: Started sshd@18-172.31.31.246:22-139.178.68.195:54920.service - OpenSSH per-connection server daemon (139.178.68.195:54920).
May 8 23:55:24.883698 sshd[4829]: Accepted publickey for core from 139.178.68.195 port 54920 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8
May 8 23:55:24.886846 sshd-session[4829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:55:24.894295 systemd-logind[1921]: New session 19 of user core.
May 8 23:55:24.901523 systemd[1]: Started session-19.scope - Session 19 of User core.
May 8 23:55:25.155843 sshd[4831]: Connection closed by 139.178.68.195 port 54920
May 8 23:55:25.156858 sshd-session[4829]: pam_unix(sshd:session): session closed for user core
May 8 23:55:25.163212 systemd[1]: sshd@18-172.31.31.246:22-139.178.68.195:54920.service: Deactivated successfully.
May 8 23:55:25.168626 systemd[1]: session-19.scope: Deactivated successfully.
May 8 23:55:25.170435 systemd-logind[1921]: Session 19 logged out. Waiting for processes to exit.
May 8 23:55:25.172494 systemd-logind[1921]: Removed session 19.
May 8 23:55:25.198713 systemd[1]: Started sshd@19-172.31.31.246:22-139.178.68.195:44500.service - OpenSSH per-connection server daemon (139.178.68.195:44500).
May 8 23:55:25.393712 sshd[4842]: Accepted publickey for core from 139.178.68.195 port 44500 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8
May 8 23:55:25.396464 sshd-session[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:55:25.406274 systemd-logind[1921]: New session 20 of user core.
May 8 23:55:25.411507 systemd[1]: Started session-20.scope - Session 20 of User core.
May 8 23:55:25.732172 sshd[4844]: Connection closed by 139.178.68.195 port 44500
May 8 23:55:25.733211 sshd-session[4842]: pam_unix(sshd:session): session closed for user core
May 8 23:55:25.741601 systemd-logind[1921]: Session 20 logged out. Waiting for processes to exit.
May 8 23:55:25.741970 systemd[1]: sshd@19-172.31.31.246:22-139.178.68.195:44500.service: Deactivated successfully.
May 8 23:55:25.745736 systemd[1]: session-20.scope: Deactivated successfully.
May 8 23:55:25.752529 systemd-logind[1921]: Removed session 20.
May 8 23:55:25.774750 systemd[1]: Started sshd@20-172.31.31.246:22-139.178.68.195:44502.service - OpenSSH per-connection server daemon (139.178.68.195:44502).
May 8 23:55:25.968915 sshd[4852]: Accepted publickey for core from 139.178.68.195 port 44502 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8
May 8 23:55:25.971852 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:55:25.979975 systemd-logind[1921]: New session 21 of user core.
May 8 23:55:25.993484 systemd[1]: Started session-21.scope - Session 21 of User core.
May 8 23:55:27.319391 sshd[4854]: Connection closed by 139.178.68.195 port 44502
May 8 23:55:27.320400 sshd-session[4852]: pam_unix(sshd:session): session closed for user core
May 8 23:55:27.330076 systemd[1]: sshd@20-172.31.31.246:22-139.178.68.195:44502.service: Deactivated successfully.
May 8 23:55:27.337949 systemd[1]: session-21.scope: Deactivated successfully.
May 8 23:55:27.344045 systemd-logind[1921]: Session 21 logged out. Waiting for processes to exit.
May 8 23:55:27.376231 systemd[1]: Started sshd@21-172.31.31.246:22-139.178.68.195:44504.service - OpenSSH per-connection server daemon (139.178.68.195:44504).
May 8 23:55:27.378235 systemd-logind[1921]: Removed session 21.
May 8 23:55:27.576180 sshd[4870]: Accepted publickey for core from 139.178.68.195 port 44504 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8
May 8 23:55:27.579524 sshd-session[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:55:27.589416 systemd-logind[1921]: New session 22 of user core.
May 8 23:55:27.599470 systemd[1]: Started session-22.scope - Session 22 of User core.
May 8 23:55:28.212400 sshd[4872]: Connection closed by 139.178.68.195 port 44504
May 8 23:55:28.214107 sshd-session[4870]: pam_unix(sshd:session): session closed for user core
May 8 23:55:28.222432 systemd[1]: sshd@21-172.31.31.246:22-139.178.68.195:44504.service: Deactivated successfully.
May 8 23:55:28.226268 systemd[1]: session-22.scope: Deactivated successfully.
May 8 23:55:28.230602 systemd-logind[1921]: Session 22 logged out. Waiting for processes to exit.
May 8 23:55:28.246805 systemd-logind[1921]: Removed session 22.
May 8 23:55:28.253852 systemd[1]: Started sshd@22-172.31.31.246:22-139.178.68.195:44514.service - OpenSSH per-connection server daemon (139.178.68.195:44514).
May 8 23:55:28.456231 sshd[4881]: Accepted publickey for core from 139.178.68.195 port 44514 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8
May 8 23:55:28.458893 sshd-session[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:55:28.467387 systemd-logind[1921]: New session 23 of user core.
May 8 23:55:28.479498 systemd[1]: Started session-23.scope - Session 23 of User core.
May 8 23:55:28.733075 sshd[4883]: Connection closed by 139.178.68.195 port 44514
May 8 23:55:28.732844 sshd-session[4881]: pam_unix(sshd:session): session closed for user core
May 8 23:55:28.743757 systemd[1]: sshd@22-172.31.31.246:22-139.178.68.195:44514.service: Deactivated successfully.
May 8 23:55:28.747625 systemd[1]: session-23.scope: Deactivated successfully.
May 8 23:55:28.750250 systemd-logind[1921]: Session 23 logged out. Waiting for processes to exit.
May 8 23:55:28.753278 systemd-logind[1921]: Removed session 23.
May 8 23:55:33.772677 systemd[1]: Started sshd@23-172.31.31.246:22-139.178.68.195:44526.service - OpenSSH per-connection server daemon (139.178.68.195:44526).
May 8 23:55:33.971766 sshd[4895]: Accepted publickey for core from 139.178.68.195 port 44526 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8
May 8 23:55:33.974393 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:55:33.983995 systemd-logind[1921]: New session 24 of user core.
May 8 23:55:33.992416 systemd[1]: Started session-24.scope - Session 24 of User core.
May 8 23:55:34.249580 sshd[4899]: Connection closed by 139.178.68.195 port 44526
May 8 23:55:34.249365 sshd-session[4895]: pam_unix(sshd:session): session closed for user core
May 8 23:55:34.256497 systemd-logind[1921]: Session 24 logged out. Waiting for processes to exit.
May 8 23:55:34.257959 systemd[1]: sshd@23-172.31.31.246:22-139.178.68.195:44526.service: Deactivated successfully.
May 8 23:55:34.263553 systemd[1]: session-24.scope: Deactivated successfully.
May 8 23:55:34.269707 systemd-logind[1921]: Removed session 24.
May 8 23:55:39.291705 systemd[1]: Started sshd@24-172.31.31.246:22-139.178.68.195:58296.service - OpenSSH per-connection server daemon (139.178.68.195:58296).
May 8 23:55:39.480659 sshd[4910]: Accepted publickey for core from 139.178.68.195 port 58296 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8
May 8 23:55:39.483478 sshd-session[4910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:55:39.494631 systemd-logind[1921]: New session 25 of user core.
May 8 23:55:39.499773 systemd[1]: Started session-25.scope - Session 25 of User core.
May 8 23:55:39.754405 sshd[4912]: Connection closed by 139.178.68.195 port 58296
May 8 23:55:39.754908 sshd-session[4910]: pam_unix(sshd:session): session closed for user core
May 8 23:55:39.761531 systemd[1]: sshd@24-172.31.31.246:22-139.178.68.195:58296.service: Deactivated successfully.
May 8 23:55:39.765003 systemd[1]: session-25.scope: Deactivated successfully.
May 8 23:55:39.767958 systemd-logind[1921]: Session 25 logged out. Waiting for processes to exit.
May 8 23:55:39.770667 systemd-logind[1921]: Removed session 25.
May 8 23:55:44.806442 systemd[1]: Started sshd@25-172.31.31.246:22-139.178.68.195:58302.service - OpenSSH per-connection server daemon (139.178.68.195:58302).
May 8 23:55:44.992707 sshd[4926]: Accepted publickey for core from 139.178.68.195 port 58302 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8
May 8 23:55:44.995629 sshd-session[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:55:45.007249 systemd-logind[1921]: New session 26 of user core.
May 8 23:55:45.019479 systemd[1]: Started session-26.scope - Session 26 of User core.
May 8 23:55:45.280291 sshd[4928]: Connection closed by 139.178.68.195 port 58302
May 8 23:55:45.281330 sshd-session[4926]: pam_unix(sshd:session): session closed for user core
May 8 23:55:45.292877 systemd[1]: sshd@25-172.31.31.246:22-139.178.68.195:58302.service: Deactivated successfully.
May 8 23:55:45.298773 systemd[1]: session-26.scope: Deactivated successfully.
May 8 23:55:45.301795 systemd-logind[1921]: Session 26 logged out. Waiting for processes to exit.
May 8 23:55:45.305230 systemd-logind[1921]: Removed session 26.
May 8 23:55:50.321869 systemd[1]: Started sshd@26-172.31.31.246:22-139.178.68.195:56882.service - OpenSSH per-connection server daemon (139.178.68.195:56882).
May 8 23:55:50.504933 sshd[4939]: Accepted publickey for core from 139.178.68.195 port 56882 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8
May 8 23:55:50.507510 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:55:50.515083 systemd-logind[1921]: New session 27 of user core.
May 8 23:55:50.524383 systemd[1]: Started session-27.scope - Session 27 of User core.
May 8 23:55:50.767198 sshd[4941]: Connection closed by 139.178.68.195 port 56882
May 8 23:55:50.767687 sshd-session[4939]: pam_unix(sshd:session): session closed for user core
May 8 23:55:50.774749 systemd[1]: sshd@26-172.31.31.246:22-139.178.68.195:56882.service: Deactivated successfully.
May 8 23:55:50.779071 systemd[1]: session-27.scope: Deactivated successfully.
May 8 23:55:50.781000 systemd-logind[1921]: Session 27 logged out. Waiting for processes to exit.
May 8 23:55:50.783080 systemd-logind[1921]: Removed session 27.
May 8 23:55:50.807741 systemd[1]: Started sshd@27-172.31.31.246:22-139.178.68.195:56898.service - OpenSSH per-connection server daemon (139.178.68.195:56898).
May 8 23:55:51.003106 sshd[4952]: Accepted publickey for core from 139.178.68.195 port 56898 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8
May 8 23:55:51.005720 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 8 23:55:51.014350 systemd-logind[1921]: New session 28 of user core.
May 8 23:55:51.018388 systemd[1]: Started session-28.scope - Session 28 of User core.
May 8 23:55:54.129670 containerd[1943]: time="2025-05-08T23:55:54.129416553Z" level=info msg="StopContainer for \"30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c\" with timeout 30 (s)"
May 8 23:55:54.136173 containerd[1943]: time="2025-05-08T23:55:54.135349209Z" level=info msg="Stop container \"30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c\" with signal terminated"
May 8 23:55:54.182756 systemd[1]: cri-containerd-30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c.scope: Deactivated successfully.
May 8 23:55:54.191952 containerd[1943]: time="2025-05-08T23:55:54.191863173Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 8 23:55:54.214199 containerd[1943]: time="2025-05-08T23:55:54.213768046Z" level=info msg="StopContainer for \"3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1\" with timeout 2 (s)"
May 8 23:55:54.215308 containerd[1943]: time="2025-05-08T23:55:54.215212426Z" level=info msg="Stop container \"3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1\" with signal terminated"
May 8 23:55:54.236573 systemd-networkd[1862]: lxc_health: Link DOWN
May 8 23:55:54.237607 systemd-networkd[1862]: lxc_health: Lost carrier
May 8 23:55:54.252055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c-rootfs.mount: Deactivated successfully.
May 8 23:55:54.278456 containerd[1943]: time="2025-05-08T23:55:54.277965862Z" level=info msg="shim disconnected" id=30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c namespace=k8s.io
May 8 23:55:54.278456 containerd[1943]: time="2025-05-08T23:55:54.278185054Z" level=warning msg="cleaning up after shim disconnected" id=30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c namespace=k8s.io
May 8 23:55:54.278456 containerd[1943]: time="2025-05-08T23:55:54.278208754Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:55:54.280272 systemd[1]: cri-containerd-3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1.scope: Deactivated successfully.
May 8 23:55:54.281463 systemd[1]: cri-containerd-3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1.scope: Consumed 14.171s CPU time.
May 8 23:55:54.315081 containerd[1943]: time="2025-05-08T23:55:54.314902126Z" level=info msg="StopContainer for \"30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c\" returns successfully"
May 8 23:55:54.323200 containerd[1943]: time="2025-05-08T23:55:54.317520574Z" level=info msg="StopPodSandbox for \"8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129\""
May 8 23:55:54.323200 containerd[1943]: time="2025-05-08T23:55:54.317673226Z" level=info msg="Container to stop \"30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:55:54.328296 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129-shm.mount: Deactivated successfully.
May 8 23:55:54.340940 systemd[1]: cri-containerd-8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129.scope: Deactivated successfully.
May 8 23:55:54.359999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1-rootfs.mount: Deactivated successfully.
May 8 23:55:54.371346 containerd[1943]: time="2025-05-08T23:55:54.370914490Z" level=info msg="shim disconnected" id=3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1 namespace=k8s.io
May 8 23:55:54.371346 containerd[1943]: time="2025-05-08T23:55:54.371010190Z" level=warning msg="cleaning up after shim disconnected" id=3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1 namespace=k8s.io
May 8 23:55:54.371346 containerd[1943]: time="2025-05-08T23:55:54.371030062Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:55:54.396451 containerd[1943]: time="2025-05-08T23:55:54.396102742Z" level=info msg="shim disconnected" id=8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129 namespace=k8s.io
May 8 23:55:54.396451 containerd[1943]: time="2025-05-08T23:55:54.396272026Z" level=warning msg="cleaning up after shim disconnected" id=8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129 namespace=k8s.io
May 8 23:55:54.396451 containerd[1943]: time="2025-05-08T23:55:54.396294910Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:55:54.409873 containerd[1943]: time="2025-05-08T23:55:54.409510259Z" level=info msg="StopContainer for \"3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1\" returns successfully"
May 8 23:55:54.411018 containerd[1943]: time="2025-05-08T23:55:54.410883551Z" level=info msg="StopPodSandbox for \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\""
May 8 23:55:54.411194 containerd[1943]: time="2025-05-08T23:55:54.411076979Z" level=info msg="Container to stop \"3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:55:54.411194 containerd[1943]: time="2025-05-08T23:55:54.411107243Z" level=info msg="Container to stop \"4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:55:54.411322 containerd[1943]: time="2025-05-08T23:55:54.411185135Z" level=info msg="Container to stop \"20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:55:54.411322 containerd[1943]: time="2025-05-08T23:55:54.411211067Z" level=info msg="Container to stop \"35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:55:54.411322 containerd[1943]: time="2025-05-08T23:55:54.411231755Z" level=info msg="Container to stop \"ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 8 23:55:54.432684 systemd[1]: cri-containerd-f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b.scope: Deactivated successfully.
May 8 23:55:54.439147 containerd[1943]: time="2025-05-08T23:55:54.438388427Z" level=info msg="TearDown network for sandbox \"8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129\" successfully"
May 8 23:55:54.439147 containerd[1943]: time="2025-05-08T23:55:54.438527747Z" level=info msg="StopPodSandbox for \"8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129\" returns successfully"
May 8 23:55:54.493617 containerd[1943]: time="2025-05-08T23:55:54.493283399Z" level=info msg="shim disconnected" id=f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b namespace=k8s.io
May 8 23:55:54.493617 containerd[1943]: time="2025-05-08T23:55:54.493362251Z" level=warning msg="cleaning up after shim disconnected" id=f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b namespace=k8s.io
May 8 23:55:54.493617 containerd[1943]: time="2025-05-08T23:55:54.493380671Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:55:54.516927 containerd[1943]: time="2025-05-08T23:55:54.516852683Z" level=info msg="TearDown network for sandbox \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" successfully"
May 8 23:55:54.516927 containerd[1943]: time="2025-05-08T23:55:54.516907607Z" level=info msg="StopPodSandbox for \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" returns successfully"
May 8 23:55:54.546897 kubelet[3324]: I0508 23:55:54.546392 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj528\" (UniqueName: \"kubernetes.io/projected/10482c1b-76cb-4e93-afaf-1ac2020938ed-kube-api-access-sj528\") pod \"10482c1b-76cb-4e93-afaf-1ac2020938ed\" (UID: \"10482c1b-76cb-4e93-afaf-1ac2020938ed\") "
May 8 23:55:54.546897 kubelet[3324]: I0508 23:55:54.546488 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10482c1b-76cb-4e93-afaf-1ac2020938ed-cilium-config-path\") pod \"10482c1b-76cb-4e93-afaf-1ac2020938ed\" (UID: \"10482c1b-76cb-4e93-afaf-1ac2020938ed\") "
May 8 23:55:54.560468 kubelet[3324]: I0508 23:55:54.560346 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10482c1b-76cb-4e93-afaf-1ac2020938ed-kube-api-access-sj528" (OuterVolumeSpecName: "kube-api-access-sj528") pod "10482c1b-76cb-4e93-afaf-1ac2020938ed" (UID: "10482c1b-76cb-4e93-afaf-1ac2020938ed"). InnerVolumeSpecName "kube-api-access-sj528". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 23:55:54.562361 kubelet[3324]: I0508 23:55:54.562024 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10482c1b-76cb-4e93-afaf-1ac2020938ed-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "10482c1b-76cb-4e93-afaf-1ac2020938ed" (UID: "10482c1b-76cb-4e93-afaf-1ac2020938ed"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 8 23:55:54.648349 kubelet[3324]: I0508 23:55:54.647241 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cilium-config-path\") pod \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") "
May 8 23:55:54.648349 kubelet[3324]: I0508 23:55:54.647318 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctvmc\" (UniqueName: \"kubernetes.io/projected/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-kube-api-access-ctvmc\") pod \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") "
May 8 23:55:54.648349 kubelet[3324]: I0508 23:55:54.647357 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cni-path\") pod \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") "
May 8 23:55:54.648349 kubelet[3324]: I0508 23:55:54.647401 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-bpf-maps\") pod \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") "
May 8 23:55:54.648349 kubelet[3324]: I0508 23:55:54.647434 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cilium-cgroup\") pod \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") "
May 8 23:55:54.648349 kubelet[3324]: I0508 23:55:54.647474 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-clustermesh-secrets\") pod \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") "
May 8 23:55:54.648774 kubelet[3324]: I0508 23:55:54.647511 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-etc-cni-netd\") pod \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") "
May 8 23:55:54.648774 kubelet[3324]: I0508 23:55:54.647544 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-host-proc-sys-kernel\") pod \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") "
May 8 23:55:54.648774 kubelet[3324]: I0508 23:55:54.647579 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-host-proc-sys-net\") pod \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") "
May 8 23:55:54.648774 kubelet[3324]: I0508 23:55:54.647618 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-hubble-tls\") pod \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") "
May 8 23:55:54.648774 kubelet[3324]: I0508 23:55:54.647652 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-lib-modules\") pod \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") "
May 8 23:55:54.648774 kubelet[3324]: I0508 23:55:54.647687 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-xtables-lock\") pod \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") "
May 8 23:55:54.649081 kubelet[3324]: I0508 23:55:54.647720 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cilium-run\") pod \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") "
May 8 23:55:54.649081 kubelet[3324]: I0508 23:55:54.647756 3324 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-hostproc\") pod \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\" (UID: \"b4eae93c-c1d1-4fb5-94a5-790665ce2bea\") "
May 8 23:55:54.649081 kubelet[3324]: I0508 23:55:54.647822 3324 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/10482c1b-76cb-4e93-afaf-1ac2020938ed-cilium-config-path\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.649081 kubelet[3324]: I0508 23:55:54.647848 3324 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sj528\" (UniqueName: \"kubernetes.io/projected/10482c1b-76cb-4e93-afaf-1ac2020938ed-kube-api-access-sj528\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.649081 kubelet[3324]: I0508 23:55:54.647890 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-hostproc" (OuterVolumeSpecName: "hostproc") pod "b4eae93c-c1d1-4fb5-94a5-790665ce2bea" (UID: "b4eae93c-c1d1-4fb5-94a5-790665ce2bea"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:55:54.654595 kubelet[3324]: I0508 23:55:54.653948 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-kube-api-access-ctvmc" (OuterVolumeSpecName: "kube-api-access-ctvmc") pod "b4eae93c-c1d1-4fb5-94a5-790665ce2bea" (UID: "b4eae93c-c1d1-4fb5-94a5-790665ce2bea"). InnerVolumeSpecName "kube-api-access-ctvmc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 23:55:54.654595 kubelet[3324]: I0508 23:55:54.653958 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b4eae93c-c1d1-4fb5-94a5-790665ce2bea" (UID: "b4eae93c-c1d1-4fb5-94a5-790665ce2bea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 8 23:55:54.654595 kubelet[3324]: I0508 23:55:54.654021 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b4eae93c-c1d1-4fb5-94a5-790665ce2bea" (UID: "b4eae93c-c1d1-4fb5-94a5-790665ce2bea"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:55:54.654595 kubelet[3324]: I0508 23:55:54.654053 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cni-path" (OuterVolumeSpecName: "cni-path") pod "b4eae93c-c1d1-4fb5-94a5-790665ce2bea" (UID: "b4eae93c-c1d1-4fb5-94a5-790665ce2bea"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:55:54.654906 kubelet[3324]: I0508 23:55:54.654078 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b4eae93c-c1d1-4fb5-94a5-790665ce2bea" (UID: "b4eae93c-c1d1-4fb5-94a5-790665ce2bea"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:55:54.654906 kubelet[3324]: I0508 23:55:54.654097 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b4eae93c-c1d1-4fb5-94a5-790665ce2bea" (UID: "b4eae93c-c1d1-4fb5-94a5-790665ce2bea"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:55:54.654906 kubelet[3324]: I0508 23:55:54.654161 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b4eae93c-c1d1-4fb5-94a5-790665ce2bea" (UID: "b4eae93c-c1d1-4fb5-94a5-790665ce2bea"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:55:54.658938 kubelet[3324]: I0508 23:55:54.658859 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b4eae93c-c1d1-4fb5-94a5-790665ce2bea" (UID: "b4eae93c-c1d1-4fb5-94a5-790665ce2bea"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 8 23:55:54.659088 kubelet[3324]: I0508 23:55:54.658965 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b4eae93c-c1d1-4fb5-94a5-790665ce2bea" (UID: "b4eae93c-c1d1-4fb5-94a5-790665ce2bea"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:55:54.659088 kubelet[3324]: I0508 23:55:54.659019 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b4eae93c-c1d1-4fb5-94a5-790665ce2bea" (UID: "b4eae93c-c1d1-4fb5-94a5-790665ce2bea"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:55:54.659088 kubelet[3324]: I0508 23:55:54.659055 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b4eae93c-c1d1-4fb5-94a5-790665ce2bea" (UID: "b4eae93c-c1d1-4fb5-94a5-790665ce2bea"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:55:54.659345 kubelet[3324]: I0508 23:55:54.659092 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b4eae93c-c1d1-4fb5-94a5-790665ce2bea" (UID: "b4eae93c-c1d1-4fb5-94a5-790665ce2bea"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 8 23:55:54.659615 kubelet[3324]: I0508 23:55:54.659560 3324 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b4eae93c-c1d1-4fb5-94a5-790665ce2bea" (UID: "b4eae93c-c1d1-4fb5-94a5-790665ce2bea"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 8 23:55:54.748620 kubelet[3324]: I0508 23:55:54.748567 3324 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-lib-modules\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.748620 kubelet[3324]: I0508 23:55:54.748617 3324 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cilium-run\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.748862 kubelet[3324]: I0508 23:55:54.748641 3324 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-xtables-lock\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.748862 kubelet[3324]: I0508 23:55:54.748663 3324 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-hostproc\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.748862 kubelet[3324]: I0508 23:55:54.748685 3324 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cilium-config-path\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.748862 kubelet[3324]: I0508 23:55:54.748745 3324 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ctvmc\" (UniqueName: \"kubernetes.io/projected/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-kube-api-access-ctvmc\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.748862 kubelet[3324]: I0508 23:55:54.748768 3324 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cni-path\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.748862 kubelet[3324]: I0508 23:55:54.748793 3324 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-bpf-maps\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.748862 kubelet[3324]: I0508 23:55:54.748814 3324 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-cilium-cgroup\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.748862 kubelet[3324]: I0508 23:55:54.748839 3324 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-clustermesh-secrets\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.749311 kubelet[3324]: I0508 23:55:54.748862 3324 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-etc-cni-netd\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.749311 kubelet[3324]: I0508 23:55:54.748883 3324 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-host-proc-sys-kernel\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.749311 kubelet[3324]: I0508 23:55:54.748903 3324 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-hubble-tls\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.749311 kubelet[3324]: I0508 23:55:54.748924 3324 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b4eae93c-c1d1-4fb5-94a5-790665ce2bea-host-proc-sys-net\") on node \"ip-172-31-31-246\" DevicePath \"\""
May 8 23:55:54.999346 kubelet[3324]: I0508 23:55:54.999149 3324 scope.go:117]
"RemoveContainer" containerID="3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1" May 8 23:55:55.003887 containerd[1943]: time="2025-05-08T23:55:55.002788390Z" level=info msg="RemoveContainer for \"3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1\"" May 8 23:55:55.018104 systemd[1]: Removed slice kubepods-burstable-podb4eae93c_c1d1_4fb5_94a5_790665ce2bea.slice - libcontainer container kubepods-burstable-podb4eae93c_c1d1_4fb5_94a5_790665ce2bea.slice. May 8 23:55:55.018353 systemd[1]: kubepods-burstable-podb4eae93c_c1d1_4fb5_94a5_790665ce2bea.slice: Consumed 14.315s CPU time. May 8 23:55:55.020105 containerd[1943]: time="2025-05-08T23:55:55.019742986Z" level=info msg="RemoveContainer for \"3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1\" returns successfully" May 8 23:55:55.021770 kubelet[3324]: I0508 23:55:55.021688 3324 scope.go:117] "RemoveContainer" containerID="ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377" May 8 23:55:55.029485 containerd[1943]: time="2025-05-08T23:55:55.027618934Z" level=info msg="RemoveContainer for \"ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377\"" May 8 23:55:55.030643 systemd[1]: Removed slice kubepods-besteffort-pod10482c1b_76cb_4e93_afaf_1ac2020938ed.slice - libcontainer container kubepods-besteffort-pod10482c1b_76cb_4e93_afaf_1ac2020938ed.slice. 
May 8 23:55:55.037481 containerd[1943]: time="2025-05-08T23:55:55.037411378Z" level=info msg="RemoveContainer for \"ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377\" returns successfully" May 8 23:55:55.037856 kubelet[3324]: I0508 23:55:55.037809 3324 scope.go:117] "RemoveContainer" containerID="35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db" May 8 23:55:55.042330 containerd[1943]: time="2025-05-08T23:55:55.042283402Z" level=info msg="RemoveContainer for \"35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db\"" May 8 23:55:55.051255 containerd[1943]: time="2025-05-08T23:55:55.051107338Z" level=info msg="RemoveContainer for \"35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db\" returns successfully" May 8 23:55:55.053918 kubelet[3324]: I0508 23:55:55.053576 3324 scope.go:117] "RemoveContainer" containerID="20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1" May 8 23:55:55.057605 containerd[1943]: time="2025-05-08T23:55:55.057290938Z" level=info msg="RemoveContainer for \"20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1\"" May 8 23:55:55.066295 containerd[1943]: time="2025-05-08T23:55:55.066163834Z" level=info msg="RemoveContainer for \"20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1\" returns successfully" May 8 23:55:55.066812 kubelet[3324]: I0508 23:55:55.066573 3324 scope.go:117] "RemoveContainer" containerID="4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a" May 8 23:55:55.071036 containerd[1943]: time="2025-05-08T23:55:55.070356106Z" level=info msg="RemoveContainer for \"4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a\"" May 8 23:55:55.079172 containerd[1943]: time="2025-05-08T23:55:55.079029010Z" level=info msg="RemoveContainer for \"4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a\" returns successfully" May 8 23:55:55.079617 kubelet[3324]: I0508 23:55:55.079416 3324 scope.go:117] "RemoveContainer" 
containerID="3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1" May 8 23:55:55.080397 containerd[1943]: time="2025-05-08T23:55:55.080269666Z" level=error msg="ContainerStatus for \"3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1\": not found" May 8 23:55:55.080633 kubelet[3324]: E0508 23:55:55.080585 3324 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1\": not found" containerID="3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1" May 8 23:55:55.080770 kubelet[3324]: I0508 23:55:55.080636 3324 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1"} err="failed to get container status \"3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1\": rpc error: code = NotFound desc = an error occurred when try to find container \"3fbebff3af534a0b6a848f1c60b317b35bee1041e1ece071158c6cec48071ed1\": not found" May 8 23:55:55.080858 kubelet[3324]: I0508 23:55:55.080769 3324 scope.go:117] "RemoveContainer" containerID="ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377" May 8 23:55:55.081454 containerd[1943]: time="2025-05-08T23:55:55.081316090Z" level=error msg="ContainerStatus for \"ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377\": not found" May 8 23:55:55.081604 kubelet[3324]: E0508 23:55:55.081567 3324 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377\": not found" containerID="ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377" May 8 23:55:55.081604 kubelet[3324]: I0508 23:55:55.081610 3324 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377"} err="failed to get container status \"ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377\": rpc error: code = NotFound desc = an error occurred when try to find container \"ffdbac54786c928f281679479991468bc405c2ea479289566611e9100ea31377\": not found" May 8 23:55:55.081833 kubelet[3324]: I0508 23:55:55.081645 3324 scope.go:117] "RemoveContainer" containerID="35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db" May 8 23:55:55.082020 containerd[1943]: time="2025-05-08T23:55:55.081946690Z" level=error msg="ContainerStatus for \"35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db\": not found" May 8 23:55:55.082510 kubelet[3324]: E0508 23:55:55.082255 3324 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db\": not found" containerID="35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db" May 8 23:55:55.082510 kubelet[3324]: I0508 23:55:55.082299 3324 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db"} err="failed to get container status \"35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"35147072a90e65edc710805b601b74f122d037f60bdd0d7307ca6b947ccd06db\": not found" May 8 23:55:55.082760 kubelet[3324]: I0508 23:55:55.082351 3324 scope.go:117] "RemoveContainer" containerID="20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1" May 8 23:55:55.083586 containerd[1943]: time="2025-05-08T23:55:55.083529670Z" level=error msg="ContainerStatus for \"20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1\": not found" May 8 23:55:55.083809 kubelet[3324]: E0508 23:55:55.083767 3324 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1\": not found" containerID="20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1" May 8 23:55:55.083953 kubelet[3324]: I0508 23:55:55.083819 3324 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1"} err="failed to get container status \"20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"20edb7dbf223febe69788e8f91720ef3e9e5b41fcd2623d800466973c9a4e5b1\": not found" May 8 23:55:55.083953 kubelet[3324]: I0508 23:55:55.083856 3324 scope.go:117] "RemoveContainer" containerID="4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a" May 8 23:55:55.084238 containerd[1943]: time="2025-05-08T23:55:55.084181642Z" level=error msg="ContainerStatus for \"4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a\": not found" May 8 23:55:55.084703 kubelet[3324]: E0508 23:55:55.084481 3324 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a\": not found" containerID="4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a" May 8 23:55:55.084703 kubelet[3324]: I0508 23:55:55.084526 3324 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a"} err="failed to get container status \"4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"4cbf921a96822efed3a1ed3fa04f12979b734fa9f01608666805307318500a0a\": not found" May 8 23:55:55.084703 kubelet[3324]: I0508 23:55:55.084569 3324 scope.go:117] "RemoveContainer" containerID="30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c" May 8 23:55:55.086756 containerd[1943]: time="2025-05-08T23:55:55.086703178Z" level=info msg="RemoveContainer for \"30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c\"" May 8 23:55:55.092660 containerd[1943]: time="2025-05-08T23:55:55.092600506Z" level=info msg="RemoveContainer for \"30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c\" returns successfully" May 8 23:55:55.093048 kubelet[3324]: I0508 23:55:55.093008 3324 scope.go:117] "RemoveContainer" containerID="30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c" May 8 23:55:55.093472 containerd[1943]: time="2025-05-08T23:55:55.093413662Z" level=error msg="ContainerStatus for \"30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c\": not found" May 8 23:55:55.093726 kubelet[3324]: E0508 23:55:55.093653 3324 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c\": not found" containerID="30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c" May 8 23:55:55.093726 kubelet[3324]: I0508 23:55:55.093698 3324 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c"} err="failed to get container status \"30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c\": rpc error: code = NotFound desc = an error occurred when try to find container \"30cd8f75268573a3b72fafb11255bb8ddc35937386ca0d71d1ff3d68f3d7cb1c\": not found" May 8 23:55:55.160965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b-rootfs.mount: Deactivated successfully. May 8 23:55:55.161199 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b-shm.mount: Deactivated successfully. May 8 23:55:55.161343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129-rootfs.mount: Deactivated successfully. May 8 23:55:55.161474 systemd[1]: var-lib-kubelet-pods-b4eae93c\x2dc1d1\x2d4fb5\x2d94a5\x2d790665ce2bea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dctvmc.mount: Deactivated successfully. May 8 23:55:55.161607 systemd[1]: var-lib-kubelet-pods-10482c1b\x2d76cb\x2d4e93\x2dafaf\x2d1ac2020938ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsj528.mount: Deactivated successfully. 
May 8 23:55:55.161745 systemd[1]: var-lib-kubelet-pods-b4eae93c\x2dc1d1\x2d4fb5\x2d94a5\x2d790665ce2bea-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 23:55:55.161873 systemd[1]: var-lib-kubelet-pods-b4eae93c\x2dc1d1\x2d4fb5\x2d94a5\x2d790665ce2bea-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 23:55:55.520312 kubelet[3324]: I0508 23:55:55.520265 3324 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10482c1b-76cb-4e93-afaf-1ac2020938ed" path="/var/lib/kubelet/pods/10482c1b-76cb-4e93-afaf-1ac2020938ed/volumes" May 8 23:55:55.521334 kubelet[3324]: I0508 23:55:55.521298 3324 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4eae93c-c1d1-4fb5-94a5-790665ce2bea" path="/var/lib/kubelet/pods/b4eae93c-c1d1-4fb5-94a5-790665ce2bea/volumes" May 8 23:55:56.064618 sshd[4954]: Connection closed by 139.178.68.195 port 56898 May 8 23:55:56.066028 sshd-session[4952]: pam_unix(sshd:session): session closed for user core May 8 23:55:56.072919 systemd[1]: session-28.scope: Deactivated successfully. May 8 23:55:56.074192 systemd[1]: session-28.scope: Consumed 2.369s CPU time. May 8 23:55:56.075497 systemd[1]: sshd@27-172.31.31.246:22-139.178.68.195:56898.service: Deactivated successfully. May 8 23:55:56.082998 systemd-logind[1921]: Session 28 logged out. Waiting for processes to exit. May 8 23:55:56.086084 systemd-logind[1921]: Removed session 28. May 8 23:55:56.110824 systemd[1]: Started sshd@28-172.31.31.246:22-139.178.68.195:57606.service - OpenSSH per-connection server daemon (139.178.68.195:57606). May 8 23:55:56.313260 sshd[5113]: Accepted publickey for core from 139.178.68.195 port 57606 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:55:56.318694 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:56.330062 systemd-logind[1921]: New session 29 of user core. 
May 8 23:55:56.336516 systemd[1]: Started session-29.scope - Session 29 of User core. May 8 23:55:57.106335 ntpd[1913]: Deleting interface #12 lxc_health, fe80::cd9:e9ff:fe6d:74dc%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs May 8 23:55:57.106872 ntpd[1913]: 8 May 23:55:57 ntpd[1913]: Deleting interface #12 lxc_health, fe80::cd9:e9ff:fe6d:74dc%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs May 8 23:55:57.700184 kubelet[3324]: E0508 23:55:57.699951 3324 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 23:55:58.454100 sshd[5115]: Connection closed by 139.178.68.195 port 57606 May 8 23:55:58.455516 sshd-session[5113]: pam_unix(sshd:session): session closed for user core May 8 23:55:58.468991 kubelet[3324]: I0508 23:55:58.468922 3324 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4eae93c-c1d1-4fb5-94a5-790665ce2bea" containerName="cilium-agent" May 8 23:55:58.468991 kubelet[3324]: I0508 23:55:58.468978 3324 memory_manager.go:355] "RemoveStaleState removing state" podUID="10482c1b-76cb-4e93-afaf-1ac2020938ed" containerName="cilium-operator" May 8 23:55:58.473397 systemd[1]: sshd@28-172.31.31.246:22-139.178.68.195:57606.service: Deactivated successfully. May 8 23:55:58.480741 systemd[1]: session-29.scope: Deactivated successfully. May 8 23:55:58.482366 systemd[1]: session-29.scope: Consumed 1.899s CPU time. May 8 23:55:58.486343 systemd-logind[1921]: Session 29 logged out. Waiting for processes to exit. May 8 23:55:58.528101 systemd[1]: Started sshd@29-172.31.31.246:22-139.178.68.195:57610.service - OpenSSH per-connection server daemon (139.178.68.195:57610). May 8 23:55:58.535562 systemd-logind[1921]: Removed session 29. 
May 8 23:55:58.553850 systemd[1]: Created slice kubepods-burstable-poddee4a897_87c4_4a19_b8d8_8f690ab46d4d.slice - libcontainer container kubepods-burstable-poddee4a897_87c4_4a19_b8d8_8f690ab46d4d.slice. May 8 23:55:58.571232 kubelet[3324]: I0508 23:55:58.569703 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dee4a897-87c4-4a19-b8d8-8f690ab46d4d-bpf-maps\") pod \"cilium-chzg2\" (UID: \"dee4a897-87c4-4a19-b8d8-8f690ab46d4d\") " pod="kube-system/cilium-chzg2" May 8 23:55:58.571232 kubelet[3324]: I0508 23:55:58.569778 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dee4a897-87c4-4a19-b8d8-8f690ab46d4d-cilium-cgroup\") pod \"cilium-chzg2\" (UID: \"dee4a897-87c4-4a19-b8d8-8f690ab46d4d\") " pod="kube-system/cilium-chzg2" May 8 23:55:58.571232 kubelet[3324]: I0508 23:55:58.569818 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dee4a897-87c4-4a19-b8d8-8f690ab46d4d-cni-path\") pod \"cilium-chzg2\" (UID: \"dee4a897-87c4-4a19-b8d8-8f690ab46d4d\") " pod="kube-system/cilium-chzg2" May 8 23:55:58.571232 kubelet[3324]: I0508 23:55:58.569857 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dee4a897-87c4-4a19-b8d8-8f690ab46d4d-clustermesh-secrets\") pod \"cilium-chzg2\" (UID: \"dee4a897-87c4-4a19-b8d8-8f690ab46d4d\") " pod="kube-system/cilium-chzg2" May 8 23:55:58.571232 kubelet[3324]: I0508 23:55:58.569900 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dee4a897-87c4-4a19-b8d8-8f690ab46d4d-etc-cni-netd\") pod \"cilium-chzg2\" (UID: 
\"dee4a897-87c4-4a19-b8d8-8f690ab46d4d\") " pod="kube-system/cilium-chzg2" May 8 23:55:58.571232 kubelet[3324]: I0508 23:55:58.569942 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dee4a897-87c4-4a19-b8d8-8f690ab46d4d-hubble-tls\") pod \"cilium-chzg2\" (UID: \"dee4a897-87c4-4a19-b8d8-8f690ab46d4d\") " pod="kube-system/cilium-chzg2" May 8 23:55:58.571659 kubelet[3324]: I0508 23:55:58.569996 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dee4a897-87c4-4a19-b8d8-8f690ab46d4d-cilium-config-path\") pod \"cilium-chzg2\" (UID: \"dee4a897-87c4-4a19-b8d8-8f690ab46d4d\") " pod="kube-system/cilium-chzg2" May 8 23:55:58.571659 kubelet[3324]: I0508 23:55:58.570047 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dee4a897-87c4-4a19-b8d8-8f690ab46d4d-cilium-ipsec-secrets\") pod \"cilium-chzg2\" (UID: \"dee4a897-87c4-4a19-b8d8-8f690ab46d4d\") " pod="kube-system/cilium-chzg2" May 8 23:55:58.571659 kubelet[3324]: I0508 23:55:58.570108 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dee4a897-87c4-4a19-b8d8-8f690ab46d4d-cilium-run\") pod \"cilium-chzg2\" (UID: \"dee4a897-87c4-4a19-b8d8-8f690ab46d4d\") " pod="kube-system/cilium-chzg2" May 8 23:55:58.572037 kubelet[3324]: I0508 23:55:58.571948 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dee4a897-87c4-4a19-b8d8-8f690ab46d4d-lib-modules\") pod \"cilium-chzg2\" (UID: \"dee4a897-87c4-4a19-b8d8-8f690ab46d4d\") " pod="kube-system/cilium-chzg2" May 8 23:55:58.572280 kubelet[3324]: I0508 23:55:58.572202 
3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dee4a897-87c4-4a19-b8d8-8f690ab46d4d-host-proc-sys-net\") pod \"cilium-chzg2\" (UID: \"dee4a897-87c4-4a19-b8d8-8f690ab46d4d\") " pod="kube-system/cilium-chzg2" May 8 23:55:58.572546 kubelet[3324]: I0508 23:55:58.572509 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dee4a897-87c4-4a19-b8d8-8f690ab46d4d-host-proc-sys-kernel\") pod \"cilium-chzg2\" (UID: \"dee4a897-87c4-4a19-b8d8-8f690ab46d4d\") " pod="kube-system/cilium-chzg2" May 8 23:55:58.573549 kubelet[3324]: I0508 23:55:58.573288 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dee4a897-87c4-4a19-b8d8-8f690ab46d4d-hostproc\") pod \"cilium-chzg2\" (UID: \"dee4a897-87c4-4a19-b8d8-8f690ab46d4d\") " pod="kube-system/cilium-chzg2" May 8 23:55:58.573549 kubelet[3324]: I0508 23:55:58.573384 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dee4a897-87c4-4a19-b8d8-8f690ab46d4d-xtables-lock\") pod \"cilium-chzg2\" (UID: \"dee4a897-87c4-4a19-b8d8-8f690ab46d4d\") " pod="kube-system/cilium-chzg2" May 8 23:55:58.573549 kubelet[3324]: I0508 23:55:58.573488 3324 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk86r\" (UniqueName: \"kubernetes.io/projected/dee4a897-87c4-4a19-b8d8-8f690ab46d4d-kube-api-access-hk86r\") pod \"cilium-chzg2\" (UID: \"dee4a897-87c4-4a19-b8d8-8f690ab46d4d\") " pod="kube-system/cilium-chzg2" May 8 23:55:58.760001 sshd[5125]: Accepted publickey for core from 139.178.68.195 port 57610 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 
23:55:58.763665 sshd-session[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:58.773792 systemd-logind[1921]: New session 30 of user core. May 8 23:55:58.781449 systemd[1]: Started session-30.scope - Session 30 of User core. May 8 23:55:58.866772 containerd[1943]: time="2025-05-08T23:55:58.866619809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-chzg2,Uid:dee4a897-87c4-4a19-b8d8-8f690ab46d4d,Namespace:kube-system,Attempt:0,}" May 8 23:55:58.915119 sshd[5131]: Connection closed by 139.178.68.195 port 57610 May 8 23:55:58.912955 sshd-session[5125]: pam_unix(sshd:session): session closed for user core May 8 23:55:58.923454 systemd[1]: sshd@29-172.31.31.246:22-139.178.68.195:57610.service: Deactivated successfully. May 8 23:55:58.931274 systemd[1]: session-30.scope: Deactivated successfully. May 8 23:55:58.935385 systemd-logind[1921]: Session 30 logged out. Waiting for processes to exit. May 8 23:55:58.947491 containerd[1943]: time="2025-05-08T23:55:58.947331425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 23:55:58.948737 containerd[1943]: time="2025-05-08T23:55:58.947439389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 23:55:58.948737 containerd[1943]: time="2025-05-08T23:55:58.947480873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:55:58.948737 containerd[1943]: time="2025-05-08T23:55:58.947654093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 23:55:58.960820 systemd[1]: Started sshd@30-172.31.31.246:22-139.178.68.195:57626.service - OpenSSH per-connection server daemon (139.178.68.195:57626). 
May 8 23:55:58.963582 systemd-logind[1921]: Removed session 30. May 8 23:55:58.992441 systemd[1]: Started cri-containerd-44aee65b83388304e5756d9338bdebbfa15174fc4bcb54245e6aa92016ece78c.scope - libcontainer container 44aee65b83388304e5756d9338bdebbfa15174fc4bcb54245e6aa92016ece78c. May 8 23:55:59.052648 containerd[1943]: time="2025-05-08T23:55:59.051600098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-chzg2,Uid:dee4a897-87c4-4a19-b8d8-8f690ab46d4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"44aee65b83388304e5756d9338bdebbfa15174fc4bcb54245e6aa92016ece78c\"" May 8 23:55:59.061935 containerd[1943]: time="2025-05-08T23:55:59.061856294Z" level=info msg="CreateContainer within sandbox \"44aee65b83388304e5756d9338bdebbfa15174fc4bcb54245e6aa92016ece78c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 23:55:59.089827 containerd[1943]: time="2025-05-08T23:55:59.089601914Z" level=info msg="CreateContainer within sandbox \"44aee65b83388304e5756d9338bdebbfa15174fc4bcb54245e6aa92016ece78c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"559d08cd353a60b5cead8db0c3caadf3c8a01643b6a0631ea395e9a15cf32817\"" May 8 23:55:59.091522 containerd[1943]: time="2025-05-08T23:55:59.091316702Z" level=info msg="StartContainer for \"559d08cd353a60b5cead8db0c3caadf3c8a01643b6a0631ea395e9a15cf32817\"" May 8 23:55:59.141450 systemd[1]: Started cri-containerd-559d08cd353a60b5cead8db0c3caadf3c8a01643b6a0631ea395e9a15cf32817.scope - libcontainer container 559d08cd353a60b5cead8db0c3caadf3c8a01643b6a0631ea395e9a15cf32817. May 8 23:55:59.178703 sshd[5154]: Accepted publickey for core from 139.178.68.195 port 57626 ssh2: RSA SHA256:zF+nAIg7m4nEZ7vmprw09zI9gcWGW1m4QuHNdUfesN8 May 8 23:55:59.184853 sshd-session[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 8 23:55:59.200275 systemd-logind[1921]: New session 31 of user core. 
May 8 23:55:59.206810 systemd[1]: Started session-31.scope - Session 31 of User core.
May 8 23:55:59.219280 containerd[1943]: time="2025-05-08T23:55:59.219207050Z" level=info msg="StartContainer for \"559d08cd353a60b5cead8db0c3caadf3c8a01643b6a0631ea395e9a15cf32817\" returns successfully"
May 8 23:55:59.230791 systemd[1]: cri-containerd-559d08cd353a60b5cead8db0c3caadf3c8a01643b6a0631ea395e9a15cf32817.scope: Deactivated successfully.
May 8 23:55:59.293915 containerd[1943]: time="2025-05-08T23:55:59.293715963Z" level=info msg="shim disconnected" id=559d08cd353a60b5cead8db0c3caadf3c8a01643b6a0631ea395e9a15cf32817 namespace=k8s.io
May 8 23:55:59.293915 containerd[1943]: time="2025-05-08T23:55:59.293825175Z" level=warning msg="cleaning up after shim disconnected" id=559d08cd353a60b5cead8db0c3caadf3c8a01643b6a0631ea395e9a15cf32817 namespace=k8s.io
May 8 23:55:59.293915 containerd[1943]: time="2025-05-08T23:55:59.293846727Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:55:59.516455 kubelet[3324]: E0508 23:55:59.516233 3324 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-c5lgt" podUID="2a6df9ce-3e24-4558-9847-f75a50af3965"
May 8 23:56:00.013052 kubelet[3324]: I0508 23:56:00.012961 3324 setters.go:602] "Node became not ready" node="ip-172-31-31-246" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T23:56:00Z","lastTransitionTime":"2025-05-08T23:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 8 23:56:00.040381 containerd[1943]: time="2025-05-08T23:56:00.040299579Z" level=info msg="CreateContainer within sandbox \"44aee65b83388304e5756d9338bdebbfa15174fc4bcb54245e6aa92016ece78c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 23:56:00.075106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3756812205.mount: Deactivated successfully.
May 8 23:56:00.078108 containerd[1943]: time="2025-05-08T23:56:00.077814099Z" level=info msg="CreateContainer within sandbox \"44aee65b83388304e5756d9338bdebbfa15174fc4bcb54245e6aa92016ece78c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b29c525b5ed9f868c2976c84bebc304c60d743a5d16c018b106f60818ec1e5d4\""
May 8 23:56:00.079796 containerd[1943]: time="2025-05-08T23:56:00.079672575Z" level=info msg="StartContainer for \"b29c525b5ed9f868c2976c84bebc304c60d743a5d16c018b106f60818ec1e5d4\""
May 8 23:56:00.181297 systemd[1]: Started cri-containerd-b29c525b5ed9f868c2976c84bebc304c60d743a5d16c018b106f60818ec1e5d4.scope - libcontainer container b29c525b5ed9f868c2976c84bebc304c60d743a5d16c018b106f60818ec1e5d4.
May 8 23:56:00.296586 containerd[1943]: time="2025-05-08T23:56:00.296259712Z" level=info msg="StartContainer for \"b29c525b5ed9f868c2976c84bebc304c60d743a5d16c018b106f60818ec1e5d4\" returns successfully"
May 8 23:56:00.331703 systemd[1]: cri-containerd-b29c525b5ed9f868c2976c84bebc304c60d743a5d16c018b106f60818ec1e5d4.scope: Deactivated successfully.
May 8 23:56:00.376059 containerd[1943]: time="2025-05-08T23:56:00.375976024Z" level=info msg="shim disconnected" id=b29c525b5ed9f868c2976c84bebc304c60d743a5d16c018b106f60818ec1e5d4 namespace=k8s.io
May 8 23:56:00.376724 containerd[1943]: time="2025-05-08T23:56:00.376409332Z" level=warning msg="cleaning up after shim disconnected" id=b29c525b5ed9f868c2976c84bebc304c60d743a5d16c018b106f60818ec1e5d4 namespace=k8s.io
May 8 23:56:00.376724 containerd[1943]: time="2025-05-08T23:56:00.376451800Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:56:00.696690 systemd[1]: run-containerd-runc-k8s.io-b29c525b5ed9f868c2976c84bebc304c60d743a5d16c018b106f60818ec1e5d4-runc.i6Gz3y.mount: Deactivated successfully.
May 8 23:56:00.697386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b29c525b5ed9f868c2976c84bebc304c60d743a5d16c018b106f60818ec1e5d4-rootfs.mount: Deactivated successfully.
May 8 23:56:01.046227 containerd[1943]: time="2025-05-08T23:56:01.045666664Z" level=info msg="CreateContainer within sandbox \"44aee65b83388304e5756d9338bdebbfa15174fc4bcb54245e6aa92016ece78c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 23:56:01.079200 containerd[1943]: time="2025-05-08T23:56:01.078531988Z" level=info msg="CreateContainer within sandbox \"44aee65b83388304e5756d9338bdebbfa15174fc4bcb54245e6aa92016ece78c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3670b385c9459a2f7cd6a77ccbee7615c450c1a53e2065025ba79c3894893893\""
May 8 23:56:01.081749 containerd[1943]: time="2025-05-08T23:56:01.081653500Z" level=info msg="StartContainer for \"3670b385c9459a2f7cd6a77ccbee7615c450c1a53e2065025ba79c3894893893\""
May 8 23:56:01.151474 systemd[1]: Started cri-containerd-3670b385c9459a2f7cd6a77ccbee7615c450c1a53e2065025ba79c3894893893.scope - libcontainer container 3670b385c9459a2f7cd6a77ccbee7615c450c1a53e2065025ba79c3894893893.
May 8 23:56:01.216903 containerd[1943]: time="2025-05-08T23:56:01.216727708Z" level=info msg="StartContainer for \"3670b385c9459a2f7cd6a77ccbee7615c450c1a53e2065025ba79c3894893893\" returns successfully"
May 8 23:56:01.224754 systemd[1]: cri-containerd-3670b385c9459a2f7cd6a77ccbee7615c450c1a53e2065025ba79c3894893893.scope: Deactivated successfully.
May 8 23:56:01.272711 containerd[1943]: time="2025-05-08T23:56:01.272437505Z" level=info msg="shim disconnected" id=3670b385c9459a2f7cd6a77ccbee7615c450c1a53e2065025ba79c3894893893 namespace=k8s.io
May 8 23:56:01.272711 containerd[1943]: time="2025-05-08T23:56:01.272517017Z" level=warning msg="cleaning up after shim disconnected" id=3670b385c9459a2f7cd6a77ccbee7615c450c1a53e2065025ba79c3894893893 namespace=k8s.io
May 8 23:56:01.272711 containerd[1943]: time="2025-05-08T23:56:01.272540201Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:56:01.517898 kubelet[3324]: E0508 23:56:01.516004 3324 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-c5lgt" podUID="2a6df9ce-3e24-4558-9847-f75a50af3965"
May 8 23:56:01.697385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3670b385c9459a2f7cd6a77ccbee7615c450c1a53e2065025ba79c3894893893-rootfs.mount: Deactivated successfully.
May 8 23:56:02.053325 containerd[1943]: time="2025-05-08T23:56:02.052989701Z" level=info msg="CreateContainer within sandbox \"44aee65b83388304e5756d9338bdebbfa15174fc4bcb54245e6aa92016ece78c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 23:56:02.094928 containerd[1943]: time="2025-05-08T23:56:02.094819289Z" level=info msg="CreateContainer within sandbox \"44aee65b83388304e5756d9338bdebbfa15174fc4bcb54245e6aa92016ece78c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e61bce06cee2b7907bd0e5a3c79da8eb5f69cbd3f605f9bf44bbc1724d6c760c\""
May 8 23:56:02.098043 containerd[1943]: time="2025-05-08T23:56:02.097898153Z" level=info msg="StartContainer for \"e61bce06cee2b7907bd0e5a3c79da8eb5f69cbd3f605f9bf44bbc1724d6c760c\""
May 8 23:56:02.168499 systemd[1]: Started cri-containerd-e61bce06cee2b7907bd0e5a3c79da8eb5f69cbd3f605f9bf44bbc1724d6c760c.scope - libcontainer container e61bce06cee2b7907bd0e5a3c79da8eb5f69cbd3f605f9bf44bbc1724d6c760c.
May 8 23:56:02.218970 systemd[1]: cri-containerd-e61bce06cee2b7907bd0e5a3c79da8eb5f69cbd3f605f9bf44bbc1724d6c760c.scope: Deactivated successfully.
May 8 23:56:02.222994 containerd[1943]: time="2025-05-08T23:56:02.222684113Z" level=info msg="StartContainer for \"e61bce06cee2b7907bd0e5a3c79da8eb5f69cbd3f605f9bf44bbc1724d6c760c\" returns successfully"
May 8 23:56:02.270510 containerd[1943]: time="2025-05-08T23:56:02.270352914Z" level=info msg="shim disconnected" id=e61bce06cee2b7907bd0e5a3c79da8eb5f69cbd3f605f9bf44bbc1724d6c760c namespace=k8s.io
May 8 23:56:02.270510 containerd[1943]: time="2025-05-08T23:56:02.270427278Z" level=warning msg="cleaning up after shim disconnected" id=e61bce06cee2b7907bd0e5a3c79da8eb5f69cbd3f605f9bf44bbc1724d6c760c namespace=k8s.io
May 8 23:56:02.270510 containerd[1943]: time="2025-05-08T23:56:02.270446178Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:56:02.697559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e61bce06cee2b7907bd0e5a3c79da8eb5f69cbd3f605f9bf44bbc1724d6c760c-rootfs.mount: Deactivated successfully.
May 8 23:56:02.701493 kubelet[3324]: E0508 23:56:02.701415 3324 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 23:56:03.063051 containerd[1943]: time="2025-05-08T23:56:03.062874318Z" level=info msg="CreateContainer within sandbox \"44aee65b83388304e5756d9338bdebbfa15174fc4bcb54245e6aa92016ece78c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 23:56:03.103216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2490675176.mount: Deactivated successfully.
May 8 23:56:03.107001 containerd[1943]: time="2025-05-08T23:56:03.106748610Z" level=info msg="CreateContainer within sandbox \"44aee65b83388304e5756d9338bdebbfa15174fc4bcb54245e6aa92016ece78c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ee8ca65e1fb1bc1157b5f5c0658ab8ee4434479d50aa411cee6acf80e5b9b660\""
May 8 23:56:03.108489 containerd[1943]: time="2025-05-08T23:56:03.107880126Z" level=info msg="StartContainer for \"ee8ca65e1fb1bc1157b5f5c0658ab8ee4434479d50aa411cee6acf80e5b9b660\""
May 8 23:56:03.170456 systemd[1]: Started cri-containerd-ee8ca65e1fb1bc1157b5f5c0658ab8ee4434479d50aa411cee6acf80e5b9b660.scope - libcontainer container ee8ca65e1fb1bc1157b5f5c0658ab8ee4434479d50aa411cee6acf80e5b9b660.
May 8 23:56:03.226890 containerd[1943]: time="2025-05-08T23:56:03.226719642Z" level=info msg="StartContainer for \"ee8ca65e1fb1bc1157b5f5c0658ab8ee4434479d50aa411cee6acf80e5b9b660\" returns successfully"
May 8 23:56:03.517515 kubelet[3324]: E0508 23:56:03.517432 3324 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-c5lgt" podUID="2a6df9ce-3e24-4558-9847-f75a50af3965"
May 8 23:56:04.039262 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 8 23:56:05.517183 kubelet[3324]: E0508 23:56:05.516017 3324 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-c5lgt" podUID="2a6df9ce-3e24-4558-9847-f75a50af3965"
May 8 23:56:07.419751 containerd[1943]: time="2025-05-08T23:56:07.419653751Z" level=info msg="StopPodSandbox for \"8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129\""
May 8 23:56:07.420317 containerd[1943]: time="2025-05-08T23:56:07.419879027Z" level=info msg="TearDown network for sandbox \"8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129\" successfully"
May 8 23:56:07.420317 containerd[1943]: time="2025-05-08T23:56:07.419953295Z" level=info msg="StopPodSandbox for \"8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129\" returns successfully"
May 8 23:56:07.421500 containerd[1943]: time="2025-05-08T23:56:07.421414283Z" level=info msg="RemovePodSandbox for \"8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129\""
May 8 23:56:07.421811 containerd[1943]: time="2025-05-08T23:56:07.421501619Z" level=info msg="Forcibly stopping sandbox \"8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129\""
May 8 23:56:07.421811 containerd[1943]: time="2025-05-08T23:56:07.421649039Z" level=info msg="TearDown network for sandbox \"8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129\" successfully"
May 8 23:56:07.428661 containerd[1943]: time="2025-05-08T23:56:07.428588471Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 23:56:07.428851 containerd[1943]: time="2025-05-08T23:56:07.428686595Z" level=info msg="RemovePodSandbox \"8478b9d8a6b83d4a7ddd19ea5734e596ee70cfd6075865c4e6c461aa37743129\" returns successfully"
May 8 23:56:07.430702 containerd[1943]: time="2025-05-08T23:56:07.429746915Z" level=info msg="StopPodSandbox for \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\""
May 8 23:56:07.430702 containerd[1943]: time="2025-05-08T23:56:07.429888515Z" level=info msg="TearDown network for sandbox \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" successfully"
May 8 23:56:07.430702 containerd[1943]: time="2025-05-08T23:56:07.429931007Z" level=info msg="StopPodSandbox for \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" returns successfully"
May 8 23:56:07.431175 containerd[1943]: time="2025-05-08T23:56:07.431065139Z" level=info msg="RemovePodSandbox for \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\""
May 8 23:56:07.431175 containerd[1943]: time="2025-05-08T23:56:07.431167007Z" level=info msg="Forcibly stopping sandbox \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\""
May 8 23:56:07.431371 containerd[1943]: time="2025-05-08T23:56:07.431325875Z" level=info msg="TearDown network for sandbox \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" successfully"
May 8 23:56:07.438610 containerd[1943]: time="2025-05-08T23:56:07.438340559Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 8 23:56:07.438610 containerd[1943]: time="2025-05-08T23:56:07.438438971Z" level=info msg="RemovePodSandbox \"f489733346503a37ed85b271ae92ed0f2fc60608b499276f5b82ab6a1295239b\" returns successfully"
May 8 23:56:07.518003 kubelet[3324]: E0508 23:56:07.517886 3324 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-c5lgt" podUID="2a6df9ce-3e24-4558-9847-f75a50af3965"
May 8 23:56:08.423895 systemd-networkd[1862]: lxc_health: Link UP
May 8 23:56:08.434480 (udev-worker)[5978]: Network interface NamePolicy= disabled on kernel command line.
May 8 23:56:08.435064 (udev-worker)[5979]: Network interface NamePolicy= disabled on kernel command line.
May 8 23:56:08.439320 systemd-networkd[1862]: lxc_health: Gained carrier
May 8 23:56:08.912421 kubelet[3324]: I0508 23:56:08.911528 3324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-chzg2" podStartSLOduration=10.911509179 podStartE2EDuration="10.911509179s" podCreationTimestamp="2025-05-08 23:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 23:56:04.109865551 +0000 UTC m=+116.924768006" watchObservedRunningTime="2025-05-08 23:56:08.911509179 +0000 UTC m=+121.726411610"
May 8 23:56:10.312414 systemd-networkd[1862]: lxc_health: Gained IPv6LL
May 8 23:56:10.438423 systemd[1]: run-containerd-runc-k8s.io-ee8ca65e1fb1bc1157b5f5c0658ab8ee4434479d50aa411cee6acf80e5b9b660-runc.jN67Th.mount: Deactivated successfully.
May 8 23:56:12.895728 kubelet[3324]: E0508 23:56:12.895305 3324 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60762->127.0.0.1:37313: write tcp 127.0.0.1:60762->127.0.0.1:37313: write: connection reset by peer
May 8 23:56:13.106380 ntpd[1913]: Listen normally on 15 lxc_health [fe80::a0cb:b4ff:fe34:f8d9%14]:123
May 8 23:56:13.107435 ntpd[1913]: 8 May 23:56:13 ntpd[1913]: Listen normally on 15 lxc_health [fe80::a0cb:b4ff:fe34:f8d9%14]:123
May 8 23:56:15.081387 systemd[1]: run-containerd-runc-k8s.io-ee8ca65e1fb1bc1157b5f5c0658ab8ee4434479d50aa411cee6acf80e5b9b660-runc.7RE9y7.mount: Deactivated successfully.
May 8 23:56:15.195432 kubelet[3324]: E0508 23:56:15.195365 3324 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:60776->127.0.0.1:37313: read tcp 127.0.0.1:60776->127.0.0.1:37313: read: connection reset by peer
May 8 23:56:15.223441 sshd[5213]: Connection closed by 139.178.68.195 port 57626
May 8 23:56:15.224552 sshd-session[5154]: pam_unix(sshd:session): session closed for user core
May 8 23:56:15.231695 systemd[1]: sshd@30-172.31.31.246:22-139.178.68.195:57626.service: Deactivated successfully.
May 8 23:56:15.239685 systemd[1]: session-31.scope: Deactivated successfully.
May 8 23:56:15.242822 systemd-logind[1921]: Session 31 logged out. Waiting for processes to exit.
May 8 23:56:15.248747 systemd-logind[1921]: Removed session 31.
May 8 23:56:28.538395 systemd[1]: cri-containerd-09271303eae6883860c28cb6bc81467cd729526ddf455a4252dd3f3589097868.scope: Deactivated successfully.
May 8 23:56:28.539566 systemd[1]: cri-containerd-09271303eae6883860c28cb6bc81467cd729526ddf455a4252dd3f3589097868.scope: Consumed 5.032s CPU time, 18.0M memory peak, 0B memory swap peak.
May 8 23:56:28.580831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09271303eae6883860c28cb6bc81467cd729526ddf455a4252dd3f3589097868-rootfs.mount: Deactivated successfully.
May 8 23:56:28.588594 containerd[1943]: time="2025-05-08T23:56:28.588495968Z" level=info msg="shim disconnected" id=09271303eae6883860c28cb6bc81467cd729526ddf455a4252dd3f3589097868 namespace=k8s.io
May 8 23:56:28.588594 containerd[1943]: time="2025-05-08T23:56:28.588578720Z" level=warning msg="cleaning up after shim disconnected" id=09271303eae6883860c28cb6bc81467cd729526ddf455a4252dd3f3589097868 namespace=k8s.io
May 8 23:56:28.589859 containerd[1943]: time="2025-05-08T23:56:28.588600380Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:56:28.608870 containerd[1943]: time="2025-05-08T23:56:28.608759360Z" level=warning msg="cleanup warnings time=\"2025-05-08T23:56:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 8 23:56:29.142973 kubelet[3324]: I0508 23:56:29.142788 3324 scope.go:117] "RemoveContainer" containerID="09271303eae6883860c28cb6bc81467cd729526ddf455a4252dd3f3589097868"
May 8 23:56:29.146762 containerd[1943]: time="2025-05-08T23:56:29.146661487Z" level=info msg="CreateContainer within sandbox \"c728bed030f3bc1e844c96643ea244bea1fa86dc115d9bace5782bde7be2f79f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 8 23:56:29.174884 containerd[1943]: time="2025-05-08T23:56:29.174693535Z" level=info msg="CreateContainer within sandbox \"c728bed030f3bc1e844c96643ea244bea1fa86dc115d9bace5782bde7be2f79f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e8adfe81826d1cc71579a0f82e6edc3690f08d040c1febc86851a2a06194c091\""
May 8 23:56:29.175784 containerd[1943]: time="2025-05-08T23:56:29.175691131Z" level=info msg="StartContainer for \"e8adfe81826d1cc71579a0f82e6edc3690f08d040c1febc86851a2a06194c091\""
May 8 23:56:29.231449 systemd[1]: Started cri-containerd-e8adfe81826d1cc71579a0f82e6edc3690f08d040c1febc86851a2a06194c091.scope - libcontainer container e8adfe81826d1cc71579a0f82e6edc3690f08d040c1febc86851a2a06194c091.
May 8 23:56:29.310267 containerd[1943]: time="2025-05-08T23:56:29.310092908Z" level=info msg="StartContainer for \"e8adfe81826d1cc71579a0f82e6edc3690f08d040c1febc86851a2a06194c091\" returns successfully"
May 8 23:56:30.275890 kubelet[3324]: E0508 23:56:30.275306 3324 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-246?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 8 23:56:34.616117 systemd[1]: cri-containerd-61229df5358bb672314e805ee1ae519e6c46ae91bf5a50e8fb74c568e43aa657.scope: Deactivated successfully.
May 8 23:56:34.617261 systemd[1]: cri-containerd-61229df5358bb672314e805ee1ae519e6c46ae91bf5a50e8fb74c568e43aa657.scope: Consumed 5.953s CPU time, 15.6M memory peak, 0B memory swap peak.
May 8 23:56:34.659316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61229df5358bb672314e805ee1ae519e6c46ae91bf5a50e8fb74c568e43aa657-rootfs.mount: Deactivated successfully.
May 8 23:56:34.672025 containerd[1943]: time="2025-05-08T23:56:34.671718903Z" level=info msg="shim disconnected" id=61229df5358bb672314e805ee1ae519e6c46ae91bf5a50e8fb74c568e43aa657 namespace=k8s.io
May 8 23:56:34.672025 containerd[1943]: time="2025-05-08T23:56:34.671792979Z" level=warning msg="cleaning up after shim disconnected" id=61229df5358bb672314e805ee1ae519e6c46ae91bf5a50e8fb74c568e43aa657 namespace=k8s.io
May 8 23:56:34.672025 containerd[1943]: time="2025-05-08T23:56:34.671813307Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 8 23:56:35.165186 kubelet[3324]: I0508 23:56:35.164858 3324 scope.go:117] "RemoveContainer" containerID="61229df5358bb672314e805ee1ae519e6c46ae91bf5a50e8fb74c568e43aa657"
May 8 23:56:35.167845 containerd[1943]: time="2025-05-08T23:56:35.167581669Z" level=info msg="CreateContainer within sandbox \"ad17dfe7ee9453fc961421930a169ac29534889c16425d885542d0b022b069d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 8 23:56:35.196703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3156656759.mount: Deactivated successfully.
May 8 23:56:35.200537 containerd[1943]: time="2025-05-08T23:56:35.200462305Z" level=info msg="CreateContainer within sandbox \"ad17dfe7ee9453fc961421930a169ac29534889c16425d885542d0b022b069d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2cd7383f7cfdad26c668d463d70e7724c02a1a76011a8f1351bf03c9860394dd\""
May 8 23:56:35.201463 containerd[1943]: time="2025-05-08T23:56:35.201215905Z" level=info msg="StartContainer for \"2cd7383f7cfdad26c668d463d70e7724c02a1a76011a8f1351bf03c9860394dd\""
May 8 23:56:35.255478 systemd[1]: Started cri-containerd-2cd7383f7cfdad26c668d463d70e7724c02a1a76011a8f1351bf03c9860394dd.scope - libcontainer container 2cd7383f7cfdad26c668d463d70e7724c02a1a76011a8f1351bf03c9860394dd.
May 8 23:56:35.327830 containerd[1943]: time="2025-05-08T23:56:35.327720914Z" level=info msg="StartContainer for \"2cd7383f7cfdad26c668d463d70e7724c02a1a76011a8f1351bf03c9860394dd\" returns successfully"