May 9 23:57:48.271046 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
May 9 23:57:48.271126 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 9 22:39:45 -00 2025
May 9 23:57:48.275594 kernel: KASLR disabled due to lack of seed
May 9 23:57:48.275627 kernel: efi: EFI v2.7 by EDK II
May 9 23:57:48.275646 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b000a98 MEMRESERVE=0x7852ee18
May 9 23:57:48.275664 kernel: ACPI: Early table checksum verification disabled
May 9 23:57:48.275684 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
May 9 23:57:48.275701 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
May 9 23:57:48.275719 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
May 9 23:57:48.275735 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
May 9 23:57:48.275767 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
May 9 23:57:48.275785 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
May 9 23:57:48.275801 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
May 9 23:57:48.275819 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
May 9 23:57:48.275839 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
May 9 23:57:48.275862 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
May 9 23:57:48.275881 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
May 9 23:57:48.275898 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
May 9 23:57:48.275916 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
May 9 23:57:48.275934 kernel: printk: bootconsole [uart0] enabled
May 9 23:57:48.275951 kernel: NUMA: Failed to initialise from firmware
May 9 23:57:48.275969 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
May 9 23:57:48.275987 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
May 9 23:57:48.276007 kernel: Zone ranges:
May 9 23:57:48.276024 kernel:   DMA    [mem 0x0000000040000000-0x00000000ffffffff]
May 9 23:57:48.276041 kernel:   DMA32  empty
May 9 23:57:48.276066 kernel:   Normal [mem 0x0000000100000000-0x00000004b5ffffff]
May 9 23:57:48.276084 kernel: Movable zone start for each node
May 9 23:57:48.276101 kernel: Early memory node ranges
May 9 23:57:48.276119 kernel:   node 0: [mem 0x0000000040000000-0x000000007862ffff]
May 9 23:57:48.276136 kernel:   node 0: [mem 0x0000000078630000-0x000000007863ffff]
May 9 23:57:48.277199 kernel:   node 0: [mem 0x0000000078640000-0x00000000786effff]
May 9 23:57:48.277273 kernel:   node 0: [mem 0x00000000786f0000-0x000000007872ffff]
May 9 23:57:48.277292 kernel:   node 0: [mem 0x0000000078730000-0x000000007bbfffff]
May 9 23:57:48.277311 kernel:   node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
May 9 23:57:48.277329 kernel:   node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
May 9 23:57:48.277347 kernel:   node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
May 9 23:57:48.277365 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
May 9 23:57:48.277396 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
May 9 23:57:48.277415 kernel: psci: probing for conduit method from ACPI.
May 9 23:57:48.277442 kernel: psci: PSCIv1.0 detected in firmware.
May 9 23:57:48.277464 kernel: psci: Using standard PSCI v0.2 function IDs
May 9 23:57:48.277483 kernel: psci: Trusted OS migration not required
May 9 23:57:48.277506 kernel: psci: SMC Calling Convention v1.1
May 9 23:57:48.277525 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 9 23:57:48.277543 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 9 23:57:48.277563 kernel: pcpu-alloc: [0] 0 [0] 1
May 9 23:57:48.277581 kernel: Detected PIPT I-cache on CPU0
May 9 23:57:48.277599 kernel: CPU features: detected: GIC system register CPU interface
May 9 23:57:48.277617 kernel: CPU features: detected: Spectre-v2
May 9 23:57:48.277635 kernel: CPU features: detected: Spectre-v3a
May 9 23:57:48.277653 kernel: CPU features: detected: Spectre-BHB
May 9 23:57:48.277672 kernel: CPU features: detected: ARM erratum 1742098
May 9 23:57:48.277690 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
May 9 23:57:48.277714 kernel: alternatives: applying boot alternatives
May 9 23:57:48.277735 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 9 23:57:48.277755 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 9 23:57:48.277773 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 9 23:57:48.277792 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 9 23:57:48.277810 kernel: Fallback order for Node 0: 0
May 9 23:57:48.277828 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
May 9 23:57:48.277846 kernel: Policy zone: Normal
May 9 23:57:48.277864 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 9 23:57:48.277882 kernel: software IO TLB: area num 2.
May 9 23:57:48.277900 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
May 9 23:57:48.277927 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved)
May 9 23:57:48.277946 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 9 23:57:48.277964 kernel: rcu: Preemptible hierarchical RCU implementation.
May 9 23:57:48.277983 kernel: rcu: RCU event tracing is enabled.
May 9 23:57:48.278002 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 9 23:57:48.278021 kernel: Trampoline variant of Tasks RCU enabled.
May 9 23:57:48.278040 kernel: Tracing variant of Tasks RCU enabled.
May 9 23:57:48.278058 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 9 23:57:48.278076 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 9 23:57:48.278094 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 9 23:57:48.278112 kernel: GICv3: 96 SPIs implemented
May 9 23:57:48.278136 kernel: GICv3: 0 Extended SPIs implemented
May 9 23:57:48.279280 kernel: Root IRQ handler: gic_handle_irq
May 9 23:57:48.279315 kernel: GICv3: GICv3 features: 16 PPIs
May 9 23:57:48.279335 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
May 9 23:57:48.279355 kernel: ITS [mem 0x10080000-0x1009ffff]
May 9 23:57:48.279375 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
May 9 23:57:48.279396 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
May 9 23:57:48.279414 kernel: GICv3: using LPI property table @0x00000004000d0000
May 9 23:57:48.279434 kernel: ITS: Using hypervisor restricted LPI range [128]
May 9 23:57:48.279456 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
May 9 23:57:48.279475 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 9 23:57:48.279496 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
May 9 23:57:48.279532 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
May 9 23:57:48.279553 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
May 9 23:57:48.279573 kernel: Console: colour dummy device 80x25
May 9 23:57:48.279593 kernel: printk: console [tty1] enabled
May 9 23:57:48.279611 kernel: ACPI: Core revision 20230628
May 9 23:57:48.279634 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
May 9 23:57:48.279652 kernel: pid_max: default: 32768 minimum: 301
May 9 23:57:48.279671 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 9 23:57:48.279690 kernel: landlock: Up and running.
May 9 23:57:48.279714 kernel: SELinux:  Initializing.
May 9 23:57:48.279734 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:57:48.279754 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 9 23:57:48.279774 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 23:57:48.279795 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 9 23:57:48.279816 kernel: rcu: Hierarchical SRCU implementation.
May 9 23:57:48.279835 kernel: rcu: Max phase no-delay instances is 400.
May 9 23:57:48.279855 kernel: Platform MSI: ITS@0x10080000 domain created
May 9 23:57:48.279873 kernel: PCI/MSI: ITS@0x10080000 domain created
May 9 23:57:48.279898 kernel: Remapping and enabling EFI services.
May 9 23:57:48.279917 kernel: smp: Bringing up secondary CPUs ...
May 9 23:57:48.279934 kernel: Detected PIPT I-cache on CPU1
May 9 23:57:48.279953 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
May 9 23:57:48.279971 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
May 9 23:57:48.279990 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
May 9 23:57:48.280008 kernel: smp: Brought up 1 node, 2 CPUs
May 9 23:57:48.280026 kernel: SMP: Total of 2 processors activated.
May 9 23:57:48.280044 kernel: CPU features: detected: 32-bit EL0 Support
May 9 23:57:48.280067 kernel: CPU features: detected: 32-bit EL1 Support
May 9 23:57:48.280087 kernel: CPU features: detected: CRC32 instructions
May 9 23:57:48.280106 kernel: CPU: All CPU(s) started at EL1
May 9 23:57:48.280138 kernel: alternatives: applying system-wide alternatives
May 9 23:57:48.280213 kernel: devtmpfs: initialized
May 9 23:57:48.280237 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 9 23:57:48.280256 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 9 23:57:48.280276 kernel: pinctrl core: initialized pinctrl subsystem
May 9 23:57:48.280295 kernel: SMBIOS 3.0.0 present.
May 9 23:57:48.280315 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
May 9 23:57:48.280347 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 9 23:57:48.280368 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 9 23:57:48.280387 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 9 23:57:48.280407 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 9 23:57:48.280426 kernel: audit: initializing netlink subsys (disabled)
May 9 23:57:48.280446 kernel: audit: type=2000 audit(0.291:1): state=initialized audit_enabled=0 res=1
May 9 23:57:48.280466 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 9 23:57:48.280492 kernel: cpuidle: using governor menu
May 9 23:57:48.280511 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 9 23:57:48.280530 kernel: ASID allocator initialised with 65536 entries
May 9 23:57:48.280549 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 9 23:57:48.280578 kernel: Serial: AMBA PL011 UART driver
May 9 23:57:48.280598 kernel: Modules: 17488 pages in range for non-PLT usage
May 9 23:57:48.280616 kernel: Modules: 509008 pages in range for PLT usage
May 9 23:57:48.280635 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 9 23:57:48.280654 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 9 23:57:48.280681 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 9 23:57:48.280700 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 9 23:57:48.280719 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 9 23:57:48.280739 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 9 23:57:48.280758 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 9 23:57:48.280777 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 9 23:57:48.280797 kernel: ACPI: Added _OSI(Module Device)
May 9 23:57:48.280817 kernel: ACPI: Added _OSI(Processor Device)
May 9 23:57:48.280836 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 9 23:57:48.280862 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 9 23:57:48.280882 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 9 23:57:48.280907 kernel: ACPI: Interpreter enabled
May 9 23:57:48.280932 kernel: ACPI: Using GIC for interrupt routing
May 9 23:57:48.280951 kernel: ACPI: MCFG table detected, 1 entries
May 9 23:57:48.280970 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
May 9 23:57:48.284452 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 9 23:57:48.284733 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 9 23:57:48.284990 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 9 23:57:48.285300 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
May 9 23:57:48.285545 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
May 9 23:57:48.285578 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
May 9 23:57:48.285598 kernel: acpiphp: Slot [1] registered
May 9 23:57:48.285617 kernel: acpiphp: Slot [2] registered
May 9 23:57:48.285638 kernel: acpiphp: Slot [3] registered
May 9 23:57:48.285658 kernel: acpiphp: Slot [4] registered
May 9 23:57:48.285691 kernel: acpiphp: Slot [5] registered
May 9 23:57:48.285712 kernel: acpiphp: Slot [6] registered
May 9 23:57:48.285731 kernel: acpiphp: Slot [7] registered
May 9 23:57:48.285751 kernel: acpiphp: Slot [8] registered
May 9 23:57:48.285770 kernel: acpiphp: Slot [9] registered
May 9 23:57:48.285790 kernel: acpiphp: Slot [10] registered
May 9 23:57:48.285809 kernel: acpiphp: Slot [11] registered
May 9 23:57:48.285828 kernel: acpiphp: Slot [12] registered
May 9 23:57:48.285848 kernel: acpiphp: Slot [13] registered
May 9 23:57:48.285872 kernel: acpiphp: Slot [14] registered
May 9 23:57:48.285892 kernel: acpiphp: Slot [15] registered
May 9 23:57:48.285911 kernel: acpiphp: Slot [16] registered
May 9 23:57:48.285930 kernel: acpiphp: Slot [17] registered
May 9 23:57:48.285950 kernel: acpiphp: Slot [18] registered
May 9 23:57:48.285969 kernel: acpiphp: Slot [19] registered
May 9 23:57:48.285988 kernel: acpiphp: Slot [20] registered
May 9 23:57:48.286007 kernel: acpiphp: Slot [21] registered
May 9 23:57:48.286027 kernel: acpiphp: Slot [22] registered
May 9 23:57:48.286046 kernel: acpiphp: Slot [23] registered
May 9 23:57:48.286071 kernel: acpiphp: Slot [24] registered
May 9 23:57:48.286090 kernel: acpiphp: Slot [25] registered
May 9 23:57:48.286110 kernel: acpiphp: Slot [26] registered
May 9 23:57:48.286129 kernel: acpiphp: Slot [27] registered
May 9 23:57:48.288210 kernel: acpiphp: Slot [28] registered
May 9 23:57:48.288263 kernel: acpiphp: Slot [29] registered
May 9 23:57:48.288283 kernel: acpiphp: Slot [30] registered
May 9 23:57:48.288304 kernel: acpiphp: Slot [31] registered
May 9 23:57:48.288324 kernel: PCI host bridge to bus 0000:00
May 9 23:57:48.288656 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
May 9 23:57:48.288886 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 9 23:57:48.289104 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
May 9 23:57:48.289413 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
May 9 23:57:48.289695 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
May 9 23:57:48.289977 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
May 9 23:57:48.290749 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
May 9 23:57:48.291045 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
May 9 23:57:48.292815 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
May 9 23:57:48.293068 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
May 9 23:57:48.297509 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
May 9 23:57:48.297781 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
May 9 23:57:48.298027 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
May 9 23:57:48.300078 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
May 9 23:57:48.300426 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
May 9 23:57:48.300658 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
May 9 23:57:48.300896 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
May 9 23:57:48.301126 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
May 9 23:57:48.301422 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
May 9 23:57:48.301674 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
May 9 23:57:48.301911 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
May 9 23:57:48.302118 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 9 23:57:48.305454 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
May 9 23:57:48.305504 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 9 23:57:48.305525 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 9 23:57:48.305545 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 9 23:57:48.305564 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 9 23:57:48.305584 kernel: iommu: Default domain type: Translated
May 9 23:57:48.305618 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 9 23:57:48.305639 kernel: efivars: Registered efivars operations
May 9 23:57:48.305658 kernel: vgaarb: loaded
May 9 23:57:48.305678 kernel: clocksource: Switched to clocksource arch_sys_counter
May 9 23:57:48.305699 kernel: VFS: Disk quotas dquot_6.6.0
May 9 23:57:48.305718 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 9 23:57:48.305737 kernel: pnp: PnP ACPI init
May 9 23:57:48.305993 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
May 9 23:57:48.306029 kernel: pnp: PnP ACPI: found 1 devices
May 9 23:57:48.306059 kernel: NET: Registered PF_INET protocol family
May 9 23:57:48.306079 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 9 23:57:48.306114 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 9 23:57:48.306209 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 9 23:57:48.306231 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 9 23:57:48.306251 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 9 23:57:48.306270 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 9 23:57:48.306289 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:57:48.306309 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 9 23:57:48.306338 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 9 23:57:48.306358 kernel: PCI: CLS 0 bytes, default 64
May 9 23:57:48.306377 kernel: kvm [1]: HYP mode not available
May 9 23:57:48.306395 kernel: Initialise system trusted keyrings
May 9 23:57:48.306415 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 9 23:57:48.306434 kernel: Key type asymmetric registered
May 9 23:57:48.306452 kernel: Asymmetric key parser 'x509' registered
May 9 23:57:48.306471 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 9 23:57:48.306490 kernel: io scheduler mq-deadline registered
May 9 23:57:48.306514 kernel: io scheduler kyber registered
May 9 23:57:48.306534 kernel: io scheduler bfq registered
May 9 23:57:48.306835 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
May 9 23:57:48.306870 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 9 23:57:48.306890 kernel: ACPI: button: Power Button [PWRB]
May 9 23:57:48.306910 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
May 9 23:57:48.306929 kernel: ACPI: button: Sleep Button [SLPB]
May 9 23:57:48.306948 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 9 23:57:48.306979 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 9 23:57:48.309525 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
May 9 23:57:48.309570 kernel: printk: console [ttyS0] disabled
May 9 23:57:48.309591 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
May 9 23:57:48.309611 kernel: printk: console [ttyS0] enabled
May 9 23:57:48.309630 kernel: printk: bootconsole [uart0] disabled
May 9 23:57:48.309649 kernel: thunder_xcv, ver 1.0
May 9 23:57:48.309668 kernel: thunder_bgx, ver 1.0
May 9 23:57:48.309688 kernel: nicpf, ver 1.0
May 9 23:57:48.309721 kernel: nicvf, ver 1.0
May 9 23:57:48.309989 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 9 23:57:48.313615 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-09T23:57:47 UTC (1746835067)
May 9 23:57:48.313665 kernel: hid: raw HID events driver (C) Jiri Kosina
May 9 23:57:48.313688 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
May 9 23:57:48.313708 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 9 23:57:48.313727 kernel: watchdog: Hard watchdog permanently disabled
May 9 23:57:48.313747 kernel: NET: Registered PF_INET6 protocol family
May 9 23:57:48.313780 kernel: Segment Routing with IPv6
May 9 23:57:48.313799 kernel: In-situ OAM (IOAM) with IPv6
May 9 23:57:48.313817 kernel: NET: Registered PF_PACKET protocol family
May 9 23:57:48.313836 kernel: Key type dns_resolver registered
May 9 23:57:48.313855 kernel: registered taskstats version 1
May 9 23:57:48.313874 kernel: Loading compiled-in X.509 certificates
May 9 23:57:48.313893 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 02a1572fa4e3e92c40cffc658d8dbcab2e5537ff'
May 9 23:57:48.313912 kernel: Key type .fscrypt registered
May 9 23:57:48.313930 kernel: Key type fscrypt-provisioning registered
May 9 23:57:48.313954 kernel: ima: No TPM chip found, activating TPM-bypass!
May 9 23:57:48.313974 kernel: ima: Allocated hash algorithm: sha1
May 9 23:57:48.313992 kernel: ima: No architecture policies found
May 9 23:57:48.314011 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 9 23:57:48.314030 kernel: clk: Disabling unused clocks
May 9 23:57:48.314049 kernel: Freeing unused kernel memory: 39424K
May 9 23:57:48.314068 kernel: Run /init as init process
May 9 23:57:48.314086 kernel:   with arguments:
May 9 23:57:48.314105 kernel:     /init
May 9 23:57:48.314124 kernel:   with environment:
May 9 23:57:48.315225 kernel:     HOME=/
May 9 23:57:48.315270 kernel:     TERM=linux
May 9 23:57:48.315291 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
May 9 23:57:48.315317 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 23:57:48.315343 systemd[1]: Detected virtualization amazon.
May 9 23:57:48.315365 systemd[1]: Detected architecture arm64.
May 9 23:57:48.315386 systemd[1]: Running in initrd.
May 9 23:57:48.315420 systemd[1]: No hostname configured, using default hostname.
May 9 23:57:48.315441 systemd[1]: Hostname set to .
May 9 23:57:48.315462 systemd[1]: Initializing machine ID from VM UUID.
May 9 23:57:48.315483 systemd[1]: Queued start job for default target initrd.target.
May 9 23:57:48.315504 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:57:48.315525 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:57:48.315547 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 9 23:57:48.315569 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:57:48.315596 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 9 23:57:48.315617 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 9 23:57:48.315642 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 9 23:57:48.315664 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 9 23:57:48.315686 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:57:48.315708 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:57:48.315729 systemd[1]: Reached target paths.target - Path Units.
May 9 23:57:48.315757 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:57:48.315779 systemd[1]: Reached target swap.target - Swaps.
May 9 23:57:48.315800 systemd[1]: Reached target timers.target - Timer Units.
May 9 23:57:48.316539 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:57:48.317614 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:57:48.317890 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 9 23:57:48.317914 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 9 23:57:48.317936 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:57:48.317974 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:57:48.317997 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:57:48.318020 systemd[1]: Reached target sockets.target - Socket Units.
May 9 23:57:48.318043 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 9 23:57:48.318064 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:57:48.318086 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 9 23:57:48.318108 systemd[1]: Starting systemd-fsck-usr.service...
May 9 23:57:48.318129 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:57:48.318246 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:57:48.318288 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:57:48.318309 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 9 23:57:48.318331 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:57:48.318352 systemd[1]: Finished systemd-fsck-usr.service.
May 9 23:57:48.318424 systemd-journald[250]: Collecting audit messages is disabled.
May 9 23:57:48.318481 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 23:57:48.318504 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 9 23:57:48.318527 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:57:48.318555 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:57:48.318576 kernel: Bridge firewalling registered
May 9 23:57:48.318597 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:57:48.318620 systemd-journald[250]: Journal started
May 9 23:57:48.318660 systemd-journald[250]: Runtime Journal (/run/log/journal/ec26a5ce1f4f4dc8648af33d986294a5) is 8.0M, max 75.3M, 67.3M free.
May 9 23:57:48.263054 systemd-modules-load[251]: Inserted module 'overlay'
May 9 23:57:48.310405 systemd-modules-load[251]: Inserted module 'br_netfilter'
May 9 23:57:48.331855 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 9 23:57:48.342189 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:57:48.356509 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:57:48.359996 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:57:48.367624 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:57:48.387918 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:57:48.408284 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:57:48.415381 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:57:48.432551 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:57:48.436812 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:57:48.448510 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 9 23:57:48.482025 dracut-cmdline[289]: dracut-dracut-053
May 9 23:57:48.491380 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=6ddfb314c5db7ed82ab49390a2bb52fe12211605ed2a5a27fb38ec34b3cca5b4
May 9 23:57:48.519698 systemd-resolved[287]: Positive Trust Anchors:
May 9 23:57:48.521298 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:57:48.521552 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:57:48.661298 kernel: SCSI subsystem initialized
May 9 23:57:48.668274 kernel: Loading iSCSI transport class v2.0-870.
May 9 23:57:48.681269 kernel: iscsi: registered transport (tcp)
May 9 23:57:48.703278 kernel: iscsi: registered transport (qla4xxx)
May 9 23:57:48.703350 kernel: QLogic iSCSI HBA Driver
May 9 23:57:48.758237 kernel: random: crng init done
May 9 23:57:48.758547 systemd-resolved[287]: Defaulting to hostname 'linux'.
May 9 23:57:48.762190 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 9 23:57:48.764395 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 9 23:57:48.792260 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 9 23:57:48.804492 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 9 23:57:48.850458 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 9 23:57:48.850553 kernel: device-mapper: uevent: version 1.0.3 May 9 23:57:48.852272 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 9 23:57:48.924235 kernel: raid6: neonx8 gen() 6566 MB/s May 9 23:57:48.941215 kernel: raid6: neonx4 gen() 6366 MB/s May 9 23:57:48.958210 kernel: raid6: neonx2 gen() 5375 MB/s May 9 23:57:48.975205 kernel: raid6: neonx1 gen() 3911 MB/s May 9 23:57:48.992200 kernel: raid6: int64x8 gen() 3814 MB/s May 9 23:57:49.009191 kernel: raid6: int64x4 gen() 3722 MB/s May 9 23:57:49.026198 kernel: raid6: int64x2 gen() 3565 MB/s May 9 23:57:49.044087 kernel: raid6: int64x1 gen() 2750 MB/s May 9 23:57:49.044190 kernel: raid6: using algorithm neonx8 gen() 6566 MB/s May 9 23:57:49.062061 kernel: raid6: .... xor() 4772 MB/s, rmw enabled May 9 23:57:49.062162 kernel: raid6: using neon recovery algorithm May 9 23:57:49.070866 kernel: xor: measuring software checksum speed May 9 23:57:49.070942 kernel: 8regs : 11016 MB/sec May 9 23:57:49.071184 kernel: 32regs : 11429 MB/sec May 9 23:57:49.074206 kernel: arm64_neon : 8799 MB/sec May 9 23:57:49.074275 kernel: xor: using function: 32regs (11429 MB/sec) May 9 23:57:49.162229 kernel: Btrfs loaded, zoned=no, fsverity=no May 9 23:57:49.184754 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 9 23:57:49.195502 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 9 23:57:49.241703 systemd-udevd[471]: Using default interface naming scheme 'v255'. May 9 23:57:49.251654 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 9 23:57:49.262428 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 9 23:57:49.302797 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation May 9 23:57:49.361722 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
May 9 23:57:49.370455 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 9 23:57:49.497990 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 9 23:57:49.510700 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 9 23:57:49.564635 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 9 23:57:49.570615 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 9 23:57:49.576268 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:57:49.578631 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 9 23:57:49.598589 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 9 23:57:49.631603 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 9 23:57:49.728316 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 9 23:57:49.728418 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) May 9 23:57:49.730637 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 9 23:57:49.730924 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:57:49.736132 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 23:57:49.739914 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 9 23:57:49.740234 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:57:49.744411 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 9 23:57:49.770039 kernel: ena 0000:00:05.0: ENA device version: 0.10 May 9 23:57:49.770544 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 May 9 23:57:49.776494 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 9 23:57:49.782372 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:62:bc:4e:9b:6b May 9 23:57:49.786972 (udev-worker)[527]: Network interface NamePolicy= disabled on kernel command line. May 9 23:57:49.791187 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 9 23:57:49.793218 kernel: nvme nvme0: pci function 0000:00:04.0 May 9 23:57:49.804214 kernel: nvme nvme0: 2/0/0 default/read/poll queues May 9 23:57:49.812399 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 9 23:57:49.812468 kernel: GPT:9289727 != 16777215 May 9 23:57:49.812495 kernel: GPT:Alternate GPT header not at the end of the disk. May 9 23:57:49.814333 kernel: GPT:9289727 != 16777215 May 9 23:57:49.816656 kernel: GPT: Use GNU Parted to correct GPT errors. May 9 23:57:49.819515 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 9 23:57:49.819377 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 9 23:57:49.831510 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 9 23:57:49.878489 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 9 23:57:49.961236 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (526) May 9 23:57:49.983198 kernel: BTRFS: device fsid 7278434d-1c51-4098-9ab9-92db46b8a354 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (547) May 9 23:57:50.063470 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. May 9 23:57:50.095244 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. May 9 23:57:50.117256 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. May 9 23:57:50.144718 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. 
May 9 23:57:50.147803 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. May 9 23:57:50.162586 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 9 23:57:50.179960 disk-uuid[660]: Primary Header is updated. May 9 23:57:50.179960 disk-uuid[660]: Secondary Entries is updated. May 9 23:57:50.179960 disk-uuid[660]: Secondary Header is updated. May 9 23:57:50.189261 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 9 23:57:50.197310 kernel: GPT:disk_guids don't match. May 9 23:57:50.197398 kernel: GPT: Use GNU Parted to correct GPT errors. May 9 23:57:50.197428 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 9 23:57:50.207215 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 9 23:57:51.209568 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 9 23:57:51.209826 disk-uuid[661]: The operation has completed successfully. May 9 23:57:51.399063 systemd[1]: disk-uuid.service: Deactivated successfully. May 9 23:57:51.399334 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 9 23:57:51.450480 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 9 23:57:51.471870 sh[1006]: Success May 9 23:57:51.497510 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 9 23:57:51.605231 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 9 23:57:51.625379 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 9 23:57:51.631250 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
May 9 23:57:51.661996 kernel: BTRFS info (device dm-0): first mount of filesystem 7278434d-1c51-4098-9ab9-92db46b8a354 May 9 23:57:51.662080 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 9 23:57:51.662108 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 9 23:57:51.665054 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 9 23:57:51.665106 kernel: BTRFS info (device dm-0): using free space tree May 9 23:57:51.782184 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 9 23:57:51.804578 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 9 23:57:51.808664 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 9 23:57:51.822393 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 9 23:57:51.830441 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 9 23:57:51.867036 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 9 23:57:51.867117 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 9 23:57:51.868357 kernel: BTRFS info (device nvme0n1p6): using free space tree May 9 23:57:51.876186 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 9 23:57:51.894297 systemd[1]: mnt-oem.mount: Deactivated successfully. May 9 23:57:51.897344 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 9 23:57:51.908352 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 9 23:57:51.920515 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 9 23:57:52.016501 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 9 23:57:52.035947 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 9 23:57:52.085464 systemd-networkd[1199]: lo: Link UP May 9 23:57:52.085943 systemd-networkd[1199]: lo: Gained carrier May 9 23:57:52.090209 systemd-networkd[1199]: Enumeration completed May 9 23:57:52.090836 systemd[1]: Started systemd-networkd.service - Network Configuration. May 9 23:57:52.091727 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:57:52.091733 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 9 23:57:52.094668 systemd[1]: Reached target network.target - Network. May 9 23:57:52.111844 systemd-networkd[1199]: eth0: Link UP May 9 23:57:52.111857 systemd-networkd[1199]: eth0: Gained carrier May 9 23:57:52.111874 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 9 23:57:52.130241 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.30.213/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 9 23:57:52.272877 ignition[1116]: Ignition 2.19.0 May 9 23:57:52.272906 ignition[1116]: Stage: fetch-offline May 9 23:57:52.274541 ignition[1116]: no configs at "/usr/lib/ignition/base.d" May 9 23:57:52.274571 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:57:52.276276 ignition[1116]: Ignition finished successfully May 9 23:57:52.282349 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 9 23:57:52.295466 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 9 23:57:52.322622 ignition[1209]: Ignition 2.19.0 May 9 23:57:52.322649 ignition[1209]: Stage: fetch May 9 23:57:52.323380 ignition[1209]: no configs at "/usr/lib/ignition/base.d" May 9 23:57:52.323407 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:57:52.323864 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:57:52.355561 ignition[1209]: PUT result: OK May 9 23:57:52.373205 ignition[1209]: parsed url from cmdline: "" May 9 23:57:52.373229 ignition[1209]: no config URL provided May 9 23:57:52.373246 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign" May 9 23:57:52.373276 ignition[1209]: no config at "/usr/lib/ignition/user.ign" May 9 23:57:52.373321 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:57:52.379423 ignition[1209]: PUT result: OK May 9 23:57:52.383113 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 May 9 23:57:52.385586 ignition[1209]: GET result: OK May 9 23:57:52.385863 ignition[1209]: parsing config with SHA512: 01624f5b5b8b47d39eac1454e3ccdbe366aeee5bad700f892a4a16f8b0f2cf09252785a4ec51e6e7d26fb14bec6b745d3bbdc7d3dcd4a7c80846ecddc90427f3 May 9 23:57:52.394560 unknown[1209]: fetched base config from "system" May 9 23:57:52.396036 ignition[1209]: fetch: fetch complete May 9 23:57:52.394584 unknown[1209]: fetched base config from "system" May 9 23:57:52.396053 ignition[1209]: fetch: fetch passed May 9 23:57:52.394598 unknown[1209]: fetched user config from "aws" May 9 23:57:52.396229 ignition[1209]: Ignition finished successfully May 9 23:57:52.401304 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 9 23:57:52.419532 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 9 23:57:52.459910 ignition[1215]: Ignition 2.19.0 May 9 23:57:52.459938 ignition[1215]: Stage: kargs May 9 23:57:52.461806 ignition[1215]: no configs at "/usr/lib/ignition/base.d" May 9 23:57:52.461836 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:57:52.462104 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:57:52.470434 ignition[1215]: PUT result: OK May 9 23:57:52.476118 ignition[1215]: kargs: kargs passed May 9 23:57:52.476528 ignition[1215]: Ignition finished successfully May 9 23:57:52.482627 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 9 23:57:52.502561 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 9 23:57:52.527114 ignition[1222]: Ignition 2.19.0 May 9 23:57:52.527143 ignition[1222]: Stage: disks May 9 23:57:52.528435 ignition[1222]: no configs at "/usr/lib/ignition/base.d" May 9 23:57:52.528464 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:57:52.528638 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:57:52.530875 ignition[1222]: PUT result: OK May 9 23:57:52.541697 ignition[1222]: disks: disks passed May 9 23:57:52.542058 ignition[1222]: Ignition finished successfully May 9 23:57:52.549224 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 9 23:57:52.552870 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 9 23:57:52.558143 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 9 23:57:52.563651 systemd[1]: Reached target local-fs.target - Local File Systems. May 9 23:57:52.565733 systemd[1]: Reached target sysinit.target - System Initialization. May 9 23:57:52.567804 systemd[1]: Reached target basic.target - Basic System. May 9 23:57:52.584479 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 9 23:57:52.631000 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 9 23:57:52.637293 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 9 23:57:52.648608 systemd[1]: Mounting sysroot.mount - /sysroot... May 9 23:57:52.754223 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ffdb9517-5190-4050-8f70-de9d48dc1858 r/w with ordered data mode. Quota mode: none. May 9 23:57:52.756671 systemd[1]: Mounted sysroot.mount - /sysroot. May 9 23:57:52.760435 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 9 23:57:52.781333 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 9 23:57:52.787569 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 9 23:57:52.791473 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 9 23:57:52.791577 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 9 23:57:52.791633 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 9 23:57:52.817205 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1249) May 9 23:57:52.817287 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 9 23:57:52.819017 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 9 23:57:52.819102 kernel: BTRFS info (device nvme0n1p6): using free space tree May 9 23:57:52.826300 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 9 23:57:52.843297 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 9 23:57:52.843587 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 9 23:57:52.850347 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 9 23:57:53.276768 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory May 9 23:57:53.297105 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory May 9 23:57:53.307129 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory May 9 23:57:53.315545 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory May 9 23:57:53.347374 systemd-networkd[1199]: eth0: Gained IPv6LL May 9 23:57:53.636414 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 9 23:57:53.651887 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 9 23:57:53.659561 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 9 23:57:53.675016 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 9 23:57:53.679348 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 9 23:57:53.721728 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 9 23:57:53.727418 ignition[1361]: INFO : Ignition 2.19.0 May 9 23:57:53.727418 ignition[1361]: INFO : Stage: mount May 9 23:57:53.731117 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:57:53.731117 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:57:53.731117 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:57:53.738590 ignition[1361]: INFO : PUT result: OK May 9 23:57:53.743832 ignition[1361]: INFO : mount: mount passed May 9 23:57:53.745520 ignition[1361]: INFO : Ignition finished successfully May 9 23:57:53.749641 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 9 23:57:53.759396 systemd[1]: Starting ignition-files.service - Ignition (files)... May 9 23:57:53.790522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 9 23:57:53.822207 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1373) May 9 23:57:53.826790 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 3b69b342-5bf7-4a79-8c13-5043d2a95a48 May 9 23:57:53.826883 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 9 23:57:53.826916 kernel: BTRFS info (device nvme0n1p6): using free space tree May 9 23:57:53.835183 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 9 23:57:53.836575 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 9 23:57:53.875812 ignition[1390]: INFO : Ignition 2.19.0 May 9 23:57:53.875812 ignition[1390]: INFO : Stage: files May 9 23:57:53.879415 ignition[1390]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:57:53.879415 ignition[1390]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:57:53.879415 ignition[1390]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:57:53.886346 ignition[1390]: INFO : PUT result: OK May 9 23:57:53.891833 ignition[1390]: DEBUG : files: compiled without relabeling support, skipping May 9 23:57:53.894560 ignition[1390]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 9 23:57:53.894560 ignition[1390]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 9 23:57:53.904513 ignition[1390]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 9 23:57:53.907280 ignition[1390]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 9 23:57:53.909945 unknown[1390]: wrote ssh authorized keys file for user: core May 9 23:57:53.912318 ignition[1390]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 9 23:57:53.923192 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 9 23:57:53.926996 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 9 23:57:54.013094 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 9 23:57:54.159334 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 9 23:57:54.159334 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 9 23:57:54.166675 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 9 23:57:54.526866 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 9 23:57:54.689649 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 9 23:57:54.694197 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 9 23:57:54.694197 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 9 23:57:54.694197 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 9 23:57:54.694197 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 9 23:57:54.694197 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 9 23:57:54.713961 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 9 23:57:54.713961 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 9 23:57:54.713961 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 9 23:57:54.713961 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 9 23:57:54.713961 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 9 23:57:54.713961 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 9 23:57:54.713961 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 9 23:57:54.713961 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 9 23:57:54.713961 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 9 23:57:55.001278 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 9 23:57:55.374613 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 9 23:57:55.374613 ignition[1390]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 9 23:57:55.387229 ignition[1390]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 9 23:57:55.391261 ignition[1390]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 9 23:57:55.391261 ignition[1390]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 9 23:57:55.391261 ignition[1390]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 9 23:57:55.391261 ignition[1390]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 9 23:57:55.402889 ignition[1390]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 9 23:57:55.402889 ignition[1390]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 9 23:57:55.402889 ignition[1390]: INFO : files: files passed May 9 23:57:55.402889 ignition[1390]: INFO : Ignition finished successfully May 9 23:57:55.415063 systemd[1]: Finished ignition-files.service - Ignition (files). May 9 23:57:55.431692 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 9 23:57:55.441463 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 9 23:57:55.447600 systemd[1]: ignition-quench.service: Deactivated successfully. May 9 23:57:55.452253 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 9 23:57:55.484016 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 23:57:55.487832 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 9 23:57:55.491178 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 9 23:57:55.498061 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
May 9 23:57:55.501729 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 9 23:57:55.525624 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 9 23:57:55.577352 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 9 23:57:55.577583 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 9 23:57:55.581698 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 9 23:57:55.584819 systemd[1]: Reached target initrd.target - Initrd Default Target. May 9 23:57:55.595247 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 9 23:57:55.604574 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 9 23:57:55.640912 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 23:57:55.654495 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 9 23:57:55.683132 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 9 23:57:55.687764 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 9 23:57:55.690465 systemd[1]: Stopped target timers.target - Timer Units. May 9 23:57:55.692726 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 9 23:57:55.693041 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 9 23:57:55.703658 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 9 23:57:55.707040 systemd[1]: Stopped target basic.target - Basic System. May 9 23:57:55.711142 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 9 23:57:55.717811 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 9 23:57:55.720737 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
May 9 23:57:55.724029 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 9 23:57:55.728302 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 9 23:57:55.733433 systemd[1]: Stopped target sysinit.target - System Initialization. May 9 23:57:55.740873 systemd[1]: Stopped target local-fs.target - Local File Systems. May 9 23:57:55.745055 systemd[1]: Stopped target swap.target - Swaps. May 9 23:57:55.751090 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 9 23:57:55.752562 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 9 23:57:55.758546 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 9 23:57:55.761587 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 9 23:57:55.769275 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 9 23:57:55.773319 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 9 23:57:55.779002 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 9 23:57:55.779973 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 9 23:57:55.785849 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 9 23:57:55.786391 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 9 23:57:55.793994 systemd[1]: ignition-files.service: Deactivated successfully. May 9 23:57:55.794315 systemd[1]: Stopped ignition-files.service - Ignition (files). May 9 23:57:55.807658 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 9 23:57:55.817807 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 9 23:57:55.821373 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 9 23:57:55.821723 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
May 9 23:57:55.824581 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 9 23:57:55.824868 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 9 23:57:55.850911 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 9 23:57:55.853479 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 9 23:57:55.875108 ignition[1443]: INFO : Ignition 2.19.0 May 9 23:57:55.878370 ignition[1443]: INFO : Stage: umount May 9 23:57:55.878370 ignition[1443]: INFO : no configs at "/usr/lib/ignition/base.d" May 9 23:57:55.878370 ignition[1443]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" May 9 23:57:55.885347 ignition[1443]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 May 9 23:57:55.892065 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 9 23:57:55.900078 ignition[1443]: INFO : PUT result: OK May 9 23:57:55.900078 ignition[1443]: INFO : umount: umount passed May 9 23:57:55.900078 ignition[1443]: INFO : Ignition finished successfully May 9 23:57:55.894249 systemd[1]: ignition-mount.service: Deactivated successfully. May 9 23:57:55.894907 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 9 23:57:55.897877 systemd[1]: sysroot-boot.service: Deactivated successfully. May 9 23:57:55.898319 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 9 23:57:55.901868 systemd[1]: ignition-disks.service: Deactivated successfully. May 9 23:57:55.902074 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 9 23:57:55.911998 systemd[1]: ignition-kargs.service: Deactivated successfully. May 9 23:57:55.912123 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 9 23:57:55.912637 systemd[1]: ignition-fetch.service: Deactivated successfully. May 9 23:57:55.912749 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 9 23:57:55.913356 systemd[1]: Stopped target network.target - Network. 
May 9 23:57:55.914023 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 9 23:57:55.914734 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 9 23:57:55.940926 systemd[1]: Stopped target paths.target - Path Units.
May 9 23:57:55.950762 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 9 23:57:55.954264 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:57:55.959489 systemd[1]: Stopped target slices.target - Slice Units.
May 9 23:57:55.959693 systemd[1]: Stopped target sockets.target - Socket Units.
May 9 23:57:55.966694 systemd[1]: iscsid.socket: Deactivated successfully.
May 9 23:57:55.966857 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 9 23:57:55.973056 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 9 23:57:55.973199 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 9 23:57:55.978446 systemd[1]: ignition-setup.service: Deactivated successfully.
May 9 23:57:55.978590 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 9 23:57:55.982574 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 9 23:57:55.982701 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 9 23:57:55.984975 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 9 23:57:55.985097 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 9 23:57:55.987653 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 9 23:57:55.990343 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 9 23:57:56.005284 systemd-networkd[1199]: eth0: DHCPv6 lease lost
May 9 23:57:56.011051 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 9 23:57:56.012108 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 9 23:57:56.017956 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 9 23:57:56.020311 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 9 23:57:56.027434 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 9 23:57:56.028794 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:57:56.040468 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 9 23:57:56.044924 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 9 23:57:56.045048 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 9 23:57:56.048196 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 9 23:57:56.048305 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 9 23:57:56.051243 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 9 23:57:56.051350 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 9 23:57:56.053945 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 9 23:57:56.054047 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:57:56.058949 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:57:56.101225 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 9 23:57:56.103308 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:57:56.109513 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 9 23:57:56.109642 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 9 23:57:56.114037 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 9 23:57:56.114338 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:57:56.118136 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 9 23:57:56.118875 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 9 23:57:56.128040 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 9 23:57:56.128222 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 9 23:57:56.132600 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 9 23:57:56.132706 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 9 23:57:56.158581 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 9 23:57:56.161142 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 9 23:57:56.161290 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:57:56.163904 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 9 23:57:56.164019 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:57:56.166537 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 9 23:57:56.166636 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:57:56.169054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 9 23:57:56.169303 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:57:56.193387 systemd[1]: network-cleanup.service: Deactivated successfully.
May 9 23:57:56.195218 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 9 23:57:56.199012 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 9 23:57:56.199289 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 9 23:57:56.207421 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 9 23:57:56.232583 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 9 23:57:56.251271 systemd[1]: Switching root.
May 9 23:57:56.303248 systemd-journald[250]: Journal stopped
May 9 23:57:58.973641 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
May 9 23:57:58.973814 kernel: SELinux: policy capability network_peer_controls=1
May 9 23:57:58.973872 kernel: SELinux: policy capability open_perms=1
May 9 23:57:58.973905 kernel: SELinux: policy capability extended_socket_class=1
May 9 23:57:58.973937 kernel: SELinux: policy capability always_check_network=0
May 9 23:57:58.973969 kernel: SELinux: policy capability cgroup_seclabel=1
May 9 23:57:58.974002 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 9 23:57:58.974056 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 9 23:57:58.974087 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 9 23:57:58.974120 kernel: audit: type=1403 audit(1746835076.916:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 9 23:57:58.977305 systemd[1]: Successfully loaded SELinux policy in 53.764ms.
May 9 23:57:58.977384 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.931ms.
May 9 23:57:58.977423 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 9 23:57:58.977463 systemd[1]: Detected virtualization amazon.
May 9 23:57:58.977497 systemd[1]: Detected architecture arm64.
May 9 23:57:58.977528 systemd[1]: Detected first boot.
May 9 23:57:58.977560 systemd[1]: Initializing machine ID from VM UUID.
May 9 23:57:58.977594 zram_generator::config[1485]: No configuration found.
May 9 23:57:58.977637 systemd[1]: Populated /etc with preset unit settings.
May 9 23:57:58.977670 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 9 23:57:58.977704 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 9 23:57:58.977737 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 9 23:57:58.977768 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 9 23:57:58.977803 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 9 23:57:58.977835 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 9 23:57:58.977869 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 9 23:57:58.977910 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 9 23:57:58.977947 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 9 23:57:58.977977 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 9 23:57:58.978007 systemd[1]: Created slice user.slice - User and Session Slice.
May 9 23:57:58.978037 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 9 23:57:58.978069 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 9 23:57:58.978101 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 9 23:57:58.978134 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 9 23:57:58.978199 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 9 23:57:58.978240 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 9 23:57:58.978272 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 9 23:57:58.978302 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 9 23:57:58.978357 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 9 23:57:58.978391 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 9 23:57:58.978426 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 9 23:57:58.978456 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 9 23:57:58.978487 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 9 23:57:58.978524 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 9 23:57:58.978555 systemd[1]: Reached target slices.target - Slice Units.
May 9 23:57:58.978588 systemd[1]: Reached target swap.target - Swaps.
May 9 23:57:58.978620 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 9 23:57:58.978652 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 9 23:57:58.978686 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 9 23:57:58.978718 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 9 23:57:58.978750 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 9 23:57:58.978780 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 9 23:57:58.978814 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 9 23:57:58.978850 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 9 23:57:58.978880 systemd[1]: Mounting media.mount - External Media Directory...
May 9 23:57:58.978923 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 9 23:57:58.978956 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 9 23:57:58.978989 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 9 23:57:58.979041 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 9 23:57:58.979078 systemd[1]: Reached target machines.target - Containers.
May 9 23:57:58.979108 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 9 23:57:58.979143 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:57:58.983273 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 9 23:57:58.983311 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 9 23:57:58.983350 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:57:58.983382 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 23:57:58.983414 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 23:57:58.983447 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 9 23:57:58.983478 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 23:57:58.983509 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 9 23:57:58.983551 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 9 23:57:58.983583 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 9 23:57:58.983616 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 9 23:57:58.983648 systemd[1]: Stopped systemd-fsck-usr.service.
May 9 23:57:58.983682 systemd[1]: Starting systemd-journald.service - Journal Service...
May 9 23:57:58.983713 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 9 23:57:58.983745 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 9 23:57:58.983774 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 9 23:57:58.983812 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 9 23:57:58.983845 systemd[1]: verity-setup.service: Deactivated successfully.
May 9 23:57:58.983875 systemd[1]: Stopped verity-setup.service.
May 9 23:57:58.983908 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 9 23:57:58.983940 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 9 23:57:58.983974 systemd[1]: Mounted media.mount - External Media Directory.
May 9 23:57:58.984005 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 9 23:57:58.984036 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 9 23:57:58.984072 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 9 23:57:58.984106 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 9 23:57:58.984136 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 9 23:57:58.984199 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 9 23:57:58.984232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:57:58.984262 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:57:58.984299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 23:57:58.984332 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 23:57:58.984362 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 9 23:57:58.984391 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 9 23:57:58.984421 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 9 23:57:58.984452 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 9 23:57:58.984486 systemd[1]: Reached target local-fs.target - Local File Systems.
May 9 23:57:58.984519 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 9 23:57:58.984548 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 9 23:57:58.984578 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 9 23:57:58.984611 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:57:58.984645 kernel: fuse: init (API version 7.39)
May 9 23:57:58.984677 kernel: loop: module loaded
May 9 23:57:58.984771 systemd-journald[1564]: Collecting audit messages is disabled.
May 9 23:57:58.984830 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 9 23:57:58.984867 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 23:57:58.984897 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 9 23:57:58.984927 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 9 23:57:58.984958 kernel: ACPI: bus type drm_connector registered
May 9 23:57:58.984988 systemd-journald[1564]: Journal started
May 9 23:57:58.985043 systemd-journald[1564]: Runtime Journal (/run/log/journal/ec26a5ce1f4f4dc8648af33d986294a5) is 8.0M, max 75.3M, 67.3M free.
May 9 23:57:59.002606 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 9 23:57:58.254524 systemd[1]: Queued start job for default target multi-user.target.
May 9 23:57:59.014323 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 9 23:57:58.320610 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
May 9 23:57:58.321526 systemd[1]: systemd-journald.service: Deactivated successfully.
May 9 23:57:59.063568 systemd[1]: Started systemd-journald.service - Journal Service.
May 9 23:57:59.028120 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 23:57:59.028506 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 23:57:59.033919 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 9 23:57:59.036332 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 9 23:57:59.039849 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 23:57:59.040248 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 23:57:59.044272 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 9 23:57:59.047616 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 9 23:57:59.051406 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 9 23:57:59.060644 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 9 23:57:59.075364 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 9 23:57:59.085580 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 9 23:57:59.090493 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 23:57:59.117510 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 9 23:57:59.163437 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 9 23:57:59.189303 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 9 23:57:59.208713 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 9 23:57:59.228422 kernel: loop0: detected capacity change from 0 to 52536
May 9 23:57:59.229596 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 9 23:57:59.269739 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 9 23:57:59.281600 systemd-journald[1564]: Time spent on flushing to /var/log/journal/ec26a5ce1f4f4dc8648af33d986294a5 is 70.086ms for 919 entries.
May 9 23:57:59.281600 systemd-journald[1564]: System Journal (/var/log/journal/ec26a5ce1f4f4dc8648af33d986294a5) is 8.0M, max 195.6M, 187.6M free.
May 9 23:57:59.381524 systemd-journald[1564]: Received client request to flush runtime journal.
May 9 23:57:59.381633 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 9 23:57:59.301770 systemd-tmpfiles[1593]: ACLs are not supported, ignoring.
May 9 23:57:59.301797 systemd-tmpfiles[1593]: ACLs are not supported, ignoring.
May 9 23:57:59.309692 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 9 23:57:59.313236 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 9 23:57:59.317047 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 9 23:57:59.334977 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 9 23:57:59.339279 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 9 23:57:59.363602 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 9 23:57:59.388926 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 9 23:57:59.411205 kernel: loop1: detected capacity change from 0 to 114432
May 9 23:57:59.416346 udevadm[1627]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 9 23:57:59.470481 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 9 23:57:59.483480 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 9 23:57:59.542876 systemd-tmpfiles[1636]: ACLs are not supported, ignoring.
May 9 23:57:59.542914 systemd-tmpfiles[1636]: ACLs are not supported, ignoring.
May 9 23:57:59.558904 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 9 23:57:59.563196 kernel: loop2: detected capacity change from 0 to 114328
May 9 23:57:59.665190 kernel: loop3: detected capacity change from 0 to 194096
May 9 23:57:59.835214 kernel: loop4: detected capacity change from 0 to 52536
May 9 23:57:59.854187 kernel: loop5: detected capacity change from 0 to 114432
May 9 23:57:59.871651 kernel: loop6: detected capacity change from 0 to 114328
May 9 23:57:59.884612 kernel: loop7: detected capacity change from 0 to 194096
May 9 23:57:59.909732 (sd-merge)[1642]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
May 9 23:57:59.910998 (sd-merge)[1642]: Merged extensions into '/usr'.
May 9 23:57:59.924402 systemd[1]: Reloading requested from client PID 1592 ('systemd-sysext') (unit systemd-sysext.service)...
May 9 23:57:59.924439 systemd[1]: Reloading...
May 9 23:58:00.116223 zram_generator::config[1668]: No configuration found.
May 9 23:58:00.478009 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:58:00.607337 systemd[1]: Reloading finished in 681 ms.
May 9 23:58:00.657202 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 9 23:58:00.661433 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 9 23:58:00.678520 systemd[1]: Starting ensure-sysext.service...
May 9 23:58:00.693372 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 9 23:58:00.699581 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 9 23:58:00.725567 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)...
May 9 23:58:00.725782 systemd[1]: Reloading...
May 9 23:58:00.786125 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 9 23:58:00.786967 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 9 23:58:00.792951 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 9 23:58:00.795598 ldconfig[1584]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 9 23:58:00.797396 systemd-tmpfiles[1721]: ACLs are not supported, ignoring.
May 9 23:58:00.798622 systemd-tmpfiles[1721]: ACLs are not supported, ignoring.
May 9 23:58:00.808179 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot.
May 9 23:58:00.808223 systemd-tmpfiles[1721]: Skipping /boot
May 9 23:58:00.839195 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot.
May 9 23:58:00.841436 systemd-tmpfiles[1721]: Skipping /boot
May 9 23:58:00.877849 systemd-udevd[1722]: Using default interface naming scheme 'v255'.
May 9 23:58:01.009213 zram_generator::config[1767]: No configuration found.
May 9 23:58:01.152349 (udev-worker)[1765]: Network interface NamePolicy= disabled on kernel command line.
May 9 23:58:01.433311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 9 23:58:01.568200 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1772)
May 9 23:58:01.654415 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 9 23:58:01.655429 systemd[1]: Reloading finished in 928 ms.
May 9 23:58:01.689494 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 9 23:58:01.692967 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 9 23:58:01.697209 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 9 23:58:01.801275 systemd[1]: Finished ensure-sysext.service.
May 9 23:58:01.827717 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 9 23:58:01.846358 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
May 9 23:58:01.855518 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 9 23:58:01.871641 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 9 23:58:01.874727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 9 23:58:01.879593 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 9 23:58:01.886509 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 9 23:58:01.900913 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 9 23:58:01.907296 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 9 23:58:01.919576 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 9 23:58:01.922804 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 9 23:58:01.927599 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 9 23:58:01.943565 lvm[1921]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 23:58:01.935578 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 9 23:58:01.943998 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 9 23:58:01.953496 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 9 23:58:01.955740 systemd[1]: Reached target time-set.target - System Time Set.
May 9 23:58:01.964839 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 9 23:58:01.972802 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 9 23:58:02.010105 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 9 23:58:02.011567 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 9 23:58:02.030497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 9 23:58:02.030885 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 9 23:58:02.033945 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 9 23:58:02.056812 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 9 23:58:02.076710 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 9 23:58:02.079060 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 9 23:58:02.082777 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 9 23:58:02.084514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 9 23:58:02.092360 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 9 23:58:02.122391 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 9 23:58:02.136449 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 9 23:58:02.140396 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 9 23:58:02.157045 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 9 23:58:02.162339 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 9 23:58:02.177585 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 9 23:58:02.201742 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 9 23:58:02.229047 lvm[1953]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 9 23:58:02.263690 augenrules[1960]: No rules
May 9 23:58:02.264287 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 9 23:58:02.274576 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 9 23:58:02.278698 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 9 23:58:02.285176 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 9 23:58:02.293069 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 9 23:58:02.314860 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 9 23:58:02.345094 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 9 23:58:02.439813 systemd-networkd[1934]: lo: Link UP
May 9 23:58:02.439843 systemd-networkd[1934]: lo: Gained carrier
May 9 23:58:02.443373 systemd-networkd[1934]: Enumeration completed
May 9 23:58:02.443583 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 9 23:58:02.447426 systemd-networkd[1934]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:58:02.447451 systemd-networkd[1934]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 9 23:58:02.449574 systemd-resolved[1935]: Positive Trust Anchors:
May 9 23:58:02.450075 systemd-resolved[1935]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 9 23:58:02.450302 systemd-resolved[1935]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 9 23:58:02.454098 systemd-networkd[1934]: eth0: Link UP
May 9 23:58:02.454595 systemd-networkd[1934]: eth0: Gained carrier
May 9 23:58:02.454650 systemd-networkd[1934]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 9 23:58:02.456613 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 9 23:58:02.466361 systemd-resolved[1935]: Defaulting to hostname 'linux'. May 9 23:58:02.468410 systemd-networkd[1934]: eth0: DHCPv4 address 172.31.30.213/20, gateway 172.31.16.1 acquired from 172.31.16.1 May 9 23:58:02.472101 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 9 23:58:02.474869 systemd[1]: Reached target network.target - Network. May 9 23:58:02.477421 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 9 23:58:02.479844 systemd[1]: Reached target sysinit.target - System Initialization. May 9 23:58:02.482592 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 9 23:58:02.488073 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 9 23:58:02.490979 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 9 23:58:02.493473 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 9 23:58:02.496072 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 9 23:58:02.498546 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 9 23:58:02.498616 systemd[1]: Reached target paths.target - Path Units. May 9 23:58:02.500522 systemd[1]: Reached target timers.target - Timer Units. May 9 23:58:02.503764 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 9 23:58:02.508807 systemd[1]: Starting docker.socket - Docker Socket for the API... May 9 23:58:02.516849 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 9 23:58:02.520338 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 9 23:58:02.523285 systemd[1]: Reached target sockets.target - Socket Units. May 9 23:58:02.525410 systemd[1]: Reached target basic.target - Basic System. 
May 9 23:58:02.527476 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 9 23:58:02.527540 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 9 23:58:02.536484 systemd[1]: Starting containerd.service - containerd container runtime... May 9 23:58:02.543515 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 9 23:58:02.548651 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 9 23:58:02.559532 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 9 23:58:02.564686 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 9 23:58:02.566846 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 9 23:58:02.570808 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 9 23:58:02.582702 systemd[1]: Started ntpd.service - Network Time Service. May 9 23:58:02.600472 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 9 23:58:02.605380 systemd[1]: Starting setup-oem.service - Setup OEM... May 9 23:58:02.611420 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 9 23:58:02.617270 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 9 23:58:02.631798 systemd[1]: Starting systemd-logind.service - User Login Management... May 9 23:58:02.635043 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 9 23:58:02.636018 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 9 23:58:02.640462 systemd[1]: Starting update-engine.service - Update Engine... 
May 9 23:58:02.650531 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 9 23:58:02.721753 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 9 23:58:02.723546 jq[1996]: true May 9 23:58:02.723941 jq[1984]: false May 9 23:58:02.724455 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 9 23:58:02.725117 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 9 23:58:02.725480 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 9 23:58:02.826977 jq[2005]: true May 9 23:58:02.829820 tar[1999]: linux-arm64/helm May 9 23:58:02.839874 dbus-daemon[1983]: [system] SELinux support is enabled May 9 23:58:02.856123 dbus-daemon[1983]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1934 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 9 23:58:02.856836 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 9 23:58:02.864015 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 9 23:58:02.864085 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
May 9 23:58:02.876183 extend-filesystems[1985]: Found loop4 May 9 23:58:02.876183 extend-filesystems[1985]: Found loop5 May 9 23:58:02.876183 extend-filesystems[1985]: Found loop6 May 9 23:58:02.876183 extend-filesystems[1985]: Found loop7 May 9 23:58:02.876183 extend-filesystems[1985]: Found nvme0n1 May 9 23:58:02.867624 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 9 23:58:02.904931 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri May 9 22:02:28 UTC 2025 (1): Starting May 9 23:58:02.904931 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 9 23:58:02.904931 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: ---------------------------------------------------- May 9 23:58:02.904931 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, May 9 23:58:02.904931 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 9 23:58:02.904931 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: corporation. 
Support and training for ntp-4 are May 9 23:58:02.904931 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: available at https://www.nwtime.org/support May 9 23:58:02.904931 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: ---------------------------------------------------- May 9 23:58:02.920311 extend-filesystems[1985]: Found nvme0n1p1 May 9 23:58:02.920311 extend-filesystems[1985]: Found nvme0n1p2 May 9 23:58:02.920311 extend-filesystems[1985]: Found nvme0n1p3 May 9 23:58:02.920311 extend-filesystems[1985]: Found usr May 9 23:58:02.920311 extend-filesystems[1985]: Found nvme0n1p4 May 9 23:58:02.920311 extend-filesystems[1985]: Found nvme0n1p6 May 9 23:58:02.920311 extend-filesystems[1985]: Found nvme0n1p7 May 9 23:58:02.920311 extend-filesystems[1985]: Found nvme0n1p9 May 9 23:58:02.920311 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9 May 9 23:58:02.893551 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.systemd1' May 9 23:58:02.867675 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 9 23:58:02.965748 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: proto: precision = 0.108 usec (-23) May 9 23:58:02.965748 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: basedate set to 2025-04-27 May 9 23:58:02.965748 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: gps base set to 2025-04-27 (week 2364) May 9 23:58:02.965748 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 May 9 23:58:02.965748 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 9 23:58:02.965748 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 May 9 23:58:02.965748 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: Listen normally on 3 eth0 172.31.30.213:123 May 9 23:58:02.965748 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: Listen normally on 4 lo [::1]:123 May 9 23:58:02.965748 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: bind(21) AF_INET6 fe80::462:bcff:fe4e:9b6b%2#123 flags 0x11 failed: Cannot assign requested address May 9 23:58:02.965748 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: unable to create socket on eth0 (5) for fe80::462:bcff:fe4e:9b6b%2#123 May 9 23:58:02.965748 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: failed to init interface for address fe80::462:bcff:fe4e:9b6b%2 May 9 23:58:02.965748 ntpd[1987]: 9 May 23:58:02 ntpd[1987]: Listening on routing socket on fd #21 for interface updates May 9 23:58:02.897471 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Fri May 9 22:02:28 UTC 2025 (1): Starting May 9 23:58:02.891893 (ntainerd)[2018]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 9 23:58:02.897522 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp May 9 23:58:02.993966 update_engine[1994]: I20250509 23:58:02.991281 1994 main.cc:92] Flatcar Update Engine starting May 9 23:58:02.938891 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
May 9 23:58:02.897544 ntpd[1987]: ---------------------------------------------------- May 9 23:58:02.994218 systemd[1]: motdgen.service: Deactivated successfully. May 9 23:58:02.897564 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, May 9 23:58:02.994605 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 9 23:58:02.897583 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit May 9 23:58:02.897602 ntpd[1987]: corporation. Support and training for ntp-4 are May 9 23:58:02.897620 ntpd[1987]: available at https://www.nwtime.org/support May 9 23:58:02.897638 ntpd[1987]: ---------------------------------------------------- May 9 23:58:02.912970 ntpd[1987]: proto: precision = 0.108 usec (-23) May 9 23:58:02.923563 ntpd[1987]: basedate set to 2025-04-27 May 9 23:58:02.923603 ntpd[1987]: gps base set to 2025-04-27 (week 2364) May 9 23:58:02.951388 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 May 9 23:58:02.951488 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 May 9 23:58:02.952921 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 May 9 23:58:02.952999 ntpd[1987]: Listen normally on 3 eth0 172.31.30.213:123 May 9 23:58:02.953069 ntpd[1987]: Listen normally on 4 lo [::1]:123 May 9 23:58:02.953171 ntpd[1987]: bind(21) AF_INET6 fe80::462:bcff:fe4e:9b6b%2#123 flags 0x11 failed: Cannot assign requested address May 9 23:58:02.953218 ntpd[1987]: unable to create socket on eth0 (5) for fe80::462:bcff:fe4e:9b6b%2#123 May 9 23:58:02.953252 ntpd[1987]: failed to init interface for address fe80::462:bcff:fe4e:9b6b%2 May 9 23:58:02.953312 ntpd[1987]: Listening on routing socket on fd #21 for interface updates May 9 23:58:03.011304 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 9 23:58:03.028336 update_engine[1994]: I20250509 23:58:03.025522 1994 update_check_scheduler.cc:74] Next update check in 9m9s May 9 23:58:03.028442 ntpd[1987]: 9 May 23:58:03 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock 
Unsynchronized May 9 23:58:03.028442 ntpd[1987]: 9 May 23:58:03 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 9 23:58:03.015089 systemd[1]: Finished setup-oem.service - Setup OEM. May 9 23:58:03.011380 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized May 9 23:58:03.017601 systemd[1]: Started update-engine.service - Update Engine. May 9 23:58:03.032111 systemd-logind[1992]: Watching system buttons on /dev/input/event0 (Power Button) May 9 23:58:03.033938 systemd-logind[1992]: Watching system buttons on /dev/input/event1 (Sleep Button) May 9 23:58:03.036487 systemd-logind[1992]: New seat seat0. May 9 23:58:03.047665 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 9 23:58:03.050594 systemd[1]: Started systemd-logind.service - User Login Management. May 9 23:58:03.059221 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9 May 9 23:58:03.062791 extend-filesystems[2050]: resize2fs 1.47.1 (20-May-2024) May 9 23:58:03.084196 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks May 9 23:58:03.123183 coreos-metadata[1982]: May 09 23:58:03.121 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 9 23:58:03.131941 coreos-metadata[1982]: May 09 23:58:03.131 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 May 9 23:58:03.132808 coreos-metadata[1982]: May 09 23:58:03.132 INFO Fetch successful May 9 23:58:03.132808 coreos-metadata[1982]: May 09 23:58:03.132 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.133 INFO Fetch successful May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.133 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.134 INFO Fetch successful May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.134 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.134 INFO Fetch successful May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.134 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.138 INFO Fetch failed with 404: resource not found May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.138 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.139 INFO Fetch successful May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.139 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.139 INFO Fetch successful May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.143 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.144 INFO Fetch successful May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.144 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.147 INFO Fetch successful May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.147 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 May 9 23:58:03.181305 coreos-metadata[1982]: May 09 23:58:03.152 INFO Fetch successful May 9 23:58:03.192345 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 May 9 23:58:03.213337 extend-filesystems[2050]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required May 9 23:58:03.213337 extend-filesystems[2050]: old_desc_blocks = 1, new_desc_blocks = 1 May 9 23:58:03.213337 extend-filesystems[2050]: The filesystem on /dev/nvme0n1p9 is 
now 1489915 (4k) blocks long. May 9 23:58:03.225016 bash[2052]: Updated "/home/core/.ssh/authorized_keys" May 9 23:58:03.219059 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 9 23:58:03.249035 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9 May 9 23:58:03.260830 systemd[1]: Starting sshkeys.service... May 9 23:58:03.263341 systemd[1]: extend-filesystems.service: Deactivated successfully. May 9 23:58:03.263778 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 9 23:58:03.283690 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 9 23:58:03.293793 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 9 23:58:03.298573 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 9 23:58:03.340822 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 9 23:58:03.349841 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 9 23:58:03.417251 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1753) May 9 23:58:03.505722 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.hostname1' May 9 23:58:03.505999 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 9 23:58:03.513868 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2032 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 9 23:58:03.537421 systemd[1]: Starting polkit.service - Authorization Manager... 
May 9 23:58:03.545420 locksmithd[2044]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 9 23:58:03.625737 polkitd[2123]: Started polkitd version 121 May 9 23:58:03.650003 polkitd[2123]: Loading rules from directory /etc/polkit-1/rules.d May 9 23:58:03.652506 polkitd[2123]: Loading rules from directory /usr/share/polkit-1/rules.d May 9 23:58:03.658638 polkitd[2123]: Finished loading, compiling and executing 2 rules May 9 23:58:03.677268 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 9 23:58:03.678835 systemd[1]: Started polkit.service - Authorization Manager. May 9 23:58:03.682500 polkitd[2123]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 9 23:58:03.785075 systemd-hostnamed[2032]: Hostname set to (transient) May 9 23:58:03.787247 systemd-resolved[1935]: System hostname changed to 'ip-172-31-30-213'. May 9 23:58:03.823926 coreos-metadata[2070]: May 09 23:58:03.822 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 May 9 23:58:03.825972 coreos-metadata[2070]: May 09 23:58:03.825 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 May 9 23:58:03.828561 coreos-metadata[2070]: May 09 23:58:03.828 INFO Fetch successful May 9 23:58:03.828561 coreos-metadata[2070]: May 09 23:58:03.828 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 May 9 23:58:03.834833 coreos-metadata[2070]: May 09 23:58:03.834 INFO Fetch successful May 9 23:58:03.836523 unknown[2070]: wrote ssh authorized keys file for user: core May 9 23:58:03.898259 ntpd[1987]: bind(24) AF_INET6 fe80::462:bcff:fe4e:9b6b%2#123 flags 0x11 failed: Cannot assign requested address May 9 23:58:03.902803 ntpd[1987]: 9 May 23:58:03 ntpd[1987]: bind(24) AF_INET6 fe80::462:bcff:fe4e:9b6b%2#123 flags 0x11 failed: Cannot assign requested address May 9 23:58:03.902803 ntpd[1987]: 9 May 23:58:03 ntpd[1987]: unable to create socket on eth0 (6) 
for fe80::462:bcff:fe4e:9b6b%2#123 May 9 23:58:03.902803 ntpd[1987]: 9 May 23:58:03 ntpd[1987]: failed to init interface for address fe80::462:bcff:fe4e:9b6b%2 May 9 23:58:03.898327 ntpd[1987]: unable to create socket on eth0 (6) for fe80::462:bcff:fe4e:9b6b%2#123 May 9 23:58:03.898357 ntpd[1987]: failed to init interface for address fe80::462:bcff:fe4e:9b6b%2 May 9 23:58:03.907384 systemd-networkd[1934]: eth0: Gained IPv6LL May 9 23:58:03.918779 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 9 23:58:03.922523 update-ssh-keys[2173]: Updated "/home/core/.ssh/authorized_keys" May 9 23:58:03.930514 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 9 23:58:03.936795 systemd[1]: Reached target network-online.target - Network is Online. May 9 23:58:03.947729 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. May 9 23:58:03.961064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:03.967052 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 9 23:58:03.973287 systemd[1]: Finished sshkeys.service. May 9 23:58:04.014224 containerd[2018]: time="2025-05-09T23:58:04.011974450Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 9 23:58:04.129780 amazon-ssm-agent[2184]: Initializing new seelog logger May 9 23:58:04.131434 amazon-ssm-agent[2184]: New Seelog Logger Creation Complete May 9 23:58:04.131434 amazon-ssm-agent[2184]: 2025/05/09 23:58:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 9 23:58:04.131434 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 9 23:58:04.132496 amazon-ssm-agent[2184]: 2025/05/09 23:58:04 processing appconfig overrides May 9 23:58:04.136189 amazon-ssm-agent[2184]: 2025/05/09 23:58:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
May 9 23:58:04.136189 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 9 23:58:04.136189 amazon-ssm-agent[2184]: 2025/05/09 23:58:04 processing appconfig overrides May 9 23:58:04.136189 amazon-ssm-agent[2184]: 2025/05/09 23:58:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 9 23:58:04.136189 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 9 23:58:04.136189 amazon-ssm-agent[2184]: 2025/05/09 23:58:04 processing appconfig overrides May 9 23:58:04.136616 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO Proxy environment variables: May 9 23:58:04.143197 amazon-ssm-agent[2184]: 2025/05/09 23:58:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. May 9 23:58:04.143197 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. May 9 23:58:04.143197 amazon-ssm-agent[2184]: 2025/05/09 23:58:04 processing appconfig overrides May 9 23:58:04.169834 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 9 23:58:04.179452 containerd[2018]: time="2025-05-09T23:58:04.179378698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 9 23:58:04.187485 containerd[2018]: time="2025-05-09T23:58:04.187404874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 9 23:58:04.189202 containerd[2018]: time="2025-05-09T23:58:04.187674214Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 9 23:58:04.189202 containerd[2018]: time="2025-05-09T23:58:04.187728826Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 May 9 23:58:04.189202 containerd[2018]: time="2025-05-09T23:58:04.188079238Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 9 23:58:04.189202 containerd[2018]: time="2025-05-09T23:58:04.188123914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 9 23:58:04.189202 containerd[2018]: time="2025-05-09T23:58:04.188325058Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:58:04.189202 containerd[2018]: time="2025-05-09T23:58:04.188380858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 9 23:58:04.189202 containerd[2018]: time="2025-05-09T23:58:04.188744542Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:58:04.189202 containerd[2018]: time="2025-05-09T23:58:04.188789710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 9 23:58:04.189202 containerd[2018]: time="2025-05-09T23:58:04.188824558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:58:04.189202 containerd[2018]: time="2025-05-09T23:58:04.188850166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 9 23:58:04.189202 containerd[2018]: time="2025-05-09T23:58:04.189097498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 May 9 23:58:04.191881 containerd[2018]: time="2025-05-09T23:58:04.191821630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 9 23:58:04.192388 containerd[2018]: time="2025-05-09T23:58:04.192335434Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 9 23:58:04.192967 containerd[2018]: time="2025-05-09T23:58:04.192917098Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 9 23:58:04.193481 containerd[2018]: time="2025-05-09T23:58:04.193437346Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 9 23:58:04.194227 containerd[2018]: time="2025-05-09T23:58:04.194170246Z" level=info msg="metadata content store policy set" policy=shared May 9 23:58:04.203438 containerd[2018]: time="2025-05-09T23:58:04.203380270Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 9 23:58:04.203669 containerd[2018]: time="2025-05-09T23:58:04.203636014Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 9 23:58:04.203982 containerd[2018]: time="2025-05-09T23:58:04.203939422Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 9 23:58:04.204169 containerd[2018]: time="2025-05-09T23:58:04.204116638Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 9 23:58:04.207204 containerd[2018]: time="2025-05-09T23:58:04.205284046Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 May 9 23:58:04.207204 containerd[2018]: time="2025-05-09T23:58:04.205631230Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 9 23:58:04.207204 containerd[2018]: time="2025-05-09T23:58:04.206821162Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 9 23:58:04.211941 containerd[2018]: time="2025-05-09T23:58:04.211870259Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 9 23:58:04.212292 containerd[2018]: time="2025-05-09T23:58:04.212233031Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 9 23:58:04.213331 containerd[2018]: time="2025-05-09T23:58:04.213268031Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 9 23:58:04.213565 containerd[2018]: time="2025-05-09T23:58:04.213525767Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 9 23:58:04.213706 containerd[2018]: time="2025-05-09T23:58:04.213672359Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 9 23:58:04.213838 containerd[2018]: time="2025-05-09T23:58:04.213805931Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 9 23:58:04.213987 containerd[2018]: time="2025-05-09T23:58:04.213940619Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 9 23:58:04.217407 containerd[2018]: time="2025-05-09T23:58:04.217344971Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 9 23:58:04.217705 containerd[2018]: time="2025-05-09T23:58:04.217668251Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 9 23:58:04.217831 containerd[2018]: time="2025-05-09T23:58:04.217800923Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 9 23:58:04.218060 containerd[2018]: time="2025-05-09T23:58:04.218024039Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 9 23:58:04.219364 containerd[2018]: time="2025-05-09T23:58:04.219307943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 9 23:58:04.222210 containerd[2018]: time="2025-05-09T23:58:04.219524843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 9 23:58:04.222210 containerd[2018]: time="2025-05-09T23:58:04.219569339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 9 23:58:04.222210 containerd[2018]: time="2025-05-09T23:58:04.219605747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 9 23:58:04.222210 containerd[2018]: time="2025-05-09T23:58:04.219647795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 9 23:58:04.222210 containerd[2018]: time="2025-05-09T23:58:04.219684071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 9 23:58:04.222210 containerd[2018]: time="2025-05-09T23:58:04.219714023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 9 23:58:04.222210 containerd[2018]: time="2025-05-09T23:58:04.219744815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 9 23:58:04.222210 containerd[2018]: time="2025-05-09T23:58:04.219775487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 9 23:58:04.222210 containerd[2018]: time="2025-05-09T23:58:04.219813011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 9 23:58:04.222210 containerd[2018]: time="2025-05-09T23:58:04.219843947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 9 23:58:04.222210 containerd[2018]: time="2025-05-09T23:58:04.219874763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 9 23:58:04.222210 containerd[2018]: time="2025-05-09T23:58:04.219904667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 9 23:58:04.222210 containerd[2018]: time="2025-05-09T23:58:04.219942575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 9 23:58:04.222210 containerd[2018]: time="2025-05-09T23:58:04.219991835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 9 23:58:04.222210 containerd[2018]: time="2025-05-09T23:58:04.220022987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 9 23:58:04.222959 containerd[2018]: time="2025-05-09T23:58:04.220050695Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 9 23:58:04.222959 containerd[2018]: time="2025-05-09T23:58:04.220213607Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 9 23:58:04.222959 containerd[2018]: time="2025-05-09T23:58:04.220257395Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 9 23:58:04.222959 containerd[2018]: time="2025-05-09T23:58:04.220284131Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 9 23:58:04.222959 containerd[2018]: time="2025-05-09T23:58:04.220312811Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 9 23:58:04.222959 containerd[2018]: time="2025-05-09T23:58:04.220337903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 9 23:58:04.222959 containerd[2018]: time="2025-05-09T23:58:04.220367051Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 9 23:58:04.222959 containerd[2018]: time="2025-05-09T23:58:04.220391759Z" level=info msg="NRI interface is disabled by configuration."
May 9 23:58:04.222959 containerd[2018]: time="2025-05-09T23:58:04.220417499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 9 23:58:04.223460 containerd[2018]: time="2025-05-09T23:58:04.220966355Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 9 23:58:04.223460 containerd[2018]: time="2025-05-09T23:58:04.221091839Z" level=info msg="Connect containerd service" May 9 23:58:04.227352 containerd[2018]: time="2025-05-09T23:58:04.225344159Z" level=info msg="using legacy CRI server" May 9 23:58:04.227352 containerd[2018]: time="2025-05-09T23:58:04.225401927Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 9 23:58:04.227352 containerd[2018]: time="2025-05-09T23:58:04.225596015Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 9 23:58:04.227352 containerd[2018]: time="2025-05-09T23:58:04.226792487Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 9 23:58:04.227352 containerd[2018]: time="2025-05-09T23:58:04.227308559Z" level=info msg="Start subscribing containerd event" May 9 23:58:04.227664 containerd[2018]: time="2025-05-09T23:58:04.227403455Z" level=info msg="Start recovering state" May 9 23:58:04.227664 containerd[2018]: time="2025-05-09T23:58:04.227534027Z" level=info msg="Start event monitor"
May 9 23:58:04.227664 containerd[2018]: time="2025-05-09T23:58:04.227560715Z" level=info msg="Start snapshots syncer" May 9 23:58:04.227664 containerd[2018]: time="2025-05-09T23:58:04.227620871Z" level=info msg="Start cni network conf syncer for default" May 9 23:58:04.227664 containerd[2018]: time="2025-05-09T23:58:04.227643227Z" level=info msg="Start streaming server" May 9 23:58:04.234814 containerd[2018]: time="2025-05-09T23:58:04.234268535Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 9 23:58:04.234814 containerd[2018]: time="2025-05-09T23:58:04.234386747Z" level=info msg=serving... address=/run/containerd/containerd.sock May 9 23:58:04.234607 systemd[1]: Started containerd.service - containerd container runtime. May 9 23:58:04.240261 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO https_proxy: May 9 23:58:04.242409 containerd[2018]: time="2025-05-09T23:58:04.242347715Z" level=info msg="containerd successfully booted in 0.236958s" May 9 23:58:04.341358 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO http_proxy: May 9 23:58:04.439685 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO no_proxy: May 9 23:58:04.538233 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO Checking if agent identity type OnPrem can be assumed May 9 23:58:04.637291 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO Checking if agent identity type EC2 can be assumed May 9 23:58:04.737232 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO Agent will take identity from EC2 May 9 23:58:04.835965 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO [amazon-ssm-agent] using named pipe channel for IPC May 9 23:58:04.935260 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO [amazon-ssm-agent] using named pipe channel for IPC May 9 23:58:05.034618 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO [amazon-ssm-agent] using named pipe channel for IPC May 9 23:58:05.133780 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 May 9 23:58:05.150567 tar[1999]: linux-arm64/LICENSE May 9 23:58:05.151293 tar[1999]: linux-arm64/README.md
May 9 23:58:05.184306 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 9 23:58:05.234073 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 May 9 23:58:05.334407 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO [amazon-ssm-agent] Starting Core Agent May 9 23:58:05.434632 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO [amazon-ssm-agent] registrar detected. Attempting registration May 9 23:58:05.534497 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO [Registrar] Starting registrar module May 9 23:58:05.567715 amazon-ssm-agent[2184]: 2025-05-09 23:58:04 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration May 9 23:58:05.568006 amazon-ssm-agent[2184]: 2025-05-09 23:58:05 INFO [EC2Identity] EC2 registration was successful. May 9 23:58:05.568265 amazon-ssm-agent[2184]: 2025-05-09 23:58:05 INFO [CredentialRefresher] credentialRefresher has started May 9 23:58:05.568265 amazon-ssm-agent[2184]: 2025-05-09 23:58:05 INFO [CredentialRefresher] Starting credentials refresher loop May 9 23:58:05.568265 amazon-ssm-agent[2184]: 2025-05-09 23:58:05 INFO EC2RoleProvider Successfully connected with instance profile role credentials May 9 23:58:05.634471 amazon-ssm-agent[2184]: 2025-05-09 23:58:05 INFO [CredentialRefresher] Next credential rotation will be in 31.674981384 minutes May 9 23:58:06.496572 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 9 23:58:06.513258 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:58:06.600740 amazon-ssm-agent[2184]: 2025-05-09 23:58:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process May 9 23:58:06.703055 amazon-ssm-agent[2184]: 2025-05-09 23:58:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2218) started May 9 23:58:06.803858 amazon-ssm-agent[2184]: 2025-05-09 23:58:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds May 9 23:58:06.898253 ntpd[1987]: Listen normally on 7 eth0 [fe80::462:bcff:fe4e:9b6b%2]:123 May 9 23:58:06.900119 ntpd[1987]: 9 May 23:58:06 ntpd[1987]: Listen normally on 7 eth0 [fe80::462:bcff:fe4e:9b6b%2]:123 May 9 23:58:07.796983 sshd_keygen[2017]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 9 23:58:07.857368 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 9 23:58:07.871833 systemd[1]: Starting issuegen.service - Generate /run/issue... May 9 23:58:07.877415 systemd[1]: Started sshd@0-172.31.30.213:22-147.75.109.163:52412.service - OpenSSH per-connection server daemon (147.75.109.163:52412). May 9 23:58:07.900689 systemd[1]: issuegen.service: Deactivated successfully. May 9 23:58:07.901520 systemd[1]: Finished issuegen.service - Generate /run/issue. May 9 23:58:07.915755 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 9 23:58:07.960441 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 9 23:58:07.973112 systemd[1]: Started getty@tty1.service - Getty on tty1. May 9 23:58:07.983830 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 9 23:58:07.987872 systemd[1]: Reached target getty.target - Login Prompts. 
May 9 23:58:07.991559 systemd[1]: Reached target multi-user.target - Multi-User System. May 9 23:58:07.996191 kubelet[2215]: E0509 23:58:07.996108 2215 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:58:07.997296 systemd[1]: Startup finished in 1.238s (kernel) + 9.095s (initrd) + 11.131s (userspace) = 21.465s. May 9 23:58:08.009954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:58:08.010372 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:58:08.013333 systemd[1]: kubelet.service: Consumed 1.313s CPU time. May 9 23:58:08.118256 sshd[2243]: Accepted publickey for core from 147.75.109.163 port 52412 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:58:08.121828 sshd[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:08.138580 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 9 23:58:08.144644 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 9 23:58:08.151002 systemd-logind[1992]: New session 1 of user core. May 9 23:58:08.178807 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 9 23:58:08.187983 systemd[1]: Starting user@500.service - User Manager for UID 500... May 9 23:58:08.199650 (systemd)[2260]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 9 23:58:08.426898 systemd[2260]: Queued start job for default target default.target. May 9 23:58:08.434587 systemd[2260]: Created slice app.slice - User Application Slice. May 9 23:58:08.434655 systemd[2260]: Reached target paths.target - Paths. 
May 9 23:58:08.434688 systemd[2260]: Reached target timers.target - Timers. May 9 23:58:08.437422 systemd[2260]: Starting dbus.socket - D-Bus User Message Bus Socket... May 9 23:58:08.466610 systemd[2260]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 9 23:58:08.466855 systemd[2260]: Reached target sockets.target - Sockets. May 9 23:58:08.466890 systemd[2260]: Reached target basic.target - Basic System. May 9 23:58:08.467009 systemd[2260]: Reached target default.target - Main User Target. May 9 23:58:08.467077 systemd[2260]: Startup finished in 256ms. May 9 23:58:08.467264 systemd[1]: Started user@500.service - User Manager for UID 500. May 9 23:58:08.478425 systemd[1]: Started session-1.scope - Session 1 of User core. May 9 23:58:08.637730 systemd[1]: Started sshd@1-172.31.30.213:22-147.75.109.163:37848.service - OpenSSH per-connection server daemon (147.75.109.163:37848). May 9 23:58:08.806227 sshd[2271]: Accepted publickey for core from 147.75.109.163 port 37848 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:58:08.808649 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:08.817243 systemd-logind[1992]: New session 2 of user core. May 9 23:58:08.825438 systemd[1]: Started session-2.scope - Session 2 of User core. May 9 23:58:08.950321 sshd[2271]: pam_unix(sshd:session): session closed for user core May 9 23:58:08.956836 systemd[1]: sshd@1-172.31.30.213:22-147.75.109.163:37848.service: Deactivated successfully. May 9 23:58:08.960893 systemd[1]: session-2.scope: Deactivated successfully. May 9 23:58:08.962430 systemd-logind[1992]: Session 2 logged out. Waiting for processes to exit. May 9 23:58:08.964447 systemd-logind[1992]: Removed session 2. May 9 23:58:08.988684 systemd[1]: Started sshd@2-172.31.30.213:22-147.75.109.163:37864.service - OpenSSH per-connection server daemon (147.75.109.163:37864). 
May 9 23:58:09.153641 sshd[2278]: Accepted publickey for core from 147.75.109.163 port 37864 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:58:09.156768 sshd[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:09.164840 systemd-logind[1992]: New session 3 of user core. May 9 23:58:09.175424 systemd[1]: Started session-3.scope - Session 3 of User core. May 9 23:58:09.293381 sshd[2278]: pam_unix(sshd:session): session closed for user core May 9 23:58:09.298712 systemd[1]: sshd@2-172.31.30.213:22-147.75.109.163:37864.service: Deactivated successfully. May 9 23:58:09.301638 systemd[1]: session-3.scope: Deactivated successfully. May 9 23:58:09.304972 systemd-logind[1992]: Session 3 logged out. Waiting for processes to exit. May 9 23:58:09.307228 systemd-logind[1992]: Removed session 3. May 9 23:58:09.330507 systemd[1]: Started sshd@3-172.31.30.213:22-147.75.109.163:37880.service - OpenSSH per-connection server daemon (147.75.109.163:37880). May 9 23:58:09.510534 sshd[2285]: Accepted publickey for core from 147.75.109.163 port 37880 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:58:09.512610 sshd[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:09.519859 systemd-logind[1992]: New session 4 of user core. May 9 23:58:09.530420 systemd[1]: Started session-4.scope - Session 4 of User core. May 9 23:58:09.658730 sshd[2285]: pam_unix(sshd:session): session closed for user core May 9 23:58:09.665294 systemd[1]: sshd@3-172.31.30.213:22-147.75.109.163:37880.service: Deactivated successfully. May 9 23:58:09.668857 systemd[1]: session-4.scope: Deactivated successfully. May 9 23:58:09.670319 systemd-logind[1992]: Session 4 logged out. Waiting for processes to exit. May 9 23:58:09.672040 systemd-logind[1992]: Removed session 4. 
May 9 23:58:09.693126 systemd[1]: Started sshd@4-172.31.30.213:22-147.75.109.163:37888.service - OpenSSH per-connection server daemon (147.75.109.163:37888). May 9 23:58:09.873585 sshd[2292]: Accepted publickey for core from 147.75.109.163 port 37888 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:58:09.875653 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:09.883278 systemd-logind[1992]: New session 5 of user core. May 9 23:58:09.894423 systemd[1]: Started session-5.scope - Session 5 of User core. May 9 23:58:09.465935 systemd-resolved[1935]: Clock change detected. Flushing caches. May 9 23:58:09.474600 systemd-journald[1564]: Time jumped backwards, rotating. May 9 23:58:09.580467 sudo[2296]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 9 23:58:09.581136 sudo[2296]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:58:09.595530 sudo[2296]: pam_unix(sudo:session): session closed for user root May 9 23:58:09.620178 sshd[2292]: pam_unix(sshd:session): session closed for user core May 9 23:58:09.627012 systemd[1]: sshd@4-172.31.30.213:22-147.75.109.163:37888.service: Deactivated successfully. May 9 23:58:09.630372 systemd[1]: session-5.scope: Deactivated successfully. May 9 23:58:09.631816 systemd-logind[1992]: Session 5 logged out. Waiting for processes to exit. May 9 23:58:09.633999 systemd-logind[1992]: Removed session 5. May 9 23:58:09.661260 systemd[1]: Started sshd@5-172.31.30.213:22-147.75.109.163:37898.service - OpenSSH per-connection server daemon (147.75.109.163:37898). May 9 23:58:09.825657 sshd[2301]: Accepted publickey for core from 147.75.109.163 port 37898 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:58:09.827457 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:09.837064 systemd-logind[1992]: New session 6 of user core. 
May 9 23:58:09.841041 systemd[1]: Started session-6.scope - Session 6 of User core. May 9 23:58:09.943819 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 9 23:58:09.944909 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:58:09.951464 sudo[2305]: pam_unix(sudo:session): session closed for user root May 9 23:58:09.961884 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 9 23:58:09.962491 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:58:09.985261 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 9 23:58:09.991162 auditctl[2308]: No rules May 9 23:58:09.991883 systemd[1]: audit-rules.service: Deactivated successfully. May 9 23:58:09.992242 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 9 23:58:10.001467 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 9 23:58:10.053513 augenrules[2326]: No rules May 9 23:58:10.056832 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 9 23:58:10.059387 sudo[2304]: pam_unix(sudo:session): session closed for user root May 9 23:58:10.082034 sshd[2301]: pam_unix(sshd:session): session closed for user core May 9 23:58:10.089236 systemd[1]: sshd@5-172.31.30.213:22-147.75.109.163:37898.service: Deactivated successfully. May 9 23:58:10.093872 systemd[1]: session-6.scope: Deactivated successfully. May 9 23:58:10.095569 systemd-logind[1992]: Session 6 logged out. Waiting for processes to exit. May 9 23:58:10.097455 systemd-logind[1992]: Removed session 6. May 9 23:58:10.123237 systemd[1]: Started sshd@6-172.31.30.213:22-147.75.109.163:37902.service - OpenSSH per-connection server daemon (147.75.109.163:37902). 
May 9 23:58:10.296456 sshd[2334]: Accepted publickey for core from 147.75.109.163 port 37902 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:58:10.299096 sshd[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:58:10.306440 systemd-logind[1992]: New session 7 of user core. May 9 23:58:10.318959 systemd[1]: Started session-7.scope - Session 7 of User core. May 9 23:58:10.423254 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 9 23:58:10.424739 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 9 23:58:10.998647 systemd[1]: Starting docker.service - Docker Application Container Engine... May 9 23:58:11.000320 (dockerd)[2353]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 9 23:58:11.463857 dockerd[2353]: time="2025-05-09T23:58:11.462900265Z" level=info msg="Starting up" May 9 23:58:11.690910 systemd[1]: var-lib-docker-metacopy\x2dcheck2268778971-merged.mount: Deactivated successfully. May 9 23:58:11.704103 dockerd[2353]: time="2025-05-09T23:58:11.703791543Z" level=info msg="Loading containers: start." May 9 23:58:11.907768 kernel: Initializing XFRM netlink socket May 9 23:58:11.958901 (udev-worker)[2376]: Network interface NamePolicy= disabled on kernel command line. May 9 23:58:12.045499 systemd-networkd[1934]: docker0: Link UP May 9 23:58:12.072170 dockerd[2353]: time="2025-05-09T23:58:12.072020736Z" level=info msg="Loading containers: done." 
May 9 23:58:12.095661 dockerd[2353]: time="2025-05-09T23:58:12.095511757Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 9 23:58:12.096503 dockerd[2353]: time="2025-05-09T23:58:12.095796385Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 9 23:58:12.096503 dockerd[2353]: time="2025-05-09T23:58:12.095984449Z" level=info msg="Daemon has completed initialization" May 9 23:58:12.160773 dockerd[2353]: time="2025-05-09T23:58:12.160494565Z" level=info msg="API listen on /run/docker.sock" May 9 23:58:12.161044 systemd[1]: Started docker.service - Docker Application Container Engine. May 9 23:58:13.668973 containerd[2018]: time="2025-05-09T23:58:13.668903512Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 9 23:58:14.295257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683795311.mount: Deactivated successfully. 
May 9 23:58:16.476782 containerd[2018]: time="2025-05-09T23:58:16.476676522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:16.479642 containerd[2018]: time="2025-05-09T23:58:16.479525310Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794150" May 9 23:58:16.482064 containerd[2018]: time="2025-05-09T23:58:16.481985466Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:16.489927 containerd[2018]: time="2025-05-09T23:58:16.489821406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:16.493953 containerd[2018]: time="2025-05-09T23:58:16.493262442Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.824292066s" May 9 23:58:16.493953 containerd[2018]: time="2025-05-09T23:58:16.493338390Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 9 23:58:16.538481 containerd[2018]: time="2025-05-09T23:58:16.538365523Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 9 23:58:17.828300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 9 23:58:17.839075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:18.154067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:18.157496 (kubelet)[2565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:58:18.241445 kubelet[2565]: E0509 23:58:18.241338 2565 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:58:18.247864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:58:18.248178 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:58:19.142787 containerd[2018]: time="2025-05-09T23:58:19.140981432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:19.144225 containerd[2018]: time="2025-05-09T23:58:19.144160016Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855550" May 9 23:58:19.146147 containerd[2018]: time="2025-05-09T23:58:19.146057576Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:19.151553 containerd[2018]: time="2025-05-09T23:58:19.151449716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:58:19.154298 containerd[2018]: time="2025-05-09T23:58:19.154057568Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 2.615577229s" May 9 23:58:19.154298 containerd[2018]: time="2025-05-09T23:58:19.154138052Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 9 23:58:19.195585 containerd[2018]: time="2025-05-09T23:58:19.195457604Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 9 23:58:20.943827 containerd[2018]: time="2025-05-09T23:58:20.943698673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:20.945861 containerd[2018]: time="2025-05-09T23:58:20.945790657Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263945" May 9 23:58:20.946212 containerd[2018]: time="2025-05-09T23:58:20.946148029Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:20.951661 containerd[2018]: time="2025-05-09T23:58:20.951577357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:58:20.954119 containerd[2018]: time="2025-05-09T23:58:20.953935321Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.758413169s" May 9 23:58:20.954119 containerd[2018]: time="2025-05-09T23:58:20.953990497Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 9 23:58:20.993542 containerd[2018]: time="2025-05-09T23:58:20.993496537Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 9 23:58:22.351961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount526017632.mount: Deactivated successfully. May 9 23:58:22.873360 containerd[2018]: time="2025-05-09T23:58:22.873278138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:22.874882 containerd[2018]: time="2025-05-09T23:58:22.874825238Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775705" May 9 23:58:22.875788 containerd[2018]: time="2025-05-09T23:58:22.875667494Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:22.879502 containerd[2018]: time="2025-05-09T23:58:22.879401426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:58:22.881181 containerd[2018]: time="2025-05-09T23:58:22.880993814Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.887226185s" May 9 23:58:22.881181 containerd[2018]: time="2025-05-09T23:58:22.881048078Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 9 23:58:22.919043 containerd[2018]: time="2025-05-09T23:58:22.918989510Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 9 23:58:23.448043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021444339.mount: Deactivated successfully. May 9 23:58:24.615324 containerd[2018]: time="2025-05-09T23:58:24.615253395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:24.620663 containerd[2018]: time="2025-05-09T23:58:24.620586099Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" May 9 23:58:24.624665 containerd[2018]: time="2025-05-09T23:58:24.624571899Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:24.634300 containerd[2018]: time="2025-05-09T23:58:24.633244359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 9 23:58:24.636155 containerd[2018]: time="2025-05-09T23:58:24.636082299Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.717026849s" May 9 23:58:24.636368 containerd[2018]: time="2025-05-09T23:58:24.636330687Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 9 23:58:24.678737 containerd[2018]: time="2025-05-09T23:58:24.678657519Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 9 23:58:25.224516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3750899716.mount: Deactivated successfully. May 9 23:58:25.238035 containerd[2018]: time="2025-05-09T23:58:25.237943490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:25.240037 containerd[2018]: time="2025-05-09T23:58:25.239954726Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" May 9 23:58:25.242685 containerd[2018]: time="2025-05-09T23:58:25.242585486Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:25.248221 containerd[2018]: time="2025-05-09T23:58:25.248109014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:25.250325 containerd[2018]: time="2025-05-09T23:58:25.250104530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 571.381635ms"
May 9 23:58:25.250325 containerd[2018]: time="2025-05-09T23:58:25.250172126Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 9 23:58:25.292903 containerd[2018]: time="2025-05-09T23:58:25.292650302Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 9 23:58:25.896953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782217753.mount: Deactivated successfully. May 9 23:58:28.406784 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 9 23:58:28.414200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:28.765324 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:28.780378 (kubelet)[2710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:58:28.879492 kubelet[2710]: E0509 23:58:28.879372 2710 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:58:28.884044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:58:28.884439 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 9 23:58:29.906092 containerd[2018]: time="2025-05-09T23:58:29.905864709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:29.911882 containerd[2018]: time="2025-05-09T23:58:29.911285697Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" May 9 23:58:29.914099 containerd[2018]: time="2025-05-09T23:58:29.913966797Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:29.921222 containerd[2018]: time="2025-05-09T23:58:29.921134745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:58:29.924399 containerd[2018]: time="2025-05-09T23:58:29.924192885Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.631479835s" May 9 23:58:29.924399 containerd[2018]: time="2025-05-09T23:58:29.924255501Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 9 23:58:33.388681 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 9 23:58:38.906329 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 9 23:58:38.917109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:39.231143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 9 23:58:39.241465 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 9 23:58:39.343081 kubelet[2789]: E0509 23:58:39.343023 2789 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 9 23:58:39.346801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 9 23:58:39.347100 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 9 23:58:39.968227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:39.984907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:40.012790 systemd[1]: Reloading requested from client PID 2803 ('systemctl') (unit session-7.scope)... May 9 23:58:40.013015 systemd[1]: Reloading... May 9 23:58:40.266302 zram_generator::config[2847]: No configuration found. May 9 23:58:40.551381 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:58:40.729274 systemd[1]: Reloading finished in 715 ms. May 9 23:58:40.817609 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 9 23:58:40.818027 systemd[1]: kubelet.service: Failed with result 'signal'. May 9 23:58:40.818650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:40.828348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:41.135996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 9 23:58:41.136528 (kubelet)[2905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 23:58:41.222768 kubelet[2905]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:58:41.222768 kubelet[2905]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 23:58:41.222768 kubelet[2905]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:58:41.224704 kubelet[2905]: I0509 23:58:41.224611 2905 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 23:58:41.904754 kubelet[2905]: I0509 23:58:41.903915 2905 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 23:58:41.904754 kubelet[2905]: I0509 23:58:41.903958 2905 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 23:58:41.904754 kubelet[2905]: I0509 23:58:41.904277 2905 server.go:927] "Client rotation is on, will bootstrap in background" May 9 23:58:41.931308 kubelet[2905]: I0509 23:58:41.931255 2905 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 23:58:41.931683 kubelet[2905]: E0509 23:58:41.931656 2905 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://172.31.30.213:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:41.945786 kubelet[2905]: I0509 23:58:41.945737 2905 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 9 23:58:41.946578 kubelet[2905]: I0509 23:58:41.946513 2905 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 23:58:41.947237 kubelet[2905]: I0509 23:58:41.946710 2905 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-213","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"Po
dPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 23:58:41.947773 kubelet[2905]: I0509 23:58:41.947488 2905 topology_manager.go:138] "Creating topology manager with none policy" May 9 23:58:41.947773 kubelet[2905]: I0509 23:58:41.947522 2905 container_manager_linux.go:301] "Creating device plugin manager" May 9 23:58:41.947971 kubelet[2905]: I0509 23:58:41.947948 2905 state_mem.go:36] "Initialized new in-memory state store" May 9 23:58:41.949956 kubelet[2905]: I0509 23:58:41.949573 2905 kubelet.go:400] "Attempting to sync node with API server" May 9 23:58:41.949956 kubelet[2905]: I0509 23:58:41.949626 2905 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 23:58:41.949956 kubelet[2905]: I0509 23:58:41.949713 2905 kubelet.go:312] "Adding apiserver pod source" May 9 23:58:41.949956 kubelet[2905]: I0509 23:58:41.949808 2905 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 23:58:41.954415 kubelet[2905]: W0509 23:58:41.953705 2905 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.213:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:41.954415 kubelet[2905]: E0509 23:58:41.953840 2905 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.213:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:41.954415 kubelet[2905]: W0509 23:58:41.954268 2905 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.213:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-213&limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:41.954415 kubelet[2905]: 
E0509 23:58:41.954332 2905 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.213:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-213&limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:41.954955 kubelet[2905]: I0509 23:58:41.954913 2905 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 23:58:41.955303 kubelet[2905]: I0509 23:58:41.955267 2905 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 23:58:41.955419 kubelet[2905]: W0509 23:58:41.955387 2905 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 9 23:58:41.956955 kubelet[2905]: I0509 23:58:41.956906 2905 server.go:1264] "Started kubelet" May 9 23:58:41.964487 kubelet[2905]: I0509 23:58:41.964420 2905 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 23:58:41.970077 kubelet[2905]: E0509 23:58:41.969628 2905 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.213:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.213:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-213.183e0149cd6a4871 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-213,UID:ip-172-31-30-213,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-213,},FirstTimestamp:2025-05-09 23:58:41.956866161 +0000 UTC m=+0.810280373,LastTimestamp:2025-05-09 23:58:41.956866161 +0000 UTC m=+0.810280373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-213,}" 
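Every client-go reflector and certificate-manager error in this stretch dials the same endpoint, `172.31.30.213:6443` — the apiserver that kubelet itself is about to launch as a static pod, so the refusals are expected until that pod is up. A sketch tallying the refused dial targets across such lines:

```python
import re
from collections import Counter

# Matches the 'dial tcp <host:port>: connect: connection refused' fragment
# from the reflector/controller errors above.
DIAL = re.compile(r"dial tcp ([\d.]+:\d+): connect: connection refused")

def refused_endpoints(lines) -> Counter:
    """Count connection-refused dial targets across journal lines."""
    return Counter(m.group(1) for line in lines for m in DIAL.finditer(line))

lines = [
    'Failed to watch *v1.Service: ... dial tcp 172.31.30.213:6443: connect: connection refused',
    'Failed to watch *v1.Node: ... dial tcp 172.31.30.213:6443: connect: connection refused',
]
print(refused_endpoints(lines))  # Counter({'172.31.30.213:6443': 2})
```

A single dominant endpoint in the tally, as here, points at one unreachable apiserver rather than general network trouble.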
May 9 23:58:41.976205 kubelet[2905]: I0509 23:58:41.976170 2905 volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 23:58:41.980768 kubelet[2905]: I0509 23:58:41.978781 2905 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 23:58:41.980768 kubelet[2905]: I0509 23:58:41.980420 2905 server.go:455] "Adding debug handlers to kubelet server" May 9 23:58:41.982296 kubelet[2905]: I0509 23:58:41.982210 2905 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 23:58:41.982837 kubelet[2905]: I0509 23:58:41.982807 2905 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 23:58:41.983223 kubelet[2905]: I0509 23:58:41.983175 2905 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 23:58:41.983826 kubelet[2905]: E0509 23:58:41.983708 2905 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.213:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-213?timeout=10s\": dial tcp 172.31.30.213:6443: connect: connection refused" interval="200ms" May 9 23:58:41.984327 kubelet[2905]: I0509 23:58:41.984297 2905 factory.go:221] Registration of the systemd container factory successfully May 9 23:58:41.984586 kubelet[2905]: I0509 23:58:41.984557 2905 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 23:58:41.985047 kubelet[2905]: I0509 23:58:41.984999 2905 reconciler.go:26] "Reconciler: start to sync state" May 9 23:58:41.987593 kubelet[2905]: E0509 23:58:41.987553 2905 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 23:58:41.988596 kubelet[2905]: I0509 23:58:41.988562 2905 factory.go:221] Registration of the containerd container factory successfully May 9 23:58:42.003124 kubelet[2905]: W0509 23:58:42.003045 2905 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.213:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:42.004097 kubelet[2905]: E0509 23:58:42.004021 2905 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.213:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:42.009427 kubelet[2905]: I0509 23:58:42.009370 2905 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 23:58:42.013224 kubelet[2905]: I0509 23:58:42.013158 2905 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 23:58:42.013342 kubelet[2905]: I0509 23:58:42.013295 2905 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 23:58:42.013423 kubelet[2905]: I0509 23:58:42.013352 2905 kubelet.go:2337] "Starting kubelet main sync loop" May 9 23:58:42.013474 kubelet[2905]: E0509 23:58:42.013444 2905 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 23:58:42.016100 kubelet[2905]: W0509 23:58:42.016012 2905 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.213:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:42.016100 kubelet[2905]: E0509 23:58:42.016106 2905 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.213:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:42.028189 kubelet[2905]: I0509 23:58:42.028148 2905 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 23:58:42.028189 kubelet[2905]: I0509 23:58:42.028186 2905 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 23:58:42.028395 kubelet[2905]: I0509 23:58:42.028221 2905 state_mem.go:36] "Initialized new in-memory state store" May 9 23:58:42.033460 kubelet[2905]: I0509 23:58:42.033408 2905 policy_none.go:49] "None policy: Start" May 9 23:58:42.034847 kubelet[2905]: I0509 23:58:42.034808 2905 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 23:58:42.034946 kubelet[2905]: I0509 23:58:42.034856 2905 state_mem.go:35] "Initializing new in-memory state store" May 9 23:58:42.048825 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
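The `Created slice` entries that follow show the systemd cgroup driver's naming scheme: a `kubepods.slice` root, per-QoS children (`kubepods-besteffort.slice`, `kubepods-burstable.slice`), and per-pod slices embedding the pod UID. A sketch reconstructing those names from QoS class and UID — the dash-to-underscore UID substitution is an assumption for illustration, since the static-pod UIDs in this log contain no dashes and so never exercise it:

```python
def pod_slice_name(qos: str, pod_uid: str) -> str:
    """Build the systemd slice name matching the 'Created slice' entries
    above. Guaranteed pods sit directly under kubepods.slice; burstable and
    besteffort pods under their QoS sub-slice. Assumption: dashes in the pod
    UID map to underscores (not observable with the UIDs in this log)."""
    parent = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{parent}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("burstable", "59dc2c0abe925b6d9d2b72ec08929578"))
# → kubepods-burstable-pod59dc2c0abe925b6d9d2b72ec08929578.slice
```

The three burstable pod slices created below (scheduler, apiserver, controller-manager) follow exactly this pattern with the UIDs from the `Topology Admit Handler` entries.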
May 9 23:58:42.066273 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 9 23:58:42.081011 kubelet[2905]: I0509 23:58:42.080434 2905 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-213" May 9 23:58:42.082474 kubelet[2905]: E0509 23:58:42.082379 2905 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.213:6443/api/v1/nodes\": dial tcp 172.31.30.213:6443: connect: connection refused" node="ip-172-31-30-213" May 9 23:58:42.086509 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 9 23:58:42.090792 kubelet[2905]: I0509 23:58:42.090176 2905 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 23:58:42.090792 kubelet[2905]: I0509 23:58:42.090499 2905 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 23:58:42.090792 kubelet[2905]: I0509 23:58:42.090684 2905 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 23:58:42.097160 kubelet[2905]: E0509 23:58:42.097118 2905 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-213\" not found" May 9 23:58:42.113766 kubelet[2905]: I0509 23:58:42.113688 2905 topology_manager.go:215] "Topology Admit Handler" podUID="59dc2c0abe925b6d9d2b72ec08929578" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-213" May 9 23:58:42.115815 kubelet[2905]: I0509 23:58:42.115757 2905 topology_manager.go:215] "Topology Admit Handler" podUID="95cc07fdab67cb59d888d4f240c17f38" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-213" May 9 23:58:42.118702 kubelet[2905]: I0509 23:58:42.118029 2905 topology_manager.go:215] "Topology Admit Handler" podUID="3607f126a6e78a9f575d1ac7208d4d7c" podNamespace="kube-system" 
podName="kube-controller-manager-ip-172-31-30-213" May 9 23:58:42.132334 systemd[1]: Created slice kubepods-burstable-pod59dc2c0abe925b6d9d2b72ec08929578.slice - libcontainer container kubepods-burstable-pod59dc2c0abe925b6d9d2b72ec08929578.slice. May 9 23:58:42.148907 systemd[1]: Created slice kubepods-burstable-pod95cc07fdab67cb59d888d4f240c17f38.slice - libcontainer container kubepods-burstable-pod95cc07fdab67cb59d888d4f240c17f38.slice. May 9 23:58:42.166709 systemd[1]: Created slice kubepods-burstable-pod3607f126a6e78a9f575d1ac7208d4d7c.slice - libcontainer container kubepods-burstable-pod3607f126a6e78a9f575d1ac7208d4d7c.slice. May 9 23:58:42.184868 kubelet[2905]: E0509 23:58:42.184806 2905 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.213:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-213?timeout=10s\": dial tcp 172.31.30.213:6443: connect: connection refused" interval="400ms" May 9 23:58:42.187948 kubelet[2905]: I0509 23:58:42.187914 2905 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3607f126a6e78a9f575d1ac7208d4d7c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-213\" (UID: \"3607f126a6e78a9f575d1ac7208d4d7c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-213" May 9 23:58:42.188527 kubelet[2905]: I0509 23:58:42.188157 2905 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59dc2c0abe925b6d9d2b72ec08929578-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-213\" (UID: \"59dc2c0abe925b6d9d2b72ec08929578\") " pod="kube-system/kube-scheduler-ip-172-31-30-213" May 9 23:58:42.188527 kubelet[2905]: I0509 23:58:42.188207 2905 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/3607f126a6e78a9f575d1ac7208d4d7c-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-213\" (UID: \"3607f126a6e78a9f575d1ac7208d4d7c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-213" May 9 23:58:42.188527 kubelet[2905]: I0509 23:58:42.188279 2905 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3607f126a6e78a9f575d1ac7208d4d7c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-213\" (UID: \"3607f126a6e78a9f575d1ac7208d4d7c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-213" May 9 23:58:42.188527 kubelet[2905]: I0509 23:58:42.188320 2905 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3607f126a6e78a9f575d1ac7208d4d7c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-213\" (UID: \"3607f126a6e78a9f575d1ac7208d4d7c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-213" May 9 23:58:42.188527 kubelet[2905]: I0509 23:58:42.188359 2905 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3607f126a6e78a9f575d1ac7208d4d7c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-213\" (UID: \"3607f126a6e78a9f575d1ac7208d4d7c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-213" May 9 23:58:42.188827 kubelet[2905]: I0509 23:58:42.188398 2905 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/95cc07fdab67cb59d888d4f240c17f38-ca-certs\") pod \"kube-apiserver-ip-172-31-30-213\" (UID: \"95cc07fdab67cb59d888d4f240c17f38\") " pod="kube-system/kube-apiserver-ip-172-31-30-213" May 9 23:58:42.188827 kubelet[2905]: I0509 23:58:42.188432 2905 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/95cc07fdab67cb59d888d4f240c17f38-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-213\" (UID: \"95cc07fdab67cb59d888d4f240c17f38\") " pod="kube-system/kube-apiserver-ip-172-31-30-213" May 9 23:58:42.188827 kubelet[2905]: I0509 23:58:42.188466 2905 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/95cc07fdab67cb59d888d4f240c17f38-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-213\" (UID: \"95cc07fdab67cb59d888d4f240c17f38\") " pod="kube-system/kube-apiserver-ip-172-31-30-213" May 9 23:58:42.285162 kubelet[2905]: I0509 23:58:42.285105 2905 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-213" May 9 23:58:42.285785 kubelet[2905]: E0509 23:58:42.285540 2905 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.213:6443/api/v1/nodes\": dial tcp 172.31.30.213:6443: connect: connection refused" node="ip-172-31-30-213" May 9 23:58:42.445666 containerd[2018]: time="2025-05-09T23:58:42.445425811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-213,Uid:59dc2c0abe925b6d9d2b72ec08929578,Namespace:kube-system,Attempt:0,}" May 9 23:58:42.462201 containerd[2018]: time="2025-05-09T23:58:42.462093355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-213,Uid:95cc07fdab67cb59d888d4f240c17f38,Namespace:kube-system,Attempt:0,}" May 9 23:58:42.473169 containerd[2018]: time="2025-05-09T23:58:42.473086591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-213,Uid:3607f126a6e78a9f575d1ac7208d4d7c,Namespace:kube-system,Attempt:0,}" May 9 23:58:42.586006 kubelet[2905]: E0509 23:58:42.585940 2905 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://172.31.30.213:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-213?timeout=10s\": dial tcp 172.31.30.213:6443: connect: connection refused" interval="800ms" May 9 23:58:42.688651 kubelet[2905]: I0509 23:58:42.688574 2905 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-213" May 9 23:58:42.689164 kubelet[2905]: E0509 23:58:42.689097 2905 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.213:6443/api/v1/nodes\": dial tcp 172.31.30.213:6443: connect: connection refused" node="ip-172-31-30-213" May 9 23:58:42.944642 kubelet[2905]: W0509 23:58:42.944500 2905 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.213:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-213&limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:42.944642 kubelet[2905]: E0509 23:58:42.944608 2905 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.213:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-213&limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:42.996087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4289411162.mount: Deactivated successfully. 
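The lease controller's "Failed to ensure lease exists, will retry" errors show a doubling retry interval: `interval="200ms"`, then `"400ms"`, then `"800ms"`. A sketch of that backoff shape — only those first three steps are observable in this log, so the cap value here is an assumption for illustration:

```python
def lease_retry_intervals(base_ms: int = 200, cap_ms: int = 7000, n: int = 5):
    """Reproduce the doubling retry interval visible in the lease-controller
    errors above (200ms -> 400ms -> 800ms ...). cap_ms is an assumed ceiling;
    the log shows only the first three, uncapped steps."""
    out, cur = [], base_ms
    for _ in range(n):
        out.append(min(cur, cap_ms))
        cur *= 2
    return out

print(lease_retry_intervals(n=3))  # → [200, 400, 800]
```

The same doubling is why the gaps between the three failed `Attempting to register node` entries grow across this section: each client-side retry loop backs off independently while the apiserver is still unreachable.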
May 9 23:58:43.007817 containerd[2018]: time="2025-05-09T23:58:43.007337226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:58:43.013017 kubelet[2905]: W0509 23:58:43.012929 2905 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.213:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:43.013017 kubelet[2905]: E0509 23:58:43.013022 2905 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.213:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:43.013745 containerd[2018]: time="2025-05-09T23:58:43.013684890Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" May 9 23:58:43.016564 containerd[2018]: time="2025-05-09T23:58:43.015603654Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:58:43.019641 containerd[2018]: time="2025-05-09T23:58:43.018318030Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:58:43.021205 containerd[2018]: time="2025-05-09T23:58:43.020957874Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:58:43.023008 containerd[2018]: time="2025-05-09T23:58:43.022959294Z" level=info msg="stop pulling 
image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 23:58:43.024830 containerd[2018]: time="2025-05-09T23:58:43.024789030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 9 23:58:43.032174 containerd[2018]: time="2025-05-09T23:58:43.032076402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 9 23:58:43.034086 containerd[2018]: time="2025-05-09T23:58:43.033772098Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 571.567935ms" May 9 23:58:43.037497 containerd[2018]: time="2025-05-09T23:58:43.037418562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.883071ms" May 9 23:58:43.075477 containerd[2018]: time="2025-05-09T23:58:43.075066474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 601.867959ms" May 9 23:58:43.082314 kubelet[2905]: W0509 23:58:43.081866 2905 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.31.30.213:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:43.082314 kubelet[2905]: E0509 23:58:43.081958 2905 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.213:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:43.127779 kubelet[2905]: W0509 23:58:43.127631 2905 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.213:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:43.129615 kubelet[2905]: E0509 23:58:43.129546 2905 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.213:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.213:6443: connect: connection refused May 9 23:58:43.235075 containerd[2018]: time="2025-05-09T23:58:43.234496807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:58:43.235075 containerd[2018]: time="2025-05-09T23:58:43.234679603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:58:43.235075 containerd[2018]: time="2025-05-09T23:58:43.234745795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:43.236433 containerd[2018]: time="2025-05-09T23:58:43.236320867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:43.247411 containerd[2018]: time="2025-05-09T23:58:43.246049591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:58:43.247411 containerd[2018]: time="2025-05-09T23:58:43.247206739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:58:43.247411 containerd[2018]: time="2025-05-09T23:58:43.247240735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:43.247765 containerd[2018]: time="2025-05-09T23:58:43.247579063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:43.248920 containerd[2018]: time="2025-05-09T23:58:43.248513455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:58:43.248920 containerd[2018]: time="2025-05-09T23:58:43.248635723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:58:43.248920 containerd[2018]: time="2025-05-09T23:58:43.248676883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:43.251620 containerd[2018]: time="2025-05-09T23:58:43.251468203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:58:43.279095 systemd[1]: Started cri-containerd-0e1eb90f85ff66f32e6c516850642f8abde94e2dc327a049c3d89dd8acb373fb.scope - libcontainer container 0e1eb90f85ff66f32e6c516850642f8abde94e2dc327a049c3d89dd8acb373fb. 
May 9 23:58:43.306052 systemd[1]: Started cri-containerd-b39465919d9561f5ba6f9a44c9ac487f0bc840ed4e10a3b884b9d1e382307dcc.scope - libcontainer container b39465919d9561f5ba6f9a44c9ac487f0bc840ed4e10a3b884b9d1e382307dcc. May 9 23:58:43.320500 systemd[1]: Started cri-containerd-3947db0e30b45f68fb9c8b63c86d22ea327cf38e7e9abcfec841e75cb34612d3.scope - libcontainer container 3947db0e30b45f68fb9c8b63c86d22ea327cf38e7e9abcfec841e75cb34612d3. May 9 23:58:43.387425 kubelet[2905]: E0509 23:58:43.387220 2905 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.213:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-213?timeout=10s\": dial tcp 172.31.30.213:6443: connect: connection refused" interval="1.6s" May 9 23:58:43.430323 containerd[2018]: time="2025-05-09T23:58:43.429366752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-213,Uid:95cc07fdab67cb59d888d4f240c17f38,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e1eb90f85ff66f32e6c516850642f8abde94e2dc327a049c3d89dd8acb373fb\"" May 9 23:58:43.441714 containerd[2018]: time="2025-05-09T23:58:43.441661832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-213,Uid:59dc2c0abe925b6d9d2b72ec08929578,Namespace:kube-system,Attempt:0,} returns sandbox id \"b39465919d9561f5ba6f9a44c9ac487f0bc840ed4e10a3b884b9d1e382307dcc\"" May 9 23:58:43.442523 containerd[2018]: time="2025-05-09T23:58:43.442405784Z" level=info msg="CreateContainer within sandbox \"0e1eb90f85ff66f32e6c516850642f8abde94e2dc327a049c3d89dd8acb373fb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 9 23:58:43.450495 containerd[2018]: time="2025-05-09T23:58:43.450425984Z" level=info msg="CreateContainer within sandbox \"b39465919d9561f5ba6f9a44c9ac487f0bc840ed4e10a3b884b9d1e382307dcc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 9 23:58:43.451829 containerd[2018]: 
time="2025-05-09T23:58:43.451083476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-213,Uid:3607f126a6e78a9f575d1ac7208d4d7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3947db0e30b45f68fb9c8b63c86d22ea327cf38e7e9abcfec841e75cb34612d3\"" May 9 23:58:43.457040 containerd[2018]: time="2025-05-09T23:58:43.456888284Z" level=info msg="CreateContainer within sandbox \"3947db0e30b45f68fb9c8b63c86d22ea327cf38e7e9abcfec841e75cb34612d3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 9 23:58:43.492109 containerd[2018]: time="2025-05-09T23:58:43.491641773Z" level=info msg="CreateContainer within sandbox \"0e1eb90f85ff66f32e6c516850642f8abde94e2dc327a049c3d89dd8acb373fb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bbdd3ab541f908049d540564d3dadeed733884ded76ff5a4bc9d83aeea5a1140\"" May 9 23:58:43.493789 kubelet[2905]: I0509 23:58:43.493549 2905 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-213" May 9 23:58:43.494747 containerd[2018]: time="2025-05-09T23:58:43.494336505Z" level=info msg="StartContainer for \"bbdd3ab541f908049d540564d3dadeed733884ded76ff5a4bc9d83aeea5a1140\"" May 9 23:58:43.495368 kubelet[2905]: E0509 23:58:43.495096 2905 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.213:6443/api/v1/nodes\": dial tcp 172.31.30.213:6443: connect: connection refused" node="ip-172-31-30-213" May 9 23:58:43.498704 containerd[2018]: time="2025-05-09T23:58:43.498454941Z" level=info msg="CreateContainer within sandbox \"b39465919d9561f5ba6f9a44c9ac487f0bc840ed4e10a3b884b9d1e382307dcc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"189aea05db9f96634906940e4b25402c42b4c3be9e87b5405ad2673ea1076d54\"" May 9 23:58:43.500763 containerd[2018]: time="2025-05-09T23:58:43.500407377Z" level=info msg="StartContainer for 
\"189aea05db9f96634906940e4b25402c42b4c3be9e87b5405ad2673ea1076d54\"" May 9 23:58:43.517034 containerd[2018]: time="2025-05-09T23:58:43.516952281Z" level=info msg="CreateContainer within sandbox \"3947db0e30b45f68fb9c8b63c86d22ea327cf38e7e9abcfec841e75cb34612d3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c72309b2536b468d65cade522f19892a0dcda3bd5317fc521d4c86b95104d1b4\"" May 9 23:58:43.518101 containerd[2018]: time="2025-05-09T23:58:43.518028009Z" level=info msg="StartContainer for \"c72309b2536b468d65cade522f19892a0dcda3bd5317fc521d4c86b95104d1b4\"" May 9 23:58:43.565591 systemd[1]: Started cri-containerd-bbdd3ab541f908049d540564d3dadeed733884ded76ff5a4bc9d83aeea5a1140.scope - libcontainer container bbdd3ab541f908049d540564d3dadeed733884ded76ff5a4bc9d83aeea5a1140. May 9 23:58:43.597227 systemd[1]: Started cri-containerd-189aea05db9f96634906940e4b25402c42b4c3be9e87b5405ad2673ea1076d54.scope - libcontainer container 189aea05db9f96634906940e4b25402c42b4c3be9e87b5405ad2673ea1076d54. May 9 23:58:43.611074 systemd[1]: Started cri-containerd-c72309b2536b468d65cade522f19892a0dcda3bd5317fc521d4c86b95104d1b4.scope - libcontainer container c72309b2536b468d65cade522f19892a0dcda3bd5317fc521d4c86b95104d1b4. 
May 9 23:58:43.722915 containerd[2018]: time="2025-05-09T23:58:43.720319798Z" level=info msg="StartContainer for \"bbdd3ab541f908049d540564d3dadeed733884ded76ff5a4bc9d83aeea5a1140\" returns successfully" May 9 23:58:43.742875 containerd[2018]: time="2025-05-09T23:58:43.742520518Z" level=info msg="StartContainer for \"189aea05db9f96634906940e4b25402c42b4c3be9e87b5405ad2673ea1076d54\" returns successfully" May 9 23:58:43.783667 containerd[2018]: time="2025-05-09T23:58:43.783581314Z" level=info msg="StartContainer for \"c72309b2536b468d65cade522f19892a0dcda3bd5317fc521d4c86b95104d1b4\" returns successfully" May 9 23:58:45.099852 kubelet[2905]: I0509 23:58:45.099803 2905 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-213" May 9 23:58:47.915631 kubelet[2905]: E0509 23:58:47.915556 2905 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-213\" not found" node="ip-172-31-30-213" May 9 23:58:47.955198 kubelet[2905]: I0509 23:58:47.955119 2905 apiserver.go:52] "Watching apiserver" May 9 23:58:47.983541 kubelet[2905]: I0509 23:58:47.983473 2905 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 23:58:48.076447 kubelet[2905]: I0509 23:58:48.076215 2905 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-213" May 9 23:58:48.104217 update_engine[1994]: I20250509 23:58:48.103790 1994 update_attempter.cc:509] Updating boot flags... 
May 9 23:58:48.292913 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3204) May 9 23:58:48.327101 kubelet[2905]: E0509 23:58:48.326901 2905 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-30-213.183e0149cd6a4871 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-213,UID:ip-172-31-30-213,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-213,},FirstTimestamp:2025-05-09 23:58:41.956866161 +0000 UTC m=+0.810280373,LastTimestamp:2025-05-09 23:58:41.956866161 +0000 UTC m=+0.810280373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-213,}" May 9 23:58:48.844793 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3204) May 9 23:58:49.314806 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3204) May 9 23:58:51.110975 systemd[1]: Reloading requested from client PID 3459 ('systemctl') (unit session-7.scope)... May 9 23:58:51.111002 systemd[1]: Reloading... May 9 23:58:51.290860 zram_generator::config[3499]: No configuration found. May 9 23:58:51.563891 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 9 23:58:51.778533 systemd[1]: Reloading finished in 666 ms. May 9 23:58:51.875314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:51.897280 systemd[1]: kubelet.service: Deactivated successfully. 
May 9 23:58:51.898511 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:51.898789 systemd[1]: kubelet.service: Consumed 1.638s CPU time, 116.1M memory peak, 0B memory swap peak. May 9 23:58:51.908389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 9 23:58:52.276048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 9 23:58:52.290490 (kubelet)[3559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 9 23:58:52.409849 kubelet[3559]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 9 23:58:52.410669 kubelet[3559]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 9 23:58:52.411362 kubelet[3559]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 9 23:58:52.411362 kubelet[3559]: I0509 23:58:52.410923 3559 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 9 23:58:52.422665 kubelet[3559]: I0509 23:58:52.422607 3559 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 9 23:58:52.423048 kubelet[3559]: I0509 23:58:52.423023 3559 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 9 23:58:52.423867 kubelet[3559]: I0509 23:58:52.423824 3559 server.go:927] "Client rotation is on, will bootstrap in background" May 9 23:58:52.428315 kubelet[3559]: I0509 23:58:52.428177 3559 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 9 23:58:52.433395 kubelet[3559]: I0509 23:58:52.433089 3559 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 9 23:58:52.443804 sudo[3572]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 9 23:58:52.444513 sudo[3572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 9 23:58:52.461446 kubelet[3559]: I0509 23:58:52.461365 3559 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 9 23:58:52.462633 kubelet[3559]: I0509 23:58:52.462511 3559 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 9 23:58:52.462633 kubelet[3559]: I0509 23:58:52.462683 3559 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-213","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 9 23:58:52.462633 kubelet[3559]: I0509 23:58:52.463101 3559 topology_manager.go:138] "Creating topology manager with none policy" May 9 
23:58:52.462633 kubelet[3559]: I0509 23:58:52.463127 3559 container_manager_linux.go:301] "Creating device plugin manager" May 9 23:58:52.462633 kubelet[3559]: I0509 23:58:52.463191 3559 state_mem.go:36] "Initialized new in-memory state store" May 9 23:58:52.463674 kubelet[3559]: I0509 23:58:52.463428 3559 kubelet.go:400] "Attempting to sync node with API server" May 9 23:58:52.464406 kubelet[3559]: I0509 23:58:52.464342 3559 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 9 23:58:52.464568 kubelet[3559]: I0509 23:58:52.464430 3559 kubelet.go:312] "Adding apiserver pod source" May 9 23:58:52.464568 kubelet[3559]: I0509 23:58:52.464474 3559 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 9 23:58:52.469594 kubelet[3559]: I0509 23:58:52.469481 3559 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 9 23:58:52.469986 kubelet[3559]: I0509 23:58:52.469864 3559 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 9 23:58:52.470646 kubelet[3559]: I0509 23:58:52.470602 3559 server.go:1264] "Started kubelet" May 9 23:58:52.484771 kubelet[3559]: I0509 23:58:52.481065 3559 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 9 23:58:52.506377 kubelet[3559]: I0509 23:58:52.498853 3559 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 9 23:58:52.506377 kubelet[3559]: I0509 23:58:52.502241 3559 server.go:455] "Adding debug handlers to kubelet server" May 9 23:58:52.506377 kubelet[3559]: I0509 23:58:52.505931 3559 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 9 23:58:52.506377 kubelet[3559]: I0509 23:58:52.506331 3559 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 9 23:58:52.519760 kubelet[3559]: I0509 23:58:52.517696 3559 
volume_manager.go:291] "Starting Kubelet Volume Manager" May 9 23:58:52.519760 kubelet[3559]: I0509 23:58:52.519641 3559 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 9 23:58:52.520138 kubelet[3559]: I0509 23:58:52.520094 3559 reconciler.go:26] "Reconciler: start to sync state" May 9 23:58:52.558086 kubelet[3559]: I0509 23:58:52.556671 3559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 9 23:58:52.568344 kubelet[3559]: I0509 23:58:52.562681 3559 factory.go:221] Registration of the systemd container factory successfully May 9 23:58:52.568344 kubelet[3559]: I0509 23:58:52.567566 3559 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 9 23:58:52.573558 kubelet[3559]: E0509 23:58:52.573379 3559 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 9 23:58:52.583191 kubelet[3559]: I0509 23:58:52.583119 3559 factory.go:221] Registration of the containerd container factory successfully May 9 23:58:52.592066 kubelet[3559]: I0509 23:58:52.588833 3559 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 9 23:58:52.592066 kubelet[3559]: I0509 23:58:52.588928 3559 status_manager.go:217] "Starting to sync pod status with apiserver" May 9 23:58:52.592066 kubelet[3559]: I0509 23:58:52.588960 3559 kubelet.go:2337] "Starting kubelet main sync loop" May 9 23:58:52.592066 kubelet[3559]: E0509 23:58:52.589039 3559 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 9 23:58:52.661196 kubelet[3559]: I0509 23:58:52.661038 3559 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-213" May 9 23:58:52.690740 kubelet[3559]: E0509 23:58:52.690664 3559 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 9 23:58:52.691540 kubelet[3559]: I0509 23:58:52.691261 3559 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-30-213" May 9 23:58:52.691540 kubelet[3559]: I0509 23:58:52.691384 3559 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-213" May 9 23:58:52.820526 kubelet[3559]: I0509 23:58:52.820371 3559 cpu_manager.go:214] "Starting CPU manager" policy="none" May 9 23:58:52.820526 kubelet[3559]: I0509 23:58:52.820418 3559 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 9 23:58:52.820526 kubelet[3559]: I0509 23:58:52.820459 3559 state_mem.go:36] "Initialized new in-memory state store" May 9 23:58:52.822588 kubelet[3559]: I0509 23:58:52.821810 3559 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 9 23:58:52.822588 kubelet[3559]: I0509 23:58:52.821853 3559 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 9 23:58:52.822588 kubelet[3559]: I0509 23:58:52.821896 3559 policy_none.go:49] "None policy: Start" May 9 23:58:52.823930 kubelet[3559]: I0509 23:58:52.823618 3559 memory_manager.go:170] "Starting memorymanager" policy="None" May 9 23:58:52.823930 kubelet[3559]: I0509 
23:58:52.823661 3559 state_mem.go:35] "Initializing new in-memory state store" May 9 23:58:52.825182 kubelet[3559]: I0509 23:58:52.824240 3559 state_mem.go:75] "Updated machine memory state" May 9 23:58:52.838084 kubelet[3559]: I0509 23:58:52.837837 3559 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 9 23:58:52.838253 kubelet[3559]: I0509 23:58:52.838179 3559 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 9 23:58:52.843541 kubelet[3559]: I0509 23:58:52.843096 3559 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 9 23:58:52.890915 kubelet[3559]: I0509 23:58:52.890819 3559 topology_manager.go:215] "Topology Admit Handler" podUID="59dc2c0abe925b6d9d2b72ec08929578" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-213" May 9 23:58:52.891079 kubelet[3559]: I0509 23:58:52.890982 3559 topology_manager.go:215] "Topology Admit Handler" podUID="95cc07fdab67cb59d888d4f240c17f38" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-213" May 9 23:58:52.891079 kubelet[3559]: I0509 23:58:52.891067 3559 topology_manager.go:215] "Topology Admit Handler" podUID="3607f126a6e78a9f575d1ac7208d4d7c" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-213" May 9 23:58:52.932805 kubelet[3559]: I0509 23:58:52.932052 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59dc2c0abe925b6d9d2b72ec08929578-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-213\" (UID: \"59dc2c0abe925b6d9d2b72ec08929578\") " pod="kube-system/kube-scheduler-ip-172-31-30-213" May 9 23:58:52.932805 kubelet[3559]: I0509 23:58:52.932134 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/95cc07fdab67cb59d888d4f240c17f38-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-213\" (UID: \"95cc07fdab67cb59d888d4f240c17f38\") " pod="kube-system/kube-apiserver-ip-172-31-30-213" May 9 23:58:52.932805 kubelet[3559]: I0509 23:58:52.932187 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3607f126a6e78a9f575d1ac7208d4d7c-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-213\" (UID: \"3607f126a6e78a9f575d1ac7208d4d7c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-213" May 9 23:58:52.932805 kubelet[3559]: I0509 23:58:52.932223 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3607f126a6e78a9f575d1ac7208d4d7c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-213\" (UID: \"3607f126a6e78a9f575d1ac7208d4d7c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-213" May 9 23:58:52.932805 kubelet[3559]: I0509 23:58:52.932260 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3607f126a6e78a9f575d1ac7208d4d7c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-213\" (UID: \"3607f126a6e78a9f575d1ac7208d4d7c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-213" May 9 23:58:52.933221 kubelet[3559]: I0509 23:58:52.932297 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3607f126a6e78a9f575d1ac7208d4d7c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-213\" (UID: \"3607f126a6e78a9f575d1ac7208d4d7c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-213" May 9 23:58:52.933221 kubelet[3559]: I0509 23:58:52.932341 3559 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3607f126a6e78a9f575d1ac7208d4d7c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-213\" (UID: \"3607f126a6e78a9f575d1ac7208d4d7c\") " pod="kube-system/kube-controller-manager-ip-172-31-30-213" May 9 23:58:52.933221 kubelet[3559]: I0509 23:58:52.932378 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/95cc07fdab67cb59d888d4f240c17f38-ca-certs\") pod \"kube-apiserver-ip-172-31-30-213\" (UID: \"95cc07fdab67cb59d888d4f240c17f38\") " pod="kube-system/kube-apiserver-ip-172-31-30-213" May 9 23:58:52.933221 kubelet[3559]: I0509 23:58:52.932411 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/95cc07fdab67cb59d888d4f240c17f38-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-213\" (UID: \"95cc07fdab67cb59d888d4f240c17f38\") " pod="kube-system/kube-apiserver-ip-172-31-30-213" May 9 23:58:53.466679 sudo[3572]: pam_unix(sudo:session): session closed for user root May 9 23:58:53.468541 kubelet[3559]: I0509 23:58:53.468455 3559 apiserver.go:52] "Watching apiserver" May 9 23:58:53.520777 kubelet[3559]: I0509 23:58:53.520622 3559 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 9 23:58:53.755839 kubelet[3559]: E0509 23:58:53.753933 3559 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-213\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-213" May 9 23:58:53.787867 kubelet[3559]: I0509 23:58:53.787759 3559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-213" podStartSLOduration=1.787712096 podStartE2EDuration="1.787712096s" 
podCreationTimestamp="2025-05-09 23:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:58:53.784894964 +0000 UTC m=+1.479949689" watchObservedRunningTime="2025-05-09 23:58:53.787712096 +0000 UTC m=+1.482766833" May 9 23:58:53.843666 kubelet[3559]: I0509 23:58:53.842958 3559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-213" podStartSLOduration=1.842933468 podStartE2EDuration="1.842933468s" podCreationTimestamp="2025-05-09 23:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:58:53.812989136 +0000 UTC m=+1.508043849" watchObservedRunningTime="2025-05-09 23:58:53.842933468 +0000 UTC m=+1.537988169" May 9 23:58:54.885879 kubelet[3559]: I0509 23:58:54.885689 3559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-213" podStartSLOduration=2.885662793 podStartE2EDuration="2.885662793s" podCreationTimestamp="2025-05-09 23:58:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:58:53.844585448 +0000 UTC m=+1.539640173" watchObservedRunningTime="2025-05-09 23:58:54.885662793 +0000 UTC m=+2.580717506" May 9 23:58:56.258249 sudo[2337]: pam_unix(sudo:session): session closed for user root May 9 23:58:56.281510 sshd[2334]: pam_unix(sshd:session): session closed for user core May 9 23:58:56.287775 systemd-logind[1992]: Session 7 logged out. Waiting for processes to exit. May 9 23:58:56.290007 systemd[1]: sshd@6-172.31.30.213:22-147.75.109.163:37902.service: Deactivated successfully. May 9 23:58:56.294478 systemd[1]: session-7.scope: Deactivated successfully. 
May 9 23:58:56.294879 systemd[1]: session-7.scope: Consumed 14.055s CPU time, 188.6M memory peak, 0B memory swap peak. May 9 23:58:56.296513 systemd-logind[1992]: Removed session 7. May 9 23:59:05.171449 kubelet[3559]: I0509 23:59:05.171082 3559 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 9 23:59:05.172411 kubelet[3559]: I0509 23:59:05.172190 3559 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 9 23:59:05.172914 containerd[2018]: time="2025-05-09T23:59:05.171669304Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 9 23:59:06.193764 kubelet[3559]: I0509 23:59:06.191186 3559 topology_manager.go:215] "Topology Admit Handler" podUID="54be84d1-3494-4cba-ae9d-e2bc1e0e6250" podNamespace="kube-system" podName="kube-proxy-zx9kk" May 9 23:59:06.213850 kubelet[3559]: W0509 23:59:06.212514 3559 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-213' and this object May 9 23:59:06.213850 kubelet[3559]: E0509 23:59:06.212573 3559 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-213' and this object May 9 23:59:06.213850 kubelet[3559]: W0509 23:59:06.212646 3559 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-30-213" cannot list resource "configmaps" in API group "" in the namespace 
"kube-system": no relationship found between node 'ip-172-31-30-213' and this object May 9 23:59:06.213850 kubelet[3559]: E0509 23:59:06.212669 3559 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-30-213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-213' and this object May 9 23:59:06.215403 systemd[1]: Created slice kubepods-besteffort-pod54be84d1_3494_4cba_ae9d_e2bc1e0e6250.slice - libcontainer container kubepods-besteffort-pod54be84d1_3494_4cba_ae9d_e2bc1e0e6250.slice. May 9 23:59:06.217316 kubelet[3559]: I0509 23:59:06.216436 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54be84d1-3494-4cba-ae9d-e2bc1e0e6250-xtables-lock\") pod \"kube-proxy-zx9kk\" (UID: \"54be84d1-3494-4cba-ae9d-e2bc1e0e6250\") " pod="kube-system/kube-proxy-zx9kk" May 9 23:59:06.217316 kubelet[3559]: I0509 23:59:06.216506 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54be84d1-3494-4cba-ae9d-e2bc1e0e6250-lib-modules\") pod \"kube-proxy-zx9kk\" (UID: \"54be84d1-3494-4cba-ae9d-e2bc1e0e6250\") " pod="kube-system/kube-proxy-zx9kk" May 9 23:59:06.217316 kubelet[3559]: I0509 23:59:06.216549 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/54be84d1-3494-4cba-ae9d-e2bc1e0e6250-kube-proxy\") pod \"kube-proxy-zx9kk\" (UID: \"54be84d1-3494-4cba-ae9d-e2bc1e0e6250\") " pod="kube-system/kube-proxy-zx9kk" May 9 23:59:06.217316 kubelet[3559]: I0509 23:59:06.216590 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-c697h\" (UniqueName: \"kubernetes.io/projected/54be84d1-3494-4cba-ae9d-e2bc1e0e6250-kube-api-access-c697h\") pod \"kube-proxy-zx9kk\" (UID: \"54be84d1-3494-4cba-ae9d-e2bc1e0e6250\") " pod="kube-system/kube-proxy-zx9kk" May 9 23:59:06.257588 kubelet[3559]: I0509 23:59:06.256355 3559 topology_manager.go:215] "Topology Admit Handler" podUID="357e29ce-751b-42f6-986b-180fcb0a1f31" podNamespace="kube-system" podName="cilium-pgq9t" May 9 23:59:06.276582 systemd[1]: Created slice kubepods-burstable-pod357e29ce_751b_42f6_986b_180fcb0a1f31.slice - libcontainer container kubepods-burstable-pod357e29ce_751b_42f6_986b_180fcb0a1f31.slice. May 9 23:59:06.317035 kubelet[3559]: I0509 23:59:06.316850 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-hostproc\") pod \"cilium-pgq9t\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") " pod="kube-system/cilium-pgq9t" May 9 23:59:06.317303 kubelet[3559]: I0509 23:59:06.317268 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q58hl\" (UniqueName: \"kubernetes.io/projected/357e29ce-751b-42f6-986b-180fcb0a1f31-kube-api-access-q58hl\") pod \"cilium-pgq9t\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") " pod="kube-system/cilium-pgq9t" May 9 23:59:06.319755 kubelet[3559]: I0509 23:59:06.317974 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-xtables-lock\") pod \"cilium-pgq9t\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") " pod="kube-system/cilium-pgq9t" May 9 23:59:06.319755 kubelet[3559]: I0509 23:59:06.318052 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/357e29ce-751b-42f6-986b-180fcb0a1f31-hubble-tls\") pod \"cilium-pgq9t\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") " pod="kube-system/cilium-pgq9t" May 9 23:59:06.319755 kubelet[3559]: I0509 23:59:06.318089 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/357e29ce-751b-42f6-986b-180fcb0a1f31-clustermesh-secrets\") pod \"cilium-pgq9t\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") " pod="kube-system/cilium-pgq9t" May 9 23:59:06.319755 kubelet[3559]: I0509 23:59:06.318128 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-host-proc-sys-net\") pod \"cilium-pgq9t\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") " pod="kube-system/cilium-pgq9t" May 9 23:59:06.319755 kubelet[3559]: I0509 23:59:06.318208 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-cilium-cgroup\") pod \"cilium-pgq9t\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") " pod="kube-system/cilium-pgq9t" May 9 23:59:06.319755 kubelet[3559]: I0509 23:59:06.318248 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-etc-cni-netd\") pod \"cilium-pgq9t\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") " pod="kube-system/cilium-pgq9t" May 9 23:59:06.320210 kubelet[3559]: I0509 23:59:06.318313 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-lib-modules\") pod \"cilium-pgq9t\" (UID: 
\"357e29ce-751b-42f6-986b-180fcb0a1f31\") " pod="kube-system/cilium-pgq9t" May 9 23:59:06.320210 kubelet[3559]: I0509 23:59:06.318355 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/357e29ce-751b-42f6-986b-180fcb0a1f31-cilium-config-path\") pod \"cilium-pgq9t\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") " pod="kube-system/cilium-pgq9t" May 9 23:59:06.320210 kubelet[3559]: I0509 23:59:06.318445 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-cilium-run\") pod \"cilium-pgq9t\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") " pod="kube-system/cilium-pgq9t" May 9 23:59:06.320210 kubelet[3559]: I0509 23:59:06.318483 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-bpf-maps\") pod \"cilium-pgq9t\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") " pod="kube-system/cilium-pgq9t" May 9 23:59:06.320210 kubelet[3559]: I0509 23:59:06.318520 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-cni-path\") pod \"cilium-pgq9t\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") " pod="kube-system/cilium-pgq9t" May 9 23:59:06.320210 kubelet[3559]: I0509 23:59:06.318566 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-host-proc-sys-kernel\") pod \"cilium-pgq9t\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") " pod="kube-system/cilium-pgq9t" May 9 23:59:06.365408 kubelet[3559]: I0509 23:59:06.365353 3559 
topology_manager.go:215] "Topology Admit Handler" podUID="6f8adc2a-de67-41c6-9dca-5ba5311b69ac" podNamespace="kube-system" podName="cilium-operator-599987898-f4qgz" May 9 23:59:06.384991 systemd[1]: Created slice kubepods-besteffort-pod6f8adc2a_de67_41c6_9dca_5ba5311b69ac.slice - libcontainer container kubepods-besteffort-pod6f8adc2a_de67_41c6_9dca_5ba5311b69ac.slice. May 9 23:59:06.419872 kubelet[3559]: I0509 23:59:06.419818 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f8adc2a-de67-41c6-9dca-5ba5311b69ac-cilium-config-path\") pod \"cilium-operator-599987898-f4qgz\" (UID: \"6f8adc2a-de67-41c6-9dca-5ba5311b69ac\") " pod="kube-system/cilium-operator-599987898-f4qgz" May 9 23:59:06.420160 kubelet[3559]: I0509 23:59:06.420108 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plcrg\" (UniqueName: \"kubernetes.io/projected/6f8adc2a-de67-41c6-9dca-5ba5311b69ac-kube-api-access-plcrg\") pod \"cilium-operator-599987898-f4qgz\" (UID: \"6f8adc2a-de67-41c6-9dca-5ba5311b69ac\") " pod="kube-system/cilium-operator-599987898-f4qgz" May 9 23:59:07.294817 containerd[2018]: time="2025-05-09T23:59:07.294683803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-f4qgz,Uid:6f8adc2a-de67-41c6-9dca-5ba5311b69ac,Namespace:kube-system,Attempt:0,}" May 9 23:59:07.321939 kubelet[3559]: E0509 23:59:07.321831 3559 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition May 9 23:59:07.325037 kubelet[3559]: E0509 23:59:07.321966 3559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/54be84d1-3494-4cba-ae9d-e2bc1e0e6250-kube-proxy podName:54be84d1-3494-4cba-ae9d-e2bc1e0e6250 nodeName:}" failed. 
No retries permitted until 2025-05-09 23:59:07.821933487 +0000 UTC m=+15.516988200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/54be84d1-3494-4cba-ae9d-e2bc1e0e6250-kube-proxy") pod "kube-proxy-zx9kk" (UID: "54be84d1-3494-4cba-ae9d-e2bc1e0e6250") : failed to sync configmap cache: timed out waiting for the condition May 9 23:59:07.352317 containerd[2018]: time="2025-05-09T23:59:07.351991735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:07.352317 containerd[2018]: time="2025-05-09T23:59:07.352152547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:07.352317 containerd[2018]: time="2025-05-09T23:59:07.352200583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:07.352878 containerd[2018]: time="2025-05-09T23:59:07.352392655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:07.393092 systemd[1]: Started cri-containerd-c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1.scope - libcontainer container c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1. 
May 9 23:59:07.461951 containerd[2018]: time="2025-05-09T23:59:07.461548676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-f4qgz,Uid:6f8adc2a-de67-41c6-9dca-5ba5311b69ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1\"" May 9 23:59:07.466563 containerd[2018]: time="2025-05-09T23:59:07.465686252Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 9 23:59:07.485943 containerd[2018]: time="2025-05-09T23:59:07.485797124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pgq9t,Uid:357e29ce-751b-42f6-986b-180fcb0a1f31,Namespace:kube-system,Attempt:0,}" May 9 23:59:07.536862 containerd[2018]: time="2025-05-09T23:59:07.536472260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:07.536862 containerd[2018]: time="2025-05-09T23:59:07.536791124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:07.536862 containerd[2018]: time="2025-05-09T23:59:07.536856368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:07.537347 containerd[2018]: time="2025-05-09T23:59:07.537083984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:07.580657 systemd[1]: Started cri-containerd-e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254.scope - libcontainer container e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254. 
May 9 23:59:07.639834 containerd[2018]: time="2025-05-09T23:59:07.639772700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pgq9t,Uid:357e29ce-751b-42f6-986b-180fcb0a1f31,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\"" May 9 23:59:08.034588 containerd[2018]: time="2025-05-09T23:59:08.034493682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zx9kk,Uid:54be84d1-3494-4cba-ae9d-e2bc1e0e6250,Namespace:kube-system,Attempt:0,}" May 9 23:59:08.085625 containerd[2018]: time="2025-05-09T23:59:08.084946555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:08.085625 containerd[2018]: time="2025-05-09T23:59:08.085050175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:08.085625 containerd[2018]: time="2025-05-09T23:59:08.085095379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:08.085625 containerd[2018]: time="2025-05-09T23:59:08.085287811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:08.122208 systemd[1]: Started cri-containerd-db24e9fc642e21f2c0c4412818fd0efb6c66b189a7b6474bd812b0d033fe11fa.scope - libcontainer container db24e9fc642e21f2c0c4412818fd0efb6c66b189a7b6474bd812b0d033fe11fa. 
May 9 23:59:08.185622 containerd[2018]: time="2025-05-09T23:59:08.184872967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zx9kk,Uid:54be84d1-3494-4cba-ae9d-e2bc1e0e6250,Namespace:kube-system,Attempt:0,} returns sandbox id \"db24e9fc642e21f2c0c4412818fd0efb6c66b189a7b6474bd812b0d033fe11fa\"" May 9 23:59:08.195035 containerd[2018]: time="2025-05-09T23:59:08.194969071Z" level=info msg="CreateContainer within sandbox \"db24e9fc642e21f2c0c4412818fd0efb6c66b189a7b6474bd812b0d033fe11fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 9 23:59:08.230005 containerd[2018]: time="2025-05-09T23:59:08.229904431Z" level=info msg="CreateContainer within sandbox \"db24e9fc642e21f2c0c4412818fd0efb6c66b189a7b6474bd812b0d033fe11fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ed66ee62c1067ae7def10e092f443abc949352064ab0360754b366c685bfa588\"" May 9 23:59:08.231993 containerd[2018]: time="2025-05-09T23:59:08.231907855Z" level=info msg="StartContainer for \"ed66ee62c1067ae7def10e092f443abc949352064ab0360754b366c685bfa588\"" May 9 23:59:08.286074 systemd[1]: Started cri-containerd-ed66ee62c1067ae7def10e092f443abc949352064ab0360754b366c685bfa588.scope - libcontainer container ed66ee62c1067ae7def10e092f443abc949352064ab0360754b366c685bfa588. May 9 23:59:08.355002 containerd[2018]: time="2025-05-09T23:59:08.354629132Z" level=info msg="StartContainer for \"ed66ee62c1067ae7def10e092f443abc949352064ab0360754b366c685bfa588\" returns successfully" May 9 23:59:08.755007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3700969292.mount: Deactivated successfully. 
May 9 23:59:09.523261 containerd[2018]: time="2025-05-09T23:59:09.523175662Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:09.526167 containerd[2018]: time="2025-05-09T23:59:09.526074970Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 9 23:59:09.529336 containerd[2018]: time="2025-05-09T23:59:09.529247014Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:09.533200 containerd[2018]: time="2025-05-09T23:59:09.532994806Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.067219754s" May 9 23:59:09.533200 containerd[2018]: time="2025-05-09T23:59:09.533064370Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 9 23:59:09.536289 containerd[2018]: time="2025-05-09T23:59:09.536213482Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 9 23:59:09.538896 containerd[2018]: time="2025-05-09T23:59:09.538819498Z" level=info msg="CreateContainer within sandbox 
\"c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 9 23:59:09.570390 containerd[2018]: time="2025-05-09T23:59:09.570326266Z" level=info msg="CreateContainer within sandbox \"c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb\"" May 9 23:59:09.571749 containerd[2018]: time="2025-05-09T23:59:09.571643926Z" level=info msg="StartContainer for \"e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb\"" May 9 23:59:09.624068 systemd[1]: Started cri-containerd-e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb.scope - libcontainer container e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb. May 9 23:59:09.676896 containerd[2018]: time="2025-05-09T23:59:09.676799747Z" level=info msg="StartContainer for \"e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb\" returns successfully" May 9 23:59:09.862031 kubelet[3559]: I0509 23:59:09.861812 3559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zx9kk" podStartSLOduration=3.861789299 podStartE2EDuration="3.861789299s" podCreationTimestamp="2025-05-09 23:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:59:08.820487818 +0000 UTC m=+16.515542555" watchObservedRunningTime="2025-05-09 23:59:09.861789299 +0000 UTC m=+17.556844024" May 9 23:59:12.626661 kubelet[3559]: I0509 23:59:12.626371 3559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-f4qgz" podStartSLOduration=4.556114031 podStartE2EDuration="6.626342845s" podCreationTimestamp="2025-05-09 23:59:06 +0000 UTC" firstStartedPulling="2025-05-09 23:59:07.464647736 +0000 UTC 
m=+15.159702449" lastFinishedPulling="2025-05-09 23:59:09.53487655 +0000 UTC m=+17.229931263" observedRunningTime="2025-05-09 23:59:09.86524476 +0000 UTC m=+17.560299497" watchObservedRunningTime="2025-05-09 23:59:12.626342845 +0000 UTC m=+20.321397654" May 9 23:59:15.537148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2703261351.mount: Deactivated successfully. May 9 23:59:18.212656 containerd[2018]: time="2025-05-09T23:59:18.212593637Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:18.215326 containerd[2018]: time="2025-05-09T23:59:18.215239133Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 9 23:59:18.217499 containerd[2018]: time="2025-05-09T23:59:18.217421549Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 9 23:59:18.226259 containerd[2018]: time="2025-05-09T23:59:18.226011233Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.689728067s" May 9 23:59:18.226259 containerd[2018]: time="2025-05-09T23:59:18.226098857Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 9 23:59:18.232188 containerd[2018]: 
time="2025-05-09T23:59:18.232067489Z" level=info msg="CreateContainer within sandbox \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 9 23:59:18.260284 containerd[2018]: time="2025-05-09T23:59:18.260225561Z" level=info msg="CreateContainer within sandbox \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621\"" May 9 23:59:18.261999 containerd[2018]: time="2025-05-09T23:59:18.261715205Z" level=info msg="StartContainer for \"1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621\"" May 9 23:59:18.318063 systemd[1]: Started cri-containerd-1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621.scope - libcontainer container 1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621. May 9 23:59:18.373030 containerd[2018]: time="2025-05-09T23:59:18.369532026Z" level=info msg="StartContainer for \"1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621\" returns successfully" May 9 23:59:18.391688 systemd[1]: cri-containerd-1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621.scope: Deactivated successfully. 
May 9 23:59:19.029128 containerd[2018]: time="2025-05-09T23:59:19.028817021Z" level=info msg="shim disconnected" id=1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621 namespace=k8s.io May 9 23:59:19.029128 containerd[2018]: time="2025-05-09T23:59:19.029047985Z" level=warning msg="cleaning up after shim disconnected" id=1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621 namespace=k8s.io May 9 23:59:19.029128 containerd[2018]: time="2025-05-09T23:59:19.029071757Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:59:19.248086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621-rootfs.mount: Deactivated successfully. May 9 23:59:19.864415 containerd[2018]: time="2025-05-09T23:59:19.864104973Z" level=info msg="CreateContainer within sandbox \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 9 23:59:19.900895 containerd[2018]: time="2025-05-09T23:59:19.899803389Z" level=info msg="CreateContainer within sandbox \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39\"" May 9 23:59:19.901686 containerd[2018]: time="2025-05-09T23:59:19.901640097Z" level=info msg="StartContainer for \"3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39\"" May 9 23:59:19.970061 systemd[1]: Started cri-containerd-3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39.scope - libcontainer container 3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39. 
May 9 23:59:20.017079 containerd[2018]: time="2025-05-09T23:59:20.017000814Z" level=info msg="StartContainer for \"3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39\" returns successfully" May 9 23:59:20.043611 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 9 23:59:20.045949 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 9 23:59:20.046081 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 9 23:59:20.053622 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 9 23:59:20.057334 systemd[1]: cri-containerd-3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39.scope: Deactivated successfully. May 9 23:59:20.096540 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 9 23:59:20.114772 containerd[2018]: time="2025-05-09T23:59:20.114419754Z" level=info msg="shim disconnected" id=3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39 namespace=k8s.io May 9 23:59:20.114772 containerd[2018]: time="2025-05-09T23:59:20.114507606Z" level=warning msg="cleaning up after shim disconnected" id=3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39 namespace=k8s.io May 9 23:59:20.114772 containerd[2018]: time="2025-05-09T23:59:20.114673746Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:59:20.136773 containerd[2018]: time="2025-05-09T23:59:20.136456555Z" level=warning msg="cleanup warnings time=\"2025-05-09T23:59:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 9 23:59:20.250299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39-rootfs.mount: Deactivated successfully. 
May 9 23:59:20.879256 containerd[2018]: time="2025-05-09T23:59:20.878985934Z" level=info msg="CreateContainer within sandbox \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 9 23:59:20.918881 containerd[2018]: time="2025-05-09T23:59:20.918794578Z" level=info msg="CreateContainer within sandbox \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555\"" May 9 23:59:20.920017 containerd[2018]: time="2025-05-09T23:59:20.919771690Z" level=info msg="StartContainer for \"31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555\"" May 9 23:59:20.985096 systemd[1]: Started cri-containerd-31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555.scope - libcontainer container 31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555. May 9 23:59:21.039669 containerd[2018]: time="2025-05-09T23:59:21.039597823Z" level=info msg="StartContainer for \"31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555\" returns successfully" May 9 23:59:21.044163 systemd[1]: cri-containerd-31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555.scope: Deactivated successfully. 
May 9 23:59:21.090327 containerd[2018]: time="2025-05-09T23:59:21.090020215Z" level=info msg="shim disconnected" id=31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555 namespace=k8s.io May 9 23:59:21.090327 containerd[2018]: time="2025-05-09T23:59:21.090093679Z" level=warning msg="cleaning up after shim disconnected" id=31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555 namespace=k8s.io May 9 23:59:21.090327 containerd[2018]: time="2025-05-09T23:59:21.090115519Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:59:21.249467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555-rootfs.mount: Deactivated successfully. May 9 23:59:21.881867 containerd[2018]: time="2025-05-09T23:59:21.881662151Z" level=info msg="CreateContainer within sandbox \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 9 23:59:21.926782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2674865752.mount: Deactivated successfully. May 9 23:59:21.938019 containerd[2018]: time="2025-05-09T23:59:21.937642895Z" level=info msg="CreateContainer within sandbox \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5\"" May 9 23:59:21.940925 containerd[2018]: time="2025-05-09T23:59:21.940851155Z" level=info msg="StartContainer for \"8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5\"" May 9 23:59:22.000058 systemd[1]: Started cri-containerd-8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5.scope - libcontainer container 8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5. 
May 9 23:59:22.045323 systemd[1]: cri-containerd-8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5.scope: Deactivated successfully. May 9 23:59:22.051106 containerd[2018]: time="2025-05-09T23:59:22.049967660Z" level=info msg="StartContainer for \"8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5\" returns successfully" May 9 23:59:22.089284 containerd[2018]: time="2025-05-09T23:59:22.089127488Z" level=info msg="shim disconnected" id=8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5 namespace=k8s.io May 9 23:59:22.089284 containerd[2018]: time="2025-05-09T23:59:22.089201468Z" level=warning msg="cleaning up after shim disconnected" id=8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5 namespace=k8s.io May 9 23:59:22.089284 containerd[2018]: time="2025-05-09T23:59:22.089221028Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 9 23:59:22.249478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5-rootfs.mount: Deactivated successfully. 
May 9 23:59:22.892442 containerd[2018]: time="2025-05-09T23:59:22.892063152Z" level=info msg="CreateContainer within sandbox \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 9 23:59:22.931281 containerd[2018]: time="2025-05-09T23:59:22.931125636Z" level=info msg="CreateContainer within sandbox \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37\"" May 9 23:59:22.933006 containerd[2018]: time="2025-05-09T23:59:22.932926320Z" level=info msg="StartContainer for \"c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37\"" May 9 23:59:22.996043 systemd[1]: Started cri-containerd-c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37.scope - libcontainer container c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37. May 9 23:59:23.070201 containerd[2018]: time="2025-05-09T23:59:23.069536829Z" level=info msg="StartContainer for \"c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37\" returns successfully" May 9 23:59:23.293488 kubelet[3559]: I0509 23:59:23.293425 3559 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 9 23:59:23.340954 kubelet[3559]: I0509 23:59:23.340882 3559 topology_manager.go:215] "Topology Admit Handler" podUID="4a051fdc-df98-4230-bc1e-ebb6d1375667" podNamespace="kube-system" podName="coredns-7db6d8ff4d-m8lk7" May 9 23:59:23.346005 kubelet[3559]: I0509 23:59:23.345928 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nc29\" (UniqueName: \"kubernetes.io/projected/4a051fdc-df98-4230-bc1e-ebb6d1375667-kube-api-access-6nc29\") pod \"coredns-7db6d8ff4d-m8lk7\" (UID: \"4a051fdc-df98-4230-bc1e-ebb6d1375667\") " pod="kube-system/coredns-7db6d8ff4d-m8lk7" May 9 
23:59:23.346161 kubelet[3559]: I0509 23:59:23.346019 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a051fdc-df98-4230-bc1e-ebb6d1375667-config-volume\") pod \"coredns-7db6d8ff4d-m8lk7\" (UID: \"4a051fdc-df98-4230-bc1e-ebb6d1375667\") " pod="kube-system/coredns-7db6d8ff4d-m8lk7" May 9 23:59:23.360417 systemd[1]: Created slice kubepods-burstable-pod4a051fdc_df98_4230_bc1e_ebb6d1375667.slice - libcontainer container kubepods-burstable-pod4a051fdc_df98_4230_bc1e_ebb6d1375667.slice. May 9 23:59:23.370966 kubelet[3559]: W0509 23:59:23.370905 3559 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-30-213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-213' and this object May 9 23:59:23.371135 kubelet[3559]: E0509 23:59:23.371081 3559 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-30-213" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-213' and this object May 9 23:59:23.390711 kubelet[3559]: I0509 23:59:23.390637 3559 topology_manager.go:215] "Topology Admit Handler" podUID="b4d816ec-dcd4-409b-b76d-2f1990a82ea0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hgzgf" May 9 23:59:23.437283 systemd[1]: Created slice kubepods-burstable-podb4d816ec_dcd4_409b_b76d_2f1990a82ea0.slice - libcontainer container kubepods-burstable-podb4d816ec_dcd4_409b_b76d_2f1990a82ea0.slice. 
May 9 23:59:23.547695 kubelet[3559]: I0509 23:59:23.547545 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4d816ec-dcd4-409b-b76d-2f1990a82ea0-config-volume\") pod \"coredns-7db6d8ff4d-hgzgf\" (UID: \"b4d816ec-dcd4-409b-b76d-2f1990a82ea0\") " pod="kube-system/coredns-7db6d8ff4d-hgzgf" May 9 23:59:23.547695 kubelet[3559]: I0509 23:59:23.547618 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58skw\" (UniqueName: \"kubernetes.io/projected/b4d816ec-dcd4-409b-b76d-2f1990a82ea0-kube-api-access-58skw\") pod \"coredns-7db6d8ff4d-hgzgf\" (UID: \"b4d816ec-dcd4-409b-b76d-2f1990a82ea0\") " pod="kube-system/coredns-7db6d8ff4d-hgzgf" May 9 23:59:23.941892 kubelet[3559]: I0509 23:59:23.941789 3559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pgq9t" podStartSLOduration=7.355629972 podStartE2EDuration="17.941757001s" podCreationTimestamp="2025-05-09 23:59:06 +0000 UTC" firstStartedPulling="2025-05-09 23:59:07.642419636 +0000 UTC m=+15.337474349" lastFinishedPulling="2025-05-09 23:59:18.228546677 +0000 UTC m=+25.923601378" observedRunningTime="2025-05-09 23:59:23.934939753 +0000 UTC m=+31.629994490" watchObservedRunningTime="2025-05-09 23:59:23.941757001 +0000 UTC m=+31.636811906" May 9 23:59:24.447547 kubelet[3559]: E0509 23:59:24.447485 3559 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition May 9 23:59:24.448186 kubelet[3559]: E0509 23:59:24.447603 3559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4a051fdc-df98-4230-bc1e-ebb6d1375667-config-volume podName:4a051fdc-df98-4230-bc1e-ebb6d1375667 nodeName:}" failed. No retries permitted until 2025-05-09 23:59:24.94757516 +0000 UTC m=+32.642629885 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4a051fdc-df98-4230-bc1e-ebb6d1375667-config-volume") pod "coredns-7db6d8ff4d-m8lk7" (UID: "4a051fdc-df98-4230-bc1e-ebb6d1375667") : failed to sync configmap cache: timed out waiting for the condition May 9 23:59:24.648267 containerd[2018]: time="2025-05-09T23:59:24.648158845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hgzgf,Uid:b4d816ec-dcd4-409b-b76d-2f1990a82ea0,Namespace:kube-system,Attempt:0,}" May 9 23:59:25.200337 containerd[2018]: time="2025-05-09T23:59:25.200235180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m8lk7,Uid:4a051fdc-df98-4230-bc1e-ebb6d1375667,Namespace:kube-system,Attempt:0,}" May 9 23:59:25.865527 (udev-worker)[4328]: Network interface NamePolicy= disabled on kernel command line. May 9 23:59:25.867908 systemd-networkd[1934]: cilium_host: Link UP May 9 23:59:25.869925 systemd-networkd[1934]: cilium_net: Link UP May 9 23:59:25.871398 systemd-networkd[1934]: cilium_net: Gained carrier May 9 23:59:25.872808 (udev-worker)[4383]: Network interface NamePolicy= disabled on kernel command line. May 9 23:59:25.873148 systemd-networkd[1934]: cilium_host: Gained carrier May 9 23:59:26.071791 systemd-networkd[1934]: cilium_vxlan: Link UP May 9 23:59:26.071806 systemd-networkd[1934]: cilium_vxlan: Gained carrier May 9 23:59:26.235001 systemd-networkd[1934]: cilium_net: Gained IPv6LL May 9 23:59:26.579940 kernel: NET: Registered PF_ALG protocol family May 9 23:59:26.803119 systemd-networkd[1934]: cilium_host: Gained IPv6LL May 9 23:59:27.187052 systemd-networkd[1934]: cilium_vxlan: Gained IPv6LL May 9 23:59:27.994617 (udev-worker)[4394]: Network interface NamePolicy= disabled on kernel command line. 
May 9 23:59:27.997202 systemd-networkd[1934]: lxc_health: Link UP May 9 23:59:28.019800 systemd-networkd[1934]: lxc_health: Gained carrier May 9 23:59:28.281218 systemd-networkd[1934]: lxcef26185ed414: Link UP May 9 23:59:28.289859 kernel: eth0: renamed from tmp75792 May 9 23:59:28.296372 systemd-networkd[1934]: lxcef26185ed414: Gained carrier May 9 23:59:28.723304 systemd-networkd[1934]: lxc790ef56780e1: Link UP May 9 23:59:28.730777 kernel: eth0: renamed from tmp7c369 May 9 23:59:28.736872 systemd-networkd[1934]: lxc790ef56780e1: Gained carrier May 9 23:59:29.684008 systemd-networkd[1934]: lxc_health: Gained IPv6LL May 9 23:59:29.810989 systemd-networkd[1934]: lxcef26185ed414: Gained IPv6LL May 9 23:59:30.131006 systemd-networkd[1934]: lxc790ef56780e1: Gained IPv6LL May 9 23:59:30.184430 systemd[1]: Started sshd@7-172.31.30.213:22-147.75.109.163:41540.service - OpenSSH per-connection server daemon (147.75.109.163:41540). May 9 23:59:30.374315 sshd[4741]: Accepted publickey for core from 147.75.109.163 port 41540 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:30.376757 sshd[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:30.387871 systemd-logind[1992]: New session 8 of user core. May 9 23:59:30.397099 systemd[1]: Started session-8.scope - Session 8 of User core. May 9 23:59:30.758290 sshd[4741]: pam_unix(sshd:session): session closed for user core May 9 23:59:30.768029 systemd[1]: sshd@7-172.31.30.213:22-147.75.109.163:41540.service: Deactivated successfully. May 9 23:59:30.775619 systemd[1]: session-8.scope: Deactivated successfully. May 9 23:59:30.778024 systemd-logind[1992]: Session 8 logged out. Waiting for processes to exit. May 9 23:59:30.783089 systemd-logind[1992]: Removed session 8. 
May 9 23:59:32.467187 ntpd[1987]: Listen normally on 8 cilium_host 192.168.0.100:123 May 9 23:59:32.467323 ntpd[1987]: Listen normally on 9 cilium_net [fe80::9c9d:9ff:febe:c627%4]:123 May 9 23:59:32.467417 ntpd[1987]: Listen normally on 10 cilium_host [fe80::6894:cbff:fec7:30b6%5]:123 May 9 23:59:32.467492 ntpd[1987]: Listen normally on 11 cilium_vxlan [fe80::5009:f3ff:fe1b:92da%6]:123 May 9 23:59:32.467559 ntpd[1987]: Listen normally on 12 lxc_health [fe80::980c:7cff:fe63:5fd7%8]:123 May 9 23:59:32.467628 ntpd[1987]: Listen normally on 13 lxcef26185ed414 [fe80::8c65:9ff:fe74:5a33%10]:123 May 9 23:59:32.467694 ntpd[1987]: Listen normally on 14 lxc790ef56780e1 [fe80::4c65:f6ff:fec6:cda%12]:123 May 9 23:59:35.801377 systemd[1]: Started sshd@8-172.31.30.213:22-147.75.109.163:41542.service - OpenSSH per-connection server daemon (147.75.109.163:41542). 
May 9 23:59:35.992626 sshd[4763]: Accepted publickey for core from 147.75.109.163 port 41542 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:35.996503 sshd[4763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:36.009097 systemd-logind[1992]: New session 9 of user core. May 9 23:59:36.018310 systemd[1]: Started session-9.scope - Session 9 of User core. May 9 23:59:36.295661 sshd[4763]: pam_unix(sshd:session): session closed for user core May 9 23:59:36.303579 systemd[1]: sshd@8-172.31.30.213:22-147.75.109.163:41542.service: Deactivated successfully. May 9 23:59:36.313380 systemd[1]: session-9.scope: Deactivated successfully. May 9 23:59:36.318646 systemd-logind[1992]: Session 9 logged out. Waiting for processes to exit. May 9 23:59:36.322091 systemd-logind[1992]: Removed session 9. May 9 23:59:37.630581 containerd[2018]: time="2025-05-09T23:59:37.630233041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:37.630581 containerd[2018]: time="2025-05-09T23:59:37.630521353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:37.632218 containerd[2018]: time="2025-05-09T23:59:37.630563425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:37.636221 containerd[2018]: time="2025-05-09T23:59:37.636077653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:37.646087 containerd[2018]: time="2025-05-09T23:59:37.645888949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 9 23:59:37.646787 containerd[2018]: time="2025-05-09T23:59:37.646477405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 9 23:59:37.648604 containerd[2018]: time="2025-05-09T23:59:37.647883830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:37.650535 containerd[2018]: time="2025-05-09T23:59:37.650434010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 9 23:59:37.724121 systemd[1]: Started cri-containerd-757926b1e57d54eb677491f12a1cdf1783cdd123364754a026f5b933d412011e.scope - libcontainer container 757926b1e57d54eb677491f12a1cdf1783cdd123364754a026f5b933d412011e. May 9 23:59:37.745210 systemd[1]: Started cri-containerd-7c369c2507b8046bf00dd284938a55a47d76607281967fbec79f15a710a3eb68.scope - libcontainer container 7c369c2507b8046bf00dd284938a55a47d76607281967fbec79f15a710a3eb68. 
May 9 23:59:37.873930 containerd[2018]: time="2025-05-09T23:59:37.873854571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m8lk7,Uid:4a051fdc-df98-4230-bc1e-ebb6d1375667,Namespace:kube-system,Attempt:0,} returns sandbox id \"757926b1e57d54eb677491f12a1cdf1783cdd123364754a026f5b933d412011e\"" May 9 23:59:37.883186 containerd[2018]: time="2025-05-09T23:59:37.882861687Z" level=info msg="CreateContainer within sandbox \"757926b1e57d54eb677491f12a1cdf1783cdd123364754a026f5b933d412011e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 23:59:37.908905 containerd[2018]: time="2025-05-09T23:59:37.908534079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hgzgf,Uid:b4d816ec-dcd4-409b-b76d-2f1990a82ea0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c369c2507b8046bf00dd284938a55a47d76607281967fbec79f15a710a3eb68\"" May 9 23:59:37.918980 containerd[2018]: time="2025-05-09T23:59:37.918279543Z" level=info msg="CreateContainer within sandbox \"7c369c2507b8046bf00dd284938a55a47d76607281967fbec79f15a710a3eb68\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 9 23:59:37.942710 containerd[2018]: time="2025-05-09T23:59:37.942606975Z" level=info msg="CreateContainer within sandbox \"757926b1e57d54eb677491f12a1cdf1783cdd123364754a026f5b933d412011e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3dedb1d162d9d3b97fc4d1801d3e00ee3abc7570a274e6fa8b6e851fc6a81e0d\"" May 9 23:59:37.948882 containerd[2018]: time="2025-05-09T23:59:37.946529079Z" level=info msg="StartContainer for \"3dedb1d162d9d3b97fc4d1801d3e00ee3abc7570a274e6fa8b6e851fc6a81e0d\"" May 9 23:59:37.964845 containerd[2018]: time="2025-05-09T23:59:37.964773135Z" level=info msg="CreateContainer within sandbox \"7c369c2507b8046bf00dd284938a55a47d76607281967fbec79f15a710a3eb68\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"3da74c1c1e336181ff2fd5415ea0d5a079f1c2218b13d0939bc4d82dd2fabdae\"" May 9 23:59:37.969871 containerd[2018]: time="2025-05-09T23:59:37.969148707Z" level=info msg="StartContainer for \"3da74c1c1e336181ff2fd5415ea0d5a079f1c2218b13d0939bc4d82dd2fabdae\"" May 9 23:59:38.031079 systemd[1]: Started cri-containerd-3dedb1d162d9d3b97fc4d1801d3e00ee3abc7570a274e6fa8b6e851fc6a81e0d.scope - libcontainer container 3dedb1d162d9d3b97fc4d1801d3e00ee3abc7570a274e6fa8b6e851fc6a81e0d. May 9 23:59:38.096084 systemd[1]: Started cri-containerd-3da74c1c1e336181ff2fd5415ea0d5a079f1c2218b13d0939bc4d82dd2fabdae.scope - libcontainer container 3da74c1c1e336181ff2fd5415ea0d5a079f1c2218b13d0939bc4d82dd2fabdae. May 9 23:59:38.133852 containerd[2018]: time="2025-05-09T23:59:38.133165392Z" level=info msg="StartContainer for \"3dedb1d162d9d3b97fc4d1801d3e00ee3abc7570a274e6fa8b6e851fc6a81e0d\" returns successfully" May 9 23:59:38.205917 containerd[2018]: time="2025-05-09T23:59:38.205799364Z" level=info msg="StartContainer for \"3da74c1c1e336181ff2fd5415ea0d5a079f1c2218b13d0939bc4d82dd2fabdae\" returns successfully" May 9 23:59:39.012453 kubelet[3559]: I0509 23:59:39.012332 3559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-m8lk7" podStartSLOduration=33.012286536 podStartE2EDuration="33.012286536s" podCreationTimestamp="2025-05-09 23:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:59:39.011828472 +0000 UTC m=+46.706883269" watchObservedRunningTime="2025-05-09 23:59:39.012286536 +0000 UTC m=+46.707341249" May 9 23:59:41.341865 systemd[1]: Started sshd@9-172.31.30.213:22-147.75.109.163:59034.service - OpenSSH per-connection server daemon (147.75.109.163:59034). 
May 9 23:59:41.510026 sshd[4949]: Accepted publickey for core from 147.75.109.163 port 59034 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:41.512798 sshd[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:41.521609 systemd-logind[1992]: New session 10 of user core. May 9 23:59:41.527038 systemd[1]: Started session-10.scope - Session 10 of User core. May 9 23:59:41.771585 sshd[4949]: pam_unix(sshd:session): session closed for user core May 9 23:59:41.777434 systemd-logind[1992]: Session 10 logged out. Waiting for processes to exit. May 9 23:59:41.778333 systemd[1]: sshd@9-172.31.30.213:22-147.75.109.163:59034.service: Deactivated successfully. May 9 23:59:41.782992 systemd[1]: session-10.scope: Deactivated successfully. May 9 23:59:41.788543 systemd-logind[1992]: Removed session 10. May 9 23:59:46.811262 systemd[1]: Started sshd@10-172.31.30.213:22-147.75.109.163:47552.service - OpenSSH per-connection server daemon (147.75.109.163:47552). May 9 23:59:46.991021 sshd[4965]: Accepted publickey for core from 147.75.109.163 port 47552 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:46.993881 sshd[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:47.002286 systemd-logind[1992]: New session 11 of user core. May 9 23:59:47.010049 systemd[1]: Started session-11.scope - Session 11 of User core. May 9 23:59:47.264946 sshd[4965]: pam_unix(sshd:session): session closed for user core May 9 23:59:47.272874 systemd-logind[1992]: Session 11 logged out. Waiting for processes to exit. May 9 23:59:47.274355 systemd[1]: sshd@10-172.31.30.213:22-147.75.109.163:47552.service: Deactivated successfully. May 9 23:59:47.278709 systemd[1]: session-11.scope: Deactivated successfully. May 9 23:59:47.284294 systemd-logind[1992]: Removed session 11. 
May 9 23:59:52.306280 systemd[1]: Started sshd@11-172.31.30.213:22-147.75.109.163:47560.service - OpenSSH per-connection server daemon (147.75.109.163:47560). May 9 23:59:52.485639 sshd[4979]: Accepted publickey for core from 147.75.109.163 port 47560 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:52.488353 sshd[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:52.497119 systemd-logind[1992]: New session 12 of user core. May 9 23:59:52.503023 systemd[1]: Started session-12.scope - Session 12 of User core. May 9 23:59:52.747710 sshd[4979]: pam_unix(sshd:session): session closed for user core May 9 23:59:52.753930 systemd[1]: sshd@11-172.31.30.213:22-147.75.109.163:47560.service: Deactivated successfully. May 9 23:59:52.757265 systemd[1]: session-12.scope: Deactivated successfully. May 9 23:59:52.761446 systemd-logind[1992]: Session 12 logged out. Waiting for processes to exit. May 9 23:59:52.763392 systemd-logind[1992]: Removed session 12. May 9 23:59:52.785276 systemd[1]: Started sshd@12-172.31.30.213:22-147.75.109.163:47572.service - OpenSSH per-connection server daemon (147.75.109.163:47572). May 9 23:59:52.960778 sshd[4995]: Accepted publickey for core from 147.75.109.163 port 47572 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:52.963888 sshd[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:52.974165 systemd-logind[1992]: New session 13 of user core. May 9 23:59:52.982053 systemd[1]: Started session-13.scope - Session 13 of User core. May 9 23:59:53.313065 sshd[4995]: pam_unix(sshd:session): session closed for user core May 9 23:59:53.322396 systemd[1]: sshd@12-172.31.30.213:22-147.75.109.163:47572.service: Deactivated successfully. May 9 23:59:53.330450 systemd[1]: session-13.scope: Deactivated successfully. May 9 23:59:53.340133 systemd-logind[1992]: Session 13 logged out. Waiting for processes to exit. 
May 9 23:59:53.364540 systemd[1]: Started sshd@13-172.31.30.213:22-147.75.109.163:47576.service - OpenSSH per-connection server daemon (147.75.109.163:47576). May 9 23:59:53.368493 systemd-logind[1992]: Removed session 13. May 9 23:59:53.561383 sshd[5005]: Accepted publickey for core from 147.75.109.163 port 47576 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:53.564181 sshd[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:53.572217 systemd-logind[1992]: New session 14 of user core. May 9 23:59:53.585112 systemd[1]: Started session-14.scope - Session 14 of User core. May 9 23:59:53.839085 sshd[5005]: pam_unix(sshd:session): session closed for user core May 9 23:59:53.846134 systemd[1]: sshd@13-172.31.30.213:22-147.75.109.163:47576.service: Deactivated successfully. May 9 23:59:53.850369 systemd[1]: session-14.scope: Deactivated successfully. May 9 23:59:53.852260 systemd-logind[1992]: Session 14 logged out. Waiting for processes to exit. May 9 23:59:53.854641 systemd-logind[1992]: Removed session 14. May 9 23:59:58.880249 systemd[1]: Started sshd@14-172.31.30.213:22-147.75.109.163:40186.service - OpenSSH per-connection server daemon (147.75.109.163:40186). May 9 23:59:59.059961 sshd[5017]: Accepted publickey for core from 147.75.109.163 port 40186 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 9 23:59:59.062004 sshd[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 9 23:59:59.070624 systemd-logind[1992]: New session 15 of user core. May 9 23:59:59.078006 systemd[1]: Started session-15.scope - Session 15 of User core. May 9 23:59:59.319019 sshd[5017]: pam_unix(sshd:session): session closed for user core May 9 23:59:59.326572 systemd[1]: sshd@14-172.31.30.213:22-147.75.109.163:40186.service: Deactivated successfully. May 9 23:59:59.332439 systemd[1]: session-15.scope: Deactivated successfully. 
May 9 23:59:59.335117 systemd-logind[1992]: Session 15 logged out. Waiting for processes to exit. May 9 23:59:59.337015 systemd-logind[1992]: Removed session 15. May 10 00:00:04.368026 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. May 10 00:00:04.373107 systemd[1]: Started sshd@15-172.31.30.213:22-147.75.109.163:40192.service - OpenSSH per-connection server daemon (147.75.109.163:40192). May 10 00:00:04.386694 systemd[1]: logrotate.service: Deactivated successfully. May 10 00:00:04.567554 sshd[5031]: Accepted publickey for core from 147.75.109.163 port 40192 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:04.570288 sshd[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:04.578448 systemd-logind[1992]: New session 16 of user core. May 10 00:00:04.589290 systemd[1]: Started session-16.scope - Session 16 of User core. May 10 00:00:04.829679 sshd[5031]: pam_unix(sshd:session): session closed for user core May 10 00:00:04.836537 systemd-logind[1992]: Session 16 logged out. Waiting for processes to exit. May 10 00:00:04.838991 systemd[1]: sshd@15-172.31.30.213:22-147.75.109.163:40192.service: Deactivated successfully. May 10 00:00:04.843925 systemd[1]: session-16.scope: Deactivated successfully. May 10 00:00:04.846507 systemd-logind[1992]: Removed session 16. May 10 00:00:09.872273 systemd[1]: Started sshd@16-172.31.30.213:22-147.75.109.163:35440.service - OpenSSH per-connection server daemon (147.75.109.163:35440). May 10 00:00:10.052023 sshd[5048]: Accepted publickey for core from 147.75.109.163 port 35440 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:10.054845 sshd[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:10.062662 systemd-logind[1992]: New session 17 of user core. May 10 00:00:10.069024 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 10 00:00:10.309607 sshd[5048]: pam_unix(sshd:session): session closed for user core May 10 00:00:10.316368 systemd[1]: sshd@16-172.31.30.213:22-147.75.109.163:35440.service: Deactivated successfully. May 10 00:00:10.320682 systemd[1]: session-17.scope: Deactivated successfully. May 10 00:00:10.324449 systemd-logind[1992]: Session 17 logged out. Waiting for processes to exit. May 10 00:00:10.327256 systemd-logind[1992]: Removed session 17. May 10 00:00:15.349261 systemd[1]: Started sshd@17-172.31.30.213:22-147.75.109.163:35454.service - OpenSSH per-connection server daemon (147.75.109.163:35454). May 10 00:00:15.526633 sshd[5061]: Accepted publickey for core from 147.75.109.163 port 35454 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:15.530082 sshd[5061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:15.537635 systemd-logind[1992]: New session 18 of user core. May 10 00:00:15.546033 systemd[1]: Started session-18.scope - Session 18 of User core. May 10 00:00:15.780931 sshd[5061]: pam_unix(sshd:session): session closed for user core May 10 00:00:15.787896 systemd[1]: sshd@17-172.31.30.213:22-147.75.109.163:35454.service: Deactivated successfully. May 10 00:00:15.791709 systemd[1]: session-18.scope: Deactivated successfully. May 10 00:00:15.793548 systemd-logind[1992]: Session 18 logged out. Waiting for processes to exit. May 10 00:00:15.796064 systemd-logind[1992]: Removed session 18. May 10 00:00:15.816235 systemd[1]: Started sshd@18-172.31.30.213:22-147.75.109.163:35466.service - OpenSSH per-connection server daemon (147.75.109.163:35466). May 10 00:00:15.990557 sshd[5073]: Accepted publickey for core from 147.75.109.163 port 35466 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:15.993355 sshd[5073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:16.001453 systemd-logind[1992]: New session 19 of user core. 
May 10 00:00:16.009002 systemd[1]: Started session-19.scope - Session 19 of User core. May 10 00:00:16.322891 sshd[5073]: pam_unix(sshd:session): session closed for user core May 10 00:00:16.328288 systemd-logind[1992]: Session 19 logged out. Waiting for processes to exit. May 10 00:00:16.329266 systemd[1]: sshd@18-172.31.30.213:22-147.75.109.163:35466.service: Deactivated successfully. May 10 00:00:16.333836 systemd[1]: session-19.scope: Deactivated successfully. May 10 00:00:16.338053 systemd-logind[1992]: Removed session 19. May 10 00:00:16.359260 systemd[1]: Started sshd@19-172.31.30.213:22-147.75.109.163:35474.service - OpenSSH per-connection server daemon (147.75.109.163:35474). May 10 00:00:16.540154 sshd[5084]: Accepted publickey for core from 147.75.109.163 port 35474 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:16.543096 sshd[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:16.551455 systemd-logind[1992]: New session 20 of user core. May 10 00:00:16.557010 systemd[1]: Started session-20.scope - Session 20 of User core. May 10 00:00:19.099927 sshd[5084]: pam_unix(sshd:session): session closed for user core May 10 00:00:19.111533 systemd[1]: sshd@19-172.31.30.213:22-147.75.109.163:35474.service: Deactivated successfully. May 10 00:00:19.123838 systemd[1]: session-20.scope: Deactivated successfully. May 10 00:00:19.130006 systemd-logind[1992]: Session 20 logged out. Waiting for processes to exit. May 10 00:00:19.154280 systemd[1]: Started sshd@20-172.31.30.213:22-147.75.109.163:36450.service - OpenSSH per-connection server daemon (147.75.109.163:36450). May 10 00:00:19.160223 systemd-logind[1992]: Removed session 20. 
May 10 00:00:19.339436 sshd[5102]: Accepted publickey for core from 147.75.109.163 port 36450 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:19.342292 sshd[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:19.352385 systemd-logind[1992]: New session 21 of user core. May 10 00:00:19.358039 systemd[1]: Started session-21.scope - Session 21 of User core. May 10 00:00:19.846652 sshd[5102]: pam_unix(sshd:session): session closed for user core May 10 00:00:19.855696 systemd[1]: sshd@20-172.31.30.213:22-147.75.109.163:36450.service: Deactivated successfully. May 10 00:00:19.859395 systemd[1]: session-21.scope: Deactivated successfully. May 10 00:00:19.861940 systemd-logind[1992]: Session 21 logged out. Waiting for processes to exit. May 10 00:00:19.865142 systemd-logind[1992]: Removed session 21. May 10 00:00:19.885276 systemd[1]: Started sshd@21-172.31.30.213:22-147.75.109.163:36458.service - OpenSSH per-connection server daemon (147.75.109.163:36458). May 10 00:00:20.060467 sshd[5114]: Accepted publickey for core from 147.75.109.163 port 36458 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:20.063204 sshd[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:20.070713 systemd-logind[1992]: New session 22 of user core. May 10 00:00:20.080012 systemd[1]: Started session-22.scope - Session 22 of User core. May 10 00:00:20.315376 sshd[5114]: pam_unix(sshd:session): session closed for user core May 10 00:00:20.322268 systemd[1]: sshd@21-172.31.30.213:22-147.75.109.163:36458.service: Deactivated successfully. May 10 00:00:20.326103 systemd[1]: session-22.scope: Deactivated successfully. May 10 00:00:20.329221 systemd-logind[1992]: Session 22 logged out. Waiting for processes to exit. May 10 00:00:20.331083 systemd-logind[1992]: Removed session 22. 
May 10 00:00:25.359242 systemd[1]: Started sshd@22-172.31.30.213:22-147.75.109.163:36466.service - OpenSSH per-connection server daemon (147.75.109.163:36466). May 10 00:00:25.543853 sshd[5127]: Accepted publickey for core from 147.75.109.163 port 36466 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:25.546771 sshd[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:25.554948 systemd-logind[1992]: New session 23 of user core. May 10 00:00:25.562987 systemd[1]: Started session-23.scope - Session 23 of User core. May 10 00:00:25.799073 sshd[5127]: pam_unix(sshd:session): session closed for user core May 10 00:00:25.805994 systemd[1]: sshd@22-172.31.30.213:22-147.75.109.163:36466.service: Deactivated successfully. May 10 00:00:25.810264 systemd[1]: session-23.scope: Deactivated successfully. May 10 00:00:25.812105 systemd-logind[1992]: Session 23 logged out. Waiting for processes to exit. May 10 00:00:25.815506 systemd-logind[1992]: Removed session 23. May 10 00:00:30.843269 systemd[1]: Started sshd@23-172.31.30.213:22-147.75.109.163:59668.service - OpenSSH per-connection server daemon (147.75.109.163:59668). May 10 00:00:31.016359 sshd[5142]: Accepted publickey for core from 147.75.109.163 port 59668 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:31.019151 sshd[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:31.027560 systemd-logind[1992]: New session 24 of user core. May 10 00:00:31.033624 systemd[1]: Started session-24.scope - Session 24 of User core. May 10 00:00:31.277255 sshd[5142]: pam_unix(sshd:session): session closed for user core May 10 00:00:31.282158 systemd-logind[1992]: Session 24 logged out. Waiting for processes to exit. May 10 00:00:31.282581 systemd[1]: sshd@23-172.31.30.213:22-147.75.109.163:59668.service: Deactivated successfully. 
May 10 00:00:31.287014 systemd[1]: session-24.scope: Deactivated successfully. May 10 00:00:31.291687 systemd-logind[1992]: Removed session 24. May 10 00:00:36.322314 systemd[1]: Started sshd@24-172.31.30.213:22-147.75.109.163:59672.service - OpenSSH per-connection server daemon (147.75.109.163:59672). May 10 00:00:36.501083 sshd[5155]: Accepted publickey for core from 147.75.109.163 port 59672 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:36.503859 sshd[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:36.511825 systemd-logind[1992]: New session 25 of user core. May 10 00:00:36.522071 systemd[1]: Started session-25.scope - Session 25 of User core. May 10 00:00:36.759118 sshd[5155]: pam_unix(sshd:session): session closed for user core May 10 00:00:36.765283 systemd[1]: sshd@24-172.31.30.213:22-147.75.109.163:59672.service: Deactivated successfully. May 10 00:00:36.770079 systemd[1]: session-25.scope: Deactivated successfully. May 10 00:00:36.771606 systemd-logind[1992]: Session 25 logged out. Waiting for processes to exit. May 10 00:00:36.774199 systemd-logind[1992]: Removed session 25. May 10 00:00:41.797371 systemd[1]: Started sshd@25-172.31.30.213:22-147.75.109.163:35456.service - OpenSSH per-connection server daemon (147.75.109.163:35456). May 10 00:00:41.971669 sshd[5170]: Accepted publickey for core from 147.75.109.163 port 35456 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:41.974851 sshd[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:41.983749 systemd-logind[1992]: New session 26 of user core. May 10 00:00:41.996040 systemd[1]: Started session-26.scope - Session 26 of User core. May 10 00:00:42.225923 sshd[5170]: pam_unix(sshd:session): session closed for user core May 10 00:00:42.232286 systemd[1]: sshd@25-172.31.30.213:22-147.75.109.163:35456.service: Deactivated successfully. 
May 10 00:00:42.237442 systemd[1]: session-26.scope: Deactivated successfully.
May 10 00:00:42.241255 systemd-logind[1992]: Session 26 logged out. Waiting for processes to exit.
May 10 00:00:42.244866 systemd-logind[1992]: Removed session 26.
May 10 00:00:42.268278 systemd[1]: Started sshd@26-172.31.30.213:22-147.75.109.163:35472.service - OpenSSH per-connection server daemon (147.75.109.163:35472).
May 10 00:00:42.446389 sshd[5182]: Accepted publickey for core from 147.75.109.163 port 35472 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8
May 10 00:00:42.449711 sshd[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 10 00:00:42.461918 systemd-logind[1992]: New session 27 of user core.
May 10 00:00:42.467058 systemd[1]: Started session-27.scope - Session 27 of User core.
May 10 00:00:45.408926 kubelet[3559]: I0510 00:00:45.408261 3559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hgzgf" podStartSLOduration=99.40821729 podStartE2EDuration="1m39.40821729s" podCreationTimestamp="2025-05-09 23:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-09 23:59:39.063415165 +0000 UTC m=+46.758469890" watchObservedRunningTime="2025-05-10 00:00:45.40821729 +0000 UTC m=+113.103272027"
May 10 00:00:45.451442 containerd[2018]: time="2025-05-10T00:00:45.447317850Z" level=info msg="StopContainer for \"e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb\" with timeout 30 (s)"
May 10 00:00:45.451442 containerd[2018]: time="2025-05-10T00:00:45.451286262Z" level=info msg="Stop container \"e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb\" with signal terminated"
May 10 00:00:45.482527 systemd[1]: cri-containerd-e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb.scope: Deactivated successfully.
May 10 00:00:45.494260 containerd[2018]: time="2025-05-10T00:00:45.492116718Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 10 00:00:45.508108 containerd[2018]: time="2025-05-10T00:00:45.508058503Z" level=info msg="StopContainer for \"c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37\" with timeout 2 (s)"
May 10 00:00:45.509107 containerd[2018]: time="2025-05-10T00:00:45.509043223Z" level=info msg="Stop container \"c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37\" with signal terminated"
May 10 00:00:45.532415 systemd-networkd[1934]: lxc_health: Link DOWN
May 10 00:00:45.532438 systemd-networkd[1934]: lxc_health: Lost carrier
May 10 00:00:45.548267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb-rootfs.mount: Deactivated successfully.
May 10 00:00:45.569660 containerd[2018]: time="2025-05-10T00:00:45.569009191Z" level=info msg="shim disconnected" id=e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb namespace=k8s.io
May 10 00:00:45.569660 containerd[2018]: time="2025-05-10T00:00:45.569120287Z" level=warning msg="cleaning up after shim disconnected" id=e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb namespace=k8s.io
May 10 00:00:45.569660 containerd[2018]: time="2025-05-10T00:00:45.569163763Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:45.571983 systemd[1]: cri-containerd-c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37.scope: Deactivated successfully.
May 10 00:00:45.572458 systemd[1]: cri-containerd-c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37.scope: Consumed 15.115s CPU time.
May 10 00:00:45.610246 containerd[2018]: time="2025-05-10T00:00:45.609918031Z" level=info msg="StopContainer for \"e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb\" returns successfully"
May 10 00:00:45.612143 containerd[2018]: time="2025-05-10T00:00:45.611886739Z" level=info msg="StopPodSandbox for \"c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1\""
May 10 00:00:45.612143 containerd[2018]: time="2025-05-10T00:00:45.611972503Z" level=info msg="Container to stop \"e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:45.619670 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1-shm.mount: Deactivated successfully.
May 10 00:00:45.630357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37-rootfs.mount: Deactivated successfully.
May 10 00:00:45.635956 systemd[1]: cri-containerd-c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1.scope: Deactivated successfully.
May 10 00:00:45.645251 containerd[2018]: time="2025-05-10T00:00:45.645122959Z" level=info msg="shim disconnected" id=c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37 namespace=k8s.io
May 10 00:00:45.645251 containerd[2018]: time="2025-05-10T00:00:45.645202795Z" level=warning msg="cleaning up after shim disconnected" id=c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37 namespace=k8s.io
May 10 00:00:45.645875 containerd[2018]: time="2025-05-10T00:00:45.645222547Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:45.675077 containerd[2018]: time="2025-05-10T00:00:45.674911903Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:00:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 10 00:00:45.684064 containerd[2018]: time="2025-05-10T00:00:45.683999935Z" level=info msg="StopContainer for \"c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37\" returns successfully"
May 10 00:00:45.685035 containerd[2018]: time="2025-05-10T00:00:45.684976111Z" level=info msg="StopPodSandbox for \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\""
May 10 00:00:45.685133 containerd[2018]: time="2025-05-10T00:00:45.685036975Z" level=info msg="Container to stop \"1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:45.685133 containerd[2018]: time="2025-05-10T00:00:45.685064035Z" level=info msg="Container to stop \"c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:45.685133 containerd[2018]: time="2025-05-10T00:00:45.685093111Z" level=info msg="Container to stop \"3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:45.685133 containerd[2018]: time="2025-05-10T00:00:45.685117099Z" level=info msg="Container to stop \"31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:45.685368 containerd[2018]: time="2025-05-10T00:00:45.685138867Z" level=info msg="Container to stop \"8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 00:00:45.692123 containerd[2018]: time="2025-05-10T00:00:45.691621039Z" level=info msg="shim disconnected" id=c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1 namespace=k8s.io
May 10 00:00:45.692123 containerd[2018]: time="2025-05-10T00:00:45.691707727Z" level=warning msg="cleaning up after shim disconnected" id=c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1 namespace=k8s.io
May 10 00:00:45.692123 containerd[2018]: time="2025-05-10T00:00:45.691766851Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:45.700159 systemd[1]: cri-containerd-e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254.scope: Deactivated successfully.
May 10 00:00:45.725669 containerd[2018]: time="2025-05-10T00:00:45.725584964Z" level=info msg="TearDown network for sandbox \"c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1\" successfully"
May 10 00:00:45.725669 containerd[2018]: time="2025-05-10T00:00:45.725642552Z" level=info msg="StopPodSandbox for \"c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1\" returns successfully"
May 10 00:00:45.761688 containerd[2018]: time="2025-05-10T00:00:45.759498368Z" level=info msg="shim disconnected" id=e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254 namespace=k8s.io
May 10 00:00:45.761688 containerd[2018]: time="2025-05-10T00:00:45.760019600Z" level=warning msg="cleaning up after shim disconnected" id=e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254 namespace=k8s.io
May 10 00:00:45.761688 containerd[2018]: time="2025-05-10T00:00:45.760047140Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:45.785017 containerd[2018]: time="2025-05-10T00:00:45.784928996Z" level=info msg="TearDown network for sandbox \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" successfully"
May 10 00:00:45.785017 containerd[2018]: time="2025-05-10T00:00:45.784992224Z" level=info msg="StopPodSandbox for \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" returns successfully"
May 10 00:00:45.885752 kubelet[3559]: I0510 00:00:45.884226 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plcrg\" (UniqueName: \"kubernetes.io/projected/6f8adc2a-de67-41c6-9dca-5ba5311b69ac-kube-api-access-plcrg\") pod \"6f8adc2a-de67-41c6-9dca-5ba5311b69ac\" (UID: \"6f8adc2a-de67-41c6-9dca-5ba5311b69ac\") "
May 10 00:00:45.885752 kubelet[3559]: I0510 00:00:45.884310 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f8adc2a-de67-41c6-9dca-5ba5311b69ac-cilium-config-path\") pod \"6f8adc2a-de67-41c6-9dca-5ba5311b69ac\" (UID: \"6f8adc2a-de67-41c6-9dca-5ba5311b69ac\") "
May 10 00:00:45.889242 kubelet[3559]: I0510 00:00:45.889163 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f8adc2a-de67-41c6-9dca-5ba5311b69ac-kube-api-access-plcrg" (OuterVolumeSpecName: "kube-api-access-plcrg") pod "6f8adc2a-de67-41c6-9dca-5ba5311b69ac" (UID: "6f8adc2a-de67-41c6-9dca-5ba5311b69ac"). InnerVolumeSpecName "kube-api-access-plcrg". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:00:45.892039 kubelet[3559]: I0510 00:00:45.891988 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f8adc2a-de67-41c6-9dca-5ba5311b69ac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6f8adc2a-de67-41c6-9dca-5ba5311b69ac" (UID: "6f8adc2a-de67-41c6-9dca-5ba5311b69ac"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 10 00:00:45.985199 kubelet[3559]: I0510 00:00:45.985053 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/357e29ce-751b-42f6-986b-180fcb0a1f31-cilium-config-path\") pod \"357e29ce-751b-42f6-986b-180fcb0a1f31\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") "
May 10 00:00:45.985199 kubelet[3559]: I0510 00:00:45.985123 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-cilium-run\") pod \"357e29ce-751b-42f6-986b-180fcb0a1f31\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") "
May 10 00:00:45.985199 kubelet[3559]: I0510 00:00:45.985169 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-hostproc\") pod \"357e29ce-751b-42f6-986b-180fcb0a1f31\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") "
May 10 00:00:45.985440 kubelet[3559]: I0510 00:00:45.985221 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-cilium-cgroup\") pod \"357e29ce-751b-42f6-986b-180fcb0a1f31\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") "
May 10 00:00:45.985440 kubelet[3559]: I0510 00:00:45.985259 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-cni-path\") pod \"357e29ce-751b-42f6-986b-180fcb0a1f31\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") "
May 10 00:00:45.985440 kubelet[3559]: I0510 00:00:45.985297 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/357e29ce-751b-42f6-986b-180fcb0a1f31-clustermesh-secrets\") pod \"357e29ce-751b-42f6-986b-180fcb0a1f31\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") "
May 10 00:00:45.985440 kubelet[3559]: I0510 00:00:45.985335 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q58hl\" (UniqueName: \"kubernetes.io/projected/357e29ce-751b-42f6-986b-180fcb0a1f31-kube-api-access-q58hl\") pod \"357e29ce-751b-42f6-986b-180fcb0a1f31\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") "
May 10 00:00:45.985440 kubelet[3559]: I0510 00:00:45.985367 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-host-proc-sys-kernel\") pod \"357e29ce-751b-42f6-986b-180fcb0a1f31\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") "
May 10 00:00:45.985440 kubelet[3559]: I0510 00:00:45.985404 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/357e29ce-751b-42f6-986b-180fcb0a1f31-hubble-tls\") pod \"357e29ce-751b-42f6-986b-180fcb0a1f31\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") "
May 10 00:00:45.985763 kubelet[3559]: I0510 00:00:45.985435 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-etc-cni-netd\") pod \"357e29ce-751b-42f6-986b-180fcb0a1f31\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") "
May 10 00:00:45.985763 kubelet[3559]: I0510 00:00:45.985466 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-lib-modules\") pod \"357e29ce-751b-42f6-986b-180fcb0a1f31\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") "
May 10 00:00:45.985763 kubelet[3559]: I0510 00:00:45.985499 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-xtables-lock\") pod \"357e29ce-751b-42f6-986b-180fcb0a1f31\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") "
May 10 00:00:45.985763 kubelet[3559]: I0510 00:00:45.985529 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-bpf-maps\") pod \"357e29ce-751b-42f6-986b-180fcb0a1f31\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") "
May 10 00:00:45.985763 kubelet[3559]: I0510 00:00:45.985583 3559 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-host-proc-sys-net\") pod \"357e29ce-751b-42f6-986b-180fcb0a1f31\" (UID: \"357e29ce-751b-42f6-986b-180fcb0a1f31\") "
May 10 00:00:45.985763 kubelet[3559]: I0510 00:00:45.985643 3559 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-plcrg\" (UniqueName: \"kubernetes.io/projected/6f8adc2a-de67-41c6-9dca-5ba5311b69ac-kube-api-access-plcrg\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:45.986095 kubelet[3559]: I0510 00:00:45.985670 3559 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f8adc2a-de67-41c6-9dca-5ba5311b69ac-cilium-config-path\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:45.986095 kubelet[3559]: I0510 00:00:45.985758 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "357e29ce-751b-42f6-986b-180fcb0a1f31" (UID: "357e29ce-751b-42f6-986b-180fcb0a1f31"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:45.991944 kubelet[3559]: I0510 00:00:45.991215 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/357e29ce-751b-42f6-986b-180fcb0a1f31-kube-api-access-q58hl" (OuterVolumeSpecName: "kube-api-access-q58hl") pod "357e29ce-751b-42f6-986b-180fcb0a1f31" (UID: "357e29ce-751b-42f6-986b-180fcb0a1f31"). InnerVolumeSpecName "kube-api-access-q58hl". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:00:45.991944 kubelet[3559]: I0510 00:00:45.991298 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "357e29ce-751b-42f6-986b-180fcb0a1f31" (UID: "357e29ce-751b-42f6-986b-180fcb0a1f31"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:45.991944 kubelet[3559]: I0510 00:00:45.991343 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-hostproc" (OuterVolumeSpecName: "hostproc") pod "357e29ce-751b-42f6-986b-180fcb0a1f31" (UID: "357e29ce-751b-42f6-986b-180fcb0a1f31"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:45.991944 kubelet[3559]: I0510 00:00:45.991383 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "357e29ce-751b-42f6-986b-180fcb0a1f31" (UID: "357e29ce-751b-42f6-986b-180fcb0a1f31"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:45.991944 kubelet[3559]: I0510 00:00:45.991422 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-cni-path" (OuterVolumeSpecName: "cni-path") pod "357e29ce-751b-42f6-986b-180fcb0a1f31" (UID: "357e29ce-751b-42f6-986b-180fcb0a1f31"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:45.994135 kubelet[3559]: I0510 00:00:45.994065 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/357e29ce-751b-42f6-986b-180fcb0a1f31-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "357e29ce-751b-42f6-986b-180fcb0a1f31" (UID: "357e29ce-751b-42f6-986b-180fcb0a1f31"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 10 00:00:45.994272 kubelet[3559]: I0510 00:00:45.994165 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "357e29ce-751b-42f6-986b-180fcb0a1f31" (UID: "357e29ce-751b-42f6-986b-180fcb0a1f31"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:45.997051 kubelet[3559]: I0510 00:00:45.996765 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/357e29ce-751b-42f6-986b-180fcb0a1f31-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "357e29ce-751b-42f6-986b-180fcb0a1f31" (UID: "357e29ce-751b-42f6-986b-180fcb0a1f31"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 10 00:00:45.997051 kubelet[3559]: I0510 00:00:45.996872 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "357e29ce-751b-42f6-986b-180fcb0a1f31" (UID: "357e29ce-751b-42f6-986b-180fcb0a1f31"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:45.997051 kubelet[3559]: I0510 00:00:45.996914 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "357e29ce-751b-42f6-986b-180fcb0a1f31" (UID: "357e29ce-751b-42f6-986b-180fcb0a1f31"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:45.997051 kubelet[3559]: I0510 00:00:45.996954 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "357e29ce-751b-42f6-986b-180fcb0a1f31" (UID: "357e29ce-751b-42f6-986b-180fcb0a1f31"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:45.997051 kubelet[3559]: I0510 00:00:45.996993 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "357e29ce-751b-42f6-986b-180fcb0a1f31" (UID: "357e29ce-751b-42f6-986b-180fcb0a1f31"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:00:45.999118 kubelet[3559]: I0510 00:00:45.999046 3559 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/357e29ce-751b-42f6-986b-180fcb0a1f31-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "357e29ce-751b-42f6-986b-180fcb0a1f31" (UID: "357e29ce-751b-42f6-986b-180fcb0a1f31"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:00:46.086462 kubelet[3559]: I0510 00:00:46.086393 3559 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-xtables-lock\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:46.086462 kubelet[3559]: I0510 00:00:46.086451 3559 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-bpf-maps\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:46.086672 kubelet[3559]: I0510 00:00:46.086475 3559 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-host-proc-sys-net\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:46.086672 kubelet[3559]: I0510 00:00:46.086505 3559 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/357e29ce-751b-42f6-986b-180fcb0a1f31-cilium-config-path\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:46.086672 kubelet[3559]: I0510 00:00:46.086531 3559 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-cilium-run\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:46.086672 kubelet[3559]: I0510 00:00:46.086550 3559 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-hostproc\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:46.086672 kubelet[3559]: I0510 00:00:46.086569 3559 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/357e29ce-751b-42f6-986b-180fcb0a1f31-clustermesh-secrets\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:46.086672 kubelet[3559]: I0510 00:00:46.086588 3559 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-cilium-cgroup\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:46.086672 kubelet[3559]: I0510 00:00:46.086607 3559 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-cni-path\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:46.086672 kubelet[3559]: I0510 00:00:46.086626 3559 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-etc-cni-netd\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:46.087128 kubelet[3559]: I0510 00:00:46.086644 3559 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-q58hl\" (UniqueName: \"kubernetes.io/projected/357e29ce-751b-42f6-986b-180fcb0a1f31-kube-api-access-q58hl\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:46.087128 kubelet[3559]: I0510 00:00:46.086663 3559 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-host-proc-sys-kernel\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:46.087128 kubelet[3559]: I0510 00:00:46.086686 3559 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/357e29ce-751b-42f6-986b-180fcb0a1f31-hubble-tls\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:46.087128 kubelet[3559]: I0510 00:00:46.086707 3559 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/357e29ce-751b-42f6-986b-180fcb0a1f31-lib-modules\") on node \"ip-172-31-30-213\" DevicePath \"\""
May 10 00:00:46.179867 kubelet[3559]: I0510 00:00:46.179684 3559 scope.go:117] "RemoveContainer" containerID="c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37"
May 10 00:00:46.186385 containerd[2018]: time="2025-05-10T00:00:46.185268258Z" level=info msg="RemoveContainer for \"c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37\""
May 10 00:00:46.195319 systemd[1]: Removed slice kubepods-burstable-pod357e29ce_751b_42f6_986b_180fcb0a1f31.slice - libcontainer container kubepods-burstable-pod357e29ce_751b_42f6_986b_180fcb0a1f31.slice.
May 10 00:00:46.195553 systemd[1]: kubepods-burstable-pod357e29ce_751b_42f6_986b_180fcb0a1f31.slice: Consumed 15.265s CPU time.
May 10 00:00:46.200267 containerd[2018]: time="2025-05-10T00:00:46.200015346Z" level=info msg="RemoveContainer for \"c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37\" returns successfully"
May 10 00:00:46.201402 kubelet[3559]: I0510 00:00:46.200984 3559 scope.go:117] "RemoveContainer" containerID="8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5"
May 10 00:00:46.204017 systemd[1]: Removed slice kubepods-besteffort-pod6f8adc2a_de67_41c6_9dca_5ba5311b69ac.slice - libcontainer container kubepods-besteffort-pod6f8adc2a_de67_41c6_9dca_5ba5311b69ac.slice.
May 10 00:00:46.205702 containerd[2018]: time="2025-05-10T00:00:46.205526190Z" level=info msg="RemoveContainer for \"8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5\""
May 10 00:00:46.215130 containerd[2018]: time="2025-05-10T00:00:46.214715382Z" level=info msg="RemoveContainer for \"8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5\" returns successfully"
May 10 00:00:46.216013 kubelet[3559]: I0510 00:00:46.215596 3559 scope.go:117] "RemoveContainer" containerID="31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555"
May 10 00:00:46.218091 containerd[2018]: time="2025-05-10T00:00:46.217973106Z" level=info msg="RemoveContainer for \"31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555\""
May 10 00:00:46.240239 containerd[2018]: time="2025-05-10T00:00:46.237771390Z" level=info msg="RemoveContainer for \"31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555\" returns successfully"
May 10 00:00:46.241223 kubelet[3559]: I0510 00:00:46.240606 3559 scope.go:117] "RemoveContainer" containerID="3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39"
May 10 00:00:46.248654 containerd[2018]: time="2025-05-10T00:00:46.248401434Z" level=info msg="RemoveContainer for \"3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39\""
May 10 00:00:46.258697 containerd[2018]: time="2025-05-10T00:00:46.258598974Z" level=info msg="RemoveContainer for \"3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39\" returns successfully"
May 10 00:00:46.259180 kubelet[3559]: I0510 00:00:46.259117 3559 scope.go:117] "RemoveContainer" containerID="1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621"
May 10 00:00:46.262881 containerd[2018]: time="2025-05-10T00:00:46.262750110Z" level=info msg="RemoveContainer for \"1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621\""
May 10 00:00:46.269415 containerd[2018]: time="2025-05-10T00:00:46.269272926Z" level=info msg="RemoveContainer for \"1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621\" returns successfully"
May 10 00:00:46.269901 kubelet[3559]: I0510 00:00:46.269624 3559 scope.go:117] "RemoveContainer" containerID="c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37"
May 10 00:00:46.270416 containerd[2018]: time="2025-05-10T00:00:46.270282630Z" level=error msg="ContainerStatus for \"c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37\": not found"
May 10 00:00:46.270652 kubelet[3559]: E0510 00:00:46.270543 3559 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37\": not found" containerID="c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37"
May 10 00:00:46.270857 kubelet[3559]: I0510 00:00:46.270670 3559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37"} err="failed to get container status \"c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5cd3106c7bf05cfb52a6c6330c860b2f60209e552b2c1e13b8dcbf298910a37\": not found"
May 10 00:00:46.270857 kubelet[3559]: I0510 00:00:46.270851 3559 scope.go:117] "RemoveContainer" containerID="8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5"
May 10 00:00:46.271402 containerd[2018]: time="2025-05-10T00:00:46.271199118Z" level=error msg="ContainerStatus for \"8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5\": not found"
May 10 00:00:46.271540 kubelet[3559]: E0510 00:00:46.271494 3559 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5\": not found" containerID="8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5"
May 10 00:00:46.271618 kubelet[3559]: I0510 00:00:46.271558 3559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5"} err="failed to get container status \"8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5\": rpc error: code = NotFound desc = an error occurred when try to find container \"8128909c76ee08676f6576b22e449c73566ec5e47c247cd54c3dfaff2cc85da5\": not found"
May 10 00:00:46.271618 kubelet[3559]: I0510 00:00:46.271601 3559 scope.go:117] "RemoveContainer" containerID="31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555"
May 10 00:00:46.272005 containerd[2018]: time="2025-05-10T00:00:46.271929114Z" level=error msg="ContainerStatus for \"31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555\": not found"
May 10 00:00:46.272506 kubelet[3559]: E0510 00:00:46.272286 3559 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555\": not found" containerID="31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555"
May 10 00:00:46.272506 kubelet[3559]: I0510 00:00:46.272334 3559 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"containerd","ID":"31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555"} err="failed to get container status \"31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555\": rpc error: code = NotFound desc = an error occurred when try to find container \"31b5d8c4890f063a317dd15e81b2b47157720dc3fc835b5fb39da4530c69b555\": not found" May 10 00:00:46.272506 kubelet[3559]: I0510 00:00:46.272367 3559 scope.go:117] "RemoveContainer" containerID="3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39" May 10 00:00:46.272831 containerd[2018]: time="2025-05-10T00:00:46.272704446Z" level=error msg="ContainerStatus for \"3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39\": not found" May 10 00:00:46.273140 kubelet[3559]: E0510 00:00:46.273083 3559 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39\": not found" containerID="3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39" May 10 00:00:46.273216 kubelet[3559]: I0510 00:00:46.273141 3559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39"} err="failed to get container status \"3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39\": rpc error: code = NotFound desc = an error occurred when try to find container \"3fab6a9f591789430bca360268e0d3692915c34e822591980b7dd1d416cada39\": not found" May 10 00:00:46.273216 kubelet[3559]: I0510 00:00:46.273175 3559 scope.go:117] "RemoveContainer" containerID="1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621" May 10 00:00:46.273590 
containerd[2018]: time="2025-05-10T00:00:46.273471558Z" level=error msg="ContainerStatus for \"1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621\": not found" May 10 00:00:46.274376 kubelet[3559]: E0510 00:00:46.273863 3559 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621\": not found" containerID="1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621" May 10 00:00:46.274376 kubelet[3559]: I0510 00:00:46.273914 3559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621"} err="failed to get container status \"1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d1508532dbe321dee310b9f106c283c1422eeec8ff5e3eaa930cdbecfac2621\": not found" May 10 00:00:46.274376 kubelet[3559]: I0510 00:00:46.273950 3559 scope.go:117] "RemoveContainer" containerID="e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb" May 10 00:00:46.277278 containerd[2018]: time="2025-05-10T00:00:46.276848370Z" level=info msg="RemoveContainer for \"e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb\"" May 10 00:00:46.282914 containerd[2018]: time="2025-05-10T00:00:46.282862674Z" level=info msg="RemoveContainer for \"e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb\" returns successfully" May 10 00:00:46.283546 kubelet[3559]: I0510 00:00:46.283392 3559 scope.go:117] "RemoveContainer" containerID="e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb" May 10 00:00:46.283935 containerd[2018]: 
time="2025-05-10T00:00:46.283876746Z" level=error msg="ContainerStatus for \"e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb\": not found" May 10 00:00:46.284202 kubelet[3559]: E0510 00:00:46.284125 3559 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb\": not found" containerID="e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb" May 10 00:00:46.284202 kubelet[3559]: I0510 00:00:46.284182 3559 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb"} err="failed to get container status \"e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3fcb1baedd7161a5e9cdf2d65bfc22b06c21c31ce17431fd3bd4472213b25bb\": not found" May 10 00:00:46.442601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254-rootfs.mount: Deactivated successfully. May 10 00:00:46.443040 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254-shm.mount: Deactivated successfully. May 10 00:00:46.443295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1-rootfs.mount: Deactivated successfully. May 10 00:00:46.443535 systemd[1]: var-lib-kubelet-pods-357e29ce\x2d751b\x2d42f6\x2d986b\x2d180fcb0a1f31-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq58hl.mount: Deactivated successfully. 
May 10 00:00:46.443818 systemd[1]: var-lib-kubelet-pods-6f8adc2a\x2dde67\x2d41c6\x2d9dca\x2d5ba5311b69ac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dplcrg.mount: Deactivated successfully. May 10 00:00:46.444086 systemd[1]: var-lib-kubelet-pods-357e29ce\x2d751b\x2d42f6\x2d986b\x2d180fcb0a1f31-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 10 00:00:46.444322 systemd[1]: var-lib-kubelet-pods-357e29ce\x2d751b\x2d42f6\x2d986b\x2d180fcb0a1f31-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 00:00:46.594581 kubelet[3559]: I0510 00:00:46.594444 3559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="357e29ce-751b-42f6-986b-180fcb0a1f31" path="/var/lib/kubelet/pods/357e29ce-751b-42f6-986b-180fcb0a1f31/volumes" May 10 00:00:46.597210 kubelet[3559]: I0510 00:00:46.597166 3559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f8adc2a-de67-41c6-9dca-5ba5311b69ac" path="/var/lib/kubelet/pods/6f8adc2a-de67-41c6-9dca-5ba5311b69ac/volumes" May 10 00:00:47.373396 sshd[5182]: pam_unix(sshd:session): session closed for user core May 10 00:00:47.381789 systemd-logind[1992]: Session 27 logged out. Waiting for processes to exit. May 10 00:00:47.383326 systemd[1]: sshd@26-172.31.30.213:22-147.75.109.163:35472.service: Deactivated successfully. May 10 00:00:47.388459 systemd[1]: session-27.scope: Deactivated successfully. May 10 00:00:47.388827 systemd[1]: session-27.scope: Consumed 2.226s CPU time. May 10 00:00:47.393129 systemd-logind[1992]: Removed session 27. May 10 00:00:47.413301 systemd[1]: Started sshd@27-172.31.30.213:22-147.75.109.163:60246.service - OpenSSH per-connection server daemon (147.75.109.163:60246). 
May 10 00:00:47.585120 sshd[5341]: Accepted publickey for core from 147.75.109.163 port 60246 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:47.588135 sshd[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:47.597638 systemd-logind[1992]: New session 28 of user core. May 10 00:00:47.602996 systemd[1]: Started session-28.scope - Session 28 of User core. May 10 00:00:47.878252 kubelet[3559]: E0510 00:00:47.878109 3559 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 00:00:48.465884 ntpd[1987]: Deleting interface #12 lxc_health, fe80::980c:7cff:fe63:5fd7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs May 10 00:00:48.466365 ntpd[1987]: 10 May 00:00:48 ntpd[1987]: Deleting interface #12 lxc_health, fe80::980c:7cff:fe63:5fd7%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs May 10 00:00:49.801311 sshd[5341]: pam_unix(sshd:session): session closed for user core May 10 00:00:49.811193 systemd[1]: sshd@27-172.31.30.213:22-147.75.109.163:60246.service: Deactivated successfully. May 10 00:00:49.817319 systemd[1]: session-28.scope: Deactivated successfully. 
May 10 00:00:49.820909 kubelet[3559]: I0510 00:00:49.819510 3559 topology_manager.go:215] "Topology Admit Handler" podUID="a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12" podNamespace="kube-system" podName="cilium-nj2bz" May 10 00:00:49.820909 kubelet[3559]: E0510 00:00:49.819595 3559 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="357e29ce-751b-42f6-986b-180fcb0a1f31" containerName="mount-cgroup" May 10 00:00:49.820909 kubelet[3559]: E0510 00:00:49.819616 3559 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="357e29ce-751b-42f6-986b-180fcb0a1f31" containerName="apply-sysctl-overwrites" May 10 00:00:49.820909 kubelet[3559]: E0510 00:00:49.819632 3559 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="357e29ce-751b-42f6-986b-180fcb0a1f31" containerName="mount-bpf-fs" May 10 00:00:49.820909 kubelet[3559]: E0510 00:00:49.819648 3559 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="357e29ce-751b-42f6-986b-180fcb0a1f31" containerName="clean-cilium-state" May 10 00:00:49.820909 kubelet[3559]: E0510 00:00:49.819663 3559 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="357e29ce-751b-42f6-986b-180fcb0a1f31" containerName="cilium-agent" May 10 00:00:49.820909 kubelet[3559]: E0510 00:00:49.819680 3559 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f8adc2a-de67-41c6-9dca-5ba5311b69ac" containerName="cilium-operator" May 10 00:00:49.825374 kubelet[3559]: I0510 00:00:49.821861 3559 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f8adc2a-de67-41c6-9dca-5ba5311b69ac" containerName="cilium-operator" May 10 00:00:49.825374 kubelet[3559]: I0510 00:00:49.821898 3559 memory_manager.go:354] "RemoveStaleState removing state" podUID="357e29ce-751b-42f6-986b-180fcb0a1f31" containerName="cilium-agent" May 10 00:00:49.821039 systemd[1]: session-28.scope: Consumed 2.011s CPU time. May 10 00:00:49.823654 systemd-logind[1992]: Session 28 logged out. Waiting for processes to exit. 
May 10 00:00:49.854147 systemd[1]: Started sshd@28-172.31.30.213:22-147.75.109.163:60262.service - OpenSSH per-connection server daemon (147.75.109.163:60262). May 10 00:00:49.856941 systemd-logind[1992]: Removed session 28. May 10 00:00:49.877929 systemd[1]: Created slice kubepods-burstable-poda9e29c33_e7c7_42d6_8815_2e3b1b0c5b12.slice - libcontainer container kubepods-burstable-poda9e29c33_e7c7_42d6_8815_2e3b1b0c5b12.slice. May 10 00:00:49.911978 kubelet[3559]: I0510 00:00:49.911912 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12-cilium-run\") pod \"cilium-nj2bz\" (UID: \"a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12\") " pod="kube-system/cilium-nj2bz" May 10 00:00:49.912296 kubelet[3559]: I0510 00:00:49.912265 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12-bpf-maps\") pod \"cilium-nj2bz\" (UID: \"a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12\") " pod="kube-system/cilium-nj2bz" May 10 00:00:49.912491 kubelet[3559]: I0510 00:00:49.912444 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12-hostproc\") pod \"cilium-nj2bz\" (UID: \"a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12\") " pod="kube-system/cilium-nj2bz" May 10 00:00:49.912670 kubelet[3559]: I0510 00:00:49.912633 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12-xtables-lock\") pod \"cilium-nj2bz\" (UID: \"a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12\") " pod="kube-system/cilium-nj2bz" May 10 00:00:49.912911 kubelet[3559]: I0510 00:00:49.912859 3559 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12-hubble-tls\") pod \"cilium-nj2bz\" (UID: \"a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12\") " pod="kube-system/cilium-nj2bz" May 10 00:00:49.913120 kubelet[3559]: I0510 00:00:49.913072 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12-cilium-cgroup\") pod \"cilium-nj2bz\" (UID: \"a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12\") " pod="kube-system/cilium-nj2bz" May 10 00:00:49.913319 kubelet[3559]: I0510 00:00:49.913280 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12-etc-cni-netd\") pod \"cilium-nj2bz\" (UID: \"a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12\") " pod="kube-system/cilium-nj2bz" May 10 00:00:49.913473 kubelet[3559]: I0510 00:00:49.913436 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6cf5\" (UniqueName: \"kubernetes.io/projected/a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12-kube-api-access-v6cf5\") pod \"cilium-nj2bz\" (UID: \"a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12\") " pod="kube-system/cilium-nj2bz" May 10 00:00:49.913625 kubelet[3559]: I0510 00:00:49.913589 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12-host-proc-sys-kernel\") pod \"cilium-nj2bz\" (UID: \"a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12\") " pod="kube-system/cilium-nj2bz" May 10 00:00:49.913806 kubelet[3559]: I0510 00:00:49.913777 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12-clustermesh-secrets\") pod \"cilium-nj2bz\" (UID: \"a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12\") " pod="kube-system/cilium-nj2bz" May 10 00:00:49.913997 kubelet[3559]: I0510 00:00:49.913957 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12-cilium-config-path\") pod \"cilium-nj2bz\" (UID: \"a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12\") " pod="kube-system/cilium-nj2bz" May 10 00:00:49.914384 kubelet[3559]: I0510 00:00:49.914139 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12-cilium-ipsec-secrets\") pod \"cilium-nj2bz\" (UID: \"a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12\") " pod="kube-system/cilium-nj2bz" May 10 00:00:49.914384 kubelet[3559]: I0510 00:00:49.914221 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12-cni-path\") pod \"cilium-nj2bz\" (UID: \"a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12\") " pod="kube-system/cilium-nj2bz" May 10 00:00:49.914384 kubelet[3559]: I0510 00:00:49.914274 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12-lib-modules\") pod \"cilium-nj2bz\" (UID: \"a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12\") " pod="kube-system/cilium-nj2bz" May 10 00:00:49.914384 kubelet[3559]: I0510 00:00:49.914323 3559 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12-host-proc-sys-net\") pod \"cilium-nj2bz\" (UID: 
\"a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12\") " pod="kube-system/cilium-nj2bz" May 10 00:00:50.070132 sshd[5354]: Accepted publickey for core from 147.75.109.163 port 60262 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:50.078895 sshd[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:50.098689 systemd-logind[1992]: New session 29 of user core. May 10 00:00:50.107108 systemd[1]: Started session-29.scope - Session 29 of User core. May 10 00:00:50.191559 containerd[2018]: time="2025-05-10T00:00:50.191450362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nj2bz,Uid:a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12,Namespace:kube-system,Attempt:0,}" May 10 00:00:50.234028 sshd[5354]: pam_unix(sshd:session): session closed for user core May 10 00:00:50.244151 containerd[2018]: time="2025-05-10T00:00:50.243569650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:00:50.244151 containerd[2018]: time="2025-05-10T00:00:50.243668410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:00:50.244151 containerd[2018]: time="2025-05-10T00:00:50.243694810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:50.244802 containerd[2018]: time="2025-05-10T00:00:50.244660846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:00:50.245436 systemd[1]: sshd@28-172.31.30.213:22-147.75.109.163:60262.service: Deactivated successfully. May 10 00:00:50.252686 systemd[1]: session-29.scope: Deactivated successfully. May 10 00:00:50.255935 systemd-logind[1992]: Session 29 logged out. Waiting for processes to exit. 
May 10 00:00:50.280019 systemd[1]: Started sshd@29-172.31.30.213:22-147.75.109.163:60272.service - OpenSSH per-connection server daemon (147.75.109.163:60272). May 10 00:00:50.282691 systemd-logind[1992]: Removed session 29. May 10 00:00:50.300006 systemd[1]: Started cri-containerd-995e6d0bf42aeb5d29abfeefb1a4bac607dd7a38a70dc8f231d5d438d3f1128c.scope - libcontainer container 995e6d0bf42aeb5d29abfeefb1a4bac607dd7a38a70dc8f231d5d438d3f1128c. May 10 00:00:50.356541 containerd[2018]: time="2025-05-10T00:00:50.355239095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nj2bz,Uid:a9e29c33-e7c7-42d6-8815-2e3b1b0c5b12,Namespace:kube-system,Attempt:0,} returns sandbox id \"995e6d0bf42aeb5d29abfeefb1a4bac607dd7a38a70dc8f231d5d438d3f1128c\"" May 10 00:00:50.362903 containerd[2018]: time="2025-05-10T00:00:50.362833727Z" level=info msg="CreateContainer within sandbox \"995e6d0bf42aeb5d29abfeefb1a4bac607dd7a38a70dc8f231d5d438d3f1128c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:00:50.390237 containerd[2018]: time="2025-05-10T00:00:50.390159527Z" level=info msg="CreateContainer within sandbox \"995e6d0bf42aeb5d29abfeefb1a4bac607dd7a38a70dc8f231d5d438d3f1128c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"acb69b85c80e8f881d5d76c4daf1f3b4479b1108ca375c2e1d448892c5e6ebcc\"" May 10 00:00:50.391107 containerd[2018]: time="2025-05-10T00:00:50.391011095Z" level=info msg="StartContainer for \"acb69b85c80e8f881d5d76c4daf1f3b4479b1108ca375c2e1d448892c5e6ebcc\"" May 10 00:00:50.438040 systemd[1]: Started cri-containerd-acb69b85c80e8f881d5d76c4daf1f3b4479b1108ca375c2e1d448892c5e6ebcc.scope - libcontainer container acb69b85c80e8f881d5d76c4daf1f3b4479b1108ca375c2e1d448892c5e6ebcc. 
May 10 00:00:50.474597 sshd[5393]: Accepted publickey for core from 147.75.109.163 port 60272 ssh2: RSA SHA256:yk6AfQWmMRYxezm8PvpiDSiRPBmf2ReLg5ZxrxD++D8 May 10 00:00:50.477555 sshd[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 10 00:00:50.492949 systemd-logind[1992]: New session 30 of user core. May 10 00:00:50.499634 containerd[2018]: time="2025-05-10T00:00:50.499575635Z" level=info msg="StartContainer for \"acb69b85c80e8f881d5d76c4daf1f3b4479b1108ca375c2e1d448892c5e6ebcc\" returns successfully" May 10 00:00:50.501011 systemd[1]: Started session-30.scope - Session 30 of User core. May 10 00:00:50.509272 systemd[1]: cri-containerd-acb69b85c80e8f881d5d76c4daf1f3b4479b1108ca375c2e1d448892c5e6ebcc.scope: Deactivated successfully. May 10 00:00:50.562413 containerd[2018]: time="2025-05-10T00:00:50.562334772Z" level=info msg="shim disconnected" id=acb69b85c80e8f881d5d76c4daf1f3b4479b1108ca375c2e1d448892c5e6ebcc namespace=k8s.io May 10 00:00:50.562858 containerd[2018]: time="2025-05-10T00:00:50.562668108Z" level=warning msg="cleaning up after shim disconnected" id=acb69b85c80e8f881d5d76c4daf1f3b4479b1108ca375c2e1d448892c5e6ebcc namespace=k8s.io May 10 00:00:50.562858 containerd[2018]: time="2025-05-10T00:00:50.562695864Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:00:51.214480 containerd[2018]: time="2025-05-10T00:00:51.214387559Z" level=info msg="CreateContainer within sandbox \"995e6d0bf42aeb5d29abfeefb1a4bac607dd7a38a70dc8f231d5d438d3f1128c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:00:51.243240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount390875105.mount: Deactivated successfully. 
May 10 00:00:51.245324 containerd[2018]: time="2025-05-10T00:00:51.244601987Z" level=info msg="CreateContainer within sandbox \"995e6d0bf42aeb5d29abfeefb1a4bac607dd7a38a70dc8f231d5d438d3f1128c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0f50e9a73b8b3f708f9e09d6a76c628f7bb2026964f7983151cab3737e2516df\"" May 10 00:00:51.247780 containerd[2018]: time="2025-05-10T00:00:51.247278479Z" level=info msg="StartContainer for \"0f50e9a73b8b3f708f9e09d6a76c628f7bb2026964f7983151cab3737e2516df\"" May 10 00:00:51.304037 systemd[1]: Started cri-containerd-0f50e9a73b8b3f708f9e09d6a76c628f7bb2026964f7983151cab3737e2516df.scope - libcontainer container 0f50e9a73b8b3f708f9e09d6a76c628f7bb2026964f7983151cab3737e2516df. May 10 00:00:51.364559 containerd[2018]: time="2025-05-10T00:00:51.364502052Z" level=info msg="StartContainer for \"0f50e9a73b8b3f708f9e09d6a76c628f7bb2026964f7983151cab3737e2516df\" returns successfully" May 10 00:00:51.384521 systemd[1]: cri-containerd-0f50e9a73b8b3f708f9e09d6a76c628f7bb2026964f7983151cab3737e2516df.scope: Deactivated successfully. May 10 00:00:51.426632 containerd[2018]: time="2025-05-10T00:00:51.426541692Z" level=info msg="shim disconnected" id=0f50e9a73b8b3f708f9e09d6a76c628f7bb2026964f7983151cab3737e2516df namespace=k8s.io May 10 00:00:51.426632 containerd[2018]: time="2025-05-10T00:00:51.426616560Z" level=warning msg="cleaning up after shim disconnected" id=0f50e9a73b8b3f708f9e09d6a76c628f7bb2026964f7983151cab3737e2516df namespace=k8s.io May 10 00:00:51.427079 containerd[2018]: time="2025-05-10T00:00:51.426639264Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:00:52.024588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f50e9a73b8b3f708f9e09d6a76c628f7bb2026964f7983151cab3737e2516df-rootfs.mount: Deactivated successfully. 
May 10 00:00:52.218633 containerd[2018]: time="2025-05-10T00:00:52.218399448Z" level=info msg="CreateContainer within sandbox \"995e6d0bf42aeb5d29abfeefb1a4bac607dd7a38a70dc8f231d5d438d3f1128c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:00:52.261486 containerd[2018]: time="2025-05-10T00:00:52.261409860Z" level=info msg="CreateContainer within sandbox \"995e6d0bf42aeb5d29abfeefb1a4bac607dd7a38a70dc8f231d5d438d3f1128c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0ad2ca984cb176e8880aef611c0de4767eea00e7fd3f9b3d9df98c6d10311f5e\"" May 10 00:00:52.263775 containerd[2018]: time="2025-05-10T00:00:52.262207560Z" level=info msg="StartContainer for \"0ad2ca984cb176e8880aef611c0de4767eea00e7fd3f9b3d9df98c6d10311f5e\"" May 10 00:00:52.322598 systemd[1]: Started cri-containerd-0ad2ca984cb176e8880aef611c0de4767eea00e7fd3f9b3d9df98c6d10311f5e.scope - libcontainer container 0ad2ca984cb176e8880aef611c0de4767eea00e7fd3f9b3d9df98c6d10311f5e. May 10 00:00:52.400692 containerd[2018]: time="2025-05-10T00:00:52.400612069Z" level=info msg="StartContainer for \"0ad2ca984cb176e8880aef611c0de4767eea00e7fd3f9b3d9df98c6d10311f5e\" returns successfully" May 10 00:00:52.403610 systemd[1]: cri-containerd-0ad2ca984cb176e8880aef611c0de4767eea00e7fd3f9b3d9df98c6d10311f5e.scope: Deactivated successfully. 
May 10 00:00:52.449876 containerd[2018]: time="2025-05-10T00:00:52.449649925Z" level=info msg="shim disconnected" id=0ad2ca984cb176e8880aef611c0de4767eea00e7fd3f9b3d9df98c6d10311f5e namespace=k8s.io May 10 00:00:52.449876 containerd[2018]: time="2025-05-10T00:00:52.449862937Z" level=warning msg="cleaning up after shim disconnected" id=0ad2ca984cb176e8880aef611c0de4767eea00e7fd3f9b3d9df98c6d10311f5e namespace=k8s.io May 10 00:00:52.450236 containerd[2018]: time="2025-05-10T00:00:52.449909605Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 10 00:00:52.476886 containerd[2018]: time="2025-05-10T00:00:52.476789965Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:00:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 10 00:00:52.577996 containerd[2018]: time="2025-05-10T00:00:52.577835822Z" level=info msg="StopPodSandbox for \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\"" May 10 00:00:52.577996 containerd[2018]: time="2025-05-10T00:00:52.577980326Z" level=info msg="TearDown network for sandbox \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" successfully" May 10 00:00:52.577996 containerd[2018]: time="2025-05-10T00:00:52.578005682Z" level=info msg="StopPodSandbox for \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" returns successfully" May 10 00:00:52.579145 containerd[2018]: time="2025-05-10T00:00:52.579019334Z" level=info msg="RemovePodSandbox for \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\"" May 10 00:00:52.579145 containerd[2018]: time="2025-05-10T00:00:52.579081338Z" level=info msg="Forcibly stopping sandbox \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\"" May 10 00:00:52.579564 containerd[2018]: time="2025-05-10T00:00:52.579209846Z" level=info msg="TearDown network for sandbox 
\"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" successfully"
May 10 00:00:52.585474 containerd[2018]: time="2025-05-10T00:00:52.585391706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:00:52.585629 containerd[2018]: time="2025-05-10T00:00:52.585488570Z" level=info msg="RemovePodSandbox \"e6495e39dcf3d2ae8f0f14e0140ae06bf8dab01b0a82a98d54f2e031aad86254\" returns successfully"
May 10 00:00:52.586390 containerd[2018]: time="2025-05-10T00:00:52.586338938Z" level=info msg="StopPodSandbox for \"c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1\""
May 10 00:00:52.587016 containerd[2018]: time="2025-05-10T00:00:52.586481222Z" level=info msg="TearDown network for sandbox \"c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1\" successfully"
May 10 00:00:52.587016 containerd[2018]: time="2025-05-10T00:00:52.586516778Z" level=info msg="StopPodSandbox for \"c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1\" returns successfully"
May 10 00:00:52.588804 containerd[2018]: time="2025-05-10T00:00:52.587502614Z" level=info msg="RemovePodSandbox for \"c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1\""
May 10 00:00:52.588804 containerd[2018]: time="2025-05-10T00:00:52.587555222Z" level=info msg="Forcibly stopping sandbox \"c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1\""
May 10 00:00:52.588804 containerd[2018]: time="2025-05-10T00:00:52.587659130Z" level=info msg="TearDown network for sandbox \"c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1\" successfully"
May 10 00:00:52.597664 containerd[2018]: time="2025-05-10T00:00:52.597605126Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 10 00:00:52.597859 containerd[2018]: time="2025-05-10T00:00:52.597687602Z" level=info msg="RemovePodSandbox \"c323a7571bd868bf13f947bbf3fcfc652c38d714839ca4a413de9fbf7dce06a1\" returns successfully"
May 10 00:00:52.879653 kubelet[3559]: E0510 00:00:52.879346 3559 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 10 00:00:53.024787 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ad2ca984cb176e8880aef611c0de4767eea00e7fd3f9b3d9df98c6d10311f5e-rootfs.mount: Deactivated successfully.
May 10 00:00:53.223209 containerd[2018]: time="2025-05-10T00:00:53.223124557Z" level=info msg="CreateContainer within sandbox \"995e6d0bf42aeb5d29abfeefb1a4bac607dd7a38a70dc8f231d5d438d3f1128c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 10 00:00:53.255065 containerd[2018]: time="2025-05-10T00:00:53.254886325Z" level=info msg="CreateContainer within sandbox \"995e6d0bf42aeb5d29abfeefb1a4bac607dd7a38a70dc8f231d5d438d3f1128c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"30ec07e1900fd73fa61724afebae6fd2bc5421a6d037f0599f04643d446c5e7d\""
May 10 00:00:53.256367 containerd[2018]: time="2025-05-10T00:00:53.256087729Z" level=info msg="StartContainer for \"30ec07e1900fd73fa61724afebae6fd2bc5421a6d037f0599f04643d446c5e7d\""
May 10 00:00:53.315029 systemd[1]: Started cri-containerd-30ec07e1900fd73fa61724afebae6fd2bc5421a6d037f0599f04643d446c5e7d.scope - libcontainer container 30ec07e1900fd73fa61724afebae6fd2bc5421a6d037f0599f04643d446c5e7d.
May 10 00:00:53.376471 systemd[1]: cri-containerd-30ec07e1900fd73fa61724afebae6fd2bc5421a6d037f0599f04643d446c5e7d.scope: Deactivated successfully.
May 10 00:00:53.388793 containerd[2018]: time="2025-05-10T00:00:53.388390670Z" level=info msg="StartContainer for \"30ec07e1900fd73fa61724afebae6fd2bc5421a6d037f0599f04643d446c5e7d\" returns successfully"
May 10 00:00:53.433715 containerd[2018]: time="2025-05-10T00:00:53.433613186Z" level=info msg="shim disconnected" id=30ec07e1900fd73fa61724afebae6fd2bc5421a6d037f0599f04643d446c5e7d namespace=k8s.io
May 10 00:00:53.434264 containerd[2018]: time="2025-05-10T00:00:53.434022902Z" level=warning msg="cleaning up after shim disconnected" id=30ec07e1900fd73fa61724afebae6fd2bc5421a6d037f0599f04643d446c5e7d namespace=k8s.io
May 10 00:00:53.434264 containerd[2018]: time="2025-05-10T00:00:53.434054594Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:00:53.464798 containerd[2018]: time="2025-05-10T00:00:53.463569110Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:00:53Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 10 00:00:54.024889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30ec07e1900fd73fa61724afebae6fd2bc5421a6d037f0599f04643d446c5e7d-rootfs.mount: Deactivated successfully.
May 10 00:00:54.235669 containerd[2018]: time="2025-05-10T00:00:54.235581014Z" level=info msg="CreateContainer within sandbox \"995e6d0bf42aeb5d29abfeefb1a4bac607dd7a38a70dc8f231d5d438d3f1128c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 10 00:00:54.277520 containerd[2018]: time="2025-05-10T00:00:54.277350578Z" level=info msg="CreateContainer within sandbox \"995e6d0bf42aeb5d29abfeefb1a4bac607dd7a38a70dc8f231d5d438d3f1128c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"11c754f67f2d41b62a4c80f03c052f08cb133190cd3bcedaec46cc6d68e60b54\""
May 10 00:00:54.279798 containerd[2018]: time="2025-05-10T00:00:54.278238638Z" level=info msg="StartContainer for \"11c754f67f2d41b62a4c80f03c052f08cb133190cd3bcedaec46cc6d68e60b54\""
May 10 00:00:54.343033 systemd[1]: Started cri-containerd-11c754f67f2d41b62a4c80f03c052f08cb133190cd3bcedaec46cc6d68e60b54.scope - libcontainer container 11c754f67f2d41b62a4c80f03c052f08cb133190cd3bcedaec46cc6d68e60b54.
May 10 00:00:54.483274 containerd[2018]: time="2025-05-10T00:00:54.483183543Z" level=info msg="StartContainer for \"11c754f67f2d41b62a4c80f03c052f08cb133190cd3bcedaec46cc6d68e60b54\" returns successfully"
May 10 00:00:55.025346 systemd[1]: run-containerd-runc-k8s.io-11c754f67f2d41b62a4c80f03c052f08cb133190cd3bcedaec46cc6d68e60b54-runc.LZ9bb0.mount: Deactivated successfully.
May 10 00:00:55.172693 kubelet[3559]: I0510 00:00:55.169780 3559 setters.go:580] "Node became not ready" node="ip-172-31-30-213" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T00:00:55Z","lastTransitionTime":"2025-05-10T00:00:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 10 00:00:55.291239 kubelet[3559]: I0510 00:00:55.291061 3559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nj2bz" podStartSLOduration=6.291038223 podStartE2EDuration="6.291038223s" podCreationTimestamp="2025-05-10 00:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:00:55.290757399 +0000 UTC m=+122.985812160" watchObservedRunningTime="2025-05-10 00:00:55.291038223 +0000 UTC m=+122.986092948"
May 10 00:00:55.498151 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 10 00:00:59.758447 systemd-networkd[1934]: lxc_health: Link UP
May 10 00:00:59.768949 systemd-networkd[1934]: lxc_health: Gained carrier
May 10 00:00:59.769253 (udev-worker)[6209]: Network interface NamePolicy= disabled on kernel command line.
May 10 00:01:01.651611 systemd-networkd[1934]: lxc_health: Gained IPv6LL
May 10 00:01:04.465955 ntpd[1987]: Listen normally on 15 lxc_health [fe80::e4f3:90ff:fe44:9264%14]:123
May 10 00:01:04.466486 ntpd[1987]: 10 May 00:01:04 ntpd[1987]: Listen normally on 15 lxc_health [fe80::e4f3:90ff:fe44:9264%14]:123
May 10 00:01:06.292214 sshd[5393]: pam_unix(sshd:session): session closed for user core
May 10 00:01:06.301218 systemd[1]: sshd@29-172.31.30.213:22-147.75.109.163:60272.service: Deactivated successfully.
May 10 00:01:06.307632 systemd[1]: session-30.scope: Deactivated successfully.
May 10 00:01:06.311610 systemd-logind[1992]: Session 30 logged out. Waiting for processes to exit.
May 10 00:01:06.315342 systemd-logind[1992]: Removed session 30.
May 10 00:01:20.352115 systemd[1]: cri-containerd-c72309b2536b468d65cade522f19892a0dcda3bd5317fc521d4c86b95104d1b4.scope: Deactivated successfully.
May 10 00:01:20.355101 systemd[1]: cri-containerd-c72309b2536b468d65cade522f19892a0dcda3bd5317fc521d4c86b95104d1b4.scope: Consumed 5.526s CPU time, 22.2M memory peak, 0B memory swap peak.
May 10 00:01:20.393098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c72309b2536b468d65cade522f19892a0dcda3bd5317fc521d4c86b95104d1b4-rootfs.mount: Deactivated successfully.
May 10 00:01:20.407875 containerd[2018]: time="2025-05-10T00:01:20.407716660Z" level=info msg="shim disconnected" id=c72309b2536b468d65cade522f19892a0dcda3bd5317fc521d4c86b95104d1b4 namespace=k8s.io
May 10 00:01:20.407875 containerd[2018]: time="2025-05-10T00:01:20.407819728Z" level=warning msg="cleaning up after shim disconnected" id=c72309b2536b468d65cade522f19892a0dcda3bd5317fc521d4c86b95104d1b4 namespace=k8s.io
May 10 00:01:20.407875 containerd[2018]: time="2025-05-10T00:01:20.407840044Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:01:21.325219 kubelet[3559]: I0510 00:01:21.324304 3559 scope.go:117] "RemoveContainer" containerID="c72309b2536b468d65cade522f19892a0dcda3bd5317fc521d4c86b95104d1b4"
May 10 00:01:21.329413 containerd[2018]: time="2025-05-10T00:01:21.329353972Z" level=info msg="CreateContainer within sandbox \"3947db0e30b45f68fb9c8b63c86d22ea327cf38e7e9abcfec841e75cb34612d3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 10 00:01:21.354815 containerd[2018]: time="2025-05-10T00:01:21.354695309Z" level=info msg="CreateContainer within sandbox \"3947db0e30b45f68fb9c8b63c86d22ea327cf38e7e9abcfec841e75cb34612d3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"72676428ad4d2b924174ec88bb4f458e0a6a4ddedd27e4e7f59c8010c5971f76\""
May 10 00:01:21.355874 containerd[2018]: time="2025-05-10T00:01:21.355826669Z" level=info msg="StartContainer for \"72676428ad4d2b924174ec88bb4f458e0a6a4ddedd27e4e7f59c8010c5971f76\""
May 10 00:01:21.406060 systemd[1]: Started cri-containerd-72676428ad4d2b924174ec88bb4f458e0a6a4ddedd27e4e7f59c8010c5971f76.scope - libcontainer container 72676428ad4d2b924174ec88bb4f458e0a6a4ddedd27e4e7f59c8010c5971f76.
May 10 00:01:21.479559 containerd[2018]: time="2025-05-10T00:01:21.479477141Z" level=info msg="StartContainer for \"72676428ad4d2b924174ec88bb4f458e0a6a4ddedd27e4e7f59c8010c5971f76\" returns successfully"
May 10 00:01:24.613124 systemd[1]: cri-containerd-189aea05db9f96634906940e4b25402c42b4c3be9e87b5405ad2673ea1076d54.scope: Deactivated successfully.
May 10 00:01:24.613572 systemd[1]: cri-containerd-189aea05db9f96634906940e4b25402c42b4c3be9e87b5405ad2673ea1076d54.scope: Consumed 3.794s CPU time, 16.2M memory peak, 0B memory swap peak.
May 10 00:01:24.656533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-189aea05db9f96634906940e4b25402c42b4c3be9e87b5405ad2673ea1076d54-rootfs.mount: Deactivated successfully.
May 10 00:01:24.666161 containerd[2018]: time="2025-05-10T00:01:24.665933193Z" level=info msg="shim disconnected" id=189aea05db9f96634906940e4b25402c42b4c3be9e87b5405ad2673ea1076d54 namespace=k8s.io
May 10 00:01:24.666161 containerd[2018]: time="2025-05-10T00:01:24.666007293Z" level=warning msg="cleaning up after shim disconnected" id=189aea05db9f96634906940e4b25402c42b4c3be9e87b5405ad2673ea1076d54 namespace=k8s.io
May 10 00:01:24.666161 containerd[2018]: time="2025-05-10T00:01:24.666027849Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 10 00:01:25.340935 kubelet[3559]: I0510 00:01:25.340823 3559 scope.go:117] "RemoveContainer" containerID="189aea05db9f96634906940e4b25402c42b4c3be9e87b5405ad2673ea1076d54"
May 10 00:01:25.344478 containerd[2018]: time="2025-05-10T00:01:25.344409908Z" level=info msg="CreateContainer within sandbox \"b39465919d9561f5ba6f9a44c9ac487f0bc840ed4e10a3b884b9d1e382307dcc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 10 00:01:25.376776 containerd[2018]: time="2025-05-10T00:01:25.375211377Z" level=info msg="CreateContainer within sandbox \"b39465919d9561f5ba6f9a44c9ac487f0bc840ed4e10a3b884b9d1e382307dcc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"308174ad70c74d8af50af0e96905f7ab7d1adc47a587b18e5cc8bed3dcda8237\""
May 10 00:01:25.378591 containerd[2018]: time="2025-05-10T00:01:25.378531189Z" level=info msg="StartContainer for \"308174ad70c74d8af50af0e96905f7ab7d1adc47a587b18e5cc8bed3dcda8237\""
May 10 00:01:25.435260 systemd[1]: Started cri-containerd-308174ad70c74d8af50af0e96905f7ab7d1adc47a587b18e5cc8bed3dcda8237.scope - libcontainer container 308174ad70c74d8af50af0e96905f7ab7d1adc47a587b18e5cc8bed3dcda8237.
May 10 00:01:25.501370 containerd[2018]: time="2025-05-10T00:01:25.501281949Z" level=info msg="StartContainer for \"308174ad70c74d8af50af0e96905f7ab7d1adc47a587b18e5cc8bed3dcda8237\" returns successfully"
May 10 00:01:25.625487 kubelet[3559]: E0510 00:01:25.625324 3559 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.213:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-213?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
May 10 00:01:35.626407 kubelet[3559]: E0510 00:01:35.625858 3559 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.213:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-213?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"