Feb 13 19:50:37.209328 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:50:37.209374 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 19:50:37.209399 kernel: KASLR disabled due to lack of seed
Feb 13 19:50:37.209416 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:50:37.209432 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Feb 13 19:50:37.209447 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:50:37.209465 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:50:37.209480 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:50:37.209496 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:50:37.209512 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:50:37.209532 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:50:37.209548 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:50:37.209563 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:50:37.209579 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:50:37.209598 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:50:37.209618 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:50:37.209635 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:50:37.209651 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:50:37.209668 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:50:37.209684 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:50:37.209701 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:50:37.209717 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:50:37.209733 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:50:37.209750 kernel: Zone ranges:
Feb 13 19:50:37.209766 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:50:37.209782 kernel: DMA32 empty
Feb 13 19:50:37.209802 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:50:37.209818 kernel: Movable zone start for each node
Feb 13 19:50:37.209835 kernel: Early memory node ranges
Feb 13 19:50:37.209851 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:50:37.209867 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:50:37.209883 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:50:37.209899 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:50:37.209916 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:50:37.209932 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:50:37.209948 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:50:37.209964 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:50:37.209981 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:50:37.210001 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:50:37.210018 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:50:37.210041 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:50:37.210059 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:50:37.210076 kernel: psci: Trusted OS migration not required
Feb 13 19:50:37.210097 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:50:37.210115 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:50:37.210132 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:50:37.210150 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:50:37.210168 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:50:37.210185 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:50:37.210217 kernel: CPU features: detected: Spectre-v2
Feb 13 19:50:37.210241 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:50:37.210284 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:50:37.210305 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:50:37.210323 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:50:37.210373 kernel: alternatives: applying boot alternatives
Feb 13 19:50:37.210399 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:50:37.210418 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:50:37.210436 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:50:37.210453 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:50:37.210471 kernel: Fallback order for Node 0: 0
Feb 13 19:50:37.210488 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:50:37.210505 kernel: Policy zone: Normal
Feb 13 19:50:37.210523 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:50:37.210541 kernel: software IO TLB: area num 2.
Feb 13 19:50:37.210558 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:50:37.210582 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Feb 13 19:50:37.210600 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:50:37.210617 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:50:37.210635 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:50:37.210653 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:50:37.210671 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:50:37.210689 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:50:37.210706 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:50:37.210724 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:50:37.210741 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:50:37.210758 kernel: GICv3: 96 SPIs implemented
Feb 13 19:50:37.210780 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:50:37.210798 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:50:37.210815 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:50:37.210832 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:50:37.210849 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:50:37.210867 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:50:37.210884 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:50:37.210901 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:50:37.210918 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:50:37.210936 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:50:37.210953 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:50:37.210970 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:50:37.210992 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:50:37.211010 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:50:37.211027 kernel: Console: colour dummy device 80x25
Feb 13 19:50:37.211045 kernel: printk: console [tty1] enabled
Feb 13 19:50:37.211063 kernel: ACPI: Core revision 20230628
Feb 13 19:50:37.211081 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:50:37.211099 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:50:37.211117 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:50:37.211135 kernel: landlock: Up and running.
Feb 13 19:50:37.211157 kernel: SELinux: Initializing.
Feb 13 19:50:37.211175 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:50:37.211192 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:50:37.211239 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:50:37.211261 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:50:37.211279 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:50:37.211297 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:50:37.211315 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:50:37.211333 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:50:37.211357 kernel: Remapping and enabling EFI services.
Feb 13 19:50:37.211375 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:50:37.211392 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:50:37.211410 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:50:37.211428 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:50:37.211445 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:50:37.211463 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:50:37.211481 kernel: SMP: Total of 2 processors activated.
Feb 13 19:50:37.211513 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:50:37.211537 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:50:37.211555 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:50:37.211573 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:50:37.211603 kernel: alternatives: applying system-wide alternatives
Feb 13 19:50:37.211626 kernel: devtmpfs: initialized
Feb 13 19:50:37.211644 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:50:37.211663 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:50:37.211681 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:50:37.211700 kernel: SMBIOS 3.0.0 present.
Feb 13 19:50:37.211718 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:50:37.211741 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:50:37.211760 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:50:37.211779 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:50:37.211797 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:50:37.211816 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:50:37.211835 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Feb 13 19:50:37.211853 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:50:37.211876 kernel: cpuidle: using governor menu
Feb 13 19:50:37.211895 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:50:37.211913 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:50:37.211932 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:50:37.211950 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:50:37.211969 kernel: Modules: 17520 pages in range for non-PLT usage
Feb 13 19:50:37.211987 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 19:50:37.212006 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:50:37.212024 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:50:37.212047 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:50:37.212066 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:50:37.212084 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:50:37.212103 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:50:37.212122 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:50:37.212141 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:50:37.212159 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:50:37.212178 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:50:37.212196 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:50:37.212258 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:50:37.212282 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:50:37.212300 kernel: ACPI: Interpreter enabled
Feb 13 19:50:37.212319 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:50:37.212338 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:50:37.212356 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:50:37.214712 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:50:37.214937 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:50:37.215198 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:50:37.215436 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:50:37.215660 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:50:37.217275 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:50:37.217297 kernel: acpiphp: Slot [1] registered
Feb 13 19:50:37.217316 kernel: acpiphp: Slot [2] registered
Feb 13 19:50:37.217334 kernel: acpiphp: Slot [3] registered
Feb 13 19:50:37.217353 kernel: acpiphp: Slot [4] registered
Feb 13 19:50:37.217380 kernel: acpiphp: Slot [5] registered
Feb 13 19:50:37.217399 kernel: acpiphp: Slot [6] registered
Feb 13 19:50:37.217417 kernel: acpiphp: Slot [7] registered
Feb 13 19:50:37.217435 kernel: acpiphp: Slot [8] registered
Feb 13 19:50:37.217454 kernel: acpiphp: Slot [9] registered
Feb 13 19:50:37.217472 kernel: acpiphp: Slot [10] registered
Feb 13 19:50:37.217491 kernel: acpiphp: Slot [11] registered
Feb 13 19:50:37.217509 kernel: acpiphp: Slot [12] registered
Feb 13 19:50:37.217527 kernel: acpiphp: Slot [13] registered
Feb 13 19:50:37.217546 kernel: acpiphp: Slot [14] registered
Feb 13 19:50:37.217569 kernel: acpiphp: Slot [15] registered
Feb 13 19:50:37.217587 kernel: acpiphp: Slot [16] registered
Feb 13 19:50:37.217605 kernel: acpiphp: Slot [17] registered
Feb 13 19:50:37.217623 kernel: acpiphp: Slot [18] registered
Feb 13 19:50:37.217642 kernel: acpiphp: Slot [19] registered
Feb 13 19:50:37.217660 kernel: acpiphp: Slot [20] registered
Feb 13 19:50:37.217678 kernel: acpiphp: Slot [21] registered
Feb 13 19:50:37.217696 kernel: acpiphp: Slot [22] registered
Feb 13 19:50:37.217715 kernel: acpiphp: Slot [23] registered
Feb 13 19:50:37.217737 kernel: acpiphp: Slot [24] registered
Feb 13 19:50:37.217756 kernel: acpiphp: Slot [25] registered
Feb 13 19:50:37.217774 kernel: acpiphp: Slot [26] registered
Feb 13 19:50:37.217793 kernel: acpiphp: Slot [27] registered
Feb 13 19:50:37.217811 kernel: acpiphp: Slot [28] registered
Feb 13 19:50:37.217829 kernel: acpiphp: Slot [29] registered
Feb 13 19:50:37.217848 kernel: acpiphp: Slot [30] registered
Feb 13 19:50:37.217866 kernel: acpiphp: Slot [31] registered
Feb 13 19:50:37.217884 kernel: PCI host bridge to bus 0000:00
Feb 13 19:50:37.218129 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:50:37.218374 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:50:37.218562 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:50:37.218754 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:50:37.218991 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:50:37.220336 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:50:37.220633 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:50:37.220881 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:50:37.221087 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:50:37.222554 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:50:37.222805 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:50:37.223040 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:50:37.223290 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:50:37.223510 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:50:37.223715 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:50:37.223954 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:50:37.224177 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:50:37.224413 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:50:37.224765 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:50:37.224991 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:50:37.225200 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:50:37.227537 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:50:37.227726 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:50:37.227752 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:50:37.227772 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:50:37.227792 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:50:37.227811 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:50:37.227829 kernel: iommu: Default domain type: Translated
Feb 13 19:50:37.227848 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:50:37.227877 kernel: efivars: Registered efivars operations
Feb 13 19:50:37.227896 kernel: vgaarb: loaded
Feb 13 19:50:37.227915 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:50:37.227933 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:50:37.227951 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:50:37.227971 kernel: pnp: PnP ACPI init
Feb 13 19:50:37.228183 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:50:37.228242 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:50:37.228271 kernel: NET: Registered PF_INET protocol family
Feb 13 19:50:37.228291 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:50:37.228310 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:50:37.228330 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:50:37.228349 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:50:37.228368 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:50:37.228387 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:50:37.230384 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:50:37.230404 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:50:37.230431 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:50:37.230450 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:50:37.230469 kernel: kvm [1]: HYP mode not available
Feb 13 19:50:37.230488 kernel: Initialise system trusted keyrings
Feb 13 19:50:37.230507 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:50:37.230525 kernel: Key type asymmetric registered
Feb 13 19:50:37.230544 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:50:37.230562 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:50:37.230581 kernel: io scheduler mq-deadline registered
Feb 13 19:50:37.230604 kernel: io scheduler kyber registered
Feb 13 19:50:37.230623 kernel: io scheduler bfq registered
Feb 13 19:50:37.230883 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:50:37.230914 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:50:37.230933 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:50:37.230952 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:50:37.230971 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:50:37.230990 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:50:37.231016 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:50:37.233337 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:50:37.233383 kernel: printk: console [ttyS0] disabled
Feb 13 19:50:37.233404 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:50:37.233424 kernel: printk: console [ttyS0] enabled
Feb 13 19:50:37.233443 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:50:37.233462 kernel: thunder_xcv, ver 1.0
Feb 13 19:50:37.233480 kernel: thunder_bgx, ver 1.0
Feb 13 19:50:37.233499 kernel: nicpf, ver 1.0
Feb 13 19:50:37.233528 kernel: nicvf, ver 1.0
Feb 13 19:50:37.233789 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:50:37.234015 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:50:36 UTC (1739476236)
Feb 13 19:50:37.234042 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:50:37.234061 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:50:37.234080 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:50:37.234099 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:50:37.234118 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:50:37.234143 kernel: Segment Routing with IPv6
Feb 13 19:50:37.234176 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:50:37.234196 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:50:37.234268 kernel: Key type dns_resolver registered
Feb 13 19:50:37.234290 kernel: registered taskstats version 1
Feb 13 19:50:37.234309 kernel: Loading compiled-in X.509 certificates
Feb 13 19:50:37.234328 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec'
Feb 13 19:50:37.234347 kernel: Key type .fscrypt registered
Feb 13 19:50:37.234366 kernel: Key type fscrypt-provisioning registered
Feb 13 19:50:37.234391 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:50:37.234411 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:50:37.234430 kernel: ima: No architecture policies found
Feb 13 19:50:37.234449 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:50:37.234467 kernel: clk: Disabling unused clocks
Feb 13 19:50:37.234485 kernel: Freeing unused kernel memory: 39360K
Feb 13 19:50:37.234504 kernel: Run /init as init process
Feb 13 19:50:37.234523 kernel: with arguments:
Feb 13 19:50:37.234542 kernel: /init
Feb 13 19:50:37.234560 kernel: with environment:
Feb 13 19:50:37.234584 kernel: HOME=/
Feb 13 19:50:37.234602 kernel: TERM=linux
Feb 13 19:50:37.234621 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:50:37.234644 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:50:37.234668 systemd[1]: Detected virtualization amazon.
Feb 13 19:50:37.234690 systemd[1]: Detected architecture arm64.
Feb 13 19:50:37.234710 systemd[1]: Running in initrd.
Feb 13 19:50:37.234734 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:50:37.234755 systemd[1]: Hostname set to .
Feb 13 19:50:37.234775 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:50:37.234796 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:50:37.234816 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:50:37.234836 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:50:37.234858 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:50:37.234879 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:50:37.234904 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:50:37.234926 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:50:37.234950 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:50:37.234984 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:50:37.235009 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:50:37.235030 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:50:37.235051 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:50:37.235077 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:50:37.235099 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:50:37.235119 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:50:37.235140 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:50:37.235160 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:50:37.235181 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:50:37.235201 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:50:37.235272 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:50:37.235294 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:50:37.235321 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:50:37.235343 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:50:37.235374 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:50:37.235400 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:50:37.235421 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:50:37.235442 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:50:37.235462 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:50:37.235483 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:50:37.235510 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:50:37.235531 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:50:37.235552 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:50:37.235572 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:50:37.235594 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:50:37.235663 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 19:50:37.235708 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:50:37.235730 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:37.235756 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:50:37.235777 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:50:37.235797 kernel: Bridge firewalling registered
Feb 13 19:50:37.235816 systemd-journald[251]: Journal started
Feb 13 19:50:37.235855 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2d776c1bd321cf005ff98c5b9a3b9b) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:50:37.192729 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 19:50:37.232835 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 19:50:37.245259 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:50:37.266374 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:50:37.250090 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:50:37.256604 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:50:37.263524 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:50:37.295336 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:50:37.316825 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:50:37.325342 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:50:37.330256 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:50:37.345648 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:50:37.352567 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:50:37.374039 dracut-cmdline[287]: dracut-dracut-053
Feb 13 19:50:37.384826 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 19:50:37.441028 systemd-resolved[288]: Positive Trust Anchors:
Feb 13 19:50:37.441063 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:50:37.441125 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:50:37.513237 kernel: SCSI subsystem initialized
Feb 13 19:50:37.519242 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:50:37.531244 kernel: iscsi: registered transport (tcp)
Feb 13 19:50:37.553719 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:50:37.553792 kernel: QLogic iSCSI HBA Driver
Feb 13 19:50:37.647345 kernel: random: crng init done
Feb 13 19:50:37.647466 systemd-resolved[288]: Defaulting to hostname 'linux'.
Feb 13 19:50:37.650955 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:50:37.653346 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:50:37.675267 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:50:37.684478 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:50:37.728636 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:50:37.728723 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:50:37.728751 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:50:37.796252 kernel: raid6: neonx8 gen() 6714 MB/s Feb 13 19:50:37.813255 kernel: raid6: neonx4 gen() 6546 MB/s Feb 13 19:50:37.830240 kernel: raid6: neonx2 gen() 5444 MB/s Feb 13 19:50:37.847239 kernel: raid6: neonx1 gen() 3938 MB/s Feb 13 19:50:37.864239 kernel: raid6: int64x8 gen() 3804 MB/s Feb 13 19:50:37.881237 kernel: raid6: int64x4 gen() 3698 MB/s Feb 13 19:50:37.898238 kernel: raid6: int64x2 gen() 3600 MB/s Feb 13 19:50:37.915977 kernel: raid6: int64x1 gen() 2758 MB/s Feb 13 19:50:37.916013 kernel: raid6: using algorithm neonx8 gen() 6714 MB/s Feb 13 19:50:37.933976 kernel: raid6: .... xor() 4828 MB/s, rmw enabled Feb 13 19:50:37.934022 kernel: raid6: using neon recovery algorithm Feb 13 19:50:37.942452 kernel: xor: measuring software checksum speed Feb 13 19:50:37.942508 kernel: 8regs : 10971 MB/sec Feb 13 19:50:37.943540 kernel: 32regs : 11943 MB/sec Feb 13 19:50:37.944706 kernel: arm64_neon : 9512 MB/sec Feb 13 19:50:37.944738 kernel: xor: using function: 32regs (11943 MB/sec) Feb 13 19:50:38.029255 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:50:38.048474 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:50:38.058616 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:50:38.102238 systemd-udevd[470]: Using default interface naming scheme 'v255'. 
Feb 13 19:50:38.111419 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:50:38.125512 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:50:38.159510 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation Feb 13 19:50:38.216452 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:50:38.226510 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:50:38.348126 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:50:38.360620 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:50:38.403263 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:50:38.407562 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:50:38.410033 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:50:38.428386 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:50:38.451327 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:50:38.493055 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:50:38.550273 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:50:38.550341 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 13 19:50:38.561806 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 13 19:50:38.562064 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 13 19:50:38.562340 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:46:67:1f:04:bf Feb 13 19:50:38.565040 (udev-worker)[523]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:38.568828 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 13 19:50:38.570834 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:50:38.588943 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:50:38.591291 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:50:38.591582 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:50:38.591723 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:50:38.598330 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:50:38.621278 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 19:50:38.623239 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 13 19:50:38.630259 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 13 19:50:38.641094 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:50:38.641162 kernel: GPT:9289727 != 16777215 Feb 13 19:50:38.641188 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:50:38.641235 kernel: GPT:9289727 != 16777215 Feb 13 19:50:38.641263 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:50:38.641288 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:50:38.653142 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:50:38.667504 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:50:38.713278 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:50:38.743326 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (545) Feb 13 19:50:38.750511 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (527) Feb 13 19:50:38.826038 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Feb 13 19:50:38.863631 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Feb 13 19:50:38.889012 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Feb 13 19:50:38.889916 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Feb 13 19:50:38.909809 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:50:38.926556 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:50:38.938160 disk-uuid[663]: Primary Header is updated. Feb 13 19:50:38.938160 disk-uuid[663]: Secondary Entries is updated. Feb 13 19:50:38.938160 disk-uuid[663]: Secondary Header is updated. Feb 13 19:50:38.948235 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:50:38.958244 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:50:38.969249 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:50:39.968258 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:50:39.970373 disk-uuid[664]: The operation has completed successfully. Feb 13 19:50:40.155995 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:50:40.156595 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:50:40.207510 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Feb 13 19:50:40.223317 sh[1008]: Success Feb 13 19:50:40.248245 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:50:40.383592 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:50:40.387730 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:50:40.394169 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:50:40.441659 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 Feb 13 19:50:40.441721 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:50:40.441748 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:50:40.443322 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:50:40.444515 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:50:40.547260 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 19:50:40.571364 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:50:40.575291 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:50:40.586498 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:50:40.591465 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:50:40.636164 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:50:40.636273 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:50:40.636307 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:50:40.644276 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:50:40.663320 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Feb 13 19:50:40.666751 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:50:40.678344 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:50:40.688579 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:50:40.769872 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:50:40.781568 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:50:40.837831 systemd-networkd[1200]: lo: Link UP Feb 13 19:50:40.837854 systemd-networkd[1200]: lo: Gained carrier Feb 13 19:50:40.841488 systemd-networkd[1200]: Enumeration completed Feb 13 19:50:40.841665 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:50:40.843873 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:50:40.843880 systemd-networkd[1200]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:50:40.844231 systemd[1]: Reached target network.target - Network. Feb 13 19:50:40.849690 systemd-networkd[1200]: eth0: Link UP Feb 13 19:50:40.849698 systemd-networkd[1200]: eth0: Gained carrier Feb 13 19:50:40.849716 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 19:50:40.882311 systemd-networkd[1200]: eth0: DHCPv4 address 172.31.16.124/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:50:41.064132 ignition[1133]: Ignition 2.19.0 Feb 13 19:50:41.064153 ignition[1133]: Stage: fetch-offline Feb 13 19:50:41.064840 ignition[1133]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:41.064958 ignition[1133]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:41.069573 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:50:41.065930 ignition[1133]: Ignition finished successfully Feb 13 19:50:41.088489 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 19:50:41.112042 ignition[1210]: Ignition 2.19.0 Feb 13 19:50:41.112075 ignition[1210]: Stage: fetch Feb 13 19:50:41.113539 ignition[1210]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:41.113565 ignition[1210]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:41.113713 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:41.122083 ignition[1210]: PUT result: OK Feb 13 19:50:41.125409 ignition[1210]: parsed url from cmdline: "" Feb 13 19:50:41.125425 ignition[1210]: no config URL provided Feb 13 19:50:41.125441 ignition[1210]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:50:41.125467 ignition[1210]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:50:41.125498 ignition[1210]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:41.127121 ignition[1210]: PUT result: OK Feb 13 19:50:41.127198 ignition[1210]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 13 19:50:41.133291 ignition[1210]: GET result: OK Feb 13 19:50:41.133860 ignition[1210]: parsing config with SHA512: a2041910dc16c76ae910bd70752809947df4815b0c2f99c08602bf3d9a5c983298831db83d93d3208f7c08af76c75e99072e570811b177c4f243d4015e238fb6 Feb 13 19:50:41.144034 unknown[1210]: fetched base config from "system" Feb 13 
19:50:41.144058 unknown[1210]: fetched base config from "system" Feb 13 19:50:41.144071 unknown[1210]: fetched user config from "aws" Feb 13 19:50:41.149410 ignition[1210]: fetch: fetch complete Feb 13 19:50:41.150201 ignition[1210]: fetch: fetch passed Feb 13 19:50:41.151084 ignition[1210]: Ignition finished successfully Feb 13 19:50:41.158271 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 19:50:41.167565 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:50:41.198036 ignition[1217]: Ignition 2.19.0 Feb 13 19:50:41.198068 ignition[1217]: Stage: kargs Feb 13 19:50:41.199019 ignition[1217]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:41.199046 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:41.199277 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:41.201368 ignition[1217]: PUT result: OK Feb 13 19:50:41.211234 ignition[1217]: kargs: kargs passed Feb 13 19:50:41.211345 ignition[1217]: Ignition finished successfully Feb 13 19:50:41.218264 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:50:41.229575 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:50:41.254487 ignition[1223]: Ignition 2.19.0 Feb 13 19:50:41.254516 ignition[1223]: Stage: disks Feb 13 19:50:41.255701 ignition[1223]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:41.255727 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:41.255875 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:41.257869 ignition[1223]: PUT result: OK Feb 13 19:50:41.267659 ignition[1223]: disks: disks passed Feb 13 19:50:41.267933 ignition[1223]: Ignition finished successfully Feb 13 19:50:41.273266 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:50:41.276038 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Feb 13 19:50:41.278445 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:50:41.284553 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:50:41.290374 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:50:41.294061 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:50:41.309565 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:50:41.352085 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:50:41.356533 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:50:41.368222 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:50:41.466243 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none. Feb 13 19:50:41.467758 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:50:41.471757 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:50:41.486396 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:50:41.499746 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:50:41.503867 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:50:41.504796 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:50:41.504847 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:50:41.517760 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:50:41.527588 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 19:50:41.540299 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1250) Feb 13 19:50:41.545988 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:50:41.546052 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:50:41.546080 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:50:41.560248 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:50:41.562986 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:50:41.836135 initrd-setup-root[1274]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:50:41.855057 initrd-setup-root[1281]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:50:41.875621 initrd-setup-root[1288]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:50:41.883950 initrd-setup-root[1295]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:50:42.154234 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:50:42.163421 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:50:42.172468 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:50:42.192760 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:50:42.194890 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:50:42.228348 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 19:50:42.242693 ignition[1363]: INFO : Ignition 2.19.0 Feb 13 19:50:42.242693 ignition[1363]: INFO : Stage: mount Feb 13 19:50:42.245902 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:42.245902 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:42.249950 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:42.252903 ignition[1363]: INFO : PUT result: OK Feb 13 19:50:42.257378 ignition[1363]: INFO : mount: mount passed Feb 13 19:50:42.257378 ignition[1363]: INFO : Ignition finished successfully Feb 13 19:50:42.259476 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:50:42.273391 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:50:42.487757 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:50:42.508257 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1374) Feb 13 19:50:42.511805 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:50:42.511854 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:50:42.511880 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:50:42.518263 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:50:42.520751 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:50:42.561512 ignition[1391]: INFO : Ignition 2.19.0 Feb 13 19:50:42.561512 ignition[1391]: INFO : Stage: files Feb 13 19:50:42.564836 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:42.564836 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:42.564836 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:42.571256 ignition[1391]: INFO : PUT result: OK Feb 13 19:50:42.575648 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:50:42.578601 ignition[1391]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:50:42.578601 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:50:42.585295 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:50:42.588071 ignition[1391]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:50:42.590956 unknown[1391]: wrote ssh authorized keys file for user: core Feb 13 19:50:42.593050 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:50:42.605764 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:50:42.605764 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 19:50:42.724300 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:50:42.733489 systemd-networkd[1200]: eth0: Gained IPv6LL Feb 13 19:50:42.937092 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:50:42.940842 ignition[1391]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:50:42.944428 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:50:42.947731 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:50:42.950960 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:50:42.950960 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:50:42.958259 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:50:42.958259 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:50:42.958259 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:50:42.958259 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:50:42.971260 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:50:42.971260 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:50:42.971260 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:50:42.971260 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:50:42.971260 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Feb 13 19:50:43.305736 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 19:50:43.645647 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 19:50:43.645647 ignition[1391]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 19:50:43.652426 ignition[1391]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:50:43.652426 ignition[1391]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:50:43.652426 ignition[1391]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 19:50:43.652426 ignition[1391]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:50:43.652426 ignition[1391]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:50:43.652426 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:50:43.652426 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:50:43.652426 ignition[1391]: INFO : files: files passed Feb 13 19:50:43.652426 ignition[1391]: INFO : Ignition finished successfully Feb 13 19:50:43.679262 systemd[1]: Finished ignition-files.service - Ignition (files). 
Feb 13 19:50:43.689518 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:50:43.699623 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:50:43.712671 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:50:43.713388 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:50:43.730925 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:50:43.730925 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:50:43.737116 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:50:43.743495 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:50:43.746666 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:50:43.758477 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:50:43.809417 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:50:43.809836 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:50:43.816972 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:50:43.818941 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:50:43.820908 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:50:43.832482 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:50:43.864193 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:50:43.883643 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Feb 13 19:50:43.907376 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:50:43.910499 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:50:43.916770 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:50:43.918905 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:50:43.919229 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:50:43.928412 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:50:43.930969 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:50:43.935900 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:50:43.939007 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:50:43.945259 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:50:43.948177 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:50:43.953502 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:50:43.956057 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:50:43.958259 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:50:43.962342 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:50:43.968781 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:50:43.969022 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:50:43.975320 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:50:43.977519 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:50:43.980041 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:50:43.984494 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Feb 13 19:50:43.986877 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:50:43.987102 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:50:43.989501 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:50:43.989723 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:50:43.992172 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:50:43.992394 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:50:44.012071 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:50:44.014127 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:50:44.014425 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:50:44.023920 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:50:44.034469 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:50:44.035272 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:50:44.042011 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:50:44.042304 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:50:44.060827 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:50:44.061653 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Feb 13 19:50:44.083263 ignition[1444]: INFO : Ignition 2.19.0 Feb 13 19:50:44.083263 ignition[1444]: INFO : Stage: umount Feb 13 19:50:44.086839 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:50:44.086839 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:50:44.086839 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:50:44.084659 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:50:44.095261 ignition[1444]: INFO : PUT result: OK Feb 13 19:50:44.103002 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:50:44.103742 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:50:44.109436 ignition[1444]: INFO : umount: umount passed Feb 13 19:50:44.109436 ignition[1444]: INFO : Ignition finished successfully Feb 13 19:50:44.111433 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:50:44.111660 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:50:44.120929 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:50:44.121116 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:50:44.126273 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:50:44.126380 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:50:44.128747 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:50:44.128833 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:50:44.131809 systemd[1]: Stopped target network.target - Network. Feb 13 19:50:44.133545 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:50:44.133860 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:50:44.137435 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:50:44.139074 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Feb 13 19:50:44.143032 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:50:44.150547 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:50:44.152198 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:50:44.154038 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:50:44.154119 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:50:44.155989 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:50:44.156059 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:50:44.158180 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:50:44.158280 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:50:44.160744 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:50:44.160822 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:50:44.188480 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:50:44.188596 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:50:44.191114 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:50:44.198533 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:50:44.199980 systemd-networkd[1200]: eth0: DHCPv6 lease lost
Feb 13 19:50:44.205462 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:50:44.205702 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:50:44.210344 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:50:44.210558 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:50:44.221529 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:50:44.221620 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:50:44.242364 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:50:44.246746 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:50:44.246869 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:50:44.253259 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:50:44.253357 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:50:44.255391 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:50:44.255476 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:50:44.257523 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:50:44.257600 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:50:44.260020 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:50:44.295054 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:50:44.295547 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:50:44.299975 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:50:44.300394 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:50:44.311002 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:50:44.311133 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:50:44.316267 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:50:44.316347 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:50:44.323309 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:50:44.323405 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:50:44.325700 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:50:44.325787 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:50:44.334642 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:50:44.334735 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:50:44.354571 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:50:44.360320 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:50:44.360442 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:50:44.362897 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:50:44.363005 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:44.370986 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:50:44.371219 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:50:44.373677 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:50:44.380842 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:50:44.420684 systemd[1]: Switching root.
Feb 13 19:50:44.472383 systemd-journald[251]: Journal stopped
Feb 13 19:50:46.885086 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:50:46.890303 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:50:46.890619 kernel: SELinux: policy capability open_perms=1
Feb 13 19:50:46.890659 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:50:46.890691 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:50:46.890722 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:50:46.890752 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:50:46.890782 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:50:46.890812 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:50:46.890841 kernel: audit: type=1403 audit(1739476244.977:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:50:46.890884 systemd[1]: Successfully loaded SELinux policy in 69.424ms.
Feb 13 19:50:46.890931 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.210ms.
Feb 13 19:50:46.890965 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:50:46.890999 systemd[1]: Detected virtualization amazon.
Feb 13 19:50:46.891029 systemd[1]: Detected architecture arm64.
Feb 13 19:50:46.891058 systemd[1]: Detected first boot.
Feb 13 19:50:46.891089 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:50:46.891120 zram_generator::config[1486]: No configuration found.
Feb 13 19:50:46.891160 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:50:46.891192 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:50:46.893575 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:50:46.893627 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:50:46.893662 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:50:46.893707 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:50:46.893740 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:50:46.893769 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:50:46.893801 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:50:46.893841 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:50:46.893874 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:50:46.893905 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:50:46.893936 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:50:46.893966 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:50:46.893998 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:50:46.894030 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:50:46.894071 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:50:46.894105 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:50:46.894139 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:50:46.894171 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:50:46.897102 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:50:46.897166 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:50:46.897199 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:50:46.897258 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:50:46.897293 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:50:46.897335 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:50:46.897369 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:50:46.897401 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:50:46.897431 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:50:46.897463 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:50:46.897497 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:50:46.897527 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:50:46.897559 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:50:46.897589 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:50:46.897619 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:50:46.897653 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:50:46.897685 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:50:46.897717 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:50:46.897747 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:50:46.897778 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:50:46.897811 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:50:46.897843 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:50:46.897874 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:50:46.897908 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:50:46.897938 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:50:46.897969 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:50:46.897999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:50:46.898031 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:50:46.898061 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:50:46.898090 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:50:46.898120 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:50:46.898152 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:50:46.898186 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:50:46.899354 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:50:46.899394 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:50:46.899424 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:50:46.899468 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:50:46.899498 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:50:46.899528 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:50:46.899558 kernel: fuse: init (API version 7.39)
Feb 13 19:50:46.899596 kernel: loop: module loaded
Feb 13 19:50:46.899626 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:50:46.899656 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:50:46.899687 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:50:46.899717 systemd[1]: Stopped verity-setup.service.
Feb 13 19:50:46.899746 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:50:46.899775 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:50:46.899806 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:50:46.899841 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:50:46.899872 kernel: ACPI: bus type drm_connector registered
Feb 13 19:50:46.899902 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:50:46.899932 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:50:46.899963 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:50:46.899994 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:50:46.900027 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:50:46.900057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:50:46.900086 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:50:46.900117 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:50:46.900149 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:50:46.900181 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:50:46.900252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:50:46.900404 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:50:46.900436 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:50:46.900471 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:50:46.900543 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:50:46.900583 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:50:46.900614 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:50:46.900644 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:50:46.900679 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:50:46.900710 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:50:46.900786 systemd-journald[1568]: Collecting audit messages is disabled.
Feb 13 19:50:46.900838 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:50:46.900870 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:50:46.900902 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:50:46.900936 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:50:46.900965 systemd-journald[1568]: Journal started
Feb 13 19:50:46.901018 systemd-journald[1568]: Runtime Journal (/run/log/journal/ec2d776c1bd321cf005ff98c5b9a3b9b) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:50:46.227786 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:50:46.290506 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 19:50:46.291300 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:50:46.914381 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:50:46.924236 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:50:46.924321 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:50:46.939267 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:50:46.946745 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:50:46.956277 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:50:46.964417 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:50:46.970236 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:50:46.975638 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:50:46.986331 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:50:46.985324 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:50:46.990016 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:50:46.998446 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:50:47.002319 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:50:47.015452 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:50:47.039117 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:50:47.051345 kernel: loop0: detected capacity change from 0 to 194096
Feb 13 19:50:47.063089 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:50:47.081848 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:50:47.105018 systemd-journald[1568]: Time spent on flushing to /var/log/journal/ec2d776c1bd321cf005ff98c5b9a3b9b is 82.854ms for 909 entries.
Feb 13 19:50:47.105018 systemd-journald[1568]: System Journal (/var/log/journal/ec2d776c1bd321cf005ff98c5b9a3b9b) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:50:47.218983 systemd-journald[1568]: Received client request to flush runtime journal.
Feb 13 19:50:47.219074 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:50:47.219137 kernel: loop1: detected capacity change from 0 to 114432
Feb 13 19:50:47.105593 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:50:47.116486 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:50:47.125554 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:50:47.128551 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:50:47.224075 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:50:47.230568 udevadm[1625]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:50:47.231874 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:50:47.238808 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:50:47.268294 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:50:47.280551 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:50:47.329883 kernel: loop2: detected capacity change from 0 to 52536
Feb 13 19:50:47.335534 systemd-tmpfiles[1635]: ACLs are not supported, ignoring.
Feb 13 19:50:47.336090 systemd-tmpfiles[1635]: ACLs are not supported, ignoring.
Feb 13 19:50:47.347965 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:50:47.450258 kernel: loop3: detected capacity change from 0 to 114328
Feb 13 19:50:47.554260 kernel: loop4: detected capacity change from 0 to 194096
Feb 13 19:50:47.585255 kernel: loop5: detected capacity change from 0 to 114432
Feb 13 19:50:47.604257 kernel: loop6: detected capacity change from 0 to 52536
Feb 13 19:50:47.616450 kernel: loop7: detected capacity change from 0 to 114328
Feb 13 19:50:47.631690 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 19:50:47.634900 (sd-merge)[1641]: Merged extensions into '/usr'.
Feb 13 19:50:47.645568 systemd[1]: Reloading requested from client PID 1597 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:50:47.645761 systemd[1]: Reloading...
Feb 13 19:50:47.764241 zram_generator::config[1667]: No configuration found.
Feb 13 19:50:48.178545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:50:48.304775 systemd[1]: Reloading finished in 658 ms.
Feb 13 19:50:48.350307 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:50:48.353985 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:50:48.374459 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:50:48.382633 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:50:48.399422 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:50:48.408557 systemd[1]: Reloading requested from client PID 1719 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:50:48.408591 systemd[1]: Reloading...
Feb 13 19:50:48.493524 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:50:48.494190 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:50:48.496035 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:50:48.496619 systemd-tmpfiles[1720]: ACLs are not supported, ignoring.
Feb 13 19:50:48.496756 systemd-tmpfiles[1720]: ACLs are not supported, ignoring.
Feb 13 19:50:48.503902 systemd-udevd[1721]: Using default interface naming scheme 'v255'.
Feb 13 19:50:48.508922 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:50:48.508951 systemd-tmpfiles[1720]: Skipping /boot
Feb 13 19:50:48.566715 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:50:48.566745 systemd-tmpfiles[1720]: Skipping /boot
Feb 13 19:50:48.649802 ldconfig[1590]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:50:48.654587 zram_generator::config[1757]: No configuration found.
Feb 13 19:50:48.820901 (udev-worker)[1762]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:50:49.019821 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:50:49.093240 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1754)
Feb 13 19:50:49.167720 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:50:49.168387 systemd[1]: Reloading finished in 759 ms.
Feb 13 19:50:49.208048 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:50:49.212298 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:50:49.224263 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:50:49.274728 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:50:49.298628 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 19:50:49.305681 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:50:49.308343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:50:49.311513 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:50:49.318884 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:50:49.325718 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:50:49.332559 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:50:49.335083 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:50:49.339545 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:50:49.346549 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:50:49.354530 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:50:49.357403 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:50:49.366451 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:50:49.371680 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:50:49.415519 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:50:49.433301 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:50:49.434369 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:50:49.448585 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:50:49.451310 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:50:49.454706 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:50:49.491569 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:50:49.493867 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:50:49.509537 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:50:49.527064 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:50:49.530346 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:50:49.531386 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:50:49.536685 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:50:49.565120 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:50:49.577669 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:50:49.581806 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:50:49.584522 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:50:49.609494 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:50:49.615307 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:50:49.617765 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:50:49.618981 lvm[1942]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:50:49.672255 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:50:49.676335 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:50:49.678059 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:50:49.690676 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:50:49.703718 augenrules[1960]: No rules
Feb 13 19:50:49.708351 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:50:49.711633 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 19:50:49.728188 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:50:49.737392 lvm[1958]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:50:49.777929 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:49.788113 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:50:49.866823 systemd-networkd[1924]: lo: Link UP
Feb 13 19:50:49.866848 systemd-networkd[1924]: lo: Gained carrier
Feb 13 19:50:49.869630 systemd-networkd[1924]: Enumeration completed
Feb 13 19:50:49.869800 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:50:49.874735 systemd-networkd[1924]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:50:49.874757 systemd-networkd[1924]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:50:49.877142 systemd-networkd[1924]: eth0: Link UP
Feb 13 19:50:49.877648 systemd-networkd[1924]: eth0: Gained carrier
Feb 13 19:50:49.877684 systemd-networkd[1924]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:50:49.880620 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:50:49.886411 systemd-resolved[1925]: Positive Trust Anchors:
Feb 13 19:50:49.886455 systemd-resolved[1925]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:50:49.886520 systemd-resolved[1925]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:50:49.889349 systemd-networkd[1924]: eth0: DHCPv4 address 172.31.16.124/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:50:49.898624 systemd-resolved[1925]: Defaulting to hostname 'linux'.
Feb 13 19:50:49.902603 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:50:49.905134 systemd[1]: Reached target network.target - Network. Feb 13 19:50:49.907585 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:50:49.912129 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:50:49.914607 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:50:49.916938 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:50:49.919653 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:50:49.921918 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:50:49.924385 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:50:49.926683 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:50:49.926742 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:50:49.928369 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:50:49.931434 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:50:49.935911 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:50:49.947824 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:50:49.951052 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:50:49.953497 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:50:49.955629 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:50:49.958148 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Feb 13 19:50:49.958200 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:50:49.965471 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:50:49.976237 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:50:49.983591 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:50:49.989462 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:50:50.005707 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:50:50.008353 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:50:50.011565 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:50:50.024705 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:50:50.032946 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:50:50.040442 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:50:50.048692 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:50:50.059625 jq[1984]: false Feb 13 19:50:50.059470 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:50:50.078804 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:50:50.085352 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:50:50.087368 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:50:50.110539 systemd[1]: Starting update-engine.service - Update Engine... 
Feb 13 19:50:50.121477 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:50:50.132600 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:50:50.133789 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:50:50.183242 jq[1998]: true Feb 13 19:50:50.187697 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:50:50.219730 dbus-daemon[1983]: [system] SELinux support is enabled Feb 13 19:50:50.220028 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:50:50.229231 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:50:50.229499 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:50:50.234875 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:50:50.235296 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:50:50.241092 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:50:50.242685 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:50:50.259802 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1924 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:50:50.274704 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Feb 13 19:50:50.289574 extend-filesystems[1985]: Found loop4 Feb 13 19:50:50.289574 extend-filesystems[1985]: Found loop5 Feb 13 19:50:50.289574 extend-filesystems[1985]: Found loop6 Feb 13 19:50:50.289574 extend-filesystems[1985]: Found loop7 Feb 13 19:50:50.289574 extend-filesystems[1985]: Found nvme0n1 Feb 13 19:50:50.289574 extend-filesystems[1985]: Found nvme0n1p1 Feb 13 19:50:50.289574 extend-filesystems[1985]: Found nvme0n1p2 Feb 13 19:50:50.289574 extend-filesystems[1985]: Found nvme0n1p3 Feb 13 19:50:50.289574 extend-filesystems[1985]: Found usr Feb 13 19:50:50.289574 extend-filesystems[1985]: Found nvme0n1p4 Feb 13 19:50:50.289574 extend-filesystems[1985]: Found nvme0n1p6 Feb 13 19:50:50.289574 extend-filesystems[1985]: Found nvme0n1p7 Feb 13 19:50:50.289574 extend-filesystems[1985]: Found nvme0n1p9 Feb 13 19:50:50.289574 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9 Feb 13 19:50:50.390381 jq[2011]: true Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: ---------------------------------------------------- Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: corporation. 
Support and training for ntp-4 are Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: available at https://www.nwtime.org/support Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: ---------------------------------------------------- Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: proto: precision = 0.108 usec (-23) Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: basedate set to 2025-02-01 Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: gps base set to 2025-02-02 (week 2352) Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: Listen normally on 3 eth0 172.31.16.124:123 Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: Listen normally on 4 lo [::1]:123 Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: bind(21) AF_INET6 fe80::446:67ff:fe1f:4bf%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: unable to create socket on eth0 (5) for fe80::446:67ff:fe1f:4bf%2#123 Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: failed to init interface for address fe80::446:67ff:fe1f:4bf%2 Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:50:50.411999 ntpd[1987]: 13 Feb 19:50:50 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:50:50.414361 update_engine[1995]: I20250213 19:50:50.298015 1995 main.cc:92] Flatcar Update Engine starting Feb 13 19:50:50.414361 
update_engine[1995]: I20250213 19:50:50.325996 1995 update_check_scheduler.cc:74] Next update check in 7m34s Feb 13 19:50:50.324864 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:50:50.337775 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting Feb 13 19:50:50.415200 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9 Feb 13 19:50:50.442816 coreos-metadata[1982]: Feb 13 19:50:50.398 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:50:50.442816 coreos-metadata[1982]: Feb 13 19:50:50.406 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:50:50.442816 coreos-metadata[1982]: Feb 13 19:50:50.408 INFO Fetch successful Feb 13 19:50:50.442816 coreos-metadata[1982]: Feb 13 19:50:50.408 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:50:50.442816 coreos-metadata[1982]: Feb 13 19:50:50.411 INFO Fetch successful Feb 13 19:50:50.442816 coreos-metadata[1982]: Feb 13 19:50:50.411 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:50:50.442816 coreos-metadata[1982]: Feb 13 19:50:50.422 INFO Fetch successful Feb 13 19:50:50.442816 coreos-metadata[1982]: Feb 13 19:50:50.422 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:50:50.442816 coreos-metadata[1982]: Feb 13 19:50:50.423 INFO Fetch successful Feb 13 19:50:50.442816 coreos-metadata[1982]: Feb 13 19:50:50.423 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:50:50.442816 coreos-metadata[1982]: Feb 13 19:50:50.424 INFO Fetch failed with 404: resource not found Feb 13 19:50:50.442816 coreos-metadata[1982]: Feb 13 19:50:50.424 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:50:50.442816 coreos-metadata[1982]: Feb 13 19:50:50.430 INFO Fetch successful Feb 13 19:50:50.442816 
coreos-metadata[1982]: Feb 13 19:50:50.430 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:50:50.349827 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:50:50.337824 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:50:50.454435 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:50:50.454574 extend-filesystems[2037]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:50:50.472371 tar[2000]: linux-arm64/helm Feb 13 19:50:50.374491 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:50:50.337844 ntpd[1987]: ---------------------------------------------------- Feb 13 19:50:50.473060 coreos-metadata[1982]: Feb 13 19:50:50.447 INFO Fetch successful Feb 13 19:50:50.473060 coreos-metadata[1982]: Feb 13 19:50:50.447 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:50:50.473060 coreos-metadata[1982]: Feb 13 19:50:50.451 INFO Fetch successful Feb 13 19:50:50.473060 coreos-metadata[1982]: Feb 13 19:50:50.451 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:50:50.473060 coreos-metadata[1982]: Feb 13 19:50:50.453 INFO Fetch successful Feb 13 19:50:50.473060 coreos-metadata[1982]: Feb 13 19:50:50.453 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:50:50.473060 coreos-metadata[1982]: Feb 13 19:50:50.455 INFO Fetch successful Feb 13 19:50:50.374855 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:50:50.337864 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:50:50.407749 (ntainerd)[2024]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:50:50.337884 ntpd[1987]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:50:50.433842 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:50:50.337903 ntpd[1987]: corporation. Support and training for ntp-4 are Feb 13 19:50:50.337922 ntpd[1987]: available at https://www.nwtime.org/support Feb 13 19:50:50.337940 ntpd[1987]: ---------------------------------------------------- Feb 13 19:50:50.357996 ntpd[1987]: proto: precision = 0.108 usec (-23) Feb 13 19:50:50.366457 ntpd[1987]: basedate set to 2025-02-01 Feb 13 19:50:50.366493 ntpd[1987]: gps base set to 2025-02-02 (week 2352) Feb 13 19:50:50.380554 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:50:50.380632 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:50:50.382816 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:50:50.382891 ntpd[1987]: Listen normally on 3 eth0 172.31.16.124:123 Feb 13 19:50:50.382958 ntpd[1987]: Listen normally on 4 lo [::1]:123 Feb 13 19:50:50.383041 ntpd[1987]: bind(21) AF_INET6 fe80::446:67ff:fe1f:4bf%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:50:50.383101 ntpd[1987]: unable to create socket on eth0 (5) for fe80::446:67ff:fe1f:4bf%2#123 Feb 13 19:50:50.383130 ntpd[1987]: failed to init interface for address fe80::446:67ff:fe1f:4bf%2 Feb 13 19:50:50.383186 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Feb 13 19:50:50.407983 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:50:50.408035 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:50:50.596242 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:50:50.609979 extend-filesystems[2037]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:50:50.609979 extend-filesystems[2037]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:50:50.609979 extend-filesystems[2037]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Feb 13 19:50:50.630622 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:50:50.632966 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:50:50.634329 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:50:50.647436 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:50:50.652947 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:50:50.656862 systemd-logind[1994]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:50:50.656907 systemd-logind[1994]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 19:50:50.662512 systemd-logind[1994]: New seat seat0. Feb 13 19:50:50.723893 bash[2068]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:50:50.723027 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:50:50.754570 systemd[1]: Starting sshkeys.service... Feb 13 19:50:50.758333 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:50:50.794463 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:50:50.805916 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:50:50.873603 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1783) Feb 13 19:50:50.886024 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:50:50.886381 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Feb 13 19:50:50.892723 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2022 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:50:50.904805 locksmithd[2030]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:50:50.941101 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:50:50.960645 polkitd[2093]: Started polkitd version 121 Feb 13 19:50:50.978561 polkitd[2093]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:50:50.978706 polkitd[2093]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:50:50.980606 polkitd[2093]: Finished loading, compiling and executing 2 rules Feb 13 19:50:50.992499 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:50:50.992800 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:50:50.997290 polkitd[2093]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:50:51.074225 systemd-hostnamed[2022]: Hostname set to (transient) Feb 13 19:50:51.078830 systemd-resolved[1925]: System hostname changed to 'ip-172-31-16-124'. 
Feb 13 19:50:51.111266 containerd[2024]: time="2025-02-13T19:50:51.107862680Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:50:51.160198 coreos-metadata[2077]: Feb 13 19:50:51.159 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:50:51.163648 coreos-metadata[2077]: Feb 13 19:50:51.163 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:50:51.164593 coreos-metadata[2077]: Feb 13 19:50:51.164 INFO Fetch successful Feb 13 19:50:51.164593 coreos-metadata[2077]: Feb 13 19:50:51.164 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:50:51.168303 coreos-metadata[2077]: Feb 13 19:50:51.165 INFO Fetch successful Feb 13 19:50:51.173368 unknown[2077]: wrote ssh authorized keys file for user: core Feb 13 19:50:51.309668 update-ssh-keys[2166]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:50:51.313473 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:50:51.326008 systemd[1]: Finished sshkeys.service. Feb 13 19:50:51.333955 containerd[2024]: time="2025-02-13T19:50:51.333889857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:50:51.339597 ntpd[1987]: bind(24) AF_INET6 fe80::446:67ff:fe1f:4bf%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:50:51.340904 ntpd[1987]: 13 Feb 19:50:51 ntpd[1987]: bind(24) AF_INET6 fe80::446:67ff:fe1f:4bf%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:50:51.340904 ntpd[1987]: 13 Feb 19:50:51 ntpd[1987]: unable to create socket on eth0 (6) for fe80::446:67ff:fe1f:4bf%2#123 Feb 13 19:50:51.340904 ntpd[1987]: 13 Feb 19:50:51 ntpd[1987]: failed to init interface for address fe80::446:67ff:fe1f:4bf%2 Feb 13 19:50:51.339655 ntpd[1987]: unable to create socket on eth0 (6) for fe80::446:67ff:fe1f:4bf%2#123 Feb 13 19:50:51.339684 ntpd[1987]: failed to init interface for address fe80::446:67ff:fe1f:4bf%2 Feb 13 19:50:51.342028 containerd[2024]: time="2025-02-13T19:50:51.341963553Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:50:51.342166 containerd[2024]: time="2025-02-13T19:50:51.342137829Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:50:51.342413 containerd[2024]: time="2025-02-13T19:50:51.342383601Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:50:51.342785 containerd[2024]: time="2025-02-13T19:50:51.342754761Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:50:51.344250 containerd[2024]: time="2025-02-13T19:50:51.342897921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:50:51.344250 containerd[2024]: time="2025-02-13T19:50:51.343029093Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:50:51.344250 containerd[2024]: time="2025-02-13T19:50:51.343068657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:51.344250 containerd[2024]: time="2025-02-13T19:50:51.343380393Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:50:51.344250 containerd[2024]: time="2025-02-13T19:50:51.343417329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:51.344250 containerd[2024]: time="2025-02-13T19:50:51.343458813Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:50:51.344250 containerd[2024]: time="2025-02-13T19:50:51.343487829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:51.344250 containerd[2024]: time="2025-02-13T19:50:51.343657677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:51.344250 containerd[2024]: time="2025-02-13T19:50:51.344041017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:50:51.345476 containerd[2024]: time="2025-02-13T19:50:51.345419829Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:50:51.345647 containerd[2024]: time="2025-02-13T19:50:51.345618405Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:50:51.345954 containerd[2024]: time="2025-02-13T19:50:51.345924069Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:50:51.346385 containerd[2024]: time="2025-02-13T19:50:51.346355097Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:50:51.354246 containerd[2024]: time="2025-02-13T19:50:51.353149245Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:50:51.356249 containerd[2024]: time="2025-02-13T19:50:51.354458181Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:50:51.356249 containerd[2024]: time="2025-02-13T19:50:51.354607017Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:50:51.356249 containerd[2024]: time="2025-02-13T19:50:51.354650553Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:50:51.356249 containerd[2024]: time="2025-02-13T19:50:51.354686073Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:50:51.356249 containerd[2024]: time="2025-02-13T19:50:51.354937629Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:50:51.362037 containerd[2024]: time="2025-02-13T19:50:51.359421933Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 13 19:50:51.362037 containerd[2024]: time="2025-02-13T19:50:51.359711553Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:50:51.362037 containerd[2024]: time="2025-02-13T19:50:51.359749233Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:50:51.362037 containerd[2024]: time="2025-02-13T19:50:51.359782809Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:50:51.362037 containerd[2024]: time="2025-02-13T19:50:51.359817285Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:50:51.362037 containerd[2024]: time="2025-02-13T19:50:51.359861109Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:50:51.362037 containerd[2024]: time="2025-02-13T19:50:51.359891253Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:50:51.362037 containerd[2024]: time="2025-02-13T19:50:51.359923113Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:50:51.362037 containerd[2024]: time="2025-02-13T19:50:51.359955105Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:50:51.362037 containerd[2024]: time="2025-02-13T19:50:51.359986209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:50:51.362037 containerd[2024]: time="2025-02-13T19:50:51.360016389Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 13 19:50:51.362037 containerd[2024]: time="2025-02-13T19:50:51.360046509Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:50:51.362037 containerd[2024]: time="2025-02-13T19:50:51.360087213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.362037 containerd[2024]: time="2025-02-13T19:50:51.360118017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.362747 containerd[2024]: time="2025-02-13T19:50:51.360147369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.364965 containerd[2024]: time="2025-02-13T19:50:51.360200781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.364965 containerd[2024]: time="2025-02-13T19:50:51.364399917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.364965 containerd[2024]: time="2025-02-13T19:50:51.364436061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.364965 containerd[2024]: time="2025-02-13T19:50:51.364466409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.364965 containerd[2024]: time="2025-02-13T19:50:51.364516437Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.364965 containerd[2024]: time="2025-02-13T19:50:51.364554573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.364965 containerd[2024]: time="2025-02-13T19:50:51.364591809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Feb 13 19:50:51.364965 containerd[2024]: time="2025-02-13T19:50:51.364628397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.364965 containerd[2024]: time="2025-02-13T19:50:51.364658109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.364965 containerd[2024]: time="2025-02-13T19:50:51.364690869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.364965 containerd[2024]: time="2025-02-13T19:50:51.364726125Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:50:51.364965 containerd[2024]: time="2025-02-13T19:50:51.364773321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.364965 containerd[2024]: time="2025-02-13T19:50:51.364805253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.364965 containerd[2024]: time="2025-02-13T19:50:51.364845825Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:50:51.369267 containerd[2024]: time="2025-02-13T19:50:51.366833973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:50:51.369267 containerd[2024]: time="2025-02-13T19:50:51.366902661Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:50:51.369267 containerd[2024]: time="2025-02-13T19:50:51.366929649Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 13 19:50:51.369267 containerd[2024]: time="2025-02-13T19:50:51.366958341Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:50:51.369267 containerd[2024]: time="2025-02-13T19:50:51.366982905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.369267 containerd[2024]: time="2025-02-13T19:50:51.367014741Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:50:51.369267 containerd[2024]: time="2025-02-13T19:50:51.367039557Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:50:51.369267 containerd[2024]: time="2025-02-13T19:50:51.367064985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:50:51.369704 containerd[2024]: time="2025-02-13T19:50:51.368638785Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:50:51.369704 containerd[2024]: time="2025-02-13T19:50:51.368769525Z" level=info msg="Connect containerd service" Feb 13 19:50:51.369704 containerd[2024]: time="2025-02-13T19:50:51.368842881Z" level=info msg="using legacy CRI server" Feb 13 19:50:51.369704 containerd[2024]: time="2025-02-13T19:50:51.368862189Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:50:51.369704 containerd[2024]: 
time="2025-02-13T19:50:51.369021909Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:50:51.373465 containerd[2024]: time="2025-02-13T19:50:51.373402389Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:50:51.375240 containerd[2024]: time="2025-02-13T19:50:51.374312721Z" level=info msg="Start subscribing containerd event" Feb 13 19:50:51.375240 containerd[2024]: time="2025-02-13T19:50:51.374401413Z" level=info msg="Start recovering state" Feb 13 19:50:51.375240 containerd[2024]: time="2025-02-13T19:50:51.374523285Z" level=info msg="Start event monitor" Feb 13 19:50:51.375240 containerd[2024]: time="2025-02-13T19:50:51.374546925Z" level=info msg="Start snapshots syncer" Feb 13 19:50:51.375240 containerd[2024]: time="2025-02-13T19:50:51.374568525Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:50:51.375240 containerd[2024]: time="2025-02-13T19:50:51.374588841Z" level=info msg="Start streaming server" Feb 13 19:50:51.376249 containerd[2024]: time="2025-02-13T19:50:51.376169889Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:50:51.376439 containerd[2024]: time="2025-02-13T19:50:51.376412481Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:50:51.377015 containerd[2024]: time="2025-02-13T19:50:51.376713405Z" level=info msg="containerd successfully booted in 0.276045s" Feb 13 19:50:51.376872 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:50:51.501449 systemd-networkd[1924]: eth0: Gained IPv6LL Feb 13 19:50:51.509946 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
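The `failed to load cni during init` error above is expected at this point in boot: containerd's CRI plugin found nothing in `/etc/cni/net.d`, and its conf syncer ("Start cni network conf syncer for default") simply waits until a network add-on installs a config. As a hedged illustration (paths taken from the log; the network name and subnet below are invented for the example), a minimal bridge conflist that would satisfy the syncer looks like:

```shell
# Illustrative only: a Kubernetes network add-on (flannel, calico, ...) normally
# writes this file; the values here are example placeholders, not from the log.
mkdir -p /tmp/cni-demo   # stand-in for /etc/cni/net.d
cat > /tmp/cni-demo/10-bridge.conflist <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "plugins": [
    { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
      "ipam": { "type": "host-local",
                "ranges": [[ { "subnet": "10.22.0.0/16" } ]] } }
  ]
}
EOF
# The conf syncer picks the file up without a containerd restart,
# provided it parses as valid JSON:
python3 -c "import json; json.load(open('/tmp/cni-demo/10-bridge.conflist')); print('valid conflist')"
```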
Feb 13 19:50:51.517080 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:50:51.538671 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:50:51.554947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:50:51.564728 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:50:51.657757 amazon-ssm-agent[2188]: Initializing new seelog logger Feb 13 19:50:51.660681 amazon-ssm-agent[2188]: New Seelog Logger Creation Complete Feb 13 19:50:51.660681 amazon-ssm-agent[2188]: 2025/02/13 19:50:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:51.660681 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:51.660681 amazon-ssm-agent[2188]: 2025/02/13 19:50:51 processing appconfig overrides Feb 13 19:50:51.662640 amazon-ssm-agent[2188]: 2025/02/13 19:50:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:51.662640 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:51.662821 amazon-ssm-agent[2188]: 2025/02/13 19:50:51 processing appconfig overrides Feb 13 19:50:51.663052 amazon-ssm-agent[2188]: 2025/02/13 19:50:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:51.663052 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:51.663179 amazon-ssm-agent[2188]: 2025/02/13 19:50:51 processing appconfig overrides Feb 13 19:50:51.663944 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO Proxy environment variables: Feb 13 19:50:51.674252 amazon-ssm-agent[2188]: 2025/02/13 19:50:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:50:51.674252 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 19:50:51.674252 amazon-ssm-agent[2188]: 2025/02/13 19:50:51 processing appconfig overrides Feb 13 19:50:51.695068 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:50:51.771376 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO http_proxy: Feb 13 19:50:51.874178 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO no_proxy: Feb 13 19:50:51.974131 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO https_proxy: Feb 13 19:50:52.073516 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:50:52.171200 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:50:52.223793 tar[2000]: linux-arm64/LICENSE Feb 13 19:50:52.224363 tar[2000]: linux-arm64/README.md Feb 13 19:50:52.263840 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:50:52.270415 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO Agent will take identity from EC2 Feb 13 19:50:52.369729 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:50:52.473375 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:50:52.572610 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:50:52.671823 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:50:52.771822 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:50:52.781201 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:50:52.783365 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Feb 13 19:50:52.783365 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO [Registrar] Starting registrar module Feb 13 19:50:52.783365 amazon-ssm-agent[2188]: 2025-02-13 19:50:51 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:50:52.783365 amazon-ssm-agent[2188]: 2025-02-13 19:50:52 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:50:52.783365 amazon-ssm-agent[2188]: 2025-02-13 19:50:52 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:50:52.783365 amazon-ssm-agent[2188]: 2025-02-13 19:50:52 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:50:52.783365 amazon-ssm-agent[2188]: 2025-02-13 19:50:52 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:50:52.871193 amazon-ssm-agent[2188]: 2025-02-13 19:50:52 INFO [CredentialRefresher] Next credential rotation will be in 30.016626692566668 minutes Feb 13 19:50:53.683702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:50:53.708767 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:50:53.829721 amazon-ssm-agent[2188]: 2025-02-13 19:50:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:50:53.875695 sshd_keygen[2021]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:50:53.926060 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:50:53.936059 amazon-ssm-agent[2188]: 2025-02-13 19:50:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2217) started Feb 13 19:50:53.946245 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Feb 13 19:50:53.966489 systemd[1]: Started sshd@0-172.31.16.124:22-139.178.89.65:60378.service - OpenSSH per-connection server daemon (139.178.89.65:60378). Feb 13 19:50:53.986886 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:50:53.987499 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:50:54.009804 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:50:54.036453 amazon-ssm-agent[2188]: 2025-02-13 19:50:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:50:54.048830 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:50:54.059467 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:50:54.071599 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:50:54.074735 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:50:54.076721 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:50:54.079413 systemd[1]: Startup finished in 1.151s (kernel) + 8.159s (initrd) + 9.169s (userspace) = 18.481s. Feb 13 19:50:54.210518 sshd[2232]: Accepted publickey for core from 139.178.89.65 port 60378 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:54.214609 sshd[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:54.229874 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:50:54.239808 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:50:54.248246 systemd-logind[1994]: New session 1 of user core. Feb 13 19:50:54.270613 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:50:54.279827 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 19:50:54.295858 (systemd)[2255]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:50:54.338488 ntpd[1987]: Listen normally on 7 eth0 [fe80::446:67ff:fe1f:4bf%2]:123 Feb 13 19:50:54.338956 ntpd[1987]: 13 Feb 19:50:54 ntpd[1987]: Listen normally on 7 eth0 [fe80::446:67ff:fe1f:4bf%2]:123 Feb 13 19:50:54.517801 systemd[2255]: Queued start job for default target default.target. Feb 13 19:50:54.526601 systemd[2255]: Created slice app.slice - User Application Slice. Feb 13 19:50:54.526668 systemd[2255]: Reached target paths.target - Paths. Feb 13 19:50:54.526701 systemd[2255]: Reached target timers.target - Timers. Feb 13 19:50:54.535539 systemd[2255]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:50:54.553423 systemd[2255]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:50:54.553545 systemd[2255]: Reached target sockets.target - Sockets. Feb 13 19:50:54.553577 systemd[2255]: Reached target basic.target - Basic System. Feb 13 19:50:54.553658 systemd[2255]: Reached target default.target - Main User Target. Feb 13 19:50:54.553721 systemd[2255]: Startup finished in 245ms. Feb 13 19:50:54.554051 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:50:54.567526 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:50:54.726698 systemd[1]: Started sshd@1-172.31.16.124:22-139.178.89.65:44780.service - OpenSSH per-connection server daemon (139.178.89.65:44780). Feb 13 19:50:54.917780 sshd[2267]: Accepted publickey for core from 139.178.89.65 port 44780 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:54.921001 sshd[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:54.931191 systemd-logind[1994]: New session 2 of user core. Feb 13 19:50:54.934494 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 19:50:55.062613 sshd[2267]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:55.069523 systemd-logind[1994]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:50:55.070090 systemd[1]: sshd@1-172.31.16.124:22-139.178.89.65:44780.service: Deactivated successfully. Feb 13 19:50:55.073587 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:50:55.079553 systemd-logind[1994]: Removed session 2. Feb 13 19:50:55.084281 kubelet[2215]: E0213 19:50:55.084128 2215 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:50:55.093483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:50:55.095010 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:50:55.097320 systemd[1]: kubelet.service: Consumed 1.298s CPU time. Feb 13 19:50:55.104803 systemd[1]: Started sshd@2-172.31.16.124:22-139.178.89.65:44796.service - OpenSSH per-connection server daemon (139.178.89.65:44796). Feb 13 19:50:55.289873 sshd[2277]: Accepted publickey for core from 139.178.89.65 port 44796 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:55.292466 sshd[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:55.301276 systemd-logind[1994]: New session 3 of user core. Feb 13 19:50:55.307504 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:50:55.424963 sshd[2277]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:55.432017 systemd[1]: sshd@2-172.31.16.124:22-139.178.89.65:44796.service: Deactivated successfully. Feb 13 19:50:55.435298 systemd[1]: session-3.scope: Deactivated successfully. 
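The kubelet failure above (`open /var/lib/kubelet/config.yaml: no such file or directory`) recurs throughout this log: the kubelet unit starts before anything has written its kubeadm-managed config file, exits with status 1, and systemd schedules a restart. This loop is normal until `kubeadm init` or `kubeadm join` runs. A hedged sketch of the condition the kubelet is tripping on (the path is from the log; the temp-dir demo is illustrative):

```shell
# Demonstrates the gating condition with a throwaway directory rather than
# touching the real /var/lib/kubelet/config.yaml.
check() { [ -f "$1" ] && echo "present" || echo "missing"; }

d=$(mktemp -d)
echo "before kubeadm: $(check "$d/config.yaml")"   # kubelet exits in this state
touch "$d/config.yaml"                             # kubeadm init/join writes the real file
echo "after kubeadm:  $(check "$d/config.yaml")"   # restart then succeeds
```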
Feb 13 19:50:55.436850 systemd-logind[1994]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:50:55.438711 systemd-logind[1994]: Removed session 3. Feb 13 19:50:55.469729 systemd[1]: Started sshd@3-172.31.16.124:22-139.178.89.65:44800.service - OpenSSH per-connection server daemon (139.178.89.65:44800). Feb 13 19:50:55.638002 sshd[2284]: Accepted publickey for core from 139.178.89.65 port 44800 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:55.640639 sshd[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:55.649160 systemd-logind[1994]: New session 4 of user core. Feb 13 19:50:55.658478 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:50:55.785186 sshd[2284]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:55.792731 systemd-logind[1994]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:50:55.792750 systemd[1]: sshd@3-172.31.16.124:22-139.178.89.65:44800.service: Deactivated successfully. Feb 13 19:50:55.796671 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:50:55.799565 systemd-logind[1994]: Removed session 4. Feb 13 19:50:55.827936 systemd[1]: Started sshd@4-172.31.16.124:22-139.178.89.65:44802.service - OpenSSH per-connection server daemon (139.178.89.65:44802). Feb 13 19:50:55.995553 sshd[2291]: Accepted publickey for core from 139.178.89.65 port 44802 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:55.997692 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:56.006574 systemd-logind[1994]: New session 5 of user core. Feb 13 19:50:56.014518 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:50:56.130901 sudo[2294]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:50:56.131572 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:50:56.561749 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:50:56.575728 (dockerd)[2309]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:50:56.943899 dockerd[2309]: time="2025-02-13T19:50:56.943821881Z" level=info msg="Starting up" Feb 13 19:50:57.048362 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3463419160-merged.mount: Deactivated successfully. Feb 13 19:50:57.304578 dockerd[2309]: time="2025-02-13T19:50:57.304076667Z" level=info msg="Loading containers: start." Feb 13 19:50:57.472252 kernel: Initializing XFRM netlink socket Feb 13 19:50:57.504167 (udev-worker)[2333]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:57.588063 systemd-networkd[1924]: docker0: Link UP Feb 13 19:50:57.612672 dockerd[2309]: time="2025-02-13T19:50:57.612603484Z" level=info msg="Loading containers: done." 
Feb 13 19:50:57.637890 dockerd[2309]: time="2025-02-13T19:50:57.637828276Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:50:57.638198 dockerd[2309]: time="2025-02-13T19:50:57.637971760Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:50:57.638198 dockerd[2309]: time="2025-02-13T19:50:57.638160736Z" level=info msg="Daemon has completed initialization" Feb 13 19:50:57.709437 dockerd[2309]: time="2025-02-13T19:50:57.709300985Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:50:57.709771 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:50:58.042270 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3235118915-merged.mount: Deactivated successfully. Feb 13 19:50:59.071141 containerd[2024]: time="2025-02-13T19:50:59.071069183Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:50:59.668778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2675294583.mount: Deactivated successfully. 
Feb 13 19:51:01.081900 containerd[2024]: time="2025-02-13T19:51:01.081839073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:01.084696 containerd[2024]: time="2025-02-13T19:51:01.084650814Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865207" Feb 13 19:51:01.086053 containerd[2024]: time="2025-02-13T19:51:01.086011378Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:01.091662 containerd[2024]: time="2025-02-13T19:51:01.091593781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:01.094185 containerd[2024]: time="2025-02-13T19:51:01.094123100Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.02298013s" Feb 13 19:51:01.094338 containerd[2024]: time="2025-02-13T19:51:01.094183789Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 19:51:01.131385 containerd[2024]: time="2025-02-13T19:51:01.131321459Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:51:02.773516 containerd[2024]: time="2025-02-13T19:51:02.773456754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:02.776153 containerd[2024]: time="2025-02-13T19:51:02.776102462Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898594" Feb 13 19:51:02.776543 containerd[2024]: time="2025-02-13T19:51:02.776506120Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:02.781943 containerd[2024]: time="2025-02-13T19:51:02.781876878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:02.784555 containerd[2024]: time="2025-02-13T19:51:02.784489339Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.652766452s" Feb 13 19:51:02.784655 containerd[2024]: time="2025-02-13T19:51:02.784553711Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 19:51:02.824412 containerd[2024]: time="2025-02-13T19:51:02.823521977Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:51:03.960743 containerd[2024]: time="2025-02-13T19:51:03.960411867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:03.965260 containerd[2024]: 
time="2025-02-13T19:51:03.964575368Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:03.965260 containerd[2024]: time="2025-02-13T19:51:03.964702241Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164934" Feb 13 19:51:03.973931 containerd[2024]: time="2025-02-13T19:51:03.973855126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:03.976579 containerd[2024]: time="2025-02-13T19:51:03.976519425Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.152935271s" Feb 13 19:51:03.976765 containerd[2024]: time="2025-02-13T19:51:03.976733410Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 19:51:04.015783 containerd[2024]: time="2025-02-13T19:51:04.015480663Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:51:05.101479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:51:05.112016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:05.430660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:51:05.447912 (kubelet)[2543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:51:05.457417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1354351771.mount: Deactivated successfully. Feb 13 19:51:05.546883 kubelet[2543]: E0213 19:51:05.546791 2543 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:51:05.553337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:51:05.553651 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:51:05.977005 containerd[2024]: time="2025-02-13T19:51:05.976948436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:05.979070 containerd[2024]: time="2025-02-13T19:51:05.979015014Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370" Feb 13 19:51:05.980286 containerd[2024]: time="2025-02-13T19:51:05.980177881Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:05.983835 containerd[2024]: time="2025-02-13T19:51:05.983738156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:05.985970 containerd[2024]: time="2025-02-13T19:51:05.985337772Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id 
\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.969793445s" Feb 13 19:51:05.985970 containerd[2024]: time="2025-02-13T19:51:05.985397094Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:51:06.022905 containerd[2024]: time="2025-02-13T19:51:06.022849354Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:51:06.619562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount933249484.mount: Deactivated successfully. Feb 13 19:51:07.725235 containerd[2024]: time="2025-02-13T19:51:07.723445540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:07.727099 containerd[2024]: time="2025-02-13T19:51:07.727047638Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 19:51:07.729730 containerd[2024]: time="2025-02-13T19:51:07.729671374Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:07.738898 containerd[2024]: time="2025-02-13T19:51:07.738836097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:07.741366 containerd[2024]: time="2025-02-13T19:51:07.740623287Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.717707869s" Feb 13 19:51:07.741566 containerd[2024]: time="2025-02-13T19:51:07.741532264Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:51:07.783543 containerd[2024]: time="2025-02-13T19:51:07.783489183Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:51:08.313136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1984724332.mount: Deactivated successfully. Feb 13 19:51:08.323762 containerd[2024]: time="2025-02-13T19:51:08.323244049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:08.324964 containerd[2024]: time="2025-02-13T19:51:08.324898717Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 19:51:08.326913 containerd[2024]: time="2025-02-13T19:51:08.326828360Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:08.332101 containerd[2024]: time="2025-02-13T19:51:08.332022817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:08.333860 containerd[2024]: time="2025-02-13T19:51:08.333665047Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 549.847744ms" Feb 13 19:51:08.333860 containerd[2024]: time="2025-02-13T19:51:08.333724526Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 19:51:08.374273 containerd[2024]: time="2025-02-13T19:51:08.374118339Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:51:08.911723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1265854896.mount: Deactivated successfully. Feb 13 19:51:11.248282 containerd[2024]: time="2025-02-13T19:51:11.247753167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:11.280685 containerd[2024]: time="2025-02-13T19:51:11.280619702Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Feb 13 19:51:11.322075 containerd[2024]: time="2025-02-13T19:51:11.322016213Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:11.330596 containerd[2024]: time="2025-02-13T19:51:11.330532037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:11.332763 containerd[2024]: time="2025-02-13T19:51:11.332712511Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.958531503s" Feb 13 
19:51:11.332944 containerd[2024]: time="2025-02-13T19:51:11.332911851Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 19:51:15.601488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:51:15.610666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:15.906725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:15.921712 (kubelet)[2730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:51:16.002862 kubelet[2730]: E0213 19:51:16.002793 2730 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:51:16.007761 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:51:16.008108 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:51:18.611550 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:18.621710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:18.666026 systemd[1]: Reloading requested from client PID 2745 ('systemctl') (unit session-5.scope)... Feb 13 19:51:18.666337 systemd[1]: Reloading... Feb 13 19:51:18.920275 zram_generator::config[2789]: No configuration found. Feb 13 19:51:19.135774 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 19:51:19.306485 systemd[1]: Reloading finished in 639 ms. Feb 13 19:51:19.398974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:19.407583 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:19.410940 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:51:19.413282 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:19.421876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:19.712505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:19.714025 (kubelet)[2851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:51:19.801628 kubelet[2851]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:51:19.802125 kubelet[2851]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:51:19.802253 kubelet[2851]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:51:19.804192 kubelet[2851]: I0213 19:51:19.804108 2851 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:51:20.370753 kubelet[2851]: I0213 19:51:20.370690 2851 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:51:20.370753 kubelet[2851]: I0213 19:51:20.370737 2851 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:51:20.371088 kubelet[2851]: I0213 19:51:20.371060 2851 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:51:20.397638 kubelet[2851]: E0213 19:51:20.397590 2851 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:20.398078 kubelet[2851]: I0213 19:51:20.397898 2851 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:51:20.411830 kubelet[2851]: I0213 19:51:20.411771 2851 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:51:20.414226 kubelet[2851]: I0213 19:51:20.414125 2851 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:51:20.414526 kubelet[2851]: I0213 19:51:20.414200 2851 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-124","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:51:20.414699 kubelet[2851]: I0213 19:51:20.414552 2851 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
19:51:20.414699 kubelet[2851]: I0213 19:51:20.414575 2851 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:51:20.414840 kubelet[2851]: I0213 19:51:20.414827 2851 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:51:20.418291 kubelet[2851]: I0213 19:51:20.416871 2851 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:51:20.418291 kubelet[2851]: I0213 19:51:20.416915 2851 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:51:20.418291 kubelet[2851]: I0213 19:51:20.417029 2851 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:51:20.418291 kubelet[2851]: W0213 19:51:20.417026 2851 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-124&limit=500&resourceVersion=0": dial tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:20.418291 kubelet[2851]: I0213 19:51:20.417085 2851 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:51:20.418291 kubelet[2851]: E0213 19:51:20.417105 2851 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-124&limit=500&resourceVersion=0": dial tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:20.421249 kubelet[2851]: W0213 19:51:20.420026 2851 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:20.421249 kubelet[2851]: E0213 19:51:20.420124 2851 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.16.124:6443: connect: connection refused Feb 13 19:51:20.422564 kubelet[2851]: I0213 19:51:20.422508 2851 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:51:20.422940 kubelet[2851]: I0213 19:51:20.422903 2851 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:51:20.423010 kubelet[2851]: W0213 19:51:20.423000 2851 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:51:20.424864 kubelet[2851]: I0213 19:51:20.424808 2851 server.go:1264] "Started kubelet" Feb 13 19:51:20.436931 kubelet[2851]: E0213 19:51:20.436718 2851 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.124:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.124:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-124.1823dc75fdef7e38 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-124,UID:ip-172-31-16-124,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-124,},FirstTimestamp:2025-02-13 19:51:20.42475884 +0000 UTC m=+0.702928187,LastTimestamp:2025-02-13 19:51:20.42475884 +0000 UTC m=+0.702928187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-124,}" Feb 13 19:51:20.439264 kubelet[2851]: I0213 19:51:20.438427 2851 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:51:20.439264 kubelet[2851]: I0213 19:51:20.439077 2851 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:51:20.441042 kubelet[2851]: I0213 19:51:20.441001 2851 server.go:455] "Adding debug handlers to kubelet server" Feb 13 
19:51:20.442971 kubelet[2851]: I0213 19:51:20.442879 2851 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:51:20.443475 kubelet[2851]: I0213 19:51:20.443448 2851 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:51:20.449012 kubelet[2851]: I0213 19:51:20.448959 2851 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:51:20.452390 kubelet[2851]: I0213 19:51:20.452352 2851 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:51:20.454377 kubelet[2851]: I0213 19:51:20.454335 2851 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:51:20.455545 kubelet[2851]: E0213 19:51:20.455456 2851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-124?timeout=10s\": dial tcp 172.31.16.124:6443: connect: connection refused" interval="200ms" Feb 13 19:51:20.456726 kubelet[2851]: W0213 19:51:20.456608 2851 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:20.456726 kubelet[2851]: E0213 19:51:20.456798 2851 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:20.457300 kubelet[2851]: E0213 19:51:20.457253 2851 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:51:20.461261 kubelet[2851]: I0213 19:51:20.460459 2851 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:51:20.461261 kubelet[2851]: I0213 19:51:20.460492 2851 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:51:20.461261 kubelet[2851]: I0213 19:51:20.460618 2851 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:51:20.481719 kubelet[2851]: I0213 19:51:20.481656 2851 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:51:20.485666 kubelet[2851]: I0213 19:51:20.485609 2851 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:51:20.485812 kubelet[2851]: I0213 19:51:20.485752 2851 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:51:20.485812 kubelet[2851]: I0213 19:51:20.485786 2851 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:51:20.485931 kubelet[2851]: E0213 19:51:20.485879 2851 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:51:20.488613 kubelet[2851]: W0213 19:51:20.488518 2851 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:20.488763 kubelet[2851]: E0213 19:51:20.488622 2851 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial 
tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:20.499521 kubelet[2851]: I0213 19:51:20.499459 2851 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:51:20.499746 kubelet[2851]: I0213 19:51:20.499689 2851 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:51:20.500159 kubelet[2851]: I0213 19:51:20.499725 2851 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:51:20.506656 kubelet[2851]: I0213 19:51:20.506501 2851 policy_none.go:49] "None policy: Start" Feb 13 19:51:20.508039 kubelet[2851]: I0213 19:51:20.507915 2851 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:51:20.508039 kubelet[2851]: I0213 19:51:20.507959 2851 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:51:20.522078 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:51:20.538000 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Feb 13 19:51:20.539916 kubelet[2851]: E0213 19:51:20.539767 2851 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.124:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.124:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-124.1823dc75fdef7e38 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-124,UID:ip-172-31-16-124,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-124,},FirstTimestamp:2025-02-13 19:51:20.42475884 +0000 UTC m=+0.702928187,LastTimestamp:2025-02-13 19:51:20.42475884 +0000 UTC m=+0.702928187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-124,}" Feb 13 19:51:20.546440 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:51:20.554006 kubelet[2851]: I0213 19:51:20.552943 2851 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:51:20.554006 kubelet[2851]: I0213 19:51:20.553273 2851 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:51:20.554006 kubelet[2851]: I0213 19:51:20.553545 2851 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:51:20.556094 kubelet[2851]: I0213 19:51:20.554728 2851 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-124" Feb 13 19:51:20.556985 kubelet[2851]: E0213 19:51:20.556784 2851 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.124:6443/api/v1/nodes\": dial tcp 172.31.16.124:6443: connect: connection refused" node="ip-172-31-16-124" Feb 13 19:51:20.558052 kubelet[2851]: E0213 19:51:20.557997 2851 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-124\" not found" Feb 13 19:51:20.586668 kubelet[2851]: I0213 19:51:20.586289 2851 topology_manager.go:215] "Topology Admit Handler" podUID="664df8d9b4935c403207fa1f96e4d674" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-124" Feb 13 19:51:20.588721 kubelet[2851]: I0213 19:51:20.588664 2851 topology_manager.go:215] "Topology Admit Handler" podUID="45ec7815e26ed8c1cab818f966846dcd" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-124" Feb 13 19:51:20.591952 kubelet[2851]: I0213 19:51:20.591315 2851 topology_manager.go:215] "Topology Admit Handler" podUID="6d8114d7dd6b927436f6964b3930bb3e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-124" Feb 13 19:51:20.605318 systemd[1]: Created slice kubepods-burstable-pod664df8d9b4935c403207fa1f96e4d674.slice - libcontainer container kubepods-burstable-pod664df8d9b4935c403207fa1f96e4d674.slice. 
Feb 13 19:51:20.630817 systemd[1]: Created slice kubepods-burstable-pod45ec7815e26ed8c1cab818f966846dcd.slice - libcontainer container kubepods-burstable-pod45ec7815e26ed8c1cab818f966846dcd.slice. Feb 13 19:51:20.642357 systemd[1]: Created slice kubepods-burstable-pod6d8114d7dd6b927436f6964b3930bb3e.slice - libcontainer container kubepods-burstable-pod6d8114d7dd6b927436f6964b3930bb3e.slice. Feb 13 19:51:20.656100 kubelet[2851]: I0213 19:51:20.655704 2851 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45ec7815e26ed8c1cab818f966846dcd-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-124\" (UID: \"45ec7815e26ed8c1cab818f966846dcd\") " pod="kube-system/kube-controller-manager-ip-172-31-16-124" Feb 13 19:51:20.656100 kubelet[2851]: I0213 19:51:20.655762 2851 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45ec7815e26ed8c1cab818f966846dcd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-124\" (UID: \"45ec7815e26ed8c1cab818f966846dcd\") " pod="kube-system/kube-controller-manager-ip-172-31-16-124" Feb 13 19:51:20.656100 kubelet[2851]: I0213 19:51:20.655806 2851 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45ec7815e26ed8c1cab818f966846dcd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-124\" (UID: \"45ec7815e26ed8c1cab818f966846dcd\") " pod="kube-system/kube-controller-manager-ip-172-31-16-124" Feb 13 19:51:20.656100 kubelet[2851]: I0213 19:51:20.655845 2851 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/664df8d9b4935c403207fa1f96e4d674-ca-certs\") pod \"kube-apiserver-ip-172-31-16-124\" (UID: 
\"664df8d9b4935c403207fa1f96e4d674\") " pod="kube-system/kube-apiserver-ip-172-31-16-124" Feb 13 19:51:20.656100 kubelet[2851]: I0213 19:51:20.655880 2851 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/45ec7815e26ed8c1cab818f966846dcd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-124\" (UID: \"45ec7815e26ed8c1cab818f966846dcd\") " pod="kube-system/kube-controller-manager-ip-172-31-16-124" Feb 13 19:51:20.656570 kubelet[2851]: I0213 19:51:20.655918 2851 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45ec7815e26ed8c1cab818f966846dcd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-124\" (UID: \"45ec7815e26ed8c1cab818f966846dcd\") " pod="kube-system/kube-controller-manager-ip-172-31-16-124" Feb 13 19:51:20.656570 kubelet[2851]: I0213 19:51:20.655953 2851 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6d8114d7dd6b927436f6964b3930bb3e-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-124\" (UID: \"6d8114d7dd6b927436f6964b3930bb3e\") " pod="kube-system/kube-scheduler-ip-172-31-16-124" Feb 13 19:51:20.656570 kubelet[2851]: I0213 19:51:20.655989 2851 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/664df8d9b4935c403207fa1f96e4d674-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-124\" (UID: \"664df8d9b4935c403207fa1f96e4d674\") " pod="kube-system/kube-apiserver-ip-172-31-16-124" Feb 13 19:51:20.656570 kubelet[2851]: I0213 19:51:20.656025 2851 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/664df8d9b4935c403207fa1f96e4d674-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-124\" (UID: \"664df8d9b4935c403207fa1f96e4d674\") " pod="kube-system/kube-apiserver-ip-172-31-16-124" Feb 13 19:51:20.656570 kubelet[2851]: E0213 19:51:20.656272 2851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-124?timeout=10s\": dial tcp 172.31.16.124:6443: connect: connection refused" interval="400ms" Feb 13 19:51:20.758697 kubelet[2851]: I0213 19:51:20.758640 2851 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-124" Feb 13 19:51:20.759124 kubelet[2851]: E0213 19:51:20.759074 2851 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.124:6443/api/v1/nodes\": dial tcp 172.31.16.124:6443: connect: connection refused" node="ip-172-31-16-124" Feb 13 19:51:20.925458 containerd[2024]: time="2025-02-13T19:51:20.925197937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-124,Uid:664df8d9b4935c403207fa1f96e4d674,Namespace:kube-system,Attempt:0,}" Feb 13 19:51:20.938420 containerd[2024]: time="2025-02-13T19:51:20.938155103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-124,Uid:45ec7815e26ed8c1cab818f966846dcd,Namespace:kube-system,Attempt:0,}" Feb 13 19:51:20.947455 containerd[2024]: time="2025-02-13T19:51:20.947383119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-124,Uid:6d8114d7dd6b927436f6964b3930bb3e,Namespace:kube-system,Attempt:0,}" Feb 13 19:51:21.058057 kubelet[2851]: E0213 19:51:21.057820 2851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-124?timeout=10s\": dial tcp 172.31.16.124:6443: 
connect: connection refused" interval="800ms" Feb 13 19:51:21.094395 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 19:51:21.164484 kubelet[2851]: I0213 19:51:21.164427 2851 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-124" Feb 13 19:51:21.164944 kubelet[2851]: E0213 19:51:21.164904 2851 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.124:6443/api/v1/nodes\": dial tcp 172.31.16.124:6443: connect: connection refused" node="ip-172-31-16-124" Feb 13 19:51:21.395747 kubelet[2851]: W0213 19:51:21.395651 2851 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:21.395747 kubelet[2851]: E0213 19:51:21.395712 2851 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:21.445364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1935191992.mount: Deactivated successfully. 
Feb 13 19:51:21.471979 containerd[2024]: time="2025-02-13T19:51:21.471902690Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:51:21.473263 containerd[2024]: time="2025-02-13T19:51:21.473148075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:21.475596 containerd[2024]: time="2025-02-13T19:51:21.475521987Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:21.477223 containerd[2024]: time="2025-02-13T19:51:21.477143408Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:21.479055 containerd[2024]: time="2025-02-13T19:51:21.478994142Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:51:21.480683 containerd[2024]: time="2025-02-13T19:51:21.480616654Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:21.482176 containerd[2024]: time="2025-02-13T19:51:21.481989344Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:51:21.484587 containerd[2024]: time="2025-02-13T19:51:21.484487142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:21.486403 
containerd[2024]: time="2025-02-13T19:51:21.486343034Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.981295ms" Feb 13 19:51:21.494188 containerd[2024]: time="2025-02-13T19:51:21.494023811Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.727671ms" Feb 13 19:51:21.497071 containerd[2024]: time="2025-02-13T19:51:21.496648195Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.146659ms" Feb 13 19:51:21.557251 kubelet[2851]: W0213 19:51:21.555914 2851 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:21.557251 kubelet[2851]: E0213 19:51:21.556010 2851 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:21.627012 kubelet[2851]: W0213 19:51:21.626851 2851 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://172.31.16.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-124&limit=500&resourceVersion=0": dial tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:21.627012 kubelet[2851]: E0213 19:51:21.626947 2851 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-124&limit=500&resourceVersion=0": dial tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:21.699008 containerd[2024]: time="2025-02-13T19:51:21.697579707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:21.699993 containerd[2024]: time="2025-02-13T19:51:21.697941170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:21.699993 containerd[2024]: time="2025-02-13T19:51:21.698184157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:21.699993 containerd[2024]: time="2025-02-13T19:51:21.699555251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:21.709460 containerd[2024]: time="2025-02-13T19:51:21.708956964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:21.709765 containerd[2024]: time="2025-02-13T19:51:21.709394481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:21.709765 containerd[2024]: time="2025-02-13T19:51:21.709646763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:21.710404 containerd[2024]: time="2025-02-13T19:51:21.710135098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:21.714650 containerd[2024]: time="2025-02-13T19:51:21.711068891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:21.714650 containerd[2024]: time="2025-02-13T19:51:21.714482684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:21.714650 containerd[2024]: time="2025-02-13T19:51:21.714516195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:21.715103 containerd[2024]: time="2025-02-13T19:51:21.714702869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:21.767562 systemd[1]: Started cri-containerd-d89b3947621f3bbcb57516b3456d6ebc010671af1d2c67d53b81f7046fdeaa47.scope - libcontainer container d89b3947621f3bbcb57516b3456d6ebc010671af1d2c67d53b81f7046fdeaa47. Feb 13 19:51:21.787499 systemd[1]: Started cri-containerd-176116b5745f2c8ecb2c110c1662399c911a142399e3486709b02d542d967620.scope - libcontainer container 176116b5745f2c8ecb2c110c1662399c911a142399e3486709b02d542d967620. Feb 13 19:51:21.791874 systemd[1]: Started cri-containerd-8d516c452d335b21ed431bed00b0e1b3df4de88b453a0c2962ececc29836774b.scope - libcontainer container 8d516c452d335b21ed431bed00b0e1b3df4de88b453a0c2962ececc29836774b. 
Feb 13 19:51:21.860436 kubelet[2851]: E0213 19:51:21.860324 2851 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-124?timeout=10s\": dial tcp 172.31.16.124:6443: connect: connection refused" interval="1.6s" Feb 13 19:51:21.874041 kubelet[2851]: W0213 19:51:21.873951 2851 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:21.874041 kubelet[2851]: E0213 19:51:21.874049 2851 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.124:6443: connect: connection refused Feb 13 19:51:21.899522 containerd[2024]: time="2025-02-13T19:51:21.899443753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-124,Uid:664df8d9b4935c403207fa1f96e4d674,Namespace:kube-system,Attempt:0,} returns sandbox id \"176116b5745f2c8ecb2c110c1662399c911a142399e3486709b02d542d967620\"" Feb 13 19:51:21.912366 containerd[2024]: time="2025-02-13T19:51:21.912122143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-124,Uid:6d8114d7dd6b927436f6964b3930bb3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d89b3947621f3bbcb57516b3456d6ebc010671af1d2c67d53b81f7046fdeaa47\"" Feb 13 19:51:21.916796 containerd[2024]: time="2025-02-13T19:51:21.915764421Z" level=info msg="CreateContainer within sandbox \"176116b5745f2c8ecb2c110c1662399c911a142399e3486709b02d542d967620\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:51:21.917293 containerd[2024]: time="2025-02-13T19:51:21.917241206Z" level=info 
msg="CreateContainer within sandbox \"d89b3947621f3bbcb57516b3456d6ebc010671af1d2c67d53b81f7046fdeaa47\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:51:21.926803 containerd[2024]: time="2025-02-13T19:51:21.926728904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-124,Uid:45ec7815e26ed8c1cab818f966846dcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d516c452d335b21ed431bed00b0e1b3df4de88b453a0c2962ececc29836774b\"" Feb 13 19:51:21.934705 containerd[2024]: time="2025-02-13T19:51:21.933775391Z" level=info msg="CreateContainer within sandbox \"8d516c452d335b21ed431bed00b0e1b3df4de88b453a0c2962ececc29836774b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:51:21.969172 kubelet[2851]: I0213 19:51:21.968641 2851 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-124" Feb 13 19:51:21.970346 kubelet[2851]: E0213 19:51:21.969403 2851 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.124:6443/api/v1/nodes\": dial tcp 172.31.16.124:6443: connect: connection refused" node="ip-172-31-16-124" Feb 13 19:51:21.972595 containerd[2024]: time="2025-02-13T19:51:21.972522092Z" level=info msg="CreateContainer within sandbox \"d89b3947621f3bbcb57516b3456d6ebc010671af1d2c67d53b81f7046fdeaa47\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bd3a6e94827b807afa657a85573f51db097e0c7798777e470001450cc7cc503d\"" Feb 13 19:51:21.973711 containerd[2024]: time="2025-02-13T19:51:21.973576353Z" level=info msg="StartContainer for \"bd3a6e94827b807afa657a85573f51db097e0c7798777e470001450cc7cc503d\"" Feb 13 19:51:21.983328 containerd[2024]: time="2025-02-13T19:51:21.983057502Z" level=info msg="CreateContainer within sandbox \"8d516c452d335b21ed431bed00b0e1b3df4de88b453a0c2962ececc29836774b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container 
id \"d3d7fbbb1d5734426f386bd4db57422c0d594aa98c2c768636a43366c47ef1b3\"" Feb 13 19:51:21.984311 containerd[2024]: time="2025-02-13T19:51:21.983957136Z" level=info msg="StartContainer for \"d3d7fbbb1d5734426f386bd4db57422c0d594aa98c2c768636a43366c47ef1b3\"" Feb 13 19:51:21.985827 containerd[2024]: time="2025-02-13T19:51:21.985744901Z" level=info msg="CreateContainer within sandbox \"176116b5745f2c8ecb2c110c1662399c911a142399e3486709b02d542d967620\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"032c0dc77c109a63d347657b6897b4aec2225a4508474bb122292874651bf982\"" Feb 13 19:51:21.986570 containerd[2024]: time="2025-02-13T19:51:21.986463630Z" level=info msg="StartContainer for \"032c0dc77c109a63d347657b6897b4aec2225a4508474bb122292874651bf982\"" Feb 13 19:51:22.045431 systemd[1]: Started cri-containerd-bd3a6e94827b807afa657a85573f51db097e0c7798777e470001450cc7cc503d.scope - libcontainer container bd3a6e94827b807afa657a85573f51db097e0c7798777e470001450cc7cc503d. Feb 13 19:51:22.070570 systemd[1]: Started cri-containerd-d3d7fbbb1d5734426f386bd4db57422c0d594aa98c2c768636a43366c47ef1b3.scope - libcontainer container d3d7fbbb1d5734426f386bd4db57422c0d594aa98c2c768636a43366c47ef1b3. Feb 13 19:51:22.080770 systemd[1]: Started cri-containerd-032c0dc77c109a63d347657b6897b4aec2225a4508474bb122292874651bf982.scope - libcontainer container 032c0dc77c109a63d347657b6897b4aec2225a4508474bb122292874651bf982. 
Feb 13 19:51:22.183331 containerd[2024]: time="2025-02-13T19:51:22.182737440Z" level=info msg="StartContainer for \"bd3a6e94827b807afa657a85573f51db097e0c7798777e470001450cc7cc503d\" returns successfully" Feb 13 19:51:22.195269 containerd[2024]: time="2025-02-13T19:51:22.195146361Z" level=info msg="StartContainer for \"d3d7fbbb1d5734426f386bd4db57422c0d594aa98c2c768636a43366c47ef1b3\" returns successfully" Feb 13 19:51:22.238922 containerd[2024]: time="2025-02-13T19:51:22.238675861Z" level=info msg="StartContainer for \"032c0dc77c109a63d347657b6897b4aec2225a4508474bb122292874651bf982\" returns successfully" Feb 13 19:51:23.574239 kubelet[2851]: I0213 19:51:23.571912 2851 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-124" Feb 13 19:51:25.421813 kubelet[2851]: I0213 19:51:25.421470 2851 apiserver.go:52] "Watching apiserver" Feb 13 19:51:25.434607 kubelet[2851]: E0213 19:51:25.434541 2851 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-124\" not found" node="ip-172-31-16-124" Feb 13 19:51:25.452753 kubelet[2851]: I0213 19:51:25.452687 2851 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:51:25.566075 kubelet[2851]: I0213 19:51:25.565838 2851 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-124" Feb 13 19:51:27.690995 systemd[1]: Reloading requested from client PID 3133 ('systemctl') (unit session-5.scope)... Feb 13 19:51:27.691600 systemd[1]: Reloading... Feb 13 19:51:27.966239 zram_generator::config[3182]: No configuration found. Feb 13 19:51:28.199437 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:28.401360 systemd[1]: Reloading finished in 709 ms. 
Feb 13 19:51:28.480676 kubelet[2851]: I0213 19:51:28.480527 2851 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:51:28.481511 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:28.496879 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:51:28.498341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:28.498435 systemd[1]: kubelet.service: Consumed 1.368s CPU time, 114.5M memory peak, 0B memory swap peak. Feb 13 19:51:28.509871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:28.806543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:28.822039 (kubelet)[3233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:51:28.933022 kubelet[3233]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:51:28.933022 kubelet[3233]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:51:28.933022 kubelet[3233]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:51:28.935247 kubelet[3233]: I0213 19:51:28.933765 3233 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:51:28.942313 kubelet[3233]: I0213 19:51:28.942199 3233 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:51:28.942555 kubelet[3233]: I0213 19:51:28.942533 3233 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:51:28.942983 kubelet[3233]: I0213 19:51:28.942957 3233 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:51:28.945955 kubelet[3233]: I0213 19:51:28.945919 3233 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:51:28.948498 kubelet[3233]: I0213 19:51:28.948459 3233 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:51:28.966349 kubelet[3233]: I0213 19:51:28.966310 3233 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:51:28.967718 kubelet[3233]: I0213 19:51:28.967012 3233 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:51:28.967718 kubelet[3233]: I0213 19:51:28.967061 3233 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-124","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:51:28.967718 kubelet[3233]: I0213 19:51:28.967404 3233 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
19:51:28.967718 kubelet[3233]: I0213 19:51:28.967424 3233 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:51:28.967718 kubelet[3233]: I0213 19:51:28.967488 3233 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:51:28.968839 kubelet[3233]: I0213 19:51:28.968807 3233 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:51:28.969368 kubelet[3233]: I0213 19:51:28.969034 3233 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:51:28.969368 kubelet[3233]: I0213 19:51:28.969103 3233 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:51:28.969368 kubelet[3233]: I0213 19:51:28.969145 3233 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:51:28.974405 kubelet[3233]: I0213 19:51:28.974339 3233 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:51:28.975658 kubelet[3233]: I0213 19:51:28.975519 3233 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:51:28.977597 kubelet[3233]: I0213 19:51:28.977564 3233 server.go:1264] "Started kubelet" Feb 13 19:51:28.984389 kubelet[3233]: I0213 19:51:28.984152 3233 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:51:29.004235 kubelet[3233]: I0213 19:51:29.002931 3233 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:51:29.004851 kubelet[3233]: I0213 19:51:29.004814 3233 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:51:29.006575 kubelet[3233]: I0213 19:51:29.006492 3233 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:51:29.006862 kubelet[3233]: I0213 19:51:29.006829 3233 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:51:29.016536 kubelet[3233]: I0213 19:51:29.016188 3233 
volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:51:29.020497 kubelet[3233]: I0213 19:51:29.020452 3233 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:51:29.021822 kubelet[3233]: I0213 19:51:29.020945 3233 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:51:29.045314 kubelet[3233]: I0213 19:51:29.045173 3233 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:51:29.055198 kubelet[3233]: E0213 19:51:29.053799 3233 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:51:29.065324 kubelet[3233]: I0213 19:51:29.064445 3233 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:51:29.066797 kubelet[3233]: I0213 19:51:29.066758 3233 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:51:29.067239 kubelet[3233]: I0213 19:51:29.066962 3233 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:51:29.067239 kubelet[3233]: E0213 19:51:29.067045 3233 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:51:29.069243 kubelet[3233]: I0213 19:51:29.065107 3233 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:51:29.069243 kubelet[3233]: I0213 19:51:29.067597 3233 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:51:29.069243 kubelet[3233]: I0213 19:51:29.067751 3233 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:51:29.143018 kubelet[3233]: I0213 19:51:29.142978 3233 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-124" Feb 13 19:51:29.168075 
kubelet[3233]: E0213 19:51:29.168037 3233 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:51:29.175016 kubelet[3233]: I0213 19:51:29.174964 3233 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-16-124" Feb 13 19:51:29.177060 kubelet[3233]: I0213 19:51:29.175095 3233 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-124" Feb 13 19:51:29.221804 kubelet[3233]: I0213 19:51:29.221389 3233 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:51:29.221804 kubelet[3233]: I0213 19:51:29.221417 3233 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:51:29.221804 kubelet[3233]: I0213 19:51:29.221451 3233 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:51:29.221804 kubelet[3233]: I0213 19:51:29.221679 3233 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:51:29.221804 kubelet[3233]: I0213 19:51:29.221699 3233 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:51:29.221804 kubelet[3233]: I0213 19:51:29.221733 3233 policy_none.go:49] "None policy: Start" Feb 13 19:51:29.224240 kubelet[3233]: I0213 19:51:29.224185 3233 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:51:29.224873 kubelet[3233]: I0213 19:51:29.224580 3233 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:51:29.224977 kubelet[3233]: I0213 19:51:29.224929 3233 state_mem.go:75] "Updated machine memory state" Feb 13 19:51:29.236432 kubelet[3233]: I0213 19:51:29.236384 3233 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:51:29.236432 kubelet[3233]: I0213 19:51:29.236658 3233 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:51:29.238366 kubelet[3233]: I0213 19:51:29.237288 3233 plugin_manager.go:118] "Starting Kubelet Plugin 
Manager" Feb 13 19:51:29.370647 kubelet[3233]: I0213 19:51:29.369326 3233 topology_manager.go:215] "Topology Admit Handler" podUID="664df8d9b4935c403207fa1f96e4d674" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-124" Feb 13 19:51:29.370647 kubelet[3233]: I0213 19:51:29.369507 3233 topology_manager.go:215] "Topology Admit Handler" podUID="45ec7815e26ed8c1cab818f966846dcd" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-124" Feb 13 19:51:29.370647 kubelet[3233]: I0213 19:51:29.369581 3233 topology_manager.go:215] "Topology Admit Handler" podUID="6d8114d7dd6b927436f6964b3930bb3e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-124" Feb 13 19:51:29.384254 kubelet[3233]: E0213 19:51:29.383803 3233 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-16-124\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-124" Feb 13 19:51:29.430742 kubelet[3233]: I0213 19:51:29.430236 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/45ec7815e26ed8c1cab818f966846dcd-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-124\" (UID: \"45ec7815e26ed8c1cab818f966846dcd\") " pod="kube-system/kube-controller-manager-ip-172-31-16-124" Feb 13 19:51:29.430742 kubelet[3233]: I0213 19:51:29.430317 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/45ec7815e26ed8c1cab818f966846dcd-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-124\" (UID: \"45ec7815e26ed8c1cab818f966846dcd\") " pod="kube-system/kube-controller-manager-ip-172-31-16-124" Feb 13 19:51:29.430742 kubelet[3233]: I0213 19:51:29.430361 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/664df8d9b4935c403207fa1f96e4d674-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-124\" (UID: \"664df8d9b4935c403207fa1f96e4d674\") " pod="kube-system/kube-apiserver-ip-172-31-16-124" Feb 13 19:51:29.430742 kubelet[3233]: I0213 19:51:29.430406 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/45ec7815e26ed8c1cab818f966846dcd-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-124\" (UID: \"45ec7815e26ed8c1cab818f966846dcd\") " pod="kube-system/kube-controller-manager-ip-172-31-16-124" Feb 13 19:51:29.430742 kubelet[3233]: I0213 19:51:29.430443 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/45ec7815e26ed8c1cab818f966846dcd-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-124\" (UID: \"45ec7815e26ed8c1cab818f966846dcd\") " pod="kube-system/kube-controller-manager-ip-172-31-16-124" Feb 13 19:51:29.431128 kubelet[3233]: I0213 19:51:29.430477 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45ec7815e26ed8c1cab818f966846dcd-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-124\" (UID: \"45ec7815e26ed8c1cab818f966846dcd\") " pod="kube-system/kube-controller-manager-ip-172-31-16-124" Feb 13 19:51:29.431128 kubelet[3233]: I0213 19:51:29.430509 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6d8114d7dd6b927436f6964b3930bb3e-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-124\" (UID: \"6d8114d7dd6b927436f6964b3930bb3e\") " pod="kube-system/kube-scheduler-ip-172-31-16-124" Feb 13 19:51:29.431128 kubelet[3233]: I0213 19:51:29.430544 3233 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/664df8d9b4935c403207fa1f96e4d674-ca-certs\") pod \"kube-apiserver-ip-172-31-16-124\" (UID: \"664df8d9b4935c403207fa1f96e4d674\") " pod="kube-system/kube-apiserver-ip-172-31-16-124" Feb 13 19:51:29.431128 kubelet[3233]: I0213 19:51:29.430577 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/664df8d9b4935c403207fa1f96e4d674-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-124\" (UID: \"664df8d9b4935c403207fa1f96e4d674\") " pod="kube-system/kube-apiserver-ip-172-31-16-124" Feb 13 19:51:29.972957 kubelet[3233]: I0213 19:51:29.972821 3233 apiserver.go:52] "Watching apiserver" Feb 13 19:51:30.021718 kubelet[3233]: I0213 19:51:30.021633 3233 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:51:30.171640 kubelet[3233]: E0213 19:51:30.171520 3233 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-16-124\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-124" Feb 13 19:51:30.221368 kubelet[3233]: I0213 19:51:30.220754 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-124" podStartSLOduration=3.220733044 podStartE2EDuration="3.220733044s" podCreationTimestamp="2025-02-13 19:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:30.206915008 +0000 UTC m=+1.376620122" watchObservedRunningTime="2025-02-13 19:51:30.220733044 +0000 UTC m=+1.390438158" Feb 13 19:51:30.235596 kubelet[3233]: I0213 19:51:30.235189 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-124" podStartSLOduration=1.235167259 
podStartE2EDuration="1.235167259s" podCreationTimestamp="2025-02-13 19:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:30.222688354 +0000 UTC m=+1.392393492" watchObservedRunningTime="2025-02-13 19:51:30.235167259 +0000 UTC m=+1.404872373" Feb 13 19:51:30.254797 kubelet[3233]: I0213 19:51:30.254708 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-124" podStartSLOduration=1.25468831 podStartE2EDuration="1.25468831s" podCreationTimestamp="2025-02-13 19:51:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:30.238008878 +0000 UTC m=+1.407714004" watchObservedRunningTime="2025-02-13 19:51:30.25468831 +0000 UTC m=+1.424393424" Feb 13 19:51:30.519106 sudo[2294]: pam_unix(sudo:session): session closed for user root Feb 13 19:51:30.544539 sshd[2291]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:30.549624 systemd[1]: sshd@4-172.31.16.124:22-139.178.89.65:44802.service: Deactivated successfully. Feb 13 19:51:30.553921 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:51:30.554722 systemd[1]: session-5.scope: Consumed 9.402s CPU time, 187.2M memory peak, 0B memory swap peak. Feb 13 19:51:30.557974 systemd-logind[1994]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:51:30.560055 systemd-logind[1994]: Removed session 5. Feb 13 19:51:35.603771 update_engine[1995]: I20250213 19:51:35.603679 1995 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:51:35.688323 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3306) Feb 13 19:51:35.966346 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3306) Feb 13 19:51:36.227309 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3306) Feb 13 19:51:43.294344 kubelet[3233]: I0213 19:51:43.294298 3233 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:51:43.294930 containerd[2024]: time="2025-02-13T19:51:43.294820261Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:51:43.295521 kubelet[3233]: I0213 19:51:43.295485 3233 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:51:43.533116 kubelet[3233]: I0213 19:51:43.533038 3233 topology_manager.go:215] "Topology Admit Handler" podUID="933f50c5-0d11-423e-a8a4-6a9d2060993b" podNamespace="kube-system" podName="kube-proxy-gjg6b" Feb 13 19:51:43.554897 systemd[1]: Created slice kubepods-besteffort-pod933f50c5_0d11_423e_a8a4_6a9d2060993b.slice - libcontainer container kubepods-besteffort-pod933f50c5_0d11_423e_a8a4_6a9d2060993b.slice. Feb 13 19:51:43.589125 kubelet[3233]: I0213 19:51:43.589067 3233 topology_manager.go:215] "Topology Admit Handler" podUID="fa99dba2-e228-437b-a3d0-408ff6eea644" podNamespace="kube-flannel" podName="kube-flannel-ds-bmdvj" Feb 13 19:51:43.609194 systemd[1]: Created slice kubepods-burstable-podfa99dba2_e228_437b_a3d0_408ff6eea644.slice - libcontainer container kubepods-burstable-podfa99dba2_e228_437b_a3d0_408ff6eea644.slice. 
Feb 13 19:51:43.620914 kubelet[3233]: W0213 19:51:43.619931 3233 reflector.go:547] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ip-172-31-16-124" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-16-124' and this object Feb 13 19:51:43.620914 kubelet[3233]: E0213 19:51:43.619999 3233 reflector.go:150] object-"kube-flannel"/"kube-flannel-cfg": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ip-172-31-16-124" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-16-124' and this object Feb 13 19:51:43.620914 kubelet[3233]: W0213 19:51:43.620069 3233 reflector.go:547] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-16-124" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-16-124' and this object Feb 13 19:51:43.620914 kubelet[3233]: E0213 19:51:43.620102 3233 reflector.go:150] object-"kube-flannel"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-16-124" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-16-124' and this object Feb 13 19:51:43.625616 kubelet[3233]: I0213 19:51:43.625551 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d628v\" (UniqueName: \"kubernetes.io/projected/933f50c5-0d11-423e-a8a4-6a9d2060993b-kube-api-access-d628v\") pod \"kube-proxy-gjg6b\" (UID: \"933f50c5-0d11-423e-a8a4-6a9d2060993b\") " 
pod="kube-system/kube-proxy-gjg6b" Feb 13 19:51:43.627524 kubelet[3233]: I0213 19:51:43.627328 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/fa99dba2-e228-437b-a3d0-408ff6eea644-cni-plugin\") pod \"kube-flannel-ds-bmdvj\" (UID: \"fa99dba2-e228-437b-a3d0-408ff6eea644\") " pod="kube-flannel/kube-flannel-ds-bmdvj" Feb 13 19:51:43.627524 kubelet[3233]: I0213 19:51:43.627455 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6qs9\" (UniqueName: \"kubernetes.io/projected/fa99dba2-e228-437b-a3d0-408ff6eea644-kube-api-access-p6qs9\") pod \"kube-flannel-ds-bmdvj\" (UID: \"fa99dba2-e228-437b-a3d0-408ff6eea644\") " pod="kube-flannel/kube-flannel-ds-bmdvj" Feb 13 19:51:43.628174 kubelet[3233]: I0213 19:51:43.627795 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/933f50c5-0d11-423e-a8a4-6a9d2060993b-xtables-lock\") pod \"kube-proxy-gjg6b\" (UID: \"933f50c5-0d11-423e-a8a4-6a9d2060993b\") " pod="kube-system/kube-proxy-gjg6b" Feb 13 19:51:43.628174 kubelet[3233]: I0213 19:51:43.627862 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/fa99dba2-e228-437b-a3d0-408ff6eea644-cni\") pod \"kube-flannel-ds-bmdvj\" (UID: \"fa99dba2-e228-437b-a3d0-408ff6eea644\") " pod="kube-flannel/kube-flannel-ds-bmdvj" Feb 13 19:51:43.628174 kubelet[3233]: I0213 19:51:43.627905 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/933f50c5-0d11-423e-a8a4-6a9d2060993b-lib-modules\") pod \"kube-proxy-gjg6b\" (UID: \"933f50c5-0d11-423e-a8a4-6a9d2060993b\") " pod="kube-system/kube-proxy-gjg6b" Feb 13 19:51:43.628174 kubelet[3233]: 
I0213 19:51:43.627945 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fa99dba2-e228-437b-a3d0-408ff6eea644-run\") pod \"kube-flannel-ds-bmdvj\" (UID: \"fa99dba2-e228-437b-a3d0-408ff6eea644\") " pod="kube-flannel/kube-flannel-ds-bmdvj" Feb 13 19:51:43.628174 kubelet[3233]: I0213 19:51:43.627981 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/fa99dba2-e228-437b-a3d0-408ff6eea644-flannel-cfg\") pod \"kube-flannel-ds-bmdvj\" (UID: \"fa99dba2-e228-437b-a3d0-408ff6eea644\") " pod="kube-flannel/kube-flannel-ds-bmdvj" Feb 13 19:51:43.628470 kubelet[3233]: I0213 19:51:43.628022 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa99dba2-e228-437b-a3d0-408ff6eea644-xtables-lock\") pod \"kube-flannel-ds-bmdvj\" (UID: \"fa99dba2-e228-437b-a3d0-408ff6eea644\") " pod="kube-flannel/kube-flannel-ds-bmdvj" Feb 13 19:51:43.628470 kubelet[3233]: I0213 19:51:43.628058 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/933f50c5-0d11-423e-a8a4-6a9d2060993b-kube-proxy\") pod \"kube-proxy-gjg6b\" (UID: \"933f50c5-0d11-423e-a8a4-6a9d2060993b\") " pod="kube-system/kube-proxy-gjg6b" Feb 13 19:51:43.739939 kubelet[3233]: E0213 19:51:43.739881 3233 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:51:43.739939 kubelet[3233]: E0213 19:51:43.739931 3233 projected.go:200] Error preparing data for projected volume kube-api-access-d628v for pod kube-system/kube-proxy-gjg6b: configmap "kube-root-ca.crt" not found Feb 13 19:51:43.740176 kubelet[3233]: E0213 19:51:43.740044 3233 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/933f50c5-0d11-423e-a8a4-6a9d2060993b-kube-api-access-d628v podName:933f50c5-0d11-423e-a8a4-6a9d2060993b nodeName:}" failed. No retries permitted until 2025-02-13 19:51:44.240006922 +0000 UTC m=+15.409712048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d628v" (UniqueName: "kubernetes.io/projected/933f50c5-0d11-423e-a8a4-6a9d2060993b-kube-api-access-d628v") pod "kube-proxy-gjg6b" (UID: "933f50c5-0d11-423e-a8a4-6a9d2060993b") : configmap "kube-root-ca.crt" not found Feb 13 19:51:44.467641 containerd[2024]: time="2025-02-13T19:51:44.467567414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gjg6b,Uid:933f50c5-0d11-423e-a8a4-6a9d2060993b,Namespace:kube-system,Attempt:0,}" Feb 13 19:51:44.516295 containerd[2024]: time="2025-02-13T19:51:44.516117135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:44.517353 containerd[2024]: time="2025-02-13T19:51:44.516985416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:44.517353 containerd[2024]: time="2025-02-13T19:51:44.517044631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:44.517840 containerd[2024]: time="2025-02-13T19:51:44.517275143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:44.552186 systemd[1]: run-containerd-runc-k8s.io-e261bf409c174e2967bf3854489b294ad5956bc2ef832093a7811fc17098f350-runc.BklMY1.mount: Deactivated successfully. 
Feb 13 19:51:44.569514 systemd[1]: Started cri-containerd-e261bf409c174e2967bf3854489b294ad5956bc2ef832093a7811fc17098f350.scope - libcontainer container e261bf409c174e2967bf3854489b294ad5956bc2ef832093a7811fc17098f350. Feb 13 19:51:44.613130 containerd[2024]: time="2025-02-13T19:51:44.612957818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gjg6b,Uid:933f50c5-0d11-423e-a8a4-6a9d2060993b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e261bf409c174e2967bf3854489b294ad5956bc2ef832093a7811fc17098f350\"" Feb 13 19:51:44.621053 containerd[2024]: time="2025-02-13T19:51:44.620960803Z" level=info msg="CreateContainer within sandbox \"e261bf409c174e2967bf3854489b294ad5956bc2ef832093a7811fc17098f350\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:51:44.651323 containerd[2024]: time="2025-02-13T19:51:44.651258559Z" level=info msg="CreateContainer within sandbox \"e261bf409c174e2967bf3854489b294ad5956bc2ef832093a7811fc17098f350\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ff36f63cf7e3378307199ed24fdcc20c4fc66ce2f97d2a761e3eb748e35c8262\"" Feb 13 19:51:44.653311 containerd[2024]: time="2025-02-13T19:51:44.652296399Z" level=info msg="StartContainer for \"ff36f63cf7e3378307199ed24fdcc20c4fc66ce2f97d2a761e3eb748e35c8262\"" Feb 13 19:51:44.701552 systemd[1]: Started cri-containerd-ff36f63cf7e3378307199ed24fdcc20c4fc66ce2f97d2a761e3eb748e35c8262.scope - libcontainer container ff36f63cf7e3378307199ed24fdcc20c4fc66ce2f97d2a761e3eb748e35c8262. 
Feb 13 19:51:44.730723 kubelet[3233]: E0213 19:51:44.730598 3233 configmap.go:199] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition Feb 13 19:51:44.733781 kubelet[3233]: E0213 19:51:44.732188 3233 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fa99dba2-e228-437b-a3d0-408ff6eea644-flannel-cfg podName:fa99dba2-e228-437b-a3d0-408ff6eea644 nodeName:}" failed. No retries permitted until 2025-02-13 19:51:45.231884974 +0000 UTC m=+16.401590099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/fa99dba2-e228-437b-a3d0-408ff6eea644-flannel-cfg") pod "kube-flannel-ds-bmdvj" (UID: "fa99dba2-e228-437b-a3d0-408ff6eea644") : failed to sync configmap cache: timed out waiting for the condition Feb 13 19:51:44.740995 kubelet[3233]: E0213 19:51:44.740664 3233 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 19:51:44.740995 kubelet[3233]: E0213 19:51:44.740722 3233 projected.go:200] Error preparing data for projected volume kube-api-access-p6qs9 for pod kube-flannel/kube-flannel-ds-bmdvj: failed to sync configmap cache: timed out waiting for the condition Feb 13 19:51:44.740995 kubelet[3233]: E0213 19:51:44.740835 3233 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa99dba2-e228-437b-a3d0-408ff6eea644-kube-api-access-p6qs9 podName:fa99dba2-e228-437b-a3d0-408ff6eea644 nodeName:}" failed. No retries permitted until 2025-02-13 19:51:45.240808462 +0000 UTC m=+16.410513576 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p6qs9" (UniqueName: "kubernetes.io/projected/fa99dba2-e228-437b-a3d0-408ff6eea644-kube-api-access-p6qs9") pod "kube-flannel-ds-bmdvj" (UID: "fa99dba2-e228-437b-a3d0-408ff6eea644") : failed to sync configmap cache: timed out waiting for the condition Feb 13 19:51:44.766081 containerd[2024]: time="2025-02-13T19:51:44.764879388Z" level=info msg="StartContainer for \"ff36f63cf7e3378307199ed24fdcc20c4fc66ce2f97d2a761e3eb748e35c8262\" returns successfully" Feb 13 19:51:45.209816 kubelet[3233]: I0213 19:51:45.208234 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gjg6b" podStartSLOduration=2.208193186 podStartE2EDuration="2.208193186s" podCreationTimestamp="2025-02-13 19:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:45.206581864 +0000 UTC m=+16.376287002" watchObservedRunningTime="2025-02-13 19:51:45.208193186 +0000 UTC m=+16.377898300" Feb 13 19:51:45.418231 containerd[2024]: time="2025-02-13T19:51:45.418124648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-bmdvj,Uid:fa99dba2-e228-437b-a3d0-408ff6eea644,Namespace:kube-flannel,Attempt:0,}" Feb 13 19:51:45.469513 containerd[2024]: time="2025-02-13T19:51:45.468701847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:45.469513 containerd[2024]: time="2025-02-13T19:51:45.468784030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:45.469513 containerd[2024]: time="2025-02-13T19:51:45.468808785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:45.469513 containerd[2024]: time="2025-02-13T19:51:45.468951166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:45.514554 systemd[1]: Started cri-containerd-31279d8b184aadee0920981b0ef99745d09daa85bd595cf04a4f101f4e27d580.scope - libcontainer container 31279d8b184aadee0920981b0ef99745d09daa85bd595cf04a4f101f4e27d580. Feb 13 19:51:45.575075 containerd[2024]: time="2025-02-13T19:51:45.575001191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-bmdvj,Uid:fa99dba2-e228-437b-a3d0-408ff6eea644,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"31279d8b184aadee0920981b0ef99745d09daa85bd595cf04a4f101f4e27d580\"" Feb 13 19:51:45.579934 containerd[2024]: time="2025-02-13T19:51:45.579853400Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 19:51:48.039879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1342652770.mount: Deactivated successfully. 
Feb 13 19:51:48.112311 containerd[2024]: time="2025-02-13T19:51:48.112255581Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:48.114650 containerd[2024]: time="2025-02-13T19:51:48.114601031Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 19:51:48.116842 containerd[2024]: time="2025-02-13T19:51:48.116798764Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:48.123708 containerd[2024]: time="2025-02-13T19:51:48.123540711Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:48.125352 containerd[2024]: time="2025-02-13T19:51:48.125103685Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.544831479s" Feb 13 19:51:48.125352 containerd[2024]: time="2025-02-13T19:51:48.125162588Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 19:51:48.130364 containerd[2024]: time="2025-02-13T19:51:48.129908326Z" level=info msg="CreateContainer within sandbox \"31279d8b184aadee0920981b0ef99745d09daa85bd595cf04a4f101f4e27d580\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 19:51:48.157431 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3016717227.mount: Deactivated successfully. Feb 13 19:51:48.160400 containerd[2024]: time="2025-02-13T19:51:48.160229170Z" level=info msg="CreateContainer within sandbox \"31279d8b184aadee0920981b0ef99745d09daa85bd595cf04a4f101f4e27d580\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"055ca0d04e26f76f79d497239195b7fb09e9770d37a10638b9054ac730b2abf6\"" Feb 13 19:51:48.161971 containerd[2024]: time="2025-02-13T19:51:48.161114543Z" level=info msg="StartContainer for \"055ca0d04e26f76f79d497239195b7fb09e9770d37a10638b9054ac730b2abf6\"" Feb 13 19:51:48.215526 systemd[1]: Started cri-containerd-055ca0d04e26f76f79d497239195b7fb09e9770d37a10638b9054ac730b2abf6.scope - libcontainer container 055ca0d04e26f76f79d497239195b7fb09e9770d37a10638b9054ac730b2abf6. Feb 13 19:51:48.263537 containerd[2024]: time="2025-02-13T19:51:48.263450878Z" level=info msg="StartContainer for \"055ca0d04e26f76f79d497239195b7fb09e9770d37a10638b9054ac730b2abf6\" returns successfully" Feb 13 19:51:48.269553 systemd[1]: cri-containerd-055ca0d04e26f76f79d497239195b7fb09e9770d37a10638b9054ac730b2abf6.scope: Deactivated successfully. Feb 13 19:51:48.338460 containerd[2024]: time="2025-02-13T19:51:48.338287981Z" level=info msg="shim disconnected" id=055ca0d04e26f76f79d497239195b7fb09e9770d37a10638b9054ac730b2abf6 namespace=k8s.io Feb 13 19:51:48.338974 containerd[2024]: time="2025-02-13T19:51:48.338712772Z" level=warning msg="cleaning up after shim disconnected" id=055ca0d04e26f76f79d497239195b7fb09e9770d37a10638b9054ac730b2abf6 namespace=k8s.io Feb 13 19:51:48.338974 containerd[2024]: time="2025-02-13T19:51:48.338745492Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:48.898196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-055ca0d04e26f76f79d497239195b7fb09e9770d37a10638b9054ac730b2abf6-rootfs.mount: Deactivated successfully. 
Feb 13 19:51:49.216245 containerd[2024]: time="2025-02-13T19:51:49.215945311Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 19:51:51.425982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3353297951.mount: Deactivated successfully. Feb 13 19:51:53.321274 containerd[2024]: time="2025-02-13T19:51:53.320889952Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:53.323332 containerd[2024]: time="2025-02-13T19:51:53.323236842Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Feb 13 19:51:53.325676 containerd[2024]: time="2025-02-13T19:51:53.325595186Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:53.332155 containerd[2024]: time="2025-02-13T19:51:53.332057338Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:53.336617 containerd[2024]: time="2025-02-13T19:51:53.336411160Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 4.12039187s" Feb 13 19:51:53.336617 containerd[2024]: time="2025-02-13T19:51:53.336472317Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 19:51:53.343261 containerd[2024]: time="2025-02-13T19:51:53.342941761Z" level=info msg="CreateContainer 
within sandbox \"31279d8b184aadee0920981b0ef99745d09daa85bd595cf04a4f101f4e27d580\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:51:53.369897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296211115.mount: Deactivated successfully. Feb 13 19:51:53.371716 containerd[2024]: time="2025-02-13T19:51:53.371641424Z" level=info msg="CreateContainer within sandbox \"31279d8b184aadee0920981b0ef99745d09daa85bd595cf04a4f101f4e27d580\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"976e4e6ec4d1a824bbc85edc03afbbcaf9616d04be1a908174d745bffbc3c293\"" Feb 13 19:51:53.372538 containerd[2024]: time="2025-02-13T19:51:53.372487049Z" level=info msg="StartContainer for \"976e4e6ec4d1a824bbc85edc03afbbcaf9616d04be1a908174d745bffbc3c293\"" Feb 13 19:51:53.427530 systemd[1]: Started cri-containerd-976e4e6ec4d1a824bbc85edc03afbbcaf9616d04be1a908174d745bffbc3c293.scope - libcontainer container 976e4e6ec4d1a824bbc85edc03afbbcaf9616d04be1a908174d745bffbc3c293. Feb 13 19:51:53.475792 containerd[2024]: time="2025-02-13T19:51:53.475708889Z" level=info msg="StartContainer for \"976e4e6ec4d1a824bbc85edc03afbbcaf9616d04be1a908174d745bffbc3c293\" returns successfully" Feb 13 19:51:53.476116 systemd[1]: cri-containerd-976e4e6ec4d1a824bbc85edc03afbbcaf9616d04be1a908174d745bffbc3c293.scope: Deactivated successfully. 
Feb 13 19:51:53.548316 kubelet[3233]: I0213 19:51:53.547630 3233 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:51:53.591385 kubelet[3233]: I0213 19:51:53.591221 3233 topology_manager.go:215] "Topology Admit Handler" podUID="49953bf8-55c8-40c4-a771-e7e1ebb19439" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nxwcv" Feb 13 19:51:53.608309 kubelet[3233]: I0213 19:51:53.607586 3233 topology_manager.go:215] "Topology Admit Handler" podUID="49a8020a-3f38-435f-863f-d57f023e6d77" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rbr6n" Feb 13 19:51:53.621414 systemd[1]: Created slice kubepods-burstable-pod49953bf8_55c8_40c4_a771_e7e1ebb19439.slice - libcontainer container kubepods-burstable-pod49953bf8_55c8_40c4_a771_e7e1ebb19439.slice. Feb 13 19:51:53.648651 systemd[1]: Created slice kubepods-burstable-pod49a8020a_3f38_435f_863f_d57f023e6d77.slice - libcontainer container kubepods-burstable-pod49a8020a_3f38_435f_863f_d57f023e6d77.slice. Feb 13 19:51:53.663339 containerd[2024]: time="2025-02-13T19:51:53.663249544Z" level=info msg="shim disconnected" id=976e4e6ec4d1a824bbc85edc03afbbcaf9616d04be1a908174d745bffbc3c293 namespace=k8s.io Feb 13 19:51:53.663611 containerd[2024]: time="2025-02-13T19:51:53.663576788Z" level=warning msg="cleaning up after shim disconnected" id=976e4e6ec4d1a824bbc85edc03afbbcaf9616d04be1a908174d745bffbc3c293 namespace=k8s.io Feb 13 19:51:53.664324 containerd[2024]: time="2025-02-13T19:51:53.664261562Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:53.700112 kubelet[3233]: I0213 19:51:53.699997 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49953bf8-55c8-40c4-a771-e7e1ebb19439-config-volume\") pod \"coredns-7db6d8ff4d-nxwcv\" (UID: \"49953bf8-55c8-40c4-a771-e7e1ebb19439\") " pod="kube-system/coredns-7db6d8ff4d-nxwcv" Feb 13 19:51:53.700112 kubelet[3233]: I0213 
19:51:53.700069 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49a8020a-3f38-435f-863f-d57f023e6d77-config-volume\") pod \"coredns-7db6d8ff4d-rbr6n\" (UID: \"49a8020a-3f38-435f-863f-d57f023e6d77\") " pod="kube-system/coredns-7db6d8ff4d-rbr6n" Feb 13 19:51:53.700112 kubelet[3233]: I0213 19:51:53.700114 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djqw8\" (UniqueName: \"kubernetes.io/projected/49953bf8-55c8-40c4-a771-e7e1ebb19439-kube-api-access-djqw8\") pod \"coredns-7db6d8ff4d-nxwcv\" (UID: \"49953bf8-55c8-40c4-a771-e7e1ebb19439\") " pod="kube-system/coredns-7db6d8ff4d-nxwcv" Feb 13 19:51:53.700533 kubelet[3233]: I0213 19:51:53.700160 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbjs6\" (UniqueName: \"kubernetes.io/projected/49a8020a-3f38-435f-863f-d57f023e6d77-kube-api-access-rbjs6\") pod \"coredns-7db6d8ff4d-rbr6n\" (UID: \"49a8020a-3f38-435f-863f-d57f023e6d77\") " pod="kube-system/coredns-7db6d8ff4d-rbr6n" Feb 13 19:51:53.938745 containerd[2024]: time="2025-02-13T19:51:53.938673812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxwcv,Uid:49953bf8-55c8-40c4-a771-e7e1ebb19439,Namespace:kube-system,Attempt:0,}" Feb 13 19:51:53.963421 containerd[2024]: time="2025-02-13T19:51:53.962796516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rbr6n,Uid:49a8020a-3f38-435f-863f-d57f023e6d77,Namespace:kube-system,Attempt:0,}" Feb 13 19:51:54.000372 containerd[2024]: time="2025-02-13T19:51:54.000288513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxwcv,Uid:49953bf8-55c8-40c4-a771-e7e1ebb19439,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"5cee326590d388587b030a5f6f7eff8f7a588c995d6f9cee163813d29dbda0d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:51:54.000961 kubelet[3233]: E0213 19:51:54.000900 3233 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cee326590d388587b030a5f6f7eff8f7a588c995d6f9cee163813d29dbda0d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:51:54.001123 kubelet[3233]: E0213 19:51:54.000990 3233 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cee326590d388587b030a5f6f7eff8f7a588c995d6f9cee163813d29dbda0d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-nxwcv" Feb 13 19:51:54.001123 kubelet[3233]: E0213 19:51:54.001024 3233 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cee326590d388587b030a5f6f7eff8f7a588c995d6f9cee163813d29dbda0d0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-nxwcv" Feb 13 19:51:54.002725 kubelet[3233]: E0213 19:51:54.001102 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nxwcv_kube-system(49953bf8-55c8-40c4-a771-e7e1ebb19439)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nxwcv_kube-system(49953bf8-55c8-40c4-a771-e7e1ebb19439)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cee326590d388587b030a5f6f7eff8f7a588c995d6f9cee163813d29dbda0d0\\\": 
plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-nxwcv" podUID="49953bf8-55c8-40c4-a771-e7e1ebb19439" Feb 13 19:51:54.008692 containerd[2024]: time="2025-02-13T19:51:54.008608523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rbr6n,Uid:49a8020a-3f38-435f-863f-d57f023e6d77,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5ab071a8666304b93c695dce40600735d8c07a47d004bb25c26ee5e1052bae2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:51:54.009547 kubelet[3233]: E0213 19:51:54.009025 3233 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ab071a8666304b93c695dce40600735d8c07a47d004bb25c26ee5e1052bae2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:51:54.009547 kubelet[3233]: E0213 19:51:54.009100 3233 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ab071a8666304b93c695dce40600735d8c07a47d004bb25c26ee5e1052bae2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-rbr6n" Feb 13 19:51:54.009547 kubelet[3233]: E0213 19:51:54.009137 3233 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5ab071a8666304b93c695dce40600735d8c07a47d004bb25c26ee5e1052bae2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-rbr6n" Feb 13 
19:51:54.009547 kubelet[3233]: E0213 19:51:54.009233 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rbr6n_kube-system(49a8020a-3f38-435f-863f-d57f023e6d77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rbr6n_kube-system(49a8020a-3f38-435f-863f-d57f023e6d77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5ab071a8666304b93c695dce40600735d8c07a47d004bb25c26ee5e1052bae2\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-rbr6n" podUID="49a8020a-3f38-435f-863f-d57f023e6d77" Feb 13 19:51:54.232932 containerd[2024]: time="2025-02-13T19:51:54.232423400Z" level=info msg="CreateContainer within sandbox \"31279d8b184aadee0920981b0ef99745d09daa85bd595cf04a4f101f4e27d580\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 19:51:54.252220 containerd[2024]: time="2025-02-13T19:51:54.252105278Z" level=info msg="CreateContainer within sandbox \"31279d8b184aadee0920981b0ef99745d09daa85bd595cf04a4f101f4e27d580\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"afbc8d3201bc3f17d2c7ae3a3336ade60885879fdd9b2cb6be4e6022b71840f4\"" Feb 13 19:51:54.254011 containerd[2024]: time="2025-02-13T19:51:54.253050586Z" level=info msg="StartContainer for \"afbc8d3201bc3f17d2c7ae3a3336ade60885879fdd9b2cb6be4e6022b71840f4\"" Feb 13 19:51:54.302499 systemd[1]: Started cri-containerd-afbc8d3201bc3f17d2c7ae3a3336ade60885879fdd9b2cb6be4e6022b71840f4.scope - libcontainer container afbc8d3201bc3f17d2c7ae3a3336ade60885879fdd9b2cb6be4e6022b71840f4. 
Feb 13 19:51:54.347698 containerd[2024]: time="2025-02-13T19:51:54.347630772Z" level=info msg="StartContainer for \"afbc8d3201bc3f17d2c7ae3a3336ade60885879fdd9b2cb6be4e6022b71840f4\" returns successfully" Feb 13 19:51:54.372162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-976e4e6ec4d1a824bbc85edc03afbbcaf9616d04be1a908174d745bffbc3c293-rootfs.mount: Deactivated successfully. Feb 13 19:51:55.437098 (udev-worker)[4037]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:51:55.459141 systemd-networkd[1924]: flannel.1: Link UP Feb 13 19:51:55.459370 systemd-networkd[1924]: flannel.1: Gained carrier Feb 13 19:51:56.973578 systemd-networkd[1924]: flannel.1: Gained IPv6LL Feb 13 19:51:59.338727 ntpd[1987]: Listen normally on 8 flannel.1 192.168.0.0:123 Feb 13 19:51:59.339688 ntpd[1987]: 13 Feb 19:51:59 ntpd[1987]: Listen normally on 8 flannel.1 192.168.0.0:123 Feb 13 19:51:59.339688 ntpd[1987]: 13 Feb 19:51:59 ntpd[1987]: Listen normally on 9 flannel.1 [fe80::408:e5ff:fefe:cfdf%4]:123 Feb 13 19:51:59.338849 ntpd[1987]: Listen normally on 9 flannel.1 [fe80::408:e5ff:fefe:cfdf%4]:123 Feb 13 19:52:07.068970 containerd[2024]: time="2025-02-13T19:52:07.068440139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxwcv,Uid:49953bf8-55c8-40c4-a771-e7e1ebb19439,Namespace:kube-system,Attempt:0,}" Feb 13 19:52:07.108370 systemd-networkd[1924]: cni0: Link UP Feb 13 19:52:07.108392 systemd-networkd[1924]: cni0: Gained carrier Feb 13 19:52:07.115747 systemd-networkd[1924]: veth7e31da5f: Link UP Feb 13 19:52:07.119027 kernel: cni0: port 1(veth7e31da5f) entered blocking state Feb 13 19:52:07.119132 kernel: cni0: port 1(veth7e31da5f) entered disabled state Feb 13 19:52:07.120373 kernel: veth7e31da5f: entered allmulticast mode Feb 13 19:52:07.122336 kernel: veth7e31da5f: entered promiscuous mode Feb 13 19:52:07.122377 systemd-networkd[1924]: cni0: Lost carrier Feb 13 19:52:07.124515 (udev-worker)[4178]: Network 
interface NamePolicy= disabled on kernel command line. Feb 13 19:52:07.125331 (udev-worker)[4176]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:52:07.135333 kernel: cni0: port 1(veth7e31da5f) entered blocking state Feb 13 19:52:07.135461 kernel: cni0: port 1(veth7e31da5f) entered forwarding state Feb 13 19:52:07.135492 systemd-networkd[1924]: veth7e31da5f: Gained carrier Feb 13 19:52:07.139983 systemd-networkd[1924]: cni0: Gained carrier Feb 13 19:52:07.142557 containerd[2024]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Feb 13 19:52:07.142557 containerd[2024]: delegateAdd: netconf sent to delegate plugin: Feb 13 19:52:07.186634 containerd[2024]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T19:52:07.186317448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:52:07.187778 containerd[2024]: time="2025-02-13T19:52:07.187380584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:52:07.187778 containerd[2024]: time="2025-02-13T19:52:07.187434677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:07.187778 containerd[2024]: time="2025-02-13T19:52:07.187586701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:07.228549 systemd[1]: Started cri-containerd-2d80b149943cafb99331a44721d5ad5d1c6a01b858a82567f46c7cc8be3ec4e1.scope - libcontainer container 2d80b149943cafb99331a44721d5ad5d1c6a01b858a82567f46c7cc8be3ec4e1. Feb 13 19:52:07.298143 containerd[2024]: time="2025-02-13T19:52:07.298003177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nxwcv,Uid:49953bf8-55c8-40c4-a771-e7e1ebb19439,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d80b149943cafb99331a44721d5ad5d1c6a01b858a82567f46c7cc8be3ec4e1\"" Feb 13 19:52:07.305479 containerd[2024]: time="2025-02-13T19:52:07.305115091Z" level=info msg="CreateContainer within sandbox \"2d80b149943cafb99331a44721d5ad5d1c6a01b858a82567f46c7cc8be3ec4e1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:52:07.329087 containerd[2024]: time="2025-02-13T19:52:07.328375044Z" level=info msg="CreateContainer within sandbox \"2d80b149943cafb99331a44721d5ad5d1c6a01b858a82567f46c7cc8be3ec4e1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c8bad33ff0a32be0f571840a2a81f83986d12498ff0c1e249072c3aecf0d04c6\"" Feb 13 19:52:07.330490 containerd[2024]: time="2025-02-13T19:52:07.330105514Z" level=info msg="StartContainer for \"c8bad33ff0a32be0f571840a2a81f83986d12498ff0c1e249072c3aecf0d04c6\"" Feb 13 19:52:07.376519 systemd[1]: Started cri-containerd-c8bad33ff0a32be0f571840a2a81f83986d12498ff0c1e249072c3aecf0d04c6.scope - libcontainer container c8bad33ff0a32be0f571840a2a81f83986d12498ff0c1e249072c3aecf0d04c6. 
Feb 13 19:52:07.423957 containerd[2024]: time="2025-02-13T19:52:07.423830337Z" level=info msg="StartContainer for \"c8bad33ff0a32be0f571840a2a81f83986d12498ff0c1e249072c3aecf0d04c6\" returns successfully" Feb 13 19:52:08.068690 containerd[2024]: time="2025-02-13T19:52:08.068569488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rbr6n,Uid:49a8020a-3f38-435f-863f-d57f023e6d77,Namespace:kube-system,Attempt:0,}" Feb 13 19:52:08.084819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1726580341.mount: Deactivated successfully. Feb 13 19:52:08.105695 kernel: cni0: port 2(vethf716902e) entered blocking state Feb 13 19:52:08.105825 kernel: cni0: port 2(vethf716902e) entered disabled state Feb 13 19:52:08.105564 systemd-networkd[1924]: vethf716902e: Link UP Feb 13 19:52:08.106931 kernel: vethf716902e: entered allmulticast mode Feb 13 19:52:08.107360 kernel: vethf716902e: entered promiscuous mode Feb 13 19:52:08.111644 kernel: cni0: port 2(vethf716902e) entered blocking state Feb 13 19:52:08.111770 kernel: cni0: port 2(vethf716902e) entered forwarding state Feb 13 19:52:08.117847 systemd-networkd[1924]: vethf716902e: Gained carrier Feb 13 19:52:08.120915 containerd[2024]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Feb 13 19:52:08.120915 containerd[2024]: delegateAdd: netconf sent to delegate plugin: Feb 13 19:52:08.167437 containerd[2024]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T19:52:08.167090399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:52:08.167710 containerd[2024]: time="2025-02-13T19:52:08.167246465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:52:08.168181 containerd[2024]: time="2025-02-13T19:52:08.168009283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:08.168836 containerd[2024]: time="2025-02-13T19:52:08.168646153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:08.208524 systemd[1]: Started cri-containerd-7cff12a828d5014c922f8c3c84c140507755d1e172f46c784af625e243616e43.scope - libcontainer container 7cff12a828d5014c922f8c3c84c140507755d1e172f46c784af625e243616e43. 
Feb 13 19:52:08.277616 containerd[2024]: time="2025-02-13T19:52:08.277519313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rbr6n,Uid:49a8020a-3f38-435f-863f-d57f023e6d77,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cff12a828d5014c922f8c3c84c140507755d1e172f46c784af625e243616e43\"" Feb 13 19:52:08.298923 kubelet[3233]: I0213 19:52:08.298099 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-bmdvj" podStartSLOduration=17.536124176 podStartE2EDuration="25.298075122s" podCreationTimestamp="2025-02-13 19:51:43 +0000 UTC" firstStartedPulling="2025-02-13 19:51:45.577098066 +0000 UTC m=+16.746803180" lastFinishedPulling="2025-02-13 19:51:53.339049 +0000 UTC m=+24.508754126" observedRunningTime="2025-02-13 19:51:55.24988908 +0000 UTC m=+26.419594206" watchObservedRunningTime="2025-02-13 19:52:08.298075122 +0000 UTC m=+39.467780236" Feb 13 19:52:08.298923 kubelet[3233]: I0213 19:52:08.298371 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nxwcv" podStartSLOduration=24.298358324 podStartE2EDuration="24.298358324s" podCreationTimestamp="2025-02-13 19:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:52:08.296292922 +0000 UTC m=+39.465998120" watchObservedRunningTime="2025-02-13 19:52:08.298358324 +0000 UTC m=+39.468063462" Feb 13 19:52:08.301323 containerd[2024]: time="2025-02-13T19:52:08.300889154Z" level=info msg="CreateContainer within sandbox \"7cff12a828d5014c922f8c3c84c140507755d1e172f46c784af625e243616e43\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:52:08.325640 containerd[2024]: time="2025-02-13T19:52:08.324639205Z" level=info msg="CreateContainer within sandbox \"7cff12a828d5014c922f8c3c84c140507755d1e172f46c784af625e243616e43\" for &ContainerMetadata{Name:coredns,Attempt:0,} 
returns container id \"0008a0725c05e1e042eaa7a94904f14074f015ece096fdbf9b449773359a548c\"" Feb 13 19:52:08.330564 containerd[2024]: time="2025-02-13T19:52:08.328926245Z" level=info msg="StartContainer for \"0008a0725c05e1e042eaa7a94904f14074f015ece096fdbf9b449773359a548c\"" Feb 13 19:52:08.397010 systemd[1]: Started cri-containerd-0008a0725c05e1e042eaa7a94904f14074f015ece096fdbf9b449773359a548c.scope - libcontainer container 0008a0725c05e1e042eaa7a94904f14074f015ece096fdbf9b449773359a548c. Feb 13 19:52:08.448380 containerd[2024]: time="2025-02-13T19:52:08.448307205Z" level=info msg="StartContainer for \"0008a0725c05e1e042eaa7a94904f14074f015ece096fdbf9b449773359a548c\" returns successfully" Feb 13 19:52:08.557660 systemd-networkd[1924]: veth7e31da5f: Gained IPv6LL Feb 13 19:52:09.069415 systemd-networkd[1924]: cni0: Gained IPv6LL Feb 13 19:52:09.197503 systemd-networkd[1924]: vethf716902e: Gained IPv6LL Feb 13 19:52:10.080843 systemd[1]: Started sshd@5-172.31.16.124:22-139.178.89.65:39376.service - OpenSSH per-connection server daemon (139.178.89.65:39376). Feb 13 19:52:10.265166 sshd[4383]: Accepted publickey for core from 139.178.89.65 port 39376 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:52:10.267971 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:10.276921 systemd-logind[1994]: New session 6 of user core. Feb 13 19:52:10.288526 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:52:10.548130 sshd[4383]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:10.555525 systemd[1]: sshd@5-172.31.16.124:22-139.178.89.65:39376.service: Deactivated successfully. Feb 13 19:52:10.559162 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:52:10.568230 systemd-logind[1994]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:52:10.570109 systemd-logind[1994]: Removed session 6. 
Feb 13 19:52:11.338506 ntpd[1987]: Listen normally on 10 cni0 192.168.0.1:123 Feb 13 19:52:11.338653 ntpd[1987]: Listen normally on 11 cni0 [fe80::288b:a2ff:fee0:7eee%5]:123 Feb 13 19:52:11.338738 ntpd[1987]: Listen normally on 12 veth7e31da5f [fe80::c068:fbff:fe73:d77d%6]:123 Feb 13 19:52:11.338813 ntpd[1987]: Listen normally on 13 vethf716902e [fe80::c8e8:c8ff:fe72:7375%7]:123 Feb 13 19:52:13.974981 kubelet[3233]: I0213 19:52:13.974632 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rbr6n" podStartSLOduration=29.97460503 podStartE2EDuration="29.97460503s" podCreationTimestamp="2025-02-13 19:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:52:09.288439315 +0000 UTC m=+40.458144441" watchObservedRunningTime="2025-02-13 19:52:13.97460503 +0000 UTC m=+45.144310144" Feb 13 19:52:15.588819 systemd[1]: Started sshd@6-172.31.16.124:22-139.178.89.65:53984.service - OpenSSH per-connection server daemon (139.178.89.65:53984). Feb 13 19:52:15.768438 sshd[4424]: Accepted publickey for core from 139.178.89.65 port 53984 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:52:15.771159 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:15.780418 systemd-logind[1994]: New session 7 of user core. 
Feb 13 19:52:15.790484 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:52:16.032324 sshd[4424]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:16.038324 systemd[1]: sshd@6-172.31.16.124:22-139.178.89.65:53984.service: Deactivated successfully. Feb 13 19:52:16.042715 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:52:16.044354 systemd-logind[1994]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:52:16.046105 systemd-logind[1994]: Removed session 7. Feb 13 19:52:21.071779 systemd[1]: Started sshd@7-172.31.16.124:22-139.178.89.65:53986.service - OpenSSH per-connection server daemon (139.178.89.65:53986). Feb 13 19:52:21.244477 sshd[4480]: Accepted publickey for core from 139.178.89.65 port 53986 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:52:21.247104 sshd[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:21.255812 systemd-logind[1994]: New session 8 of user core. Feb 13 19:52:21.261497 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:52:21.506015 sshd[4480]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:21.511321 systemd[1]: sshd@7-172.31.16.124:22-139.178.89.65:53986.service: Deactivated successfully. Feb 13 19:52:21.515638 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:52:21.518959 systemd-logind[1994]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:52:21.520793 systemd-logind[1994]: Removed session 8. Feb 13 19:52:26.542706 systemd[1]: Started sshd@8-172.31.16.124:22-139.178.89.65:36190.service - OpenSSH per-connection server daemon (139.178.89.65:36190). 
Feb 13 19:52:26.728522 sshd[4514]: Accepted publickey for core from 139.178.89.65 port 36190 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:52:26.731280 sshd[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:26.739320 systemd-logind[1994]: New session 9 of user core. Feb 13 19:52:26.742543 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:52:26.986003 sshd[4514]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:26.992397 systemd[1]: sshd@8-172.31.16.124:22-139.178.89.65:36190.service: Deactivated successfully. Feb 13 19:52:26.996758 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:52:26.999575 systemd-logind[1994]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:52:27.001872 systemd-logind[1994]: Removed session 9. Feb 13 19:52:27.024792 systemd[1]: Started sshd@9-172.31.16.124:22-139.178.89.65:36194.service - OpenSSH per-connection server daemon (139.178.89.65:36194). Feb 13 19:52:27.203590 sshd[4527]: Accepted publickey for core from 139.178.89.65 port 36194 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:52:27.206289 sshd[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:27.213680 systemd-logind[1994]: New session 10 of user core. Feb 13 19:52:27.223695 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:52:27.529130 sshd[4527]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:27.538844 systemd-logind[1994]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:52:27.539978 systemd[1]: sshd@9-172.31.16.124:22-139.178.89.65:36194.service: Deactivated successfully. Feb 13 19:52:27.547516 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:52:27.570320 systemd-logind[1994]: Removed session 10. 
Feb 13 19:52:27.581663 systemd[1]: Started sshd@10-172.31.16.124:22-139.178.89.65:36206.service - OpenSSH per-connection server daemon (139.178.89.65:36206). Feb 13 19:52:27.756330 sshd[4538]: Accepted publickey for core from 139.178.89.65 port 36206 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:52:27.758598 sshd[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:27.765789 systemd-logind[1994]: New session 11 of user core. Feb 13 19:52:27.776543 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:52:28.016276 sshd[4538]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:28.023923 systemd[1]: sshd@10-172.31.16.124:22-139.178.89.65:36206.service: Deactivated successfully. Feb 13 19:52:28.029420 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:52:28.031483 systemd-logind[1994]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:52:28.033283 systemd-logind[1994]: Removed session 11. Feb 13 19:52:33.054812 systemd[1]: Started sshd@11-172.31.16.124:22-139.178.89.65:36210.service - OpenSSH per-connection server daemon (139.178.89.65:36210). Feb 13 19:52:33.239595 sshd[4574]: Accepted publickey for core from 139.178.89.65 port 36210 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:52:33.242371 sshd[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:33.250928 systemd-logind[1994]: New session 12 of user core. Feb 13 19:52:33.257470 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:52:33.498698 sshd[4574]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:33.505807 systemd[1]: sshd@11-172.31.16.124:22-139.178.89.65:36210.service: Deactivated successfully. Feb 13 19:52:33.510413 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:52:33.513019 systemd-logind[1994]: Session 12 logged out. Waiting for processes to exit. 
Feb 13 19:52:33.514985 systemd-logind[1994]: Removed session 12. Feb 13 19:52:33.537777 systemd[1]: Started sshd@12-172.31.16.124:22-139.178.89.65:36226.service - OpenSSH per-connection server daemon (139.178.89.65:36226). Feb 13 19:52:33.721665 sshd[4587]: Accepted publickey for core from 139.178.89.65 port 36226 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:52:33.724437 sshd[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:33.733740 systemd-logind[1994]: New session 13 of user core. Feb 13 19:52:33.742508 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:52:34.035035 sshd[4587]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:34.042076 systemd[1]: sshd@12-172.31.16.124:22-139.178.89.65:36226.service: Deactivated successfully. Feb 13 19:52:34.045949 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:52:34.047382 systemd-logind[1994]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:52:34.049128 systemd-logind[1994]: Removed session 13. Feb 13 19:52:34.076817 systemd[1]: Started sshd@13-172.31.16.124:22-139.178.89.65:36228.service - OpenSSH per-connection server daemon (139.178.89.65:36228). Feb 13 19:52:34.253739 sshd[4598]: Accepted publickey for core from 139.178.89.65 port 36228 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:52:34.257427 sshd[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:34.267093 systemd-logind[1994]: New session 14 of user core. Feb 13 19:52:34.272559 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:52:36.562343 sshd[4598]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:36.573724 systemd[1]: sshd@13-172.31.16.124:22-139.178.89.65:36228.service: Deactivated successfully. Feb 13 19:52:36.581509 systemd[1]: session-14.scope: Deactivated successfully. 
Feb 13 19:52:36.585331 systemd-logind[1994]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:52:36.608592 systemd[1]: Started sshd@14-172.31.16.124:22-139.178.89.65:39840.service - OpenSSH per-connection server daemon (139.178.89.65:39840). Feb 13 19:52:36.613138 systemd-logind[1994]: Removed session 14. Feb 13 19:52:36.782326 sshd[4637]: Accepted publickey for core from 139.178.89.65 port 39840 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:52:36.785022 sshd[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:36.792302 systemd-logind[1994]: New session 15 of user core. Feb 13 19:52:36.801473 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:52:37.271013 sshd[4637]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:37.277750 systemd-logind[1994]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:52:37.278960 systemd[1]: sshd@14-172.31.16.124:22-139.178.89.65:39840.service: Deactivated successfully. Feb 13 19:52:37.283548 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:52:37.287944 systemd-logind[1994]: Removed session 15. Feb 13 19:52:37.311747 systemd[1]: Started sshd@15-172.31.16.124:22-139.178.89.65:39852.service - OpenSSH per-connection server daemon (139.178.89.65:39852). Feb 13 19:52:37.482530 sshd[4648]: Accepted publickey for core from 139.178.89.65 port 39852 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:52:37.485168 sshd[4648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:37.493622 systemd-logind[1994]: New session 16 of user core. Feb 13 19:52:37.498533 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:52:37.738630 sshd[4648]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:37.744433 systemd[1]: sshd@15-172.31.16.124:22-139.178.89.65:39852.service: Deactivated successfully. 
Feb 13 19:52:37.749372 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:52:37.753335 systemd-logind[1994]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:52:37.755522 systemd-logind[1994]: Removed session 16. Feb 13 19:52:42.773063 systemd[1]: Started sshd@16-172.31.16.124:22-139.178.89.65:39860.service - OpenSSH per-connection server daemon (139.178.89.65:39860). Feb 13 19:52:42.950618 sshd[4682]: Accepted publickey for core from 139.178.89.65 port 39860 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:52:42.953369 sshd[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:42.961863 systemd-logind[1994]: New session 17 of user core. Feb 13 19:52:42.972495 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:52:43.210659 sshd[4682]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:43.218855 systemd[1]: sshd@16-172.31.16.124:22-139.178.89.65:39860.service: Deactivated successfully. Feb 13 19:52:43.224023 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:52:43.227422 systemd-logind[1994]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:52:43.229727 systemd-logind[1994]: Removed session 17. Feb 13 19:52:48.254025 systemd[1]: Started sshd@17-172.31.16.124:22-139.178.89.65:34424.service - OpenSSH per-connection server daemon (139.178.89.65:34424). Feb 13 19:52:48.422573 sshd[4721]: Accepted publickey for core from 139.178.89.65 port 34424 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:52:48.425358 sshd[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:48.435177 systemd-logind[1994]: New session 18 of user core. Feb 13 19:52:48.437530 systemd[1]: Started session-18.scope - Session 18 of User core. 
Feb 13 19:52:48.670111 sshd[4721]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:48.676558 systemd[1]: sshd@17-172.31.16.124:22-139.178.89.65:34424.service: Deactivated successfully. Feb 13 19:52:48.679801 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:52:48.681337 systemd-logind[1994]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:52:48.683478 systemd-logind[1994]: Removed session 18. Feb 13 19:52:53.712729 systemd[1]: Started sshd@18-172.31.16.124:22-139.178.89.65:34434.service - OpenSSH per-connection server daemon (139.178.89.65:34434). Feb 13 19:52:53.887845 sshd[4755]: Accepted publickey for core from 139.178.89.65 port 34434 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:52:53.891358 sshd[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:53.900924 systemd-logind[1994]: New session 19 of user core. Feb 13 19:52:53.906506 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:52:54.139192 sshd[4755]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:54.146254 systemd[1]: sshd@18-172.31.16.124:22-139.178.89.65:34434.service: Deactivated successfully. Feb 13 19:52:54.150356 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:52:54.152799 systemd-logind[1994]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:52:54.155005 systemd-logind[1994]: Removed session 19. Feb 13 19:52:59.176731 systemd[1]: Started sshd@19-172.31.16.124:22-139.178.89.65:47790.service - OpenSSH per-connection server daemon (139.178.89.65:47790). Feb 13 19:52:59.357922 sshd[4791]: Accepted publickey for core from 139.178.89.65 port 47790 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:52:59.360675 sshd[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:52:59.368028 systemd-logind[1994]: New session 20 of user core. 
Feb 13 19:52:59.377488 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:52:59.623631 sshd[4791]: pam_unix(sshd:session): session closed for user core Feb 13 19:52:59.628981 systemd[1]: sshd@19-172.31.16.124:22-139.178.89.65:47790.service: Deactivated successfully. Feb 13 19:52:59.632771 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:52:59.636515 systemd-logind[1994]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:52:59.638928 systemd-logind[1994]: Removed session 20. Feb 13 19:53:13.857996 systemd[1]: cri-containerd-d3d7fbbb1d5734426f386bd4db57422c0d594aa98c2c768636a43366c47ef1b3.scope: Deactivated successfully. Feb 13 19:53:13.860339 systemd[1]: cri-containerd-d3d7fbbb1d5734426f386bd4db57422c0d594aa98c2c768636a43366c47ef1b3.scope: Consumed 3.128s CPU time, 22.3M memory peak, 0B memory swap peak. Feb 13 19:53:13.898739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3d7fbbb1d5734426f386bd4db57422c0d594aa98c2c768636a43366c47ef1b3-rootfs.mount: Deactivated successfully. 
Feb 13 19:53:13.909869 containerd[2024]: time="2025-02-13T19:53:13.909780114Z" level=info msg="shim disconnected" id=d3d7fbbb1d5734426f386bd4db57422c0d594aa98c2c768636a43366c47ef1b3 namespace=k8s.io Feb 13 19:53:13.909869 containerd[2024]: time="2025-02-13T19:53:13.909860485Z" level=warning msg="cleaning up after shim disconnected" id=d3d7fbbb1d5734426f386bd4db57422c0d594aa98c2c768636a43366c47ef1b3 namespace=k8s.io Feb 13 19:53:13.910817 containerd[2024]: time="2025-02-13T19:53:13.909882375Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:53:14.423665 kubelet[3233]: I0213 19:53:14.423358 3233 scope.go:117] "RemoveContainer" containerID="d3d7fbbb1d5734426f386bd4db57422c0d594aa98c2c768636a43366c47ef1b3" Feb 13 19:53:14.428387 containerd[2024]: time="2025-02-13T19:53:14.428151008Z" level=info msg="CreateContainer within sandbox \"8d516c452d335b21ed431bed00b0e1b3df4de88b453a0c2962ececc29836774b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 19:53:14.459522 containerd[2024]: time="2025-02-13T19:53:14.459445297Z" level=info msg="CreateContainer within sandbox \"8d516c452d335b21ed431bed00b0e1b3df4de88b453a0c2962ececc29836774b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e92d50e13f0a5fa6a76d941bd807dbd5a09823866666a38294a650544c3c7eb2\"" Feb 13 19:53:14.460288 containerd[2024]: time="2025-02-13T19:53:14.460243114Z" level=info msg="StartContainer for \"e92d50e13f0a5fa6a76d941bd807dbd5a09823866666a38294a650544c3c7eb2\"" Feb 13 19:53:14.512568 systemd[1]: Started cri-containerd-e92d50e13f0a5fa6a76d941bd807dbd5a09823866666a38294a650544c3c7eb2.scope - libcontainer container e92d50e13f0a5fa6a76d941bd807dbd5a09823866666a38294a650544c3c7eb2. 
Feb 13 19:53:14.581860 containerd[2024]: time="2025-02-13T19:53:14.581660295Z" level=info msg="StartContainer for \"e92d50e13f0a5fa6a76d941bd807dbd5a09823866666a38294a650544c3c7eb2\" returns successfully" Feb 13 19:53:19.309095 systemd[1]: cri-containerd-bd3a6e94827b807afa657a85573f51db097e0c7798777e470001450cc7cc503d.scope: Deactivated successfully. Feb 13 19:53:19.309888 systemd[1]: cri-containerd-bd3a6e94827b807afa657a85573f51db097e0c7798777e470001450cc7cc503d.scope: Consumed 4.109s CPU time, 15.6M memory peak, 0B memory swap peak. Feb 13 19:53:19.347959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd3a6e94827b807afa657a85573f51db097e0c7798777e470001450cc7cc503d-rootfs.mount: Deactivated successfully. Feb 13 19:53:19.361351 containerd[2024]: time="2025-02-13T19:53:19.361238821Z" level=info msg="shim disconnected" id=bd3a6e94827b807afa657a85573f51db097e0c7798777e470001450cc7cc503d namespace=k8s.io Feb 13 19:53:19.361351 containerd[2024]: time="2025-02-13T19:53:19.361315786Z" level=warning msg="cleaning up after shim disconnected" id=bd3a6e94827b807afa657a85573f51db097e0c7798777e470001450cc7cc503d namespace=k8s.io Feb 13 19:53:19.361351 containerd[2024]: time="2025-02-13T19:53:19.361336764Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:53:19.443348 kubelet[3233]: I0213 19:53:19.443262 3233 scope.go:117] "RemoveContainer" containerID="bd3a6e94827b807afa657a85573f51db097e0c7798777e470001450cc7cc503d" Feb 13 19:53:19.447261 containerd[2024]: time="2025-02-13T19:53:19.447154950Z" level=info msg="CreateContainer within sandbox \"d89b3947621f3bbcb57516b3456d6ebc010671af1d2c67d53b81f7046fdeaa47\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 13 19:53:19.478439 containerd[2024]: time="2025-02-13T19:53:19.478360243Z" level=info msg="CreateContainer within sandbox \"d89b3947621f3bbcb57516b3456d6ebc010671af1d2c67d53b81f7046fdeaa47\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id 
\"56655bcca5edb796ed822b77b8c97e9f78ded2ef2cc1ecad11228c80ab028b4d\"" Feb 13 19:53:19.479102 containerd[2024]: time="2025-02-13T19:53:19.479043482Z" level=info msg="StartContainer for \"56655bcca5edb796ed822b77b8c97e9f78ded2ef2cc1ecad11228c80ab028b4d\"" Feb 13 19:53:19.535526 systemd[1]: Started cri-containerd-56655bcca5edb796ed822b77b8c97e9f78ded2ef2cc1ecad11228c80ab028b4d.scope - libcontainer container 56655bcca5edb796ed822b77b8c97e9f78ded2ef2cc1ecad11228c80ab028b4d. Feb 13 19:53:19.609939 containerd[2024]: time="2025-02-13T19:53:19.609716613Z" level=info msg="StartContainer for \"56655bcca5edb796ed822b77b8c97e9f78ded2ef2cc1ecad11228c80ab028b4d\" returns successfully" Feb 13 19:53:21.521233 kubelet[3233]: E0213 19:53:21.520769 3233 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-124?timeout=10s\": context deadline exceeded" Feb 13 19:53:31.522759 kubelet[3233]: E0213 19:53:31.521939 3233 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-124?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"