Feb 13 19:01:37.184082 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:01:37.184130 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 17:39:57 -00 2025
Feb 13 19:01:37.184155 kernel: KASLR disabled due to lack of seed
Feb 13 19:01:37.184172 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:01:37.184188 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Feb 13 19:01:37.184204 kernel: secureboot: Secure boot disabled
Feb 13 19:01:37.184221 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:01:37.184237 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:01:37.184252 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:01:37.184268 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:01:37.184289 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:01:37.184305 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:01:37.184320 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:01:37.184336 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:01:37.184355 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:01:37.186463 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:01:37.186505 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:01:37.186524 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:01:37.186543 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:01:37.186561 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:01:37.186579 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:01:37.186596 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:01:37.186613 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:01:37.186630 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:01:37.186647 kernel: Zone ranges:
Feb 13 19:01:37.186663 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:01:37.186691 kernel: DMA32 empty
Feb 13 19:01:37.186709 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:01:37.186727 kernel: Movable zone start for each node
Feb 13 19:01:37.186743 kernel: Early memory node ranges
Feb 13 19:01:37.186760 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:01:37.186776 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:01:37.186793 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:01:37.186810 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:01:37.186827 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:01:37.186844 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:01:37.186861 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:01:37.186878 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:01:37.186901 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:01:37.186919 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:01:37.186943 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:01:37.186961 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:01:37.186979 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:01:37.187001 kernel: psci: Trusted OS migration not required
Feb 13 19:01:37.187018 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:01:37.187036 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:01:37.187053 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:01:37.187071 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:01:37.187088 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:01:37.187107 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:01:37.187125 kernel: CPU features: detected: Spectre-v2
Feb 13 19:01:37.187142 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:01:37.187159 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:01:37.187177 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:01:37.187195 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:01:37.187217 kernel: alternatives: applying boot alternatives
Feb 13 19:01:37.187238 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:01:37.187257 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:01:37.187276 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:01:37.187294 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:01:37.187311 kernel: Fallback order for Node 0: 0
Feb 13 19:01:37.187329 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:01:37.187347 kernel: Policy zone: Normal
Feb 13 19:01:37.187365 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:01:37.187417 kernel: software IO TLB: area num 2.
Feb 13 19:01:37.187446 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:01:37.187465 kernel: Memory: 3821240K/4030464K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 209224K reserved, 0K cma-reserved)
Feb 13 19:01:37.187484 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:01:37.187502 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:01:37.187521 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:01:37.187540 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:01:37.187558 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:01:37.187576 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:01:37.187594 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:01:37.187612 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:01:37.187630 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:01:37.187653 kernel: GICv3: 96 SPIs implemented
Feb 13 19:01:37.187670 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:01:37.187688 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:01:37.187707 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:01:37.187725 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:01:37.187744 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:01:37.187763 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:01:37.187783 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:01:37.187802 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:01:37.187821 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:01:37.187842 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:01:37.187861 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:01:37.187886 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:01:37.187906 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:01:37.187927 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:01:37.187947 kernel: Console: colour dummy device 80x25
Feb 13 19:01:37.187966 kernel: printk: console [tty1] enabled
Feb 13 19:01:37.187985 kernel: ACPI: Core revision 20230628
Feb 13 19:01:37.188003 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:01:37.188021 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:01:37.188040 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:01:37.188058 kernel: landlock: Up and running.
Feb 13 19:01:37.188081 kernel: SELinux: Initializing.
Feb 13 19:01:37.188099 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:01:37.188117 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:01:37.188135 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:01:37.188153 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:01:37.188171 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:01:37.188190 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:01:37.188210 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:01:37.188232 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:01:37.188250 kernel: Remapping and enabling EFI services.
Feb 13 19:01:37.188268 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:01:37.188286 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:01:37.188304 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:01:37.188323 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:01:37.188341 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:01:37.188359 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:01:37.190594 kernel: SMP: Total of 2 processors activated.
Feb 13 19:01:37.190632 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:01:37.190663 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:01:37.190683 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:01:37.190716 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:01:37.190741 kernel: alternatives: applying system-wide alternatives
Feb 13 19:01:37.190760 kernel: devtmpfs: initialized
Feb 13 19:01:37.190781 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:01:37.190801 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:01:37.190822 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:01:37.190844 kernel: SMBIOS 3.0.0 present.
Feb 13 19:01:37.190871 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:01:37.190891 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:01:37.190911 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:01:37.190930 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:01:37.190950 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:01:37.190969 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:01:37.190988 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:01:37.191012 kernel: audit: type=2000 audit(0.230:1): state=initialized audit_enabled=0 res=1
Feb 13 19:01:37.191032 kernel: cpuidle: using governor menu
Feb 13 19:01:37.191051 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:01:37.191070 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:01:37.191088 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:01:37.191107 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:01:37.191126 kernel: Modules: 17760 pages in range for non-PLT usage
Feb 13 19:01:37.191145 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 19:01:37.191164 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:01:37.191188 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:01:37.191207 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:01:37.191226 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:01:37.191246 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:01:37.191264 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:01:37.191283 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:01:37.191302 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:01:37.191321 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:01:37.191340 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:01:37.191365 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:01:37.191433 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:01:37.192497 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:01:37.192519 kernel: ACPI: Interpreter enabled
Feb 13 19:01:37.192538 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:01:37.192557 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:01:37.192576 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:01:37.192892 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:01:37.193120 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:01:37.193356 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:01:37.194697 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:01:37.194926 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:01:37.194957 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:01:37.194977 kernel: acpiphp: Slot [1] registered
Feb 13 19:01:37.194998 kernel: acpiphp: Slot [2] registered
Feb 13 19:01:37.195017 kernel: acpiphp: Slot [3] registered
Feb 13 19:01:37.195046 kernel: acpiphp: Slot [4] registered
Feb 13 19:01:37.195066 kernel: acpiphp: Slot [5] registered
Feb 13 19:01:37.195085 kernel: acpiphp: Slot [6] registered
Feb 13 19:01:37.195104 kernel: acpiphp: Slot [7] registered
Feb 13 19:01:37.195123 kernel: acpiphp: Slot [8] registered
Feb 13 19:01:37.195144 kernel: acpiphp: Slot [9] registered
Feb 13 19:01:37.195164 kernel: acpiphp: Slot [10] registered
Feb 13 19:01:37.195182 kernel: acpiphp: Slot [11] registered
Feb 13 19:01:37.195202 kernel: acpiphp: Slot [12] registered
Feb 13 19:01:37.195221 kernel: acpiphp: Slot [13] registered
Feb 13 19:01:37.195245 kernel: acpiphp: Slot [14] registered
Feb 13 19:01:37.195264 kernel: acpiphp: Slot [15] registered
Feb 13 19:01:37.195283 kernel: acpiphp: Slot [16] registered
Feb 13 19:01:37.195302 kernel: acpiphp: Slot [17] registered
Feb 13 19:01:37.195322 kernel: acpiphp: Slot [18] registered
Feb 13 19:01:37.195341 kernel: acpiphp: Slot [19] registered
Feb 13 19:01:37.195360 kernel: acpiphp: Slot [20] registered
Feb 13 19:01:37.196479 kernel: acpiphp: Slot [21] registered
Feb 13 19:01:37.196526 kernel: acpiphp: Slot [22] registered
Feb 13 19:01:37.196557 kernel: acpiphp: Slot [23] registered
Feb 13 19:01:37.196578 kernel: acpiphp: Slot [24] registered
Feb 13 19:01:37.196596 kernel: acpiphp: Slot [25] registered
Feb 13 19:01:37.196616 kernel: acpiphp: Slot [26] registered
Feb 13 19:01:37.196636 kernel: acpiphp: Slot [27] registered
Feb 13 19:01:37.196656 kernel: acpiphp: Slot [28] registered
Feb 13 19:01:37.196676 kernel: acpiphp: Slot [29] registered
Feb 13 19:01:37.196695 kernel: acpiphp: Slot [30] registered
Feb 13 19:01:37.196715 kernel: acpiphp: Slot [31] registered
Feb 13 19:01:37.196736 kernel: PCI host bridge to bus 0000:00
Feb 13 19:01:37.197074 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:01:37.197349 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:01:37.199699 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:01:37.199902 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:01:37.200170 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:01:37.200475 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:01:37.200754 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:01:37.201018 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:01:37.201286 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:01:37.202673 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:01:37.202945 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:01:37.203177 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:01:37.204430 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:01:37.204758 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:01:37.204990 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:01:37.205230 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:01:37.207610 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:01:37.207851 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:01:37.208060 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:01:37.208276 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:01:37.208751 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:01:37.208981 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:01:37.209342 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:01:37.211438 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:01:37.211484 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:01:37.211505 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:01:37.211527 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:01:37.211547 kernel: iommu: Default domain type: Translated
Feb 13 19:01:37.211581 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:01:37.211601 kernel: efivars: Registered efivars operations
Feb 13 19:01:37.211620 kernel: vgaarb: loaded
Feb 13 19:01:37.211640 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:01:37.211660 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:01:37.211679 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:01:37.211699 kernel: pnp: PnP ACPI init
Feb 13 19:01:37.212010 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:01:37.212065 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:01:37.212085 kernel: NET: Registered PF_INET protocol family
Feb 13 19:01:37.212115 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:01:37.212135 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:01:37.212154 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:01:37.212173 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:01:37.212192 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:01:37.212210 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:01:37.212229 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:01:37.212253 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:01:37.212272 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:01:37.212292 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:01:37.212310 kernel: kvm [1]: HYP mode not available
Feb 13 19:01:37.212329 kernel: Initialise system trusted keyrings
Feb 13 19:01:37.212348 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:01:37.212368 kernel: Key type asymmetric registered
Feb 13 19:01:37.212439 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:01:37.212460 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:01:37.212486 kernel: io scheduler mq-deadline registered
Feb 13 19:01:37.212505 kernel: io scheduler kyber registered
Feb 13 19:01:37.212524 kernel: io scheduler bfq registered
Feb 13 19:01:37.212789 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:01:37.212820 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:01:37.212839 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:01:37.212858 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:01:37.212877 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:01:37.212902 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:01:37.212923 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:01:37.213166 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:01:37.213197 kernel: printk: console [ttyS0] disabled
Feb 13 19:01:37.213217 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:01:37.213236 kernel: printk: console [ttyS0] enabled
Feb 13 19:01:37.213255 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:01:37.213274 kernel: thunder_xcv, ver 1.0
Feb 13 19:01:37.213293 kernel: thunder_bgx, ver 1.0
Feb 13 19:01:37.213311 kernel: nicpf, ver 1.0
Feb 13 19:01:37.213337 kernel: nicvf, ver 1.0
Feb 13 19:01:37.215889 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:01:37.216124 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:01:36 UTC (1739473296)
Feb 13 19:01:37.216152 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:01:37.216172 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:01:37.216193 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:01:37.216213 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:01:37.216244 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:01:37.216264 kernel: Segment Routing with IPv6
Feb 13 19:01:37.216283 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:01:37.216302 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:01:37.216321 kernel: Key type dns_resolver registered
Feb 13 19:01:37.216341 kernel: registered taskstats version 1
Feb 13 19:01:37.216360 kernel: Loading compiled-in X.509 certificates
Feb 13 19:01:37.216424 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 58bec1a0c6b8a133d1af4ea745973da0351f7027'
Feb 13 19:01:37.216446 kernel: Key type .fscrypt registered
Feb 13 19:01:37.216509 kernel: Key type fscrypt-provisioning registered
Feb 13 19:01:37.216870 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:01:37.217186 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:01:37.218704 kernel: ima: No architecture policies found
Feb 13 19:01:37.218754 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:01:37.218774 kernel: clk: Disabling unused clocks
Feb 13 19:01:37.218793 kernel: Freeing unused kernel memory: 38336K
Feb 13 19:01:37.218812 kernel: Run /init as init process
Feb 13 19:01:37.218831 kernel: with arguments:
Feb 13 19:01:37.218850 kernel: /init
Feb 13 19:01:37.218880 kernel: with environment:
Feb 13 19:01:37.218899 kernel: HOME=/
Feb 13 19:01:37.218918 kernel: TERM=linux
Feb 13 19:01:37.218937 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:01:37.218958 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:01:37.218985 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:01:37.219007 systemd[1]: Detected virtualization amazon.
Feb 13 19:01:37.219032 systemd[1]: Detected architecture arm64.
Feb 13 19:01:37.219055 systemd[1]: Running in initrd.
Feb 13 19:01:37.219076 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:01:37.219097 systemd[1]: Hostname set to .
Feb 13 19:01:37.219117 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:01:37.219137 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:01:37.219157 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:01:37.219178 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:01:37.219201 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:01:37.219229 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:01:37.219250 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:01:37.219272 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:01:37.219294 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:01:37.219316 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:01:37.219336 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:01:37.219362 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:01:37.220484 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:01:37.220517 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:01:37.220538 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:01:37.220559 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:01:37.220579 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:01:37.220599 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:01:37.220619 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:01:37.220638 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:01:37.220672 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:01:37.220692 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:01:37.220711 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:01:37.220731 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:01:37.220751 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:01:37.220771 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:01:37.220790 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:01:37.220810 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:01:37.220835 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:01:37.220855 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:01:37.220875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:01:37.220894 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:01:37.220977 systemd-journald[252]: Collecting audit messages is disabled.
Feb 13 19:01:37.221025 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:01:37.221047 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:01:37.221068 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:01:37.221088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:01:37.221113 systemd-journald[252]: Journal started
Feb 13 19:01:37.221165 systemd-journald[252]: Runtime Journal (/run/log/journal/ec225f64645684e8451af8243d8108fc) is 8M, max 75.3M, 67.3M free.
Feb 13 19:01:37.213465 systemd-modules-load[254]: Inserted module 'overlay'
Feb 13 19:01:37.238553 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:01:37.243406 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:01:37.243495 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:01:37.248161 systemd-modules-load[254]: Inserted module 'br_netfilter'
Feb 13 19:01:37.250789 kernel: Bridge firewalling registered
Feb 13 19:01:37.250060 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:01:37.253536 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:01:37.271894 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:01:37.274669 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:01:37.278530 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:01:37.307822 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:01:37.330029 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:01:37.349178 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:01:37.353352 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:01:37.369454 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:01:37.381702 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:01:37.395360 dracut-cmdline[288]: dracut-dracut-053
Feb 13 19:01:37.403602 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:01:37.476245 systemd-resolved[289]: Positive Trust Anchors:
Feb 13 19:01:37.476285 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:01:37.476348 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:01:37.547413 kernel: SCSI subsystem initialized
Feb 13 19:01:37.557396 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:01:37.567411 kernel: iscsi: registered transport (tcp)
Feb 13 19:01:37.590416 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:01:37.590486 kernel: QLogic iSCSI HBA Driver
Feb 13 19:01:37.680416 kernel: random: crng init done
Feb 13 19:01:37.680756 systemd-resolved[289]: Defaulting to hostname 'linux'.
Feb 13 19:01:37.684505 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:01:37.689351 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:01:37.714901 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:01:37.735797 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:01:37.768992 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:01:37.770497 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:01:37.770526 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:01:37.837460 kernel: raid6: neonx8 gen() 6533 MB/s
Feb 13 19:01:37.855436 kernel: raid6: neonx4 gen() 6493 MB/s
Feb 13 19:01:37.872419 kernel: raid6: neonx2 gen() 5441 MB/s
Feb 13 19:01:37.889432 kernel: raid6: neonx1 gen() 3944 MB/s
Feb 13 19:01:37.906429 kernel: raid6: int64x8 gen() 3613 MB/s
Feb 13 19:01:37.923430 kernel: raid6: int64x4 gen() 3698 MB/s
Feb 13 19:01:37.940423 kernel: raid6: int64x2 gen() 3590 MB/s
Feb 13 19:01:37.958534 kernel: raid6: int64x1 gen() 2767 MB/s
Feb 13 19:01:37.958600 kernel: raid6: using algorithm neonx8 gen() 6533 MB/s
Feb 13 19:01:37.977427 kernel: raid6: .... xor() 4744 MB/s, rmw enabled
Feb 13 19:01:37.977497 kernel: raid6: using neon recovery algorithm
Feb 13 19:01:37.985850 kernel: xor: measuring software checksum speed
Feb 13 19:01:37.985924 kernel: 8regs : 13008 MB/sec
Feb 13 19:01:37.986919 kernel: 32regs : 13073 MB/sec
Feb 13 19:01:37.988114 kernel: arm64_neon : 9561 MB/sec
Feb 13 19:01:37.988187 kernel: xor: using function: 32regs (13073 MB/sec)
Feb 13 19:01:38.074429 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:01:38.094763 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:01:38.105693 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:01:38.150228 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Feb 13 19:01:38.161882 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:01:38.173669 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:01:38.213990 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Feb 13 19:01:38.273718 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:01:38.281754 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:01:38.413945 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:01:38.434591 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:01:38.469106 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:01:38.474254 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:01:38.483499 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:01:38.493672 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:01:38.512783 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:01:38.562368 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:01:38.641951 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:01:38.642031 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 19:01:38.669230 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:01:38.669951 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:01:38.670236 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:a3:3b:42:98:b3
Feb 13 19:01:38.655725 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:01:38.655998 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:01:38.658904 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:01:38.661502 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:01:38.661915 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:01:38.664605 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:01:38.692767 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:01:38.713971 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:01:38.718026 (udev-worker)[523]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:01:38.733437 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 19:01:38.735420 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:01:38.745427 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:01:38.756040 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:01:38.756112 kernel: GPT:9289727 != 16777215
Feb 13 19:01:38.756137 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:01:38.757897 kernel: GPT:9289727 != 16777215
Feb 13 19:01:38.760182 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:01:38.760304 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:01:38.759444 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:01:38.772705 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:01:38.803555 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:01:38.861443 kernel: BTRFS: device fsid 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (539)
Feb 13 19:01:38.892460 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (533)
Feb 13 19:01:38.949191 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:01:39.014970 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:01:39.055734 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:01:39.058418 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:01:39.087631 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:01:39.104685 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:01:39.121815 disk-uuid[660]: Primary Header is updated.
Feb 13 19:01:39.121815 disk-uuid[660]: Secondary Entries is updated.
Feb 13 19:01:39.121815 disk-uuid[660]: Secondary Header is updated.
Feb 13 19:01:39.136431 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:01:40.149575 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:01:40.151908 disk-uuid[661]: The operation has completed successfully.
Feb 13 19:01:40.389815 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:01:40.390559 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:01:40.473668 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:01:40.483252 sh[921]: Success
Feb 13 19:01:40.509441 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:01:40.642604 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:01:40.654584 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:01:40.662490 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:01:40.710048 kernel: BTRFS info (device dm-0): first mount of filesystem 4fff035f-dd55-45d8-9bb7-2a61f21b22d5
Feb 13 19:01:40.710145 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:01:40.712054 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:01:40.714517 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:01:40.714593 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:01:40.742432 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:01:40.746699 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:01:40.752080 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:01:40.770841 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:01:40.777827 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:01:40.821971 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:01:40.822074 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:01:40.822123 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:01:40.833434 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:01:40.858243 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:01:40.861474 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:01:40.873926 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:01:40.891896 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:01:41.016304 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:01:41.029729 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:01:41.110092 systemd-networkd[1116]: lo: Link UP
Feb 13 19:01:41.110118 systemd-networkd[1116]: lo: Gained carrier
Feb 13 19:01:41.114531 systemd-networkd[1116]: Enumeration completed
Feb 13 19:01:41.115206 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:01:41.115294 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:01:41.115302 systemd-networkd[1116]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:01:41.123052 systemd-networkd[1116]: eth0: Link UP
Feb 13 19:01:41.123060 systemd-networkd[1116]: eth0: Gained carrier
Feb 13 19:01:41.123079 systemd-networkd[1116]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:01:41.124142 systemd[1]: Reached target network.target - Network.
Feb 13 19:01:41.142254 systemd-networkd[1116]: eth0: DHCPv4 address 172.31.27.130/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:01:41.186921 ignition[1033]: Ignition 2.20.0
Feb 13 19:01:41.186948 ignition[1033]: Stage: fetch-offline
Feb 13 19:01:41.192715 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:01:41.187541 ignition[1033]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:41.187570 ignition[1033]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:01:41.189254 ignition[1033]: Ignition finished successfully
Feb 13 19:01:41.207714 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:01:41.243271 ignition[1129]: Ignition 2.20.0
Feb 13 19:01:41.243301 ignition[1129]: Stage: fetch
Feb 13 19:01:41.245109 ignition[1129]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:41.245159 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:01:41.245815 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:01:41.263538 ignition[1129]: PUT result: OK
Feb 13 19:01:41.266913 ignition[1129]: parsed url from cmdline: ""
Feb 13 19:01:41.267110 ignition[1129]: no config URL provided
Feb 13 19:01:41.267133 ignition[1129]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:01:41.267168 ignition[1129]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:01:41.267210 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:01:41.271394 ignition[1129]: PUT result: OK
Feb 13 19:01:41.271592 ignition[1129]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:01:41.279818 ignition[1129]: GET result: OK
Feb 13 19:01:41.280693 ignition[1129]: parsing config with SHA512: e33115b51a199e363dba35e837ab32b7662aa9d8f25da9bf5d96b97317f0a9e18689626a4576c543b943a8829fec166f7889c30174259dd092bacc5aa5475225
Feb 13 19:01:41.291913 unknown[1129]: fetched base config from "system"
Feb 13 19:01:41.291944 unknown[1129]: fetched base config from "system"
Feb 13 19:01:41.293187 ignition[1129]: fetch: fetch complete
Feb 13 19:01:41.291959 unknown[1129]: fetched user config from "aws"
Feb 13 19:01:41.293205 ignition[1129]: fetch: fetch passed
Feb 13 19:01:41.293700 ignition[1129]: Ignition finished successfully
Feb 13 19:01:41.303358 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:01:41.327862 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:01:41.360338 ignition[1136]: Ignition 2.20.0
Feb 13 19:01:41.360361 ignition[1136]: Stage: kargs
Feb 13 19:01:41.361713 ignition[1136]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:41.361742 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:01:41.361911 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:01:41.363431 ignition[1136]: PUT result: OK
Feb 13 19:01:41.374535 ignition[1136]: kargs: kargs passed
Feb 13 19:01:41.374690 ignition[1136]: Ignition finished successfully
Feb 13 19:01:41.379864 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:01:41.391755 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:01:41.416680 ignition[1143]: Ignition 2.20.0
Feb 13 19:01:41.416711 ignition[1143]: Stage: disks
Feb 13 19:01:41.418500 ignition[1143]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:41.418532 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:01:41.418834 ignition[1143]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:01:41.422220 ignition[1143]: PUT result: OK
Feb 13 19:01:41.432214 ignition[1143]: disks: disks passed
Feb 13 19:01:41.432647 ignition[1143]: Ignition finished successfully
Feb 13 19:01:41.438495 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:01:41.442545 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:01:41.447332 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:01:41.450684 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:01:41.452769 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:01:41.454839 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:01:41.476581 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:01:41.524044 systemd-fsck[1151]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:01:41.531818 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:01:41.716616 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:01:41.803469 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 24882d04-b1a5-4a27-95f1-925956e69b18 r/w with ordered data mode. Quota mode: none.
Feb 13 19:01:41.805760 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:01:41.807370 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:01:41.821590 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:01:41.834672 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:01:41.842073 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:01:41.842178 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:01:41.842244 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:01:41.858814 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:01:41.877163 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1170)
Feb 13 19:01:41.877262 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:01:41.877632 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:01:41.885748 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:01:41.885790 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:01:41.899419 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:01:41.903263 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:01:41.991838 initrd-setup-root[1194]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:01:42.003043 initrd-setup-root[1201]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:01:42.012504 initrd-setup-root[1208]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:01:42.021529 initrd-setup-root[1215]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:01:42.186456 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:01:42.195612 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:01:42.212637 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:01:42.228415 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:01:42.264827 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:01:42.275998 ignition[1283]: INFO : Ignition 2.20.0
Feb 13 19:01:42.275998 ignition[1283]: INFO : Stage: mount
Feb 13 19:01:42.279814 ignition[1283]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:42.279814 ignition[1283]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:01:42.279814 ignition[1283]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:01:42.287722 ignition[1283]: INFO : PUT result: OK
Feb 13 19:01:42.291051 ignition[1283]: INFO : mount: mount passed
Feb 13 19:01:42.291051 ignition[1283]: INFO : Ignition finished successfully
Feb 13 19:01:42.294798 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:01:42.309556 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:01:42.707829 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:01:42.720908 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:01:42.746444 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1296)
Feb 13 19:01:42.751548 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:01:42.751646 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:01:42.751676 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:01:42.759441 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:01:42.763417 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:01:42.802081 ignition[1313]: INFO : Ignition 2.20.0
Feb 13 19:01:42.804137 ignition[1313]: INFO : Stage: files
Feb 13 19:01:42.806280 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:42.806280 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:01:42.811346 ignition[1313]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:01:42.814846 ignition[1313]: INFO : PUT result: OK
Feb 13 19:01:42.812663 systemd-networkd[1116]: eth0: Gained IPv6LL
Feb 13 19:01:42.821154 ignition[1313]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:01:42.833037 ignition[1313]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:01:42.833037 ignition[1313]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:01:42.844485 ignition[1313]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:01:42.847316 ignition[1313]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:01:42.850666 unknown[1313]: wrote ssh authorized keys file for user: core
Feb 13 19:01:42.854025 ignition[1313]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:01:42.856566 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:01:42.856566 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:01:42.964723 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:01:43.193301 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:01:43.193301 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:01:43.202348 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:01:43.202348 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:01:43.202348 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:01:43.202348 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:01:43.202348 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:01:43.202348 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:01:43.202348 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:01:43.202348 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:01:43.202348 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:01:43.202348 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:01:43.202348 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:01:43.202348 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:01:43.202348 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 19:01:50.809572 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 19:01:51.177341 ignition[1313]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:01:51.177341 ignition[1313]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 19:01:51.184462 ignition[1313]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:01:51.184462 ignition[1313]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:01:51.184462 ignition[1313]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 19:01:51.184462 ignition[1313]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:01:51.184462 ignition[1313]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:01:51.184462 ignition[1313]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:01:51.184462 ignition[1313]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:01:51.184462 ignition[1313]: INFO : files: files passed
Feb 13 19:01:51.184462 ignition[1313]: INFO : Ignition finished successfully
Feb 13 19:01:51.209471 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:01:51.230858 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:01:51.238088 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:01:51.247240 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:01:51.247575 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:01:51.279618 initrd-setup-root-after-ignition[1342]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:01:51.279618 initrd-setup-root-after-ignition[1342]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:01:51.288009 initrd-setup-root-after-ignition[1346]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:01:51.293952 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:01:51.298062 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:01:51.314722 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:01:51.362352 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:01:51.362792 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:01:51.369708 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:01:51.371697 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:01:51.373748 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:01:51.392155 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:01:51.428195 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:01:51.439692 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:01:51.473302 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:01:51.473688 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:01:51.480333 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:01:51.482280 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:01:51.482553 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:01:51.485514 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:01:51.487688 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:01:51.489596 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:01:51.491827 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:01:51.494278 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:01:51.496677 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:01:51.498854 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:01:51.501432 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:01:51.503687 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:01:51.505879 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:01:51.507613 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:01:51.507914 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:01:51.510518 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:01:51.512738 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:01:51.515644 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:01:51.552561 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:01:51.555210 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:01:51.555508 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:01:51.564110 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:01:51.564817 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:01:51.571912 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:01:51.572129 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:01:51.586546 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:01:51.592339 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:01:51.594110 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:01:51.618358 ignition[1366]: INFO : Ignition 2.20.0
Feb 13 19:01:51.625566 ignition[1366]: INFO : Stage: umount
Feb 13 19:01:51.625566 ignition[1366]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:01:51.625566 ignition[1366]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:01:51.625566 ignition[1366]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:01:51.619826 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:01:51.637429 ignition[1366]: INFO : PUT result: OK
Feb 13 19:01:51.621630 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:01:51.621887 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:01:51.647761 ignition[1366]: INFO : umount: umount passed
Feb 13 19:01:51.647761 ignition[1366]: INFO : Ignition finished successfully
Feb 13 19:01:51.626922 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:01:51.631369 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:01:51.658525 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:01:51.662474 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:01:51.669498 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:01:51.671890 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:01:51.676314 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:01:51.676526 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:01:51.684633 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:01:51.684750 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:01:51.701412 systemd[1]: Stopped target network.target - Network.
Feb 13 19:01:51.705388 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:01:51.707741 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:01:51.723840 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:01:51.725585 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:01:51.731275 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:01:51.733553 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:01:51.735212 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:01:51.737061 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:01:51.737166 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:01:51.739080 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:01:51.739146 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:01:51.741092 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:01:51.741186 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:01:51.743094 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:01:51.743174 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:01:51.745364 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:01:51.751082 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:01:51.753357 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:01:51.754699 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:01:51.754875 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:01:51.768335 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:01:51.781680 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:01:51.794561 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 19:01:51.795156 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:01:51.795364 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:01:51.801931 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 19:01:51.802577 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:01:51.802755 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:01:51.812215 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:01:51.812973 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:01:51.816085 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:01:51.817036 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:01:51.830725 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:01:51.840338 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:01:51.840480 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:01:51.842914 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:01:51.843000 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:01:51.870784 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:01:51.870886 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:01:51.872958 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:01:51.873052 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:01:51.885316 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:01:51.892757 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 19:01:51.892942 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:01:51.909077 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:01:51.911036 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:01:51.919034 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:01:51.919165 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:01:51.923817 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:01:51.923908 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:01:51.927422 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:01:51.927534 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:01:51.929947 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:01:51.930047 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:01:51.944118 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:01:51.944236 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:01:51.958740 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:01:51.962840 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:01:51.962985 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:01:51.974801 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:01:51.974944 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:01:51.982629 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 19:01:51.982788 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:01:51.985892 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:01:51.986110 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:01:52.003510 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:01:52.005952 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:01:52.016693 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:01:52.027682 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:01:52.054750 systemd[1]: Switching root.
Feb 13 19:01:52.092931 systemd-journald[252]: Journal stopped
Feb 13 19:01:54.245411 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:01:54.245545 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:01:54.245598 kernel: SELinux: policy capability open_perms=1
Feb 13 19:01:54.245626 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:01:54.245662 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:01:54.245692 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:01:54.245722 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:01:54.245761 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:01:54.245792 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:01:54.245822 kernel: audit: type=1403 audit(1739473312.383:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:01:54.245862 systemd[1]: Successfully loaded SELinux policy in 53.246ms.
Feb 13 19:01:54.245907 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 29ms.
Feb 13 19:01:54.245944 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:01:54.245975 systemd[1]: Detected virtualization amazon.
Feb 13 19:01:54.246006 systemd[1]: Detected architecture arm64.
Feb 13 19:01:54.246037 systemd[1]: Detected first boot.
Feb 13 19:01:54.246067 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:01:54.246098 zram_generator::config[1410]: No configuration found.
Feb 13 19:01:54.246131 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 19:01:54.246162 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:01:54.246193 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 19:01:54.246227 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:01:54.246257 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:01:54.246296 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:01:54.246328 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:01:54.246361 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:01:54.248204 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:01:54.248251 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:01:54.248284 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:01:54.248321 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:01:54.248353 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:01:54.250535 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:01:54.250594 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:01:54.250723 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:01:54.250759 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:01:54.250792 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:01:54.250825 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:01:54.250858 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:01:54.250903 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:01:54.250934 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:01:54.250963 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:01:54.250994 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:01:54.251022 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:01:54.251051 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:01:54.251080 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:01:54.251111 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:01:54.251148 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:01:54.251189 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:01:54.251218 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:01:54.251250 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:01:54.251281 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 19:01:54.251310 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:01:54.251342 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:01:54.251402 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:01:54.251441 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:01:54.251477 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:01:54.251508 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:01:54.251539 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:01:54.251571 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:01:54.251600 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:01:54.251628 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:01:54.251658 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:01:54.251687 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:01:54.251720 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:01:54.251751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:01:54.251780 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:01:54.251809 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:01:54.251840 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:01:54.251871 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:01:54.251905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:01:54.251934 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:01:54.251962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:01:54.251997 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:01:54.252028 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:01:54.252057 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:01:54.252086 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:01:54.252115 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:01:54.252144 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:01:54.252177 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:01:54.252206 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:01:54.252239 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:01:54.252269 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:01:54.252298 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 19:01:54.252327 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:01:54.252359 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:01:54.252423 kernel: fuse: init (API version 7.39)
Feb 13 19:01:54.252462 systemd[1]: Stopped verity-setup.service.
Feb 13 19:01:54.252491 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:01:54.252520 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:01:54.252548 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:01:54.252577 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:01:54.252605 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:01:54.252634 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:01:54.252665 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:01:54.252699 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:01:54.252730 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:01:54.252761 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:01:54.252792 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:01:54.252820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:01:54.252855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:01:54.252885 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:01:54.252916 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:01:54.252944 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 19:01:54.252973 kernel: ACPI: bus type drm_connector registered
Feb 13 19:01:54.253003 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:01:54.253036 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:01:54.253068 kernel: loop: module loaded
Feb 13 19:01:54.253118 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:01:54.253153 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:01:54.253182 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:01:54.253211 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:01:54.253240 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:01:54.253268 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:01:54.253301 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:01:54.253331 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:01:54.253359 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:01:54.253456 systemd-journald[1493]: Collecting audit messages is disabled.
Feb 13 19:01:54.253515 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:01:54.253549 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:01:54.253579 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 19:01:54.253612 systemd-journald[1493]: Journal started
Feb 13 19:01:54.253661 systemd-journald[1493]: Runtime Journal (/run/log/journal/ec225f64645684e8451af8243d8108fc) is 8M, max 75.3M, 67.3M free.
Feb 13 19:01:54.257620 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:01:53.552631 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:01:53.564403 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 19:01:53.565365 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:01:54.270783 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:01:54.279630 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:01:54.296732 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:01:54.296854 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:01:54.319055 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:01:54.319157 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:01:54.338573 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:01:54.362209 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:01:54.362329 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:01:54.372483 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:01:54.380917 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:01:54.385902 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:01:54.390243 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:01:54.406497 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:01:54.472811 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:01:54.481151 kernel: loop0: detected capacity change from 0 to 123192
Feb 13 19:01:54.483151 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:01:54.491741 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:01:54.505250 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 19:01:54.541423 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:01:54.568358 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:01:54.573770 systemd-journald[1493]: Time spent on flushing to /var/log/journal/ec225f64645684e8451af8243d8108fc is 65.070ms for 925 entries.
Feb 13 19:01:54.573770 systemd-journald[1493]: System Journal (/var/log/journal/ec225f64645684e8451af8243d8108fc) is 8M, max 195.6M, 187.6M free.
Feb 13 19:01:54.672922 systemd-journald[1493]: Received client request to flush runtime journal.
Feb 13 19:01:54.672999 kernel: loop1: detected capacity change from 0 to 113512
Feb 13 19:01:54.587760 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:01:54.592042 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:01:54.595601 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 19:01:54.599475 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:01:54.618762 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:01:54.672495 udevadm[1560]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:01:54.679520 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:01:54.691426 kernel: loop2: detected capacity change from 0 to 189592
Feb 13 19:01:54.713402 systemd-tmpfiles[1564]: ACLs are not supported, ignoring.
Feb 13 19:01:54.713915 systemd-tmpfiles[1564]: ACLs are not supported, ignoring.
Feb 13 19:01:54.731467 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:01:54.843430 kernel: loop3: detected capacity change from 0 to 53784
Feb 13 19:01:54.977413 kernel: loop4: detected capacity change from 0 to 123192
Feb 13 19:01:55.016415 kernel: loop5: detected capacity change from 0 to 113512
Feb 13 19:01:55.054009 kernel: loop6: detected capacity change from 0 to 189592
Feb 13 19:01:55.091424 kernel: loop7: detected capacity change from 0 to 53784
Feb 13 19:01:55.107427 (sd-merge)[1572]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 19:01:55.109690 (sd-merge)[1572]: Merged extensions into '/usr'.
Feb 13 19:01:55.120142 systemd[1]: Reload requested from client PID 1527 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:01:55.120401 systemd[1]: Reloading...
Feb 13 19:01:55.263462 ldconfig[1523]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:01:55.347916 zram_generator::config[1604]: No configuration found.
Feb 13 19:01:55.638079 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:01:55.786861 systemd[1]: Reloading finished in 665 ms.
Feb 13 19:01:55.809406 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:01:55.812154 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:01:55.815562 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:01:55.833839 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:01:55.838787 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:01:55.847700 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:01:55.877571 systemd[1]: Reload requested from client PID 1654 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:01:55.877603 systemd[1]: Reloading...
Feb 13 19:01:55.938670 systemd-udevd[1656]: Using default interface naming scheme 'v255'.
Feb 13 19:01:55.944999 systemd-tmpfiles[1655]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:01:55.945625 systemd-tmpfiles[1655]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:01:55.948033 systemd-tmpfiles[1655]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:01:55.948673 systemd-tmpfiles[1655]: ACLs are not supported, ignoring.
Feb 13 19:01:55.948830 systemd-tmpfiles[1655]: ACLs are not supported, ignoring.
Feb 13 19:01:55.960930 systemd-tmpfiles[1655]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:01:55.961647 systemd-tmpfiles[1655]: Skipping /boot
Feb 13 19:01:56.006998 systemd-tmpfiles[1655]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:01:56.009446 systemd-tmpfiles[1655]: Skipping /boot
Feb 13 19:01:56.121454 zram_generator::config[1693]: No configuration found.
Feb 13 19:01:56.303919 (udev-worker)[1690]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:01:56.480403 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1707)
Feb 13 19:01:56.505356 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:01:56.698369 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:01:56.698985 systemd[1]: Reloading finished in 820 ms.
Feb 13 19:01:56.720020 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:01:56.755616 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:01:56.812583 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:01:56.849202 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:01:56.867661 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:01:56.882820 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:01:56.888730 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:01:56.891421 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:01:56.898824 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:01:56.914606 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:01:56.923767 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:01:56.930709 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:01:56.935104 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:01:56.937486 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:01:56.944752 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:01:56.947673 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:01:56.959689 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:01:56.975664 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:01:56.982408 lvm[1855]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:01:56.984724 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:01:56.987575 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:01:57.015827 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:01:57.023736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:01:57.029208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:01:57.031484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:01:57.056212 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:01:57.076724 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:01:57.080259 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:01:57.082043 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:01:57.085483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:01:57.085936 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:01:57.095511 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:01:57.096209 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:01:57.097624 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:01:57.104877 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:01:57.111554 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:01:57.117179 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:01:57.127923 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:01:57.133412 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:01:57.145978 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:01:57.150202 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:01:57.187526 lvm[1890]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:01:57.209836 augenrules[1899]: No rules
Feb 13 19:01:57.213560 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:01:57.214028 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:01:57.216823 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:01:57.248857 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:01:57.268496 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:01:57.271739 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:01:57.284583 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:01:57.319525 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:01:57.420923 systemd-networkd[1868]: lo: Link UP
Feb 13 19:01:57.421506 systemd-networkd[1868]: lo: Gained carrier
Feb 13 19:01:57.424696 systemd-networkd[1868]: Enumeration completed
Feb 13 19:01:57.425072 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:01:57.427681 systemd-networkd[1868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:01:57.427859 systemd-networkd[1868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:01:57.429919 systemd-networkd[1868]: eth0: Link UP
Feb 13 19:01:57.430583 systemd-networkd[1868]: eth0: Gained carrier
Feb 13 19:01:57.430750 systemd-networkd[1868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:01:57.437786 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 19:01:57.444777 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:01:57.449050 systemd-resolved[1870]: Positive Trust Anchors: Feb 13 19:01:57.449097 systemd-resolved[1870]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:01:57.449161 systemd-resolved[1870]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:01:57.449490 systemd-networkd[1868]: eth0: DHCPv4 address 172.31.27.130/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:01:57.458765 systemd-resolved[1870]: Defaulting to hostname 'linux'. Feb 13 19:01:57.462685 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:01:57.464892 systemd[1]: Reached target network.target - Network. Feb 13 19:01:57.466622 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:01:57.468876 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:01:57.471733 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:01:57.474639 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:01:57.477707 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
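The DHCPv4 lease recorded above (172.31.27.130/20, gateway 172.31.16.1, acquired from 172.31.16.1) can be sanity-checked with Python's standard `ipaddress` module. The addresses below are taken directly from the log line; the check itself is only an illustration of why the gateway is reachable on-link.

```python
import ipaddress

# Interface address and prefix as reported by systemd-networkd above.
iface = ipaddress.ip_interface("172.31.27.130/20")
gateway = ipaddress.ip_address("172.31.16.1")

# A /20 mask puts the host in 172.31.16.0/20, so the gateway is on-link.
print(iface.network)             # 172.31.16.0/20
print(gateway in iface.network)  # True
```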
Feb 13 19:01:57.480775 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:01:57.483595 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:01:57.487823 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:01:57.487890 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:01:57.489600 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:01:57.492264 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:01:57.498727 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:01:57.507726 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 19:01:57.510611 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 19:01:57.513355 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:01:57.524542 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:01:57.528098 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 19:01:57.533434 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:01:57.536265 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:01:57.539523 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:01:57.541526 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:01:57.543615 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:01:57.543940 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Feb 13 19:01:57.553603 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:01:57.559726 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:01:57.566743 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:01:57.579596 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:01:57.601822 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:01:57.603778 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:01:57.609431 jq[1927]: false Feb 13 19:01:57.616863 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:01:57.622734 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:01:57.633594 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:01:57.645543 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:01:57.650746 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:01:57.664687 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:01:57.694747 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:01:57.703334 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:01:57.706351 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Feb 13 19:01:57.715685 extend-filesystems[1928]: Found loop4 Feb 13 19:01:57.715685 extend-filesystems[1928]: Found loop5 Feb 13 19:01:57.715685 extend-filesystems[1928]: Found loop6 Feb 13 19:01:57.715685 extend-filesystems[1928]: Found loop7 Feb 13 19:01:57.715685 extend-filesystems[1928]: Found nvme0n1 Feb 13 19:01:57.715685 extend-filesystems[1928]: Found nvme0n1p1 Feb 13 19:01:57.715685 extend-filesystems[1928]: Found nvme0n1p2 Feb 13 19:01:57.715685 extend-filesystems[1928]: Found nvme0n1p3 Feb 13 19:01:57.715685 extend-filesystems[1928]: Found usr Feb 13 19:01:57.715685 extend-filesystems[1928]: Found nvme0n1p4 Feb 13 19:01:57.715685 extend-filesystems[1928]: Found nvme0n1p6 Feb 13 19:01:57.715685 extend-filesystems[1928]: Found nvme0n1p7 Feb 13 19:01:57.715685 extend-filesystems[1928]: Found nvme0n1p9 Feb 13 19:01:57.709725 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:01:57.774284 extend-filesystems[1928]: Checking size of /dev/nvme0n1p9 Feb 13 19:01:57.715991 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:01:57.729476 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:01:57.731499 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:01:57.782076 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:01:57.783531 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:01:57.814142 dbus-daemon[1926]: [system] SELinux support is enabled Feb 13 19:01:57.824099 dbus-daemon[1926]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1868 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:01:57.833785 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 19:01:57.835966 extend-filesystems[1928]: Resized partition /dev/nvme0n1p9 Feb 13 19:01:57.842071 jq[1941]: true Feb 13 19:01:57.843292 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:01:57.843870 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:01:57.859003 ntpd[1932]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:02:48 UTC 2025 (1): Starting Feb 13 19:01:57.870710 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:02:48 UTC 2025 (1): Starting Feb 13 19:01:57.870710 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:01:57.870710 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: ---------------------------------------------------- Feb 13 19:01:57.870710 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:01:57.870710 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:01:57.870710 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: corporation. Support and training for ntp-4 are Feb 13 19:01:57.870710 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: available at https://www.nwtime.org/support Feb 13 19:01:57.870710 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: ---------------------------------------------------- Feb 13 19:01:57.870710 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: proto: precision = 0.096 usec (-23) Feb 13 19:01:57.859061 ntpd[1932]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:01:57.859082 ntpd[1932]: ---------------------------------------------------- Feb 13 19:01:57.859101 ntpd[1932]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:01:57.859119 ntpd[1932]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:01:57.859137 ntpd[1932]: corporation. 
Support and training for ntp-4 are Feb 13 19:01:57.859160 ntpd[1932]: available at https://www.nwtime.org/support Feb 13 19:01:57.859177 ntpd[1932]: ---------------------------------------------------- Feb 13 19:01:57.868699 ntpd[1932]: proto: precision = 0.096 usec (-23) Feb 13 19:01:57.875488 ntpd[1932]: basedate set to 2025-02-01 Feb 13 19:01:57.882707 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: basedate set to 2025-02-01 Feb 13 19:01:57.882707 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: gps base set to 2025-02-02 (week 2352) Feb 13 19:01:57.882797 extend-filesystems[1969]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:01:57.875528 ntpd[1932]: gps base set to 2025-02-02 (week 2352) Feb 13 19:01:57.885184 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:01:57.890997 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:01:57.890997 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:01:57.890997 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:01:57.890997 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: Listen normally on 3 eth0 172.31.27.130:123 Feb 13 19:01:57.890997 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: Listen normally on 4 lo [::1]:123 Feb 13 19:01:57.890997 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: bind(21) AF_INET6 fe80::4a3:3bff:fe42:98b3%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:01:57.890997 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: unable to create socket on eth0 (5) for fe80::4a3:3bff:fe42:98b3%2#123 Feb 13 19:01:57.890997 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: failed to init interface for address fe80::4a3:3bff:fe42:98b3%2 Feb 13 19:01:57.890997 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: Listening on routing socket on fd #21 for interface updates 
Feb 13 19:01:57.886253 dbus-daemon[1926]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:01:57.885243 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:01:57.887307 ntpd[1932]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:01:57.890950 (ntainerd)[1967]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:01:57.887424 ntpd[1932]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:01:57.893767 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:01:57.887699 ntpd[1932]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:01:57.893809 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:01:57.887761 ntpd[1932]: Listen normally on 3 eth0 172.31.27.130:123 Feb 13 19:01:57.887833 ntpd[1932]: Listen normally on 4 lo [::1]:123 Feb 13 19:01:57.887909 ntpd[1932]: bind(21) AF_INET6 fe80::4a3:3bff:fe42:98b3%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:01:57.887951 ntpd[1932]: unable to create socket on eth0 (5) for fe80::4a3:3bff:fe42:98b3%2#123 Feb 13 19:01:57.887978 ntpd[1932]: failed to init interface for address fe80::4a3:3bff:fe42:98b3%2 Feb 13 19:01:57.888031 ntpd[1932]: Listening on routing socket on fd #21 for interface updates Feb 13 19:01:57.925757 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:01:57.925886 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:01:57.925886 ntpd[1932]: 13 Feb 19:01:57 ntpd[1932]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:01:57.921027 ntpd[1932]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:01:57.910445 systemd[1]: Starting 
systemd-hostnamed.service - Hostname Service... Feb 13 19:01:57.921101 ntpd[1932]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:01:57.948611 jq[1965]: true Feb 13 19:01:57.966851 tar[1945]: linux-arm64/helm Feb 13 19:01:57.972769 update_engine[1940]: I20250213 19:01:57.970939 1940 main.cc:92] Flatcar Update Engine starting Feb 13 19:01:57.980337 update_engine[1940]: I20250213 19:01:57.979665 1940 update_check_scheduler.cc:74] Next update check in 8m48s Feb 13 19:01:57.983174 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:01:57.994596 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:01:58.045355 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:01:58.061846 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:01:58.080468 extend-filesystems[1969]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:01:58.080468 extend-filesystems[1969]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:01:58.080468 extend-filesystems[1969]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 19:01:58.098455 extend-filesystems[1928]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:01:58.097188 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:01:58.101328 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
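The kernel and resize2fs lines above show the root filesystem growing on-line from 553472 to 1489915 blocks of 4 KiB each, i.e. roughly 2.1 GiB to 5.7 GiB. The block counts are from the log; the conversion below is just a quick check of that arithmetic.

```python
BLOCK_SIZE = 4096  # ext4 block size, "(4k)" per the resize2fs output above

old_blocks = 553_472
new_blocks = 1_489_915

def gib(blocks: int) -> float:
    """Convert an ext4 block count to GiB."""
    return blocks * BLOCK_SIZE / 2**30

print(f"{gib(old_blocks):.2f} GiB -> {gib(new_blocks):.2f} GiB")  # 2.11 GiB -> 5.68 GiB
```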
Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.132 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.132 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.132 INFO Fetch successful Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.132 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.132 INFO Fetch successful Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.132 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.132 INFO Fetch successful Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.132 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.132 INFO Fetch successful Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.132 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.134 INFO Fetch failed with 404: resource not found Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.134 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.141 INFO Fetch successful Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.141 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.141 INFO Fetch successful Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.141 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 
19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.141 INFO Fetch successful Feb 13 19:01:58.142445 coreos-metadata[1925]: Feb 13 19:01:58.141 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:01:58.151621 coreos-metadata[1925]: Feb 13 19:01:58.146 INFO Fetch successful Feb 13 19:01:58.151621 coreos-metadata[1925]: Feb 13 19:01:58.146 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:01:58.151621 coreos-metadata[1925]: Feb 13 19:01:58.147 INFO Fetch successful Feb 13 19:01:58.222519 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1707) Feb 13 19:01:58.260348 systemd-logind[1938]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:01:58.261550 systemd-logind[1938]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 19:01:58.267247 systemd-logind[1938]: New seat seat0. Feb 13 19:01:58.281573 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:01:58.288629 bash[2011]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:01:58.296288 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:01:58.348970 systemd[1]: Starting sshkeys.service... Feb 13 19:01:58.372957 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:01:58.375908 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:01:58.405138 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:01:58.420059 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:01:58.537219 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
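The coreos-metadata fetches above follow the EC2 IMDSv2 pattern: a PUT to `/latest/api/token` first (the "Putting ... token" line), then GETs for individual keys that carry the session token as a header. The sketch below shows only the request shapes; the header names and the 21600-second TTL cap are the documented IMDSv2 ones, but the helper functions themselves are hypothetical, not part of the agent.

```python
# Hypothetical helpers mirroring the IMDSv2 request sequence seen in the log.
IMDS = "http://169.254.169.254"

def token_request(ttl_seconds: int = 21600):
    """PUT request that obtains a session token (max TTL is 21600s)."""
    return ("PUT", f"{IMDS}/latest/api/token",
            {"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)})

def metadata_request(path: str, token: str):
    """GET request for one metadata key, e.g. 'meta-data/instance-id'."""
    return ("GET", f"{IMDS}/2021-01-03/{path}",
            {"X-aws-ec2-metadata-token": token})

method, url, headers = metadata_request("meta-data/instance-id", "<token>")
```

A 404 on a key, as with `meta-data/ipv6` above, simply means the resource is absent for this instance (no IPv6 address assigned), not that the agent failed.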
Feb 13 19:01:58.540647 dbus-daemon[1926]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:01:58.549702 dbus-daemon[1926]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1976 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:01:58.569282 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:01:58.594462 polkitd[2058]: Started polkitd version 121 Feb 13 19:01:58.611186 polkitd[2058]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:01:58.613040 polkitd[2058]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:01:58.616158 polkitd[2058]: Finished loading, compiling and executing 2 rules Feb 13 19:01:58.620922 dbus-daemon[1926]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:01:58.622968 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:01:58.627561 polkitd[2058]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:01:58.707017 systemd-hostnamed[1976]: Hostname set to (transient) Feb 13 19:01:58.707018 systemd-resolved[1870]: System hostname changed to 'ip-172-31-27-130'. Feb 13 19:01:58.716755 locksmithd[1981]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:01:58.746571 systemd-networkd[1868]: eth0: Gained IPv6LL Feb 13 19:01:58.776510 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:01:58.782921 systemd[1]: Reached target network-online.target - Network is Online. 
Feb 13 19:01:58.787954 coreos-metadata[2028]: Feb 13 19:01:58.787 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:01:58.795673 coreos-metadata[2028]: Feb 13 19:01:58.795 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:01:58.801584 coreos-metadata[2028]: Feb 13 19:01:58.801 INFO Fetch successful Feb 13 19:01:58.801584 coreos-metadata[2028]: Feb 13 19:01:58.801 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:01:58.805012 coreos-metadata[2028]: Feb 13 19:01:58.804 INFO Fetch successful Feb 13 19:01:58.810618 unknown[2028]: wrote ssh authorized keys file for user: core Feb 13 19:01:58.824775 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:01:58.873897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:01:58.881677 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:01:58.943116 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:01:59.003177 update-ssh-keys[2115]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:01:59.005613 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:01:59.014461 systemd[1]: Finished sshkeys.service. Feb 13 19:01:59.019479 containerd[1967]: time="2025-02-13T19:01:59.017173587Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:01:59.026104 amazon-ssm-agent[2092]: Initializing new seelog logger Feb 13 19:01:59.028549 amazon-ssm-agent[2092]: New Seelog Logger Creation Complete Feb 13 19:01:59.035440 amazon-ssm-agent[2092]: 2025/02/13 19:01:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:01:59.035440 amazon-ssm-agent[2092]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 19:01:59.035440 amazon-ssm-agent[2092]: 2025/02/13 19:01:59 processing appconfig overrides Feb 13 19:01:59.035440 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO Proxy environment variables: Feb 13 19:01:59.038406 amazon-ssm-agent[2092]: 2025/02/13 19:01:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:01:59.038406 amazon-ssm-agent[2092]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:01:59.038406 amazon-ssm-agent[2092]: 2025/02/13 19:01:59 processing appconfig overrides Feb 13 19:01:59.038406 amazon-ssm-agent[2092]: 2025/02/13 19:01:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:01:59.038406 amazon-ssm-agent[2092]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:01:59.038406 amazon-ssm-agent[2092]: 2025/02/13 19:01:59 processing appconfig overrides Feb 13 19:01:59.055552 amazon-ssm-agent[2092]: 2025/02/13 19:01:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:01:59.055552 amazon-ssm-agent[2092]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:01:59.055552 amazon-ssm-agent[2092]: 2025/02/13 19:01:59 processing appconfig overrides Feb 13 19:01:59.082001 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:01:59.139194 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO https_proxy: Feb 13 19:01:59.237863 containerd[1967]: time="2025-02-13T19:01:59.237768988Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:01:59.244417 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO http_proxy: Feb 13 19:01:59.249731 containerd[1967]: time="2025-02-13T19:01:59.249649012Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:01:59.252516 containerd[1967]: time="2025-02-13T19:01:59.251480261Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:01:59.252516 containerd[1967]: time="2025-02-13T19:01:59.251553893Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:01:59.252516 containerd[1967]: time="2025-02-13T19:01:59.251908301Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:01:59.252516 containerd[1967]: time="2025-02-13T19:01:59.251951501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:01:59.252516 containerd[1967]: time="2025-02-13T19:01:59.252111473Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:01:59.252516 containerd[1967]: time="2025-02-13T19:01:59.252148001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:01:59.259289 containerd[1967]: time="2025-02-13T19:01:59.257689637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:01:59.259289 containerd[1967]: time="2025-02-13T19:01:59.257752013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:01:59.259289 containerd[1967]: time="2025-02-13T19:01:59.257786297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:01:59.259289 containerd[1967]: time="2025-02-13T19:01:59.257810105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:01:59.259289 containerd[1967]: time="2025-02-13T19:01:59.258037325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:01:59.259289 containerd[1967]: time="2025-02-13T19:01:59.258489569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:01:59.259289 containerd[1967]: time="2025-02-13T19:01:59.258754517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:01:59.259289 containerd[1967]: time="2025-02-13T19:01:59.258782921Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:01:59.259289 containerd[1967]: time="2025-02-13T19:01:59.258953501Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:01:59.259289 containerd[1967]: time="2025-02-13T19:01:59.259060169Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:01:59.292404 containerd[1967]: time="2025-02-13T19:01:59.290702201Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:01:59.292404 containerd[1967]: time="2025-02-13T19:01:59.290800817Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 13 19:01:59.292404 containerd[1967]: time="2025-02-13T19:01:59.290834645Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:01:59.292404 containerd[1967]: time="2025-02-13T19:01:59.290871209Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:01:59.292404 containerd[1967]: time="2025-02-13T19:01:59.290905805Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:01:59.292404 containerd[1967]: time="2025-02-13T19:01:59.291179393Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:01:59.292404 containerd[1967]: time="2025-02-13T19:01:59.291697289Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:01:59.292404 containerd[1967]: time="2025-02-13T19:01:59.291921101Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:01:59.292404 containerd[1967]: time="2025-02-13T19:01:59.291955793Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:01:59.292404 containerd[1967]: time="2025-02-13T19:01:59.291988577Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:01:59.292404 containerd[1967]: time="2025-02-13T19:01:59.292026233Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:01:59.292404 containerd[1967]: time="2025-02-13T19:01:59.292057481Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 19:01:59.292404 containerd[1967]: time="2025-02-13T19:01:59.292087037Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:01:59.292404 containerd[1967]: time="2025-02-13T19:01:59.292121957Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:01:59.293124 containerd[1967]: time="2025-02-13T19:01:59.292154669Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:01:59.293124 containerd[1967]: time="2025-02-13T19:01:59.292188965Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:01:59.293124 containerd[1967]: time="2025-02-13T19:01:59.292217297Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:01:59.293124 containerd[1967]: time="2025-02-13T19:01:59.292244585Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:01:59.293124 containerd[1967]: time="2025-02-13T19:01:59.292286381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:01:59.293124 containerd[1967]: time="2025-02-13T19:01:59.292329389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:01:59.305988 containerd[1967]: time="2025-02-13T19:01:59.292358981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:01:59.305988 containerd[1967]: time="2025-02-13T19:01:59.299530481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 19:01:59.305988 containerd[1967]: time="2025-02-13T19:01:59.299576885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:01:59.305988 containerd[1967]: time="2025-02-13T19:01:59.299615333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:01:59.305988 containerd[1967]: time="2025-02-13T19:01:59.299647289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:01:59.305988 containerd[1967]: time="2025-02-13T19:01:59.299677637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:01:59.305988 containerd[1967]: time="2025-02-13T19:01:59.299707697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:01:59.305988 containerd[1967]: time="2025-02-13T19:01:59.299743685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:01:59.305988 containerd[1967]: time="2025-02-13T19:01:59.299772233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:01:59.305988 containerd[1967]: time="2025-02-13T19:01:59.299803337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:01:59.305988 containerd[1967]: time="2025-02-13T19:01:59.299839757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:01:59.305988 containerd[1967]: time="2025-02-13T19:01:59.299878625Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:01:59.305988 containerd[1967]: time="2025-02-13T19:01:59.299932805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 13 19:01:59.305988 containerd[1967]: time="2025-02-13T19:01:59.299964329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:01:59.305988 containerd[1967]: time="2025-02-13T19:01:59.299991845Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:01:59.306710 containerd[1967]: time="2025-02-13T19:01:59.300141065Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:01:59.306710 containerd[1967]: time="2025-02-13T19:01:59.300188405Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:01:59.306710 containerd[1967]: time="2025-02-13T19:01:59.300213989Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:01:59.306710 containerd[1967]: time="2025-02-13T19:01:59.300245285Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:01:59.306710 containerd[1967]: time="2025-02-13T19:01:59.300269189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:01:59.306710 containerd[1967]: time="2025-02-13T19:01:59.300297653Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:01:59.306710 containerd[1967]: time="2025-02-13T19:01:59.300321857Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:01:59.306710 containerd[1967]: time="2025-02-13T19:01:59.300346565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:01:59.307048 containerd[1967]: time="2025-02-13T19:01:59.300940493Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:01:59.307048 containerd[1967]: time="2025-02-13T19:01:59.301033541Z" level=info msg="Connect containerd service" Feb 13 19:01:59.307048 containerd[1967]: time="2025-02-13T19:01:59.301108685Z" level=info msg="using legacy CRI server" Feb 13 19:01:59.307048 containerd[1967]: time="2025-02-13T19:01:59.301128005Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:01:59.311491 containerd[1967]: time="2025-02-13T19:01:59.301365593Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:01:59.311491 containerd[1967]: time="2025-02-13T19:01:59.310990085Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:01:59.312276 containerd[1967]: time="2025-02-13T19:01:59.312181709Z" level=info msg="Start subscribing containerd event" Feb 13 19:01:59.312423 containerd[1967]: time="2025-02-13T19:01:59.312293081Z" level=info msg="Start recovering state" Feb 13 19:01:59.312477 containerd[1967]: time="2025-02-13T19:01:59.312458093Z" level=info msg="Start event monitor" Feb 13 19:01:59.312524 containerd[1967]: time="2025-02-13T19:01:59.312483413Z" level=info msg="Start 
snapshots syncer" Feb 13 19:01:59.312524 containerd[1967]: time="2025-02-13T19:01:59.312505985Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:01:59.312615 containerd[1967]: time="2025-02-13T19:01:59.312524345Z" level=info msg="Start streaming server" Feb 13 19:01:59.315286 containerd[1967]: time="2025-02-13T19:01:59.313106321Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:01:59.315286 containerd[1967]: time="2025-02-13T19:01:59.313331321Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:01:59.315286 containerd[1967]: time="2025-02-13T19:01:59.315027725Z" level=info msg="containerd successfully booted in 0.314830s" Feb 13 19:01:59.315118 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:01:59.346338 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO no_proxy: Feb 13 19:01:59.391353 sshd_keygen[1968]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:01:59.443782 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:01:59.510469 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:01:59.525033 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:01:59.537996 systemd[1]: Started sshd@0-172.31.27.130:22-139.178.89.65:48736.service - OpenSSH per-connection server daemon (139.178.89.65:48736). Feb 13 19:01:59.547683 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:01:59.583324 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:01:59.586345 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:01:59.602874 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Feb 13 19:01:59.646707 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO Agent will take identity from EC2 Feb 13 19:01:59.666484 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:01:59.679085 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:01:59.692028 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:01:59.697258 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:01:59.747398 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:01:59.844490 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:01:59.848715 sshd[2158]: Accepted publickey for core from 139.178.89.65 port 48736 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:01:59.854994 sshd-session[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:01:59.895610 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:01:59.905585 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:01:59.914405 systemd-logind[1938]: New session 1 of user core. Feb 13 19:01:59.945370 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:01:59.955992 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:01:59.972626 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:01:59.999511 (systemd)[2172]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:02:00.014029 systemd-logind[1938]: New session c1 of user core. 
Feb 13 19:02:00.048529 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:02:00.151174 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:02:00.203828 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:02:00.204142 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:02:00.204284 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO [Registrar] Starting registrar module Feb 13 19:02:00.204432 amazon-ssm-agent[2092]: 2025-02-13 19:01:59 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:02:00.204559 amazon-ssm-agent[2092]: 2025-02-13 19:02:00 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:02:00.204673 amazon-ssm-agent[2092]: 2025-02-13 19:02:00 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:02:00.204816 amazon-ssm-agent[2092]: 2025-02-13 19:02:00 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:02:00.204937 amazon-ssm-agent[2092]: 2025-02-13 19:02:00 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:02:00.247824 amazon-ssm-agent[2092]: 2025-02-13 19:02:00 INFO [CredentialRefresher] Next credential rotation will be in 31.2249596289 minutes Feb 13 19:02:00.251170 tar[1945]: linux-arm64/LICENSE Feb 13 19:02:00.252982 tar[1945]: linux-arm64/README.md Feb 13 19:02:00.287576 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:02:00.379232 systemd[2172]: Queued start job for default target default.target. Feb 13 19:02:00.387597 systemd[2172]: Created slice app.slice - User Application Slice. Feb 13 19:02:00.387660 systemd[2172]: Reached target paths.target - Paths. Feb 13 19:02:00.387750 systemd[2172]: Reached target timers.target - Timers. 
Feb 13 19:02:00.391062 systemd[2172]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:02:00.428022 systemd[2172]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:02:00.428589 systemd[2172]: Reached target sockets.target - Sockets. Feb 13 19:02:00.428709 systemd[2172]: Reached target basic.target - Basic System. Feb 13 19:02:00.428810 systemd[2172]: Reached target default.target - Main User Target. Feb 13 19:02:00.428863 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:02:00.428874 systemd[2172]: Startup finished in 388ms. Feb 13 19:02:00.439730 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:02:00.600005 systemd[1]: Started sshd@1-172.31.27.130:22-139.178.89.65:48742.service - OpenSSH per-connection server daemon (139.178.89.65:48742). Feb 13 19:02:00.803009 sshd[2186]: Accepted publickey for core from 139.178.89.65 port 48742 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:00.805581 sshd-session[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:00.815805 systemd-logind[1938]: New session 2 of user core. Feb 13 19:02:00.829708 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:02:00.866821 ntpd[1932]: Listen normally on 6 eth0 [fe80::4a3:3bff:fe42:98b3%2]:123 Feb 13 19:02:00.867519 ntpd[1932]: 13 Feb 19:02:00 ntpd[1932]: Listen normally on 6 eth0 [fe80::4a3:3bff:fe42:98b3%2]:123 Feb 13 19:02:00.961945 sshd[2188]: Connection closed by 139.178.89.65 port 48742 Feb 13 19:02:00.962890 sshd-session[2186]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:00.970185 systemd[1]: sshd@1-172.31.27.130:22-139.178.89.65:48742.service: Deactivated successfully. Feb 13 19:02:00.974210 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:02:00.976727 systemd-logind[1938]: Session 2 logged out. Waiting for processes to exit. 
Feb 13 19:02:00.980808 systemd-logind[1938]: Removed session 2. Feb 13 19:02:01.016610 systemd[1]: Started sshd@2-172.31.27.130:22-139.178.89.65:48758.service - OpenSSH per-connection server daemon (139.178.89.65:48758). Feb 13 19:02:01.202543 sshd[2194]: Accepted publickey for core from 139.178.89.65 port 48758 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:01.205504 sshd-session[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:01.218560 systemd-logind[1938]: New session 3 of user core. Feb 13 19:02:01.226710 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:02:01.241697 amazon-ssm-agent[2092]: 2025-02-13 19:02:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:02:01.342367 amazon-ssm-agent[2092]: 2025-02-13 19:02:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2198) started Feb 13 19:02:01.364791 sshd[2197]: Connection closed by 139.178.89.65 port 48758 Feb 13 19:02:01.367467 sshd-session[2194]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:01.382469 systemd[1]: sshd@2-172.31.27.130:22-139.178.89.65:48758.service: Deactivated successfully. Feb 13 19:02:01.391705 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:02:01.398724 systemd-logind[1938]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:02:01.408654 systemd-logind[1938]: Removed session 3. Feb 13 19:02:01.429688 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:01.433210 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:02:01.435582 systemd[1]: Startup finished in 1.124s (kernel) + 15.597s (initrd) + 9.102s (userspace) = 25.824s. 
Feb 13 19:02:01.442992 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:02:01.446367 amazon-ssm-agent[2092]: 2025-02-13 19:02:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:02:02.638540 kubelet[2213]: E0213 19:02:02.638474 2213 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:02:02.643282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:02:02.644095 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:02:02.645204 systemd[1]: kubelet.service: Consumed 1.295s CPU time, 234.4M memory peak. Feb 13 19:02:04.514034 systemd-resolved[1870]: Clock change detected. Flushing caches. Feb 13 19:02:11.060948 systemd[1]: Started sshd@3-172.31.27.130:22-139.178.89.65:54804.service - OpenSSH per-connection server daemon (139.178.89.65:54804). Feb 13 19:02:11.243929 sshd[2231]: Accepted publickey for core from 139.178.89.65 port 54804 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:11.246355 sshd-session[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:11.255156 systemd-logind[1938]: New session 4 of user core. Feb 13 19:02:11.263785 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:02:11.389239 sshd[2233]: Connection closed by 139.178.89.65 port 54804 Feb 13 19:02:11.390119 sshd-session[2231]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:11.396150 systemd[1]: sshd@3-172.31.27.130:22-139.178.89.65:54804.service: Deactivated successfully. 
Feb 13 19:02:11.399404 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:02:11.401230 systemd-logind[1938]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:02:11.403168 systemd-logind[1938]: Removed session 4. Feb 13 19:02:11.435968 systemd[1]: Started sshd@4-172.31.27.130:22-139.178.89.65:54812.service - OpenSSH per-connection server daemon (139.178.89.65:54812). Feb 13 19:02:11.623092 sshd[2239]: Accepted publickey for core from 139.178.89.65 port 54812 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:11.625512 sshd-session[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:11.634454 systemd-logind[1938]: New session 5 of user core. Feb 13 19:02:11.641762 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:02:11.762748 sshd[2241]: Connection closed by 139.178.89.65 port 54812 Feb 13 19:02:11.762627 sshd-session[2239]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:11.768511 systemd[1]: sshd@4-172.31.27.130:22-139.178.89.65:54812.service: Deactivated successfully. Feb 13 19:02:11.771459 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:02:11.772708 systemd-logind[1938]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:02:11.774834 systemd-logind[1938]: Removed session 5. Feb 13 19:02:11.801037 systemd[1]: Started sshd@5-172.31.27.130:22-139.178.89.65:54814.service - OpenSSH per-connection server daemon (139.178.89.65:54814). Feb 13 19:02:11.993791 sshd[2247]: Accepted publickey for core from 139.178.89.65 port 54814 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:11.996225 sshd-session[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:12.005211 systemd-logind[1938]: New session 6 of user core. Feb 13 19:02:12.011801 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:02:12.138573 sshd[2249]: Connection closed by 139.178.89.65 port 54814 Feb 13 19:02:12.139455 sshd-session[2247]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:12.145633 systemd[1]: sshd@5-172.31.27.130:22-139.178.89.65:54814.service: Deactivated successfully. Feb 13 19:02:12.150281 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:02:12.151803 systemd-logind[1938]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:02:12.153783 systemd-logind[1938]: Removed session 6. Feb 13 19:02:12.185043 systemd[1]: Started sshd@6-172.31.27.130:22-139.178.89.65:54820.service - OpenSSH per-connection server daemon (139.178.89.65:54820). Feb 13 19:02:12.341772 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:02:12.347913 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:12.375549 sshd[2255]: Accepted publickey for core from 139.178.89.65 port 54820 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:02:12.378233 sshd-session[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:02:12.388793 systemd-logind[1938]: New session 7 of user core. Feb 13 19:02:12.394870 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:02:12.527740 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:02:12.529223 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:02:12.771828 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:02:12.786103 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:02:12.973301 kubelet[2279]: E0213 19:02:12.973206 2279 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:02:12.978890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:02:12.979196 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:02:12.981663 systemd[1]: kubelet.service: Consumed 385ms CPU time, 96.6M memory peak. Feb 13 19:02:13.144482 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:02:13.157026 (dockerd)[2291]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:02:13.509515 dockerd[2291]: time="2025-02-13T19:02:13.509413108Z" level=info msg="Starting up" Feb 13 19:02:13.628230 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1720391033-merged.mount: Deactivated successfully. Feb 13 19:02:13.663956 dockerd[2291]: time="2025-02-13T19:02:13.663540617Z" level=info msg="Loading containers: start." Feb 13 19:02:13.935225 kernel: Initializing XFRM netlink socket Feb 13 19:02:13.968952 (udev-worker)[2315]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:02:14.076904 systemd-networkd[1868]: docker0: Link UP Feb 13 19:02:14.119047 dockerd[2291]: time="2025-02-13T19:02:14.118993587Z" level=info msg="Loading containers: done." Feb 13 19:02:14.145337 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck231619294-merged.mount: Deactivated successfully. 
Feb 13 19:02:14.149428 dockerd[2291]: time="2025-02-13T19:02:14.149326467Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:02:14.149639 dockerd[2291]: time="2025-02-13T19:02:14.149488419Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:02:14.149859 dockerd[2291]: time="2025-02-13T19:02:14.149791803Z" level=info msg="Daemon has completed initialization" Feb 13 19:02:14.205844 dockerd[2291]: time="2025-02-13T19:02:14.205532344Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:02:14.206403 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:02:15.514118 containerd[1967]: time="2025-02-13T19:02:15.513882486Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 19:02:16.097260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1601810804.mount: Deactivated successfully. 
Feb 13 19:02:17.367527 containerd[1967]: time="2025-02-13T19:02:17.367435855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:17.369439 containerd[1967]: time="2025-02-13T19:02:17.369352855Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375" Feb 13 19:02:17.370649 containerd[1967]: time="2025-02-13T19:02:17.370605103Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:17.378625 containerd[1967]: time="2025-02-13T19:02:17.378548911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:17.380380 containerd[1967]: time="2025-02-13T19:02:17.379977799Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 1.866036681s" Feb 13 19:02:17.380380 containerd[1967]: time="2025-02-13T19:02:17.380034811Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 19:02:17.381339 containerd[1967]: time="2025-02-13T19:02:17.381188563Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 19:02:18.765323 containerd[1967]: time="2025-02-13T19:02:18.765267754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:18.768397 containerd[1967]: time="2025-02-13T19:02:18.768318070Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773" Feb 13 19:02:18.770347 containerd[1967]: time="2025-02-13T19:02:18.770302054Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:18.775527 containerd[1967]: time="2025-02-13T19:02:18.775446322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:18.778114 containerd[1967]: time="2025-02-13T19:02:18.778068142Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.396825207s" Feb 13 19:02:18.778298 containerd[1967]: time="2025-02-13T19:02:18.778266130Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 19:02:18.780013 containerd[1967]: time="2025-02-13T19:02:18.779946670Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 19:02:19.935356 containerd[1967]: time="2025-02-13T19:02:19.934866744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:19.937022 containerd[1967]: time="2025-02-13T19:02:19.936932796Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540" Feb 13 19:02:19.938025 containerd[1967]: time="2025-02-13T19:02:19.937941672Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:19.945598 containerd[1967]: time="2025-02-13T19:02:19.945527388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:19.948322 containerd[1967]: time="2025-02-13T19:02:19.947781300Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.167778338s" Feb 13 19:02:19.948322 containerd[1967]: time="2025-02-13T19:02:19.947829696Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 19:02:19.949301 containerd[1967]: time="2025-02-13T19:02:19.948957888Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:02:21.187792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222296423.mount: Deactivated successfully. 
Feb 13 19:02:21.726567 containerd[1967]: time="2025-02-13T19:02:21.726463873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:21.728587 containerd[1967]: time="2025-02-13T19:02:21.728463865Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256"
Feb 13 19:02:21.730094 containerd[1967]: time="2025-02-13T19:02:21.730052497Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:21.734204 containerd[1967]: time="2025-02-13T19:02:21.734153329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:21.735485 containerd[1967]: time="2025-02-13T19:02:21.735441697Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.786400205s"
Feb 13 19:02:21.735675 containerd[1967]: time="2025-02-13T19:02:21.735645445Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\""
Feb 13 19:02:21.736872 containerd[1967]: time="2025-02-13T19:02:21.736542517Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 19:02:22.258179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount664826333.mount: Deactivated successfully.
Feb 13 19:02:23.155490 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 19:02:23.165395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:02:23.477285 containerd[1967]: time="2025-02-13T19:02:23.477159206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:23.479340 containerd[1967]: time="2025-02-13T19:02:23.479185994Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Feb 13 19:02:23.480134 containerd[1967]: time="2025-02-13T19:02:23.480035066Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:23.486521 containerd[1967]: time="2025-02-13T19:02:23.486407078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:23.491003 containerd[1967]: time="2025-02-13T19:02:23.490928870Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.754310645s"
Feb 13 19:02:23.491126 containerd[1967]: time="2025-02-13T19:02:23.491018030Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 19:02:23.491825 containerd[1967]: time="2025-02-13T19:02:23.491772602Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 19:02:23.515669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:02:23.534051 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:02:23.605568 kubelet[2603]: E0213 19:02:23.605465 2603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:02:23.610021 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:02:23.610369 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:02:23.612732 systemd[1]: kubelet.service: Consumed 284ms CPU time, 94.5M memory peak.
Feb 13 19:02:23.954672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4024053812.mount: Deactivated successfully.
Feb 13 19:02:23.961559 containerd[1967]: time="2025-02-13T19:02:23.961135048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:23.963525 containerd[1967]: time="2025-02-13T19:02:23.963419512Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Feb 13 19:02:23.965194 containerd[1967]: time="2025-02-13T19:02:23.965121748Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:23.969170 containerd[1967]: time="2025-02-13T19:02:23.969109864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:23.971183 containerd[1967]: time="2025-02-13T19:02:23.971014048Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 479.184518ms"
Feb 13 19:02:23.971183 containerd[1967]: time="2025-02-13T19:02:23.971063008Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Feb 13 19:02:23.972330 containerd[1967]: time="2025-02-13T19:02:23.971968576Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Feb 13 19:02:24.505861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount147755702.mount: Deactivated successfully.
Feb 13 19:02:26.537997 containerd[1967]: time="2025-02-13T19:02:26.537923573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:26.544370 containerd[1967]: time="2025-02-13T19:02:26.543850565Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425"
Feb 13 19:02:26.548369 containerd[1967]: time="2025-02-13T19:02:26.548305565Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:26.557127 containerd[1967]: time="2025-02-13T19:02:26.557059085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:02:26.560197 containerd[1967]: time="2025-02-13T19:02:26.559903469Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.587885057s"
Feb 13 19:02:26.560197 containerd[1967]: time="2025-02-13T19:02:26.559961513Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Feb 13 19:02:28.389167 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 19:02:32.716982 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:02:32.717745 systemd[1]: kubelet.service: Consumed 284ms CPU time, 94.5M memory peak.
Feb 13 19:02:32.734973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:02:32.783807 systemd[1]: Reload requested from client PID 2699 ('systemctl') (unit session-7.scope)...
Feb 13 19:02:32.783842 systemd[1]: Reloading...
Feb 13 19:02:33.042561 zram_generator::config[2747]: No configuration found.
Feb 13 19:02:33.275613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:02:33.497714 systemd[1]: Reloading finished in 713 ms.
Feb 13 19:02:33.585801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:02:33.595774 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:02:33.602228 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:02:33.604942 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 19:02:33.606579 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:02:33.606676 systemd[1]: kubelet.service: Consumed 202ms CPU time, 83.4M memory peak.
Feb 13 19:02:33.620979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:02:33.928791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:02:33.941112 (kubelet)[2810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:02:34.015145 kubelet[2810]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:02:34.015145 kubelet[2810]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:02:34.015145 kubelet[2810]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:02:34.015721 kubelet[2810]: I0213 19:02:34.015277 2810 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:02:35.472165 kubelet[2810]: I0213 19:02:35.472112 2810 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 19:02:35.473534 kubelet[2810]: I0213 19:02:35.472892 2810 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:02:35.473534 kubelet[2810]: I0213 19:02:35.473323 2810 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 19:02:35.515199 kubelet[2810]: E0213 19:02:35.515128 2810 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.27.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.27.130:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:35.517545 kubelet[2810]: I0213 19:02:35.517477 2810 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:02:35.528836 kubelet[2810]: E0213 19:02:35.528746 2810 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 19:02:35.528836 kubelet[2810]: I0213 19:02:35.528826 2810 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 19:02:35.535834 kubelet[2810]: I0213 19:02:35.535764 2810 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:02:35.536103 kubelet[2810]: I0213 19:02:35.536060 2810 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 19:02:35.536398 kubelet[2810]: I0213 19:02:35.536349 2810 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:02:35.536716 kubelet[2810]: I0213 19:02:35.536398 2810 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 19:02:35.536890 kubelet[2810]: I0213 19:02:35.536766 2810 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:02:35.536890 kubelet[2810]: I0213 19:02:35.536789 2810 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 19:02:35.537002 kubelet[2810]: I0213 19:02:35.536985 2810 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:02:35.541534 kubelet[2810]: I0213 19:02:35.541476 2810 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 19:02:35.541652 kubelet[2810]: I0213 19:02:35.541541 2810 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:02:35.541652 kubelet[2810]: I0213 19:02:35.541609 2810 kubelet.go:314] "Adding apiserver pod source"
Feb 13 19:02:35.541652 kubelet[2810]: I0213 19:02:35.541634 2810 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:02:35.547580 kubelet[2810]: W0213 19:02:35.546774 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-130&limit=500&resourceVersion=0": dial tcp 172.31.27.130:6443: connect: connection refused
Feb 13 19:02:35.547580 kubelet[2810]: E0213 19:02:35.546866 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.27.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-130&limit=500&resourceVersion=0\": dial tcp 172.31.27.130:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:35.551538 kubelet[2810]: W0213 19:02:35.550971 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.27.130:6443: connect: connection refused
Feb 13 19:02:35.551538 kubelet[2810]: E0213 19:02:35.551083 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.27.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.130:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:35.553289 kubelet[2810]: I0213 19:02:35.553230 2810 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 19:02:35.556367 kubelet[2810]: I0213 19:02:35.556312 2810 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:02:35.557766 kubelet[2810]: W0213 19:02:35.557723 2810 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 19:02:35.562007 kubelet[2810]: I0213 19:02:35.561957 2810 server.go:1269] "Started kubelet"
Feb 13 19:02:35.565556 kubelet[2810]: I0213 19:02:35.565425 2810 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 19:02:35.567190 kubelet[2810]: I0213 19:02:35.567130 2810 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 19:02:35.569334 kubelet[2810]: I0213 19:02:35.568489 2810 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 19:02:35.569334 kubelet[2810]: I0213 19:02:35.568925 2810 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 19:02:35.571185 kubelet[2810]: E0213 19:02:35.569161 2810 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.130:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.130:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-130.1823d9ccfe866a0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-130,UID:ip-172-31-27-130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-130,},FirstTimestamp:2025-02-13 19:02:35.561921038 +0000 UTC m=+1.614263469,LastTimestamp:2025-02-13 19:02:35.561921038 +0000 UTC m=+1.614263469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-130,}"
Feb 13 19:02:35.572077 kubelet[2810]: I0213 19:02:35.572027 2810 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 19:02:35.572569 kubelet[2810]: I0213 19:02:35.572491 2810 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 19:02:35.579375 kubelet[2810]: I0213 19:02:35.578071 2810 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 19:02:35.579375 kubelet[2810]: I0213 19:02:35.578295 2810 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 19:02:35.579375 kubelet[2810]: I0213 19:02:35.579270 2810 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 19:02:35.580028 kubelet[2810]: W0213 19:02:35.579917 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.130:6443: connect: connection refused
Feb 13 19:02:35.580123 kubelet[2810]: E0213 19:02:35.580045 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.27.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.130:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:35.581154 kubelet[2810]: E0213 19:02:35.581090 2810 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-27-130\" not found"
Feb 13 19:02:35.582677 kubelet[2810]: E0213 19:02:35.582487 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-130?timeout=10s\": dial tcp 172.31.27.130:6443: connect: connection refused" interval="200ms"
Feb 13 19:02:35.583679 kubelet[2810]: I0213 19:02:35.583140 2810 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:02:35.587610 kubelet[2810]: E0213 19:02:35.586772 2810 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:02:35.587610 kubelet[2810]: I0213 19:02:35.587135 2810 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:02:35.587610 kubelet[2810]: I0213 19:02:35.587156 2810 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:02:35.606139 kubelet[2810]: I0213 19:02:35.606061 2810 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:02:35.608253 kubelet[2810]: I0213 19:02:35.608192 2810 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:02:35.608253 kubelet[2810]: I0213 19:02:35.608241 2810 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 19:02:35.608435 kubelet[2810]: I0213 19:02:35.608283 2810 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 19:02:35.608435 kubelet[2810]: E0213 19:02:35.608358 2810 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 19:02:35.621217 kubelet[2810]: W0213 19:02:35.621129 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.130:6443: connect: connection refused
Feb 13 19:02:35.621382 kubelet[2810]: E0213 19:02:35.621233 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.27.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.130:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:35.631598 kubelet[2810]: I0213 19:02:35.631532 2810 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 19:02:35.631598 kubelet[2810]: I0213 19:02:35.631566 2810 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 19:02:35.631598 kubelet[2810]: I0213 19:02:35.631599 2810 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:02:35.634024 kubelet[2810]: I0213 19:02:35.633987 2810 policy_none.go:49] "None policy: Start"
Feb 13 19:02:35.635716 kubelet[2810]: I0213 19:02:35.635548 2810 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 19:02:35.635716 kubelet[2810]: I0213 19:02:35.635594 2810 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:02:35.645756 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 19:02:35.667006 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 19:02:35.674256 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 19:02:35.681287 kubelet[2810]: E0213 19:02:35.681225 2810 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-27-130\" not found"
Feb 13 19:02:35.684993 kubelet[2810]: I0213 19:02:35.684157 2810 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:02:35.684993 kubelet[2810]: I0213 19:02:35.684467 2810 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 19:02:35.684993 kubelet[2810]: I0213 19:02:35.684487 2810 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:02:35.684993 kubelet[2810]: I0213 19:02:35.684893 2810 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:02:35.688696 kubelet[2810]: E0213 19:02:35.688614 2810 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-130\" not found"
Feb 13 19:02:35.727876 systemd[1]: Created slice kubepods-burstable-pod6c39194449cca7014a3028d707d3365e.slice - libcontainer container kubepods-burstable-pod6c39194449cca7014a3028d707d3365e.slice.
Feb 13 19:02:35.744234 systemd[1]: Created slice kubepods-burstable-pod4b7c6605b64b3437b4c86ef3b77bc769.slice - libcontainer container kubepods-burstable-pod4b7c6605b64b3437b4c86ef3b77bc769.slice.
Feb 13 19:02:35.762773 systemd[1]: Created slice kubepods-burstable-pod061944bac344b1791ce6e4a50e504c82.slice - libcontainer container kubepods-burstable-pod061944bac344b1791ce6e4a50e504c82.slice.
Feb 13 19:02:35.781273 kubelet[2810]: I0213 19:02:35.781209 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/061944bac344b1791ce6e4a50e504c82-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-130\" (UID: \"061944bac344b1791ce6e4a50e504c82\") " pod="kube-system/kube-controller-manager-ip-172-31-27-130"
Feb 13 19:02:35.781273 kubelet[2810]: I0213 19:02:35.781275 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/061944bac344b1791ce6e4a50e504c82-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-130\" (UID: \"061944bac344b1791ce6e4a50e504c82\") " pod="kube-system/kube-controller-manager-ip-172-31-27-130"
Feb 13 19:02:35.781484 kubelet[2810]: I0213 19:02:35.781316 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b7c6605b64b3437b4c86ef3b77bc769-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-130\" (UID: \"4b7c6605b64b3437b4c86ef3b77bc769\") " pod="kube-system/kube-apiserver-ip-172-31-27-130"
Feb 13 19:02:35.781484 kubelet[2810]: I0213 19:02:35.781357 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b7c6605b64b3437b4c86ef3b77bc769-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-130\" (UID: \"4b7c6605b64b3437b4c86ef3b77bc769\") " pod="kube-system/kube-apiserver-ip-172-31-27-130"
Feb 13 19:02:35.781484 kubelet[2810]: I0213 19:02:35.781393 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/061944bac344b1791ce6e4a50e504c82-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-130\" (UID: \"061944bac344b1791ce6e4a50e504c82\") " pod="kube-system/kube-controller-manager-ip-172-31-27-130"
Feb 13 19:02:35.781484 kubelet[2810]: I0213 19:02:35.781430 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c39194449cca7014a3028d707d3365e-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-130\" (UID: \"6c39194449cca7014a3028d707d3365e\") " pod="kube-system/kube-scheduler-ip-172-31-27-130"
Feb 13 19:02:35.781484 kubelet[2810]: I0213 19:02:35.781462 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b7c6605b64b3437b4c86ef3b77bc769-ca-certs\") pod \"kube-apiserver-ip-172-31-27-130\" (UID: \"4b7c6605b64b3437b4c86ef3b77bc769\") " pod="kube-system/kube-apiserver-ip-172-31-27-130"
Feb 13 19:02:35.781767 kubelet[2810]: I0213 19:02:35.781513 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/061944bac344b1791ce6e4a50e504c82-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-130\" (UID: \"061944bac344b1791ce6e4a50e504c82\") " pod="kube-system/kube-controller-manager-ip-172-31-27-130"
Feb 13 19:02:35.781767 kubelet[2810]: I0213 19:02:35.781558 2810 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/061944bac344b1791ce6e4a50e504c82-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-130\" (UID: \"061944bac344b1791ce6e4a50e504c82\") " pod="kube-system/kube-controller-manager-ip-172-31-27-130"
Feb 13 19:02:35.783198 kubelet[2810]: E0213 19:02:35.783133 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-130?timeout=10s\": dial tcp 172.31.27.130:6443: connect: connection refused" interval="400ms"
Feb 13 19:02:35.787740 kubelet[2810]: I0213 19:02:35.787680 2810 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-130"
Feb 13 19:02:35.788313 kubelet[2810]: E0213 19:02:35.788259 2810 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.27.130:6443/api/v1/nodes\": dial tcp 172.31.27.130:6443: connect: connection refused" node="ip-172-31-27-130"
Feb 13 19:02:35.991817 kubelet[2810]: I0213 19:02:35.991178 2810 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-130"
Feb 13 19:02:35.992049 kubelet[2810]: E0213 19:02:35.991992 2810 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.27.130:6443/api/v1/nodes\": dial tcp 172.31.27.130:6443: connect: connection refused" node="ip-172-31-27-130"
Feb 13 19:02:36.042302 containerd[1967]: time="2025-02-13T19:02:36.042217248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-130,Uid:6c39194449cca7014a3028d707d3365e,Namespace:kube-system,Attempt:0,}"
Feb 13 19:02:36.058578 containerd[1967]: time="2025-02-13T19:02:36.058449792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-130,Uid:4b7c6605b64b3437b4c86ef3b77bc769,Namespace:kube-system,Attempt:0,}"
Feb 13 19:02:36.068382 containerd[1967]: time="2025-02-13T19:02:36.068007216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-130,Uid:061944bac344b1791ce6e4a50e504c82,Namespace:kube-system,Attempt:0,}"
Feb 13 19:02:36.183965 kubelet[2810]: E0213 19:02:36.183901 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-130?timeout=10s\": dial tcp 172.31.27.130:6443: connect: connection refused" interval="800ms"
Feb 13 19:02:36.394689 kubelet[2810]: I0213 19:02:36.394428 2810 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-130"
Feb 13 19:02:36.395451 kubelet[2810]: E0213 19:02:36.395396 2810 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.27.130:6443/api/v1/nodes\": dial tcp 172.31.27.130:6443: connect: connection refused" node="ip-172-31-27-130"
Feb 13 19:02:36.520985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3025664281.mount: Deactivated successfully.
Feb 13 19:02:36.527188 containerd[1967]: time="2025-02-13T19:02:36.527131959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:02:36.529165 containerd[1967]: time="2025-02-13T19:02:36.529119483Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:02:36.531366 containerd[1967]: time="2025-02-13T19:02:36.531310815Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Feb 13 19:02:36.532298 containerd[1967]: time="2025-02-13T19:02:36.532249515Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:02:36.535164 containerd[1967]: time="2025-02-13T19:02:36.534843615Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:02:36.536578 containerd[1967]: time="2025-02-13T19:02:36.536376867Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:02:36.537523 containerd[1967]: time="2025-02-13T19:02:36.537081843Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:02:36.544425 containerd[1967]: time="2025-02-13T19:02:36.544357911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:02:36.546683 containerd[1967]: time="2025-02-13T19:02:36.546322791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 487.717431ms"
Feb 13 19:02:36.550744 containerd[1967]: time="2025-02-13T19:02:36.550683375Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 508.356087ms"
Feb 13 19:02:36.558936 containerd[1967]: time="2025-02-13T19:02:36.558636063Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 490.502787ms"
Feb 13 19:02:36.562427 kubelet[2810]: W0213 19:02:36.562324 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.130:6443: connect: connection refused
Feb 13 19:02:36.562427 kubelet[2810]: E0213 19:02:36.562398 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.27.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.130:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:36.689324 kubelet[2810]: W0213 19:02:36.689198 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.130:6443: connect: connection refused
Feb 13 19:02:36.689324 kubelet[2810]: E0213 19:02:36.689266 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.27.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.130:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:02:36.736572 containerd[1967]: time="2025-02-13T19:02:36.735926056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:02:36.736572 containerd[1967]: time="2025-02-13T19:02:36.736067656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:02:36.736572 containerd[1967]: time="2025-02-13T19:02:36.736097296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:36.736572 containerd[1967]: time="2025-02-13T19:02:36.736292008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:36.746273 containerd[1967]: time="2025-02-13T19:02:36.746065372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:02:36.746273 containerd[1967]: time="2025-02-13T19:02:36.746180452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:02:36.747058 containerd[1967]: time="2025-02-13T19:02:36.746226160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:36.747058 containerd[1967]: time="2025-02-13T19:02:36.746393476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:36.749284 containerd[1967]: time="2025-02-13T19:02:36.749113552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:02:36.749284 containerd[1967]: time="2025-02-13T19:02:36.749227432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:02:36.749822 containerd[1967]: time="2025-02-13T19:02:36.749733760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:36.750243 containerd[1967]: time="2025-02-13T19:02:36.750157852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:02:36.784121 systemd[1]: Started cri-containerd-e49a2308ca23561b2640694d22b8471e91b09abfaf1a88e48ef01d1ce846b452.scope - libcontainer container e49a2308ca23561b2640694d22b8471e91b09abfaf1a88e48ef01d1ce846b452.
Feb 13 19:02:36.810961 systemd[1]: Started cri-containerd-6b5f7cadd3e3ca8cd76d2ef7e1041b7f3500d5bbca9eee99a77b32e2c4de718a.scope - libcontainer container 6b5f7cadd3e3ca8cd76d2ef7e1041b7f3500d5bbca9eee99a77b32e2c4de718a.
Feb 13 19:02:36.833844 systemd[1]: Started cri-containerd-957066d25dcdc7f663f28ecd65bc3d59243454a6c73bde7531a501faa2698b9a.scope - libcontainer container 957066d25dcdc7f663f28ecd65bc3d59243454a6c73bde7531a501faa2698b9a.
Feb 13 19:02:36.901527 containerd[1967]: time="2025-02-13T19:02:36.900867196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-130,Uid:061944bac344b1791ce6e4a50e504c82,Namespace:kube-system,Attempt:0,} returns sandbox id \"e49a2308ca23561b2640694d22b8471e91b09abfaf1a88e48ef01d1ce846b452\"" Feb 13 19:02:36.912404 containerd[1967]: time="2025-02-13T19:02:36.912156352Z" level=info msg="CreateContainer within sandbox \"e49a2308ca23561b2640694d22b8471e91b09abfaf1a88e48ef01d1ce846b452\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:02:36.953179 containerd[1967]: time="2025-02-13T19:02:36.953040485Z" level=info msg="CreateContainer within sandbox \"e49a2308ca23561b2640694d22b8471e91b09abfaf1a88e48ef01d1ce846b452\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1cd1ca50b53c3973ee814e7b823f8c942669c34c1cf6db9e9a977ce2d64b1889\"" Feb 13 19:02:36.955403 containerd[1967]: time="2025-02-13T19:02:36.955277789Z" level=info msg="StartContainer for \"1cd1ca50b53c3973ee814e7b823f8c942669c34c1cf6db9e9a977ce2d64b1889\"" Feb 13 19:02:36.956352 containerd[1967]: time="2025-02-13T19:02:36.956309297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-130,Uid:6c39194449cca7014a3028d707d3365e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b5f7cadd3e3ca8cd76d2ef7e1041b7f3500d5bbca9eee99a77b32e2c4de718a\"" Feb 13 19:02:36.960789 containerd[1967]: time="2025-02-13T19:02:36.960725885Z" level=info msg="CreateContainer within sandbox \"6b5f7cadd3e3ca8cd76d2ef7e1041b7f3500d5bbca9eee99a77b32e2c4de718a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:02:36.976292 containerd[1967]: time="2025-02-13T19:02:36.976240853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-130,Uid:4b7c6605b64b3437b4c86ef3b77bc769,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"957066d25dcdc7f663f28ecd65bc3d59243454a6c73bde7531a501faa2698b9a\"" Feb 13 19:02:36.984071 containerd[1967]: time="2025-02-13T19:02:36.983996453Z" level=info msg="CreateContainer within sandbox \"957066d25dcdc7f663f28ecd65bc3d59243454a6c73bde7531a501faa2698b9a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:02:36.985592 kubelet[2810]: E0213 19:02:36.985375 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-130?timeout=10s\": dial tcp 172.31.27.130:6443: connect: connection refused" interval="1.6s" Feb 13 19:02:36.993533 containerd[1967]: time="2025-02-13T19:02:36.993326489Z" level=info msg="CreateContainer within sandbox \"6b5f7cadd3e3ca8cd76d2ef7e1041b7f3500d5bbca9eee99a77b32e2c4de718a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e1289283615d63b66aac50f9a9421c4c5603f1dfdfebbc82cd5c12371322357a\"" Feb 13 19:02:36.994469 containerd[1967]: time="2025-02-13T19:02:36.994419413Z" level=info msg="StartContainer for \"e1289283615d63b66aac50f9a9421c4c5603f1dfdfebbc82cd5c12371322357a\"" Feb 13 19:02:37.008103 kubelet[2810]: W0213 19:02:37.007116 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-130&limit=500&resourceVersion=0": dial tcp 172.31.27.130:6443: connect: connection refused Feb 13 19:02:37.008103 kubelet[2810]: E0213 19:02:37.007214 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.27.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-130&limit=500&resourceVersion=0\": dial tcp 172.31.27.130:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:37.016076 containerd[1967]: 
time="2025-02-13T19:02:37.015968689Z" level=info msg="CreateContainer within sandbox \"957066d25dcdc7f663f28ecd65bc3d59243454a6c73bde7531a501faa2698b9a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"22b9eb7016a8776bc5a1e70cadc9da500be1af7e4b37307fdbe7e74bb10eb604\"" Feb 13 19:02:37.017875 containerd[1967]: time="2025-02-13T19:02:37.017828809Z" level=info msg="StartContainer for \"22b9eb7016a8776bc5a1e70cadc9da500be1af7e4b37307fdbe7e74bb10eb604\"" Feb 13 19:02:37.020826 systemd[1]: Started cri-containerd-1cd1ca50b53c3973ee814e7b823f8c942669c34c1cf6db9e9a977ce2d64b1889.scope - libcontainer container 1cd1ca50b53c3973ee814e7b823f8c942669c34c1cf6db9e9a977ce2d64b1889. Feb 13 19:02:37.069836 systemd[1]: Started cri-containerd-e1289283615d63b66aac50f9a9421c4c5603f1dfdfebbc82cd5c12371322357a.scope - libcontainer container e1289283615d63b66aac50f9a9421c4c5603f1dfdfebbc82cd5c12371322357a. Feb 13 19:02:37.073288 kubelet[2810]: W0213 19:02:37.073031 2810 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.27.130:6443: connect: connection refused Feb 13 19:02:37.073288 kubelet[2810]: E0213 19:02:37.073135 2810 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.27.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.130:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:02:37.113901 systemd[1]: Started cri-containerd-22b9eb7016a8776bc5a1e70cadc9da500be1af7e4b37307fdbe7e74bb10eb604.scope - libcontainer container 22b9eb7016a8776bc5a1e70cadc9da500be1af7e4b37307fdbe7e74bb10eb604. 
Feb 13 19:02:37.151688 containerd[1967]: time="2025-02-13T19:02:37.151629314Z" level=info msg="StartContainer for \"1cd1ca50b53c3973ee814e7b823f8c942669c34c1cf6db9e9a977ce2d64b1889\" returns successfully" Feb 13 19:02:37.212907 kubelet[2810]: I0213 19:02:37.211581 2810 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-130" Feb 13 19:02:37.212907 kubelet[2810]: E0213 19:02:37.212163 2810 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.27.130:6443/api/v1/nodes\": dial tcp 172.31.27.130:6443: connect: connection refused" node="ip-172-31-27-130" Feb 13 19:02:37.249637 containerd[1967]: time="2025-02-13T19:02:37.243836990Z" level=info msg="StartContainer for \"e1289283615d63b66aac50f9a9421c4c5603f1dfdfebbc82cd5c12371322357a\" returns successfully" Feb 13 19:02:37.308393 containerd[1967]: time="2025-02-13T19:02:37.308334302Z" level=info msg="StartContainer for \"22b9eb7016a8776bc5a1e70cadc9da500be1af7e4b37307fdbe7e74bb10eb604\" returns successfully" Feb 13 19:02:38.817540 kubelet[2810]: I0213 19:02:38.816321 2810 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-130" Feb 13 19:02:41.340477 kubelet[2810]: E0213 19:02:41.340406 2810 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-130\" not found" node="ip-172-31-27-130" Feb 13 19:02:41.525010 kubelet[2810]: I0213 19:02:41.523711 2810 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-27-130" Feb 13 19:02:41.550713 kubelet[2810]: I0213 19:02:41.550669 2810 apiserver.go:52] "Watching apiserver" Feb 13 19:02:41.579104 kubelet[2810]: I0213 19:02:41.579021 2810 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:02:43.262584 update_engine[1940]: I20250213 19:02:43.262085 1940 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:02:43.343637 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3098) Feb 13 19:02:43.440038 systemd[1]: Reload requested from client PID 3147 ('systemctl') (unit session-7.scope)... Feb 13 19:02:43.440065 systemd[1]: Reloading... Feb 13 19:02:43.863601 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3089) Feb 13 19:02:43.925529 zram_generator::config[3245]: No configuration found. Feb 13 19:02:44.347697 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3089) Feb 13 19:02:44.449953 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:02:44.822200 systemd[1]: Reloading finished in 1381 ms. Feb 13 19:02:45.051540 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:45.080447 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:02:45.081008 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:45.081099 systemd[1]: kubelet.service: Consumed 2.337s CPU time, 116.8M memory peak. Feb 13 19:02:45.095909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:02:45.419849 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:02:45.440701 (kubelet)[3458]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:02:45.547998 kubelet[3458]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:02:45.547998 kubelet[3458]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:02:45.547998 kubelet[3458]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:02:45.547998 kubelet[3458]: I0213 19:02:45.547360 3458 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:02:45.563070 kubelet[3458]: I0213 19:02:45.562986 3458 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:02:45.563070 kubelet[3458]: I0213 19:02:45.563055 3458 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:02:45.564154 kubelet[3458]: I0213 19:02:45.563695 3458 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:02:45.567086 kubelet[3458]: I0213 19:02:45.567038 3458 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:02:45.577568 kubelet[3458]: I0213 19:02:45.577340 3458 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:02:45.588653 kubelet[3458]: E0213 19:02:45.588576 3458 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:02:45.588653 kubelet[3458]: I0213 19:02:45.588653 3458 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Feb 13 19:02:45.597038 kubelet[3458]: I0213 19:02:45.596980 3458 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:02:45.597201 kubelet[3458]: I0213 19:02:45.597177 3458 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:02:45.597451 kubelet[3458]: I0213 19:02:45.597394 3458 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:02:45.597767 kubelet[3458]: I0213 19:02:45.597443 3458 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryMa
nagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:02:45.597912 kubelet[3458]: I0213 19:02:45.597780 3458 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:02:45.597912 kubelet[3458]: I0213 19:02:45.597801 3458 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:02:45.597912 kubelet[3458]: I0213 19:02:45.597855 3458 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:02:45.598185 kubelet[3458]: I0213 19:02:45.598048 3458 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:02:45.598185 kubelet[3458]: I0213 19:02:45.598073 3458 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:02:45.598185 kubelet[3458]: I0213 19:02:45.598114 3458 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:02:45.598185 kubelet[3458]: I0213 19:02:45.598133 3458 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:02:45.600832 kubelet[3458]: I0213 19:02:45.600786 3458 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:02:45.602018 kubelet[3458]: I0213 19:02:45.601579 3458 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:02:45.602275 kubelet[3458]: I0213 19:02:45.602237 3458 server.go:1269] "Started kubelet" Feb 13 19:02:45.606513 kubelet[3458]: I0213 19:02:45.606431 3458 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:02:45.622525 kubelet[3458]: I0213 19:02:45.620617 3458 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:02:45.629974 kubelet[3458]: I0213 19:02:45.629893 3458 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:02:45.631379 kubelet[3458]: I0213 
19:02:45.631311 3458 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:02:45.632903 kubelet[3458]: I0213 19:02:45.632829 3458 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:02:45.635543 kubelet[3458]: E0213 19:02:45.634806 3458 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-27-130\" not found" Feb 13 19:02:45.640289 kubelet[3458]: I0213 19:02:45.640229 3458 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:02:45.641606 kubelet[3458]: I0213 19:02:45.640532 3458 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:02:45.645714 kubelet[3458]: I0213 19:02:45.645651 3458 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:02:45.651844 kubelet[3458]: I0213 19:02:45.651785 3458 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:02:45.672251 kubelet[3458]: I0213 19:02:45.670828 3458 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:02:45.675042 kubelet[3458]: I0213 19:02:45.672744 3458 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:02:45.706491 kubelet[3458]: I0213 19:02:45.705083 3458 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:02:45.716767 kubelet[3458]: I0213 19:02:45.716708 3458 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:02:45.716767 kubelet[3458]: I0213 19:02:45.716757 3458 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:02:45.716985 kubelet[3458]: I0213 19:02:45.716788 3458 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:02:45.716985 kubelet[3458]: E0213 19:02:45.716862 3458 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:02:45.732852 kubelet[3458]: I0213 19:02:45.732797 3458 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:02:45.737665 kubelet[3458]: E0213 19:02:45.737626 3458 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-27-130\" not found" Feb 13 19:02:45.767810 kubelet[3458]: E0213 19:02:45.767757 3458 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:02:45.817693 kubelet[3458]: E0213 19:02:45.817642 3458 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:02:45.844873 kubelet[3458]: I0213 19:02:45.844840 3458 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:02:45.845190 kubelet[3458]: I0213 19:02:45.845116 3458 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:02:45.845190 kubelet[3458]: I0213 19:02:45.845155 3458 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:02:45.845713 kubelet[3458]: I0213 19:02:45.845668 3458 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:02:45.845931 kubelet[3458]: I0213 19:02:45.845798 3458 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:02:45.845931 kubelet[3458]: I0213 19:02:45.845834 3458 policy_none.go:49] "None policy: Start" Feb 13 19:02:45.848877 kubelet[3458]: I0213 19:02:45.847930 3458 
memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:02:45.848877 kubelet[3458]: I0213 19:02:45.847993 3458 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:02:45.848877 kubelet[3458]: I0213 19:02:45.848278 3458 state_mem.go:75] "Updated machine memory state" Feb 13 19:02:45.860273 kubelet[3458]: I0213 19:02:45.860238 3458 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:02:45.862233 kubelet[3458]: I0213 19:02:45.862185 3458 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:02:45.864848 kubelet[3458]: I0213 19:02:45.864665 3458 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:02:45.868022 kubelet[3458]: I0213 19:02:45.867744 3458 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:02:45.991012 kubelet[3458]: I0213 19:02:45.990859 3458 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-130" Feb 13 19:02:46.003919 kubelet[3458]: I0213 19:02:46.003217 3458 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-27-130" Feb 13 19:02:46.003919 kubelet[3458]: I0213 19:02:46.003341 3458 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-27-130" Feb 13 19:02:46.042588 kubelet[3458]: I0213 19:02:46.042532 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c39194449cca7014a3028d707d3365e-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-130\" (UID: \"6c39194449cca7014a3028d707d3365e\") " pod="kube-system/kube-scheduler-ip-172-31-27-130" Feb 13 19:02:46.043587 kubelet[3458]: I0213 19:02:46.042595 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/061944bac344b1791ce6e4a50e504c82-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-130\" (UID: \"061944bac344b1791ce6e4a50e504c82\") " pod="kube-system/kube-controller-manager-ip-172-31-27-130" Feb 13 19:02:46.043723 kubelet[3458]: I0213 19:02:46.043619 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b7c6605b64b3437b4c86ef3b77bc769-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-130\" (UID: \"4b7c6605b64b3437b4c86ef3b77bc769\") " pod="kube-system/kube-apiserver-ip-172-31-27-130" Feb 13 19:02:46.043723 kubelet[3458]: I0213 19:02:46.043667 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/061944bac344b1791ce6e4a50e504c82-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-130\" (UID: \"061944bac344b1791ce6e4a50e504c82\") " pod="kube-system/kube-controller-manager-ip-172-31-27-130" Feb 13 19:02:46.044006 kubelet[3458]: I0213 19:02:46.043949 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/061944bac344b1791ce6e4a50e504c82-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-130\" (UID: \"061944bac344b1791ce6e4a50e504c82\") " pod="kube-system/kube-controller-manager-ip-172-31-27-130" Feb 13 19:02:46.044084 kubelet[3458]: I0213 19:02:46.044036 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/061944bac344b1791ce6e4a50e504c82-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-130\" (UID: \"061944bac344b1791ce6e4a50e504c82\") " pod="kube-system/kube-controller-manager-ip-172-31-27-130" Feb 13 19:02:46.044359 kubelet[3458]: I0213 19:02:46.044313 3458 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/061944bac344b1791ce6e4a50e504c82-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-130\" (UID: \"061944bac344b1791ce6e4a50e504c82\") " pod="kube-system/kube-controller-manager-ip-172-31-27-130" Feb 13 19:02:46.044977 kubelet[3458]: I0213 19:02:46.044927 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b7c6605b64b3437b4c86ef3b77bc769-ca-certs\") pod \"kube-apiserver-ip-172-31-27-130\" (UID: \"4b7c6605b64b3437b4c86ef3b77bc769\") " pod="kube-system/kube-apiserver-ip-172-31-27-130" Feb 13 19:02:46.045077 kubelet[3458]: I0213 19:02:46.045019 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b7c6605b64b3437b4c86ef3b77bc769-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-130\" (UID: \"4b7c6605b64b3437b4c86ef3b77bc769\") " pod="kube-system/kube-apiserver-ip-172-31-27-130" Feb 13 19:02:46.599475 kubelet[3458]: I0213 19:02:46.599418 3458 apiserver.go:52] "Watching apiserver" Feb 13 19:02:46.640968 kubelet[3458]: I0213 19:02:46.640852 3458 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:02:46.651694 kubelet[3458]: I0213 19:02:46.651465 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-130" podStartSLOduration=0.651445969 podStartE2EDuration="651.445969ms" podCreationTimestamp="2025-02-13 19:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:46.651394321 +0000 UTC m=+1.196869219" watchObservedRunningTime="2025-02-13 19:02:46.651445969 +0000 UTC m=+1.196920795" Feb 13 19:02:46.665849 
kubelet[3458]: I0213 19:02:46.665765 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-130" podStartSLOduration=0.665740897 podStartE2EDuration="665.740897ms" podCreationTimestamp="2025-02-13 19:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:46.664911469 +0000 UTC m=+1.210386319" watchObservedRunningTime="2025-02-13 19:02:46.665740897 +0000 UTC m=+1.211215723" Feb 13 19:02:46.824988 kubelet[3458]: I0213 19:02:46.824891 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-130" podStartSLOduration=0.824866682 podStartE2EDuration="824.866682ms" podCreationTimestamp="2025-02-13 19:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:46.686249437 +0000 UTC m=+1.231724287" watchObservedRunningTime="2025-02-13 19:02:46.824866682 +0000 UTC m=+1.370341508" Feb 13 19:02:47.254060 sudo[2261]: pam_unix(sudo:session): session closed for user root Feb 13 19:02:47.277977 sshd[2260]: Connection closed by 139.178.89.65 port 54820 Feb 13 19:02:47.278864 sshd-session[2255]: pam_unix(sshd:session): session closed for user core Feb 13 19:02:47.286212 systemd[1]: sshd@6-172.31.27.130:22-139.178.89.65:54820.service: Deactivated successfully. Feb 13 19:02:47.291691 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:02:47.292079 systemd[1]: session-7.scope: Consumed 8.202s CPU time, 219.9M memory peak. Feb 13 19:02:47.294967 systemd-logind[1938]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:02:47.297187 systemd-logind[1938]: Removed session 7. 
Feb 13 19:02:48.248175 kubelet[3458]: I0213 19:02:48.248128 3458 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:02:48.249399 containerd[1967]: time="2025-02-13T19:02:48.249159301Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:02:48.251115 kubelet[3458]: I0213 19:02:48.249471 3458 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:02:48.895278 systemd[1]: Created slice kubepods-besteffort-podacdd61e9_13d0_45fd_ade3_7683d674d989.slice - libcontainer container kubepods-besteffort-podacdd61e9_13d0_45fd_ade3_7683d674d989.slice. Feb 13 19:02:48.925041 systemd[1]: Created slice kubepods-burstable-pod7283aa78_deef_47ed_bc95_785f8ea36571.slice - libcontainer container kubepods-burstable-pod7283aa78_deef_47ed_bc95_785f8ea36571.slice. Feb 13 19:02:48.963139 kubelet[3458]: I0213 19:02:48.963081 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acdd61e9-13d0-45fd-ade3-7683d674d989-lib-modules\") pod \"kube-proxy-r4pvm\" (UID: \"acdd61e9-13d0-45fd-ade3-7683d674d989\") " pod="kube-system/kube-proxy-r4pvm" Feb 13 19:02:48.964388 kubelet[3458]: I0213 19:02:48.964333 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/7283aa78-deef-47ed-bc95-785f8ea36571-cni\") pod \"kube-flannel-ds-xxqzc\" (UID: \"7283aa78-deef-47ed-bc95-785f8ea36571\") " pod="kube-flannel/kube-flannel-ds-xxqzc" Feb 13 19:02:48.964569 kubelet[3458]: I0213 19:02:48.964402 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/7283aa78-deef-47ed-bc95-785f8ea36571-flannel-cfg\") pod \"kube-flannel-ds-xxqzc\" (UID: 
\"7283aa78-deef-47ed-bc95-785f8ea36571\") " pod="kube-flannel/kube-flannel-ds-xxqzc" Feb 13 19:02:48.964569 kubelet[3458]: I0213 19:02:48.964450 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/acdd61e9-13d0-45fd-ade3-7683d674d989-kube-proxy\") pod \"kube-proxy-r4pvm\" (UID: \"acdd61e9-13d0-45fd-ade3-7683d674d989\") " pod="kube-system/kube-proxy-r4pvm" Feb 13 19:02:48.964569 kubelet[3458]: I0213 19:02:48.964483 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acdd61e9-13d0-45fd-ade3-7683d674d989-xtables-lock\") pod \"kube-proxy-r4pvm\" (UID: \"acdd61e9-13d0-45fd-ade3-7683d674d989\") " pod="kube-system/kube-proxy-r4pvm" Feb 13 19:02:48.964569 kubelet[3458]: I0213 19:02:48.964555 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/7283aa78-deef-47ed-bc95-785f8ea36571-cni-plugin\") pod \"kube-flannel-ds-xxqzc\" (UID: \"7283aa78-deef-47ed-bc95-785f8ea36571\") " pod="kube-flannel/kube-flannel-ds-xxqzc" Feb 13 19:02:48.964828 kubelet[3458]: I0213 19:02:48.964592 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7283aa78-deef-47ed-bc95-785f8ea36571-xtables-lock\") pod \"kube-flannel-ds-xxqzc\" (UID: \"7283aa78-deef-47ed-bc95-785f8ea36571\") " pod="kube-flannel/kube-flannel-ds-xxqzc" Feb 13 19:02:48.964828 kubelet[3458]: I0213 19:02:48.964633 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5hx5\" (UniqueName: \"kubernetes.io/projected/7283aa78-deef-47ed-bc95-785f8ea36571-kube-api-access-z5hx5\") pod \"kube-flannel-ds-xxqzc\" (UID: \"7283aa78-deef-47ed-bc95-785f8ea36571\") " 
pod="kube-flannel/kube-flannel-ds-xxqzc" Feb 13 19:02:48.964828 kubelet[3458]: I0213 19:02:48.964675 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnnqt\" (UniqueName: \"kubernetes.io/projected/acdd61e9-13d0-45fd-ade3-7683d674d989-kube-api-access-xnnqt\") pod \"kube-proxy-r4pvm\" (UID: \"acdd61e9-13d0-45fd-ade3-7683d674d989\") " pod="kube-system/kube-proxy-r4pvm" Feb 13 19:02:48.964828 kubelet[3458]: I0213 19:02:48.964708 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7283aa78-deef-47ed-bc95-785f8ea36571-run\") pod \"kube-flannel-ds-xxqzc\" (UID: \"7283aa78-deef-47ed-bc95-785f8ea36571\") " pod="kube-flannel/kube-flannel-ds-xxqzc" Feb 13 19:02:49.080577 kubelet[3458]: E0213 19:02:49.080344 3458 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:02:49.080577 kubelet[3458]: E0213 19:02:49.080394 3458 projected.go:194] Error preparing data for projected volume kube-api-access-z5hx5 for pod kube-flannel/kube-flannel-ds-xxqzc: configmap "kube-root-ca.crt" not found Feb 13 19:02:49.080577 kubelet[3458]: E0213 19:02:49.080528 3458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7283aa78-deef-47ed-bc95-785f8ea36571-kube-api-access-z5hx5 podName:7283aa78-deef-47ed-bc95-785f8ea36571 nodeName:}" failed. No retries permitted until 2025-02-13 19:02:49.580465437 +0000 UTC m=+4.125940263 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-z5hx5" (UniqueName: "kubernetes.io/projected/7283aa78-deef-47ed-bc95-785f8ea36571-kube-api-access-z5hx5") pod "kube-flannel-ds-xxqzc" (UID: "7283aa78-deef-47ed-bc95-785f8ea36571") : configmap "kube-root-ca.crt" not found Feb 13 19:02:49.099116 kubelet[3458]: E0213 19:02:49.098919 3458 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 19:02:49.099116 kubelet[3458]: E0213 19:02:49.098982 3458 projected.go:194] Error preparing data for projected volume kube-api-access-xnnqt for pod kube-system/kube-proxy-r4pvm: configmap "kube-root-ca.crt" not found Feb 13 19:02:49.099116 kubelet[3458]: E0213 19:02:49.099073 3458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/acdd61e9-13d0-45fd-ade3-7683d674d989-kube-api-access-xnnqt podName:acdd61e9-13d0-45fd-ade3-7683d674d989 nodeName:}" failed. No retries permitted until 2025-02-13 19:02:49.599044353 +0000 UTC m=+4.144519167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xnnqt" (UniqueName: "kubernetes.io/projected/acdd61e9-13d0-45fd-ade3-7683d674d989-kube-api-access-xnnqt") pod "kube-proxy-r4pvm" (UID: "acdd61e9-13d0-45fd-ade3-7683d674d989") : configmap "kube-root-ca.crt" not found Feb 13 19:02:49.822806 containerd[1967]: time="2025-02-13T19:02:49.822688325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r4pvm,Uid:acdd61e9-13d0-45fd-ade3-7683d674d989,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:49.839537 containerd[1967]: time="2025-02-13T19:02:49.836793773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-xxqzc,Uid:7283aa78-deef-47ed-bc95-785f8ea36571,Namespace:kube-flannel,Attempt:0,}" Feb 13 19:02:49.873123 containerd[1967]: time="2025-02-13T19:02:49.872749601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:49.874521 containerd[1967]: time="2025-02-13T19:02:49.873064349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:49.874665 containerd[1967]: time="2025-02-13T19:02:49.874586321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:49.875287 containerd[1967]: time="2025-02-13T19:02:49.875107409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:49.903418 containerd[1967]: time="2025-02-13T19:02:49.903258605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:02:49.903941 containerd[1967]: time="2025-02-13T19:02:49.903703505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:02:49.903941 containerd[1967]: time="2025-02-13T19:02:49.903783497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:49.905869 containerd[1967]: time="2025-02-13T19:02:49.905762633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:02:49.926037 systemd[1]: Started cri-containerd-d452ecc8f3b518cc2aecc24bb5331fabf3566c4cf5e7f03b0a54514fd445c575.scope - libcontainer container d452ecc8f3b518cc2aecc24bb5331fabf3566c4cf5e7f03b0a54514fd445c575. Feb 13 19:02:49.960579 systemd[1]: Started cri-containerd-86779ded3beddcba67ecd3442de41f6d40a45acf45b7a5547fdb2c944ac5875a.scope - libcontainer container 86779ded3beddcba67ecd3442de41f6d40a45acf45b7a5547fdb2c944ac5875a. 
Feb 13 19:02:50.007423 containerd[1967]: time="2025-02-13T19:02:50.007218061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r4pvm,Uid:acdd61e9-13d0-45fd-ade3-7683d674d989,Namespace:kube-system,Attempt:0,} returns sandbox id \"d452ecc8f3b518cc2aecc24bb5331fabf3566c4cf5e7f03b0a54514fd445c575\"" Feb 13 19:02:50.018243 containerd[1967]: time="2025-02-13T19:02:50.018177386Z" level=info msg="CreateContainer within sandbox \"d452ecc8f3b518cc2aecc24bb5331fabf3566c4cf5e7f03b0a54514fd445c575\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:02:50.080482 containerd[1967]: time="2025-02-13T19:02:50.080162330Z" level=info msg="CreateContainer within sandbox \"d452ecc8f3b518cc2aecc24bb5331fabf3566c4cf5e7f03b0a54514fd445c575\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"299ec9ac1a44666cdf34016a630ecb8ea7384e0f01c23ff2f3c98d1503b10f61\"" Feb 13 19:02:50.082568 containerd[1967]: time="2025-02-13T19:02:50.082203242Z" level=info msg="StartContainer for \"299ec9ac1a44666cdf34016a630ecb8ea7384e0f01c23ff2f3c98d1503b10f61\"" Feb 13 19:02:50.086059 containerd[1967]: time="2025-02-13T19:02:50.085873574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-xxqzc,Uid:7283aa78-deef-47ed-bc95-785f8ea36571,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"86779ded3beddcba67ecd3442de41f6d40a45acf45b7a5547fdb2c944ac5875a\"" Feb 13 19:02:50.090373 containerd[1967]: time="2025-02-13T19:02:50.089954150Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 19:02:50.138794 systemd[1]: Started cri-containerd-299ec9ac1a44666cdf34016a630ecb8ea7384e0f01c23ff2f3c98d1503b10f61.scope - libcontainer container 299ec9ac1a44666cdf34016a630ecb8ea7384e0f01c23ff2f3c98d1503b10f61. 
Feb 13 19:02:50.200601 containerd[1967]: time="2025-02-13T19:02:50.200466926Z" level=info msg="StartContainer for \"299ec9ac1a44666cdf34016a630ecb8ea7384e0f01c23ff2f3c98d1503b10f61\" returns successfully" Feb 13 19:02:50.826764 kubelet[3458]: I0213 19:02:50.826669 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r4pvm" podStartSLOduration=2.826645998 podStartE2EDuration="2.826645998s" podCreationTimestamp="2025-02-13 19:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:02:50.826021026 +0000 UTC m=+5.371495876" watchObservedRunningTime="2025-02-13 19:02:50.826645998 +0000 UTC m=+5.372120836" Feb 13 19:02:52.238803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1923287737.mount: Deactivated successfully. Feb 13 19:02:52.297136 containerd[1967]: time="2025-02-13T19:02:52.297069233Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:52.299464 containerd[1967]: time="2025-02-13T19:02:52.299382869Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 19:02:52.301019 containerd[1967]: time="2025-02-13T19:02:52.300950081Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:52.305651 containerd[1967]: time="2025-02-13T19:02:52.305551181Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:52.307168 containerd[1967]: time="2025-02-13T19:02:52.307124993Z" level=info msg="Pulled image 
\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.217110207s" Feb 13 19:02:52.307413 containerd[1967]: time="2025-02-13T19:02:52.307275425Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 19:02:52.311336 containerd[1967]: time="2025-02-13T19:02:52.311286305Z" level=info msg="CreateContainer within sandbox \"86779ded3beddcba67ecd3442de41f6d40a45acf45b7a5547fdb2c944ac5875a\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 19:02:52.335195 containerd[1967]: time="2025-02-13T19:02:52.335137709Z" level=info msg="CreateContainer within sandbox \"86779ded3beddcba67ecd3442de41f6d40a45acf45b7a5547fdb2c944ac5875a\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"27b7b8c60e0e77fff6c1d67c532cda8ca1580189638be2d7ad8ab02ae9a2158b\"" Feb 13 19:02:52.337822 containerd[1967]: time="2025-02-13T19:02:52.336425165Z" level=info msg="StartContainer for \"27b7b8c60e0e77fff6c1d67c532cda8ca1580189638be2d7ad8ab02ae9a2158b\"" Feb 13 19:02:52.389819 systemd[1]: Started cri-containerd-27b7b8c60e0e77fff6c1d67c532cda8ca1580189638be2d7ad8ab02ae9a2158b.scope - libcontainer container 27b7b8c60e0e77fff6c1d67c532cda8ca1580189638be2d7ad8ab02ae9a2158b. Feb 13 19:02:52.440650 containerd[1967]: time="2025-02-13T19:02:52.440295090Z" level=info msg="StartContainer for \"27b7b8c60e0e77fff6c1d67c532cda8ca1580189638be2d7ad8ab02ae9a2158b\" returns successfully" Feb 13 19:02:52.447454 systemd[1]: cri-containerd-27b7b8c60e0e77fff6c1d67c532cda8ca1580189638be2d7ad8ab02ae9a2158b.scope: Deactivated successfully. 
Feb 13 19:02:52.527629 containerd[1967]: time="2025-02-13T19:02:52.527442702Z" level=info msg="shim disconnected" id=27b7b8c60e0e77fff6c1d67c532cda8ca1580189638be2d7ad8ab02ae9a2158b namespace=k8s.io Feb 13 19:02:52.528479 containerd[1967]: time="2025-02-13T19:02:52.528134346Z" level=warning msg="cleaning up after shim disconnected" id=27b7b8c60e0e77fff6c1d67c532cda8ca1580189638be2d7ad8ab02ae9a2158b namespace=k8s.io Feb 13 19:02:52.528479 containerd[1967]: time="2025-02-13T19:02:52.528205218Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:02:52.823904 containerd[1967]: time="2025-02-13T19:02:52.823752403Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 19:02:53.093029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27b7b8c60e0e77fff6c1d67c532cda8ca1580189638be2d7ad8ab02ae9a2158b-rootfs.mount: Deactivated successfully. Feb 13 19:02:55.240895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1564229102.mount: Deactivated successfully. 
Feb 13 19:02:56.405474 containerd[1967]: time="2025-02-13T19:02:56.405397617Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:56.407342 containerd[1967]: time="2025-02-13T19:02:56.407266905Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Feb 13 19:02:56.408377 containerd[1967]: time="2025-02-13T19:02:56.408324369Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:56.414837 containerd[1967]: time="2025-02-13T19:02:56.414757377Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:02:56.417807 containerd[1967]: time="2025-02-13T19:02:56.417209193Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.593098674s" Feb 13 19:02:56.417807 containerd[1967]: time="2025-02-13T19:02:56.417262185Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 19:02:56.423129 containerd[1967]: time="2025-02-13T19:02:56.422912625Z" level=info msg="CreateContainer within sandbox \"86779ded3beddcba67ecd3442de41f6d40a45acf45b7a5547fdb2c944ac5875a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:02:56.447360 containerd[1967]: time="2025-02-13T19:02:56.447277989Z" level=info msg="CreateContainer within 
sandbox \"86779ded3beddcba67ecd3442de41f6d40a45acf45b7a5547fdb2c944ac5875a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"78492e67c4129d50d4c639a19cdf56718e710f5af7acad71a7ca1689454732ba\"" Feb 13 19:02:56.448883 containerd[1967]: time="2025-02-13T19:02:56.448266369Z" level=info msg="StartContainer for \"78492e67c4129d50d4c639a19cdf56718e710f5af7acad71a7ca1689454732ba\"" Feb 13 19:02:56.504825 systemd[1]: Started cri-containerd-78492e67c4129d50d4c639a19cdf56718e710f5af7acad71a7ca1689454732ba.scope - libcontainer container 78492e67c4129d50d4c639a19cdf56718e710f5af7acad71a7ca1689454732ba. Feb 13 19:02:56.556110 systemd[1]: cri-containerd-78492e67c4129d50d4c639a19cdf56718e710f5af7acad71a7ca1689454732ba.scope: Deactivated successfully. Feb 13 19:02:56.558728 containerd[1967]: time="2025-02-13T19:02:56.557487838Z" level=info msg="StartContainer for \"78492e67c4129d50d4c639a19cdf56718e710f5af7acad71a7ca1689454732ba\" returns successfully" Feb 13 19:02:56.596839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78492e67c4129d50d4c639a19cdf56718e710f5af7acad71a7ca1689454732ba-rootfs.mount: Deactivated successfully. Feb 13 19:02:56.658735 kubelet[3458]: I0213 19:02:56.657967 3458 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:02:56.739878 systemd[1]: Created slice kubepods-burstable-podac2e3caa_5411_4d77_ab64_051fae3b89a0.slice - libcontainer container kubepods-burstable-podac2e3caa_5411_4d77_ab64_051fae3b89a0.slice. Feb 13 19:02:56.760942 systemd[1]: Created slice kubepods-burstable-pod98af0f84_3eda_4fa0_bae4_63009a0c8153.slice - libcontainer container kubepods-burstable-pod98af0f84_3eda_4fa0_bae4_63009a0c8153.slice. 
Feb 13 19:02:56.764797 containerd[1967]: time="2025-02-13T19:02:56.764708867Z" level=info msg="shim disconnected" id=78492e67c4129d50d4c639a19cdf56718e710f5af7acad71a7ca1689454732ba namespace=k8s.io Feb 13 19:02:56.764797 containerd[1967]: time="2025-02-13T19:02:56.764791835Z" level=warning msg="cleaning up after shim disconnected" id=78492e67c4129d50d4c639a19cdf56718e710f5af7acad71a7ca1689454732ba namespace=k8s.io Feb 13 19:02:56.765049 containerd[1967]: time="2025-02-13T19:02:56.764812679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:02:56.822476 kubelet[3458]: I0213 19:02:56.822392 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnfdl\" (UniqueName: \"kubernetes.io/projected/98af0f84-3eda-4fa0-bae4-63009a0c8153-kube-api-access-pnfdl\") pod \"coredns-6f6b679f8f-mwghf\" (UID: \"98af0f84-3eda-4fa0-bae4-63009a0c8153\") " pod="kube-system/coredns-6f6b679f8f-mwghf" Feb 13 19:02:56.822671 kubelet[3458]: I0213 19:02:56.822481 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h97zg\" (UniqueName: \"kubernetes.io/projected/ac2e3caa-5411-4d77-ab64-051fae3b89a0-kube-api-access-h97zg\") pod \"coredns-6f6b679f8f-nbsjp\" (UID: \"ac2e3caa-5411-4d77-ab64-051fae3b89a0\") " pod="kube-system/coredns-6f6b679f8f-nbsjp" Feb 13 19:02:56.822671 kubelet[3458]: I0213 19:02:56.822557 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98af0f84-3eda-4fa0-bae4-63009a0c8153-config-volume\") pod \"coredns-6f6b679f8f-mwghf\" (UID: \"98af0f84-3eda-4fa0-bae4-63009a0c8153\") " pod="kube-system/coredns-6f6b679f8f-mwghf" Feb 13 19:02:56.822671 kubelet[3458]: I0213 19:02:56.822596 3458 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ac2e3caa-5411-4d77-ab64-051fae3b89a0-config-volume\") pod \"coredns-6f6b679f8f-nbsjp\" (UID: \"ac2e3caa-5411-4d77-ab64-051fae3b89a0\") " pod="kube-system/coredns-6f6b679f8f-nbsjp" Feb 13 19:02:56.842971 containerd[1967]: time="2025-02-13T19:02:56.842562719Z" level=info msg="CreateContainer within sandbox \"86779ded3beddcba67ecd3442de41f6d40a45acf45b7a5547fdb2c944ac5875a\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 19:02:56.866116 containerd[1967]: time="2025-02-13T19:02:56.866040552Z" level=info msg="CreateContainer within sandbox \"86779ded3beddcba67ecd3442de41f6d40a45acf45b7a5547fdb2c944ac5875a\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"48ebe7a21ee2e832101ca317c5f246edb4fde0559e1c8b34ec1d3cee9f136f0b\"" Feb 13 19:02:56.867219 containerd[1967]: time="2025-02-13T19:02:56.867105708Z" level=info msg="StartContainer for \"48ebe7a21ee2e832101ca317c5f246edb4fde0559e1c8b34ec1d3cee9f136f0b\"" Feb 13 19:02:56.909827 systemd[1]: Started cri-containerd-48ebe7a21ee2e832101ca317c5f246edb4fde0559e1c8b34ec1d3cee9f136f0b.scope - libcontainer container 48ebe7a21ee2e832101ca317c5f246edb4fde0559e1c8b34ec1d3cee9f136f0b. 
Feb 13 19:02:56.974546 containerd[1967]: time="2025-02-13T19:02:56.973897092Z" level=info msg="StartContainer for \"48ebe7a21ee2e832101ca317c5f246edb4fde0559e1c8b34ec1d3cee9f136f0b\" returns successfully" Feb 13 19:02:57.050201 containerd[1967]: time="2025-02-13T19:02:57.050133704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nbsjp,Uid:ac2e3caa-5411-4d77-ab64-051fae3b89a0,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:57.078011 containerd[1967]: time="2025-02-13T19:02:57.077955333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mwghf,Uid:98af0f84-3eda-4fa0-bae4-63009a0c8153,Namespace:kube-system,Attempt:0,}" Feb 13 19:02:57.103660 containerd[1967]: time="2025-02-13T19:02:57.103390629Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nbsjp,Uid:ac2e3caa-5411-4d77-ab64-051fae3b89a0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7649034068c527e5197faeebd9d03d59b4b84f2119f25ff2c7c4c5cb171a0ee1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:02:57.106043 kubelet[3458]: E0213 19:02:57.103924 3458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7649034068c527e5197faeebd9d03d59b4b84f2119f25ff2c7c4c5cb171a0ee1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:02:57.106043 kubelet[3458]: E0213 19:02:57.104025 3458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7649034068c527e5197faeebd9d03d59b4b84f2119f25ff2c7c4c5cb171a0ee1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-nbsjp" 
Feb 13 19:02:57.106043 kubelet[3458]: E0213 19:02:57.104061 3458 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7649034068c527e5197faeebd9d03d59b4b84f2119f25ff2c7c4c5cb171a0ee1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-nbsjp" Feb 13 19:02:57.106043 kubelet[3458]: E0213 19:02:57.104144 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-nbsjp_kube-system(ac2e3caa-5411-4d77-ab64-051fae3b89a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-nbsjp_kube-system(ac2e3caa-5411-4d77-ab64-051fae3b89a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7649034068c527e5197faeebd9d03d59b4b84f2119f25ff2c7c4c5cb171a0ee1\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-nbsjp" podUID="ac2e3caa-5411-4d77-ab64-051fae3b89a0" Feb 13 19:02:57.123669 containerd[1967]: time="2025-02-13T19:02:57.123546837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mwghf,Uid:98af0f84-3eda-4fa0-bae4-63009a0c8153,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ed0ca33fa4c56c3dec7a6da96f2671a7975c5ce914e2676f88e27d89132e294\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:02:57.123890 kubelet[3458]: E0213 19:02:57.123840 3458 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed0ca33fa4c56c3dec7a6da96f2671a7975c5ce914e2676f88e27d89132e294\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv 
failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:02:57.123980 kubelet[3458]: E0213 19:02:57.123920 3458 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed0ca33fa4c56c3dec7a6da96f2671a7975c5ce914e2676f88e27d89132e294\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-mwghf" Feb 13 19:02:57.123980 kubelet[3458]: E0213 19:02:57.123968 3458 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed0ca33fa4c56c3dec7a6da96f2671a7975c5ce914e2676f88e27d89132e294\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-mwghf" Feb 13 19:02:57.124111 kubelet[3458]: E0213 19:02:57.124032 3458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-mwghf_kube-system(98af0f84-3eda-4fa0-bae4-63009a0c8153)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-mwghf_kube-system(98af0f84-3eda-4fa0-bae4-63009a0c8153)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ed0ca33fa4c56c3dec7a6da96f2671a7975c5ce914e2676f88e27d89132e294\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-mwghf" podUID="98af0f84-3eda-4fa0-bae4-63009a0c8153" Feb 13 19:02:58.050744 (udev-worker)[4005]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:02:58.074955 systemd-networkd[1868]: flannel.1: Link UP Feb 13 19:02:58.074976 systemd-networkd[1868]: flannel.1: Gained carrier Feb 13 19:03:00.025829 systemd-networkd[1868]: flannel.1: Gained IPv6LL Feb 13 19:03:02.513890 ntpd[1932]: Listen normally on 7 flannel.1 192.168.0.0:123 Feb 13 19:03:02.514026 ntpd[1932]: Listen normally on 8 flannel.1 [fe80::9478:b8ff:feeb:22af%4]:123 Feb 13 19:03:02.514875 ntpd[1932]: 13 Feb 19:03:02 ntpd[1932]: Listen normally on 7 flannel.1 192.168.0.0:123 Feb 13 19:03:02.514875 ntpd[1932]: 13 Feb 19:03:02 ntpd[1932]: Listen normally on 8 flannel.1 [fe80::9478:b8ff:feeb:22af%4]:123 Feb 13 19:03:11.718653 containerd[1967]: time="2025-02-13T19:03:11.718154101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mwghf,Uid:98af0f84-3eda-4fa0-bae4-63009a0c8153,Namespace:kube-system,Attempt:0,}" Feb 13 19:03:11.755271 systemd-networkd[1868]: cni0: Link UP Feb 13 19:03:11.755289 systemd-networkd[1868]: cni0: Gained carrier Feb 13 19:03:11.763760 (udev-worker)[4147]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:03:11.766440 systemd-networkd[1868]: cni0: Lost carrier Feb 13 19:03:11.769370 systemd-networkd[1868]: veth2cb58dd1: Link UP Feb 13 19:03:11.773729 kernel: cni0: port 1(veth2cb58dd1) entered blocking state Feb 13 19:03:11.773861 kernel: cni0: port 1(veth2cb58dd1) entered disabled state Feb 13 19:03:11.773904 kernel: veth2cb58dd1: entered allmulticast mode Feb 13 19:03:11.775476 kernel: veth2cb58dd1: entered promiscuous mode Feb 13 19:03:11.778914 kernel: cni0: port 1(veth2cb58dd1) entered blocking state Feb 13 19:03:11.779005 kernel: cni0: port 1(veth2cb58dd1) entered forwarding state Feb 13 19:03:11.780298 kernel: cni0: port 1(veth2cb58dd1) entered disabled state Feb 13 19:03:11.785636 (udev-worker)[4148]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:03:11.799148 kernel: cni0: port 1(veth2cb58dd1) entered blocking state Feb 13 19:03:11.799254 kernel: cni0: port 1(veth2cb58dd1) entered forwarding state Feb 13 19:03:11.799645 systemd-networkd[1868]: veth2cb58dd1: Gained carrier Feb 13 19:03:11.801461 systemd-networkd[1868]: cni0: Gained carrier Feb 13 19:03:11.809666 containerd[1967]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"} Feb 13 19:03:11.809666 containerd[1967]: delegateAdd: netconf sent to delegate plugin: Feb 13 19:03:11.841519 containerd[1967]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T19:03:11.841294634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:11.841519 containerd[1967]: time="2025-02-13T19:03:11.841413122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:11.841777 containerd[1967]: time="2025-02-13T19:03:11.841450778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:11.842616 containerd[1967]: time="2025-02-13T19:03:11.842382998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:11.883829 systemd[1]: Started cri-containerd-30821f36055b14833069edfd5e98ef619f2f8014584e381d1969aeddccf8d1de.scope - libcontainer container 30821f36055b14833069edfd5e98ef619f2f8014584e381d1969aeddccf8d1de. Feb 13 19:03:11.946096 containerd[1967]: time="2025-02-13T19:03:11.946029302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mwghf,Uid:98af0f84-3eda-4fa0-bae4-63009a0c8153,Namespace:kube-system,Attempt:0,} returns sandbox id \"30821f36055b14833069edfd5e98ef619f2f8014584e381d1969aeddccf8d1de\"" Feb 13 19:03:11.952990 containerd[1967]: time="2025-02-13T19:03:11.952929770Z" level=info msg="CreateContainer within sandbox \"30821f36055b14833069edfd5e98ef619f2f8014584e381d1969aeddccf8d1de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:03:11.972532 containerd[1967]: time="2025-02-13T19:03:11.972305583Z" level=info msg="CreateContainer within sandbox \"30821f36055b14833069edfd5e98ef619f2f8014584e381d1969aeddccf8d1de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7950fa840b789c000fd6daf9d028125fd14c3a1a256580b97d22d75567b253f2\"" Feb 13 19:03:11.973935 containerd[1967]: time="2025-02-13T19:03:11.973780323Z" level=info msg="StartContainer for \"7950fa840b789c000fd6daf9d028125fd14c3a1a256580b97d22d75567b253f2\"" Feb 13 19:03:12.019815 systemd[1]: Started cri-containerd-7950fa840b789c000fd6daf9d028125fd14c3a1a256580b97d22d75567b253f2.scope - libcontainer container 7950fa840b789c000fd6daf9d028125fd14c3a1a256580b97d22d75567b253f2. 
Feb 13 19:03:12.072214 containerd[1967]: time="2025-02-13T19:03:12.072135587Z" level=info msg="StartContainer for \"7950fa840b789c000fd6daf9d028125fd14c3a1a256580b97d22d75567b253f2\" returns successfully" Feb 13 19:03:12.718631 containerd[1967]: time="2025-02-13T19:03:12.718552778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nbsjp,Uid:ac2e3caa-5411-4d77-ab64-051fae3b89a0,Namespace:kube-system,Attempt:0,}" Feb 13 19:03:12.754571 (udev-worker)[4157]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:03:12.757336 systemd-networkd[1868]: vethdbc922ce: Link UP Feb 13 19:03:12.760072 kernel: cni0: port 2(vethdbc922ce) entered blocking state Feb 13 19:03:12.760156 kernel: cni0: port 2(vethdbc922ce) entered disabled state Feb 13 19:03:12.761145 kernel: vethdbc922ce: entered allmulticast mode Feb 13 19:03:12.762391 kernel: vethdbc922ce: entered promiscuous mode Feb 13 19:03:12.762667 kernel: cni0: port 2(vethdbc922ce) entered blocking state Feb 13 19:03:12.764763 kernel: cni0: port 2(vethdbc922ce) entered forwarding state Feb 13 19:03:12.766251 kernel: cni0: port 2(vethdbc922ce) entered disabled state Feb 13 19:03:12.777543 kernel: cni0: port 2(vethdbc922ce) entered blocking state Feb 13 19:03:12.777739 kernel: cni0: port 2(vethdbc922ce) entered forwarding state Feb 13 19:03:12.777574 systemd-networkd[1868]: vethdbc922ce: Gained carrier Feb 13 19:03:12.787086 containerd[1967]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a48e8), "name":"cbr0", "type":"bridge"} Feb 13 19:03:12.787086 containerd[1967]: delegateAdd: 
netconf sent to delegate plugin: Feb 13 19:03:12.826430 containerd[1967]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T19:03:12.825875595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:03:12.826205 systemd-networkd[1868]: cni0: Gained IPv6LL Feb 13 19:03:12.827279 containerd[1967]: time="2025-02-13T19:03:12.826191747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:03:12.827279 containerd[1967]: time="2025-02-13T19:03:12.826223223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:12.827279 containerd[1967]: time="2025-02-13T19:03:12.826410771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:03:12.872154 systemd[1]: Started cri-containerd-beb3a2b9c717901aca18b1420dee2cd492e48338e9d76394e041c65e7d4b0547.scope - libcontainer container beb3a2b9c717901aca18b1420dee2cd492e48338e9d76394e041c65e7d4b0547. 
Feb 13 19:03:12.912649 kubelet[3458]: I0213 19:03:12.911743 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-xxqzc" podStartSLOduration=18.580717844 podStartE2EDuration="24.911721903s" podCreationTimestamp="2025-02-13 19:02:48 +0000 UTC" firstStartedPulling="2025-02-13 19:02:50.088098518 +0000 UTC m=+4.633573344" lastFinishedPulling="2025-02-13 19:02:56.419102589 +0000 UTC m=+10.964577403" observedRunningTime="2025-02-13 19:02:57.861802476 +0000 UTC m=+12.407277374" watchObservedRunningTime="2025-02-13 19:03:12.911721903 +0000 UTC m=+27.457196729" Feb 13 19:03:12.914343 kubelet[3458]: I0213 19:03:12.913556 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-mwghf" podStartSLOduration=23.913533591 podStartE2EDuration="23.913533591s" podCreationTimestamp="2025-02-13 19:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:12.908621247 +0000 UTC m=+27.454096085" watchObservedRunningTime="2025-02-13 19:03:12.913533591 +0000 UTC m=+27.459008417" Feb 13 19:03:12.996061 containerd[1967]: time="2025-02-13T19:03:12.995287816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nbsjp,Uid:ac2e3caa-5411-4d77-ab64-051fae3b89a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"beb3a2b9c717901aca18b1420dee2cd492e48338e9d76394e041c65e7d4b0547\"" Feb 13 19:03:13.003023 containerd[1967]: time="2025-02-13T19:03:13.002900580Z" level=info msg="CreateContainer within sandbox \"beb3a2b9c717901aca18b1420dee2cd492e48338e9d76394e041c65e7d4b0547\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:03:13.025585 containerd[1967]: time="2025-02-13T19:03:13.025380720Z" level=info msg="CreateContainer within sandbox \"beb3a2b9c717901aca18b1420dee2cd492e48338e9d76394e041c65e7d4b0547\" for &ContainerMetadata{Name:coredns,Attempt:0,} 
returns container id \"2e670d10503ae78daae96da12212a7c99263938772926ccf0cae27fb129797d8\"" Feb 13 19:03:13.027759 containerd[1967]: time="2025-02-13T19:03:13.027689028Z" level=info msg="StartContainer for \"2e670d10503ae78daae96da12212a7c99263938772926ccf0cae27fb129797d8\"" Feb 13 19:03:13.083804 systemd[1]: Started cri-containerd-2e670d10503ae78daae96da12212a7c99263938772926ccf0cae27fb129797d8.scope - libcontainer container 2e670d10503ae78daae96da12212a7c99263938772926ccf0cae27fb129797d8. Feb 13 19:03:13.130479 containerd[1967]: time="2025-02-13T19:03:13.130399956Z" level=info msg="StartContainer for \"2e670d10503ae78daae96da12212a7c99263938772926ccf0cae27fb129797d8\" returns successfully" Feb 13 19:03:13.657856 systemd-networkd[1868]: veth2cb58dd1: Gained IPv6LL Feb 13 19:03:14.233869 systemd-networkd[1868]: vethdbc922ce: Gained IPv6LL Feb 13 19:03:16.514024 ntpd[1932]: Listen normally on 9 cni0 192.168.0.1:123 Feb 13 19:03:16.514169 ntpd[1932]: Listen normally on 10 cni0 [fe80::d4d8:54ff:fed3:7c79%5]:123 Feb 13 19:03:16.515010 ntpd[1932]: 13 Feb 19:03:16 ntpd[1932]: Listen normally on 9 cni0 192.168.0.1:123 Feb 13 19:03:16.515010 ntpd[1932]: 13 Feb 19:03:16 ntpd[1932]: Listen normally on 10 cni0 [fe80::d4d8:54ff:fed3:7c79%5]:123 Feb 13 19:03:16.515010 ntpd[1932]: 13 Feb 19:03:16 ntpd[1932]: Listen normally on 11 veth2cb58dd1 [fe80::780d:feff:fe1a:38be%6]:123 Feb 13 19:03:16.515010 ntpd[1932]: 13 Feb 19:03:16 ntpd[1932]: Listen normally on 12 vethdbc922ce [fe80::c07a:e6ff:fecb:2ad2%7]:123 Feb 13 19:03:16.514249 ntpd[1932]: Listen normally on 11 veth2cb58dd1 [fe80::780d:feff:fe1a:38be%6]:123 Feb 13 19:03:16.514316 ntpd[1932]: Listen normally on 12 vethdbc922ce [fe80::c07a:e6ff:fecb:2ad2%7]:123 Feb 13 19:03:17.068090 kubelet[3458]: I0213 19:03:17.067976 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-nbsjp" podStartSLOduration=28.067951516 podStartE2EDuration="28.067951516s" 
podCreationTimestamp="2025-02-13 19:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:03:13.908773696 +0000 UTC m=+28.454248558" watchObservedRunningTime="2025-02-13 19:03:17.067951516 +0000 UTC m=+31.613426354" Feb 13 19:03:28.741081 systemd[1]: Started sshd@7-172.31.27.130:22-139.178.89.65:36154.service - OpenSSH per-connection server daemon (139.178.89.65:36154). Feb 13 19:03:28.927995 sshd[4442]: Accepted publickey for core from 139.178.89.65 port 36154 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:03:28.931092 sshd-session[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:28.940042 systemd-logind[1938]: New session 8 of user core. Feb 13 19:03:28.948761 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:03:29.203025 sshd[4444]: Connection closed by 139.178.89.65 port 36154 Feb 13 19:03:29.204862 sshd-session[4442]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:29.211660 systemd[1]: sshd@7-172.31.27.130:22-139.178.89.65:36154.service: Deactivated successfully. Feb 13 19:03:29.216933 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:03:29.218622 systemd-logind[1938]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:03:29.220364 systemd-logind[1938]: Removed session 8. Feb 13 19:03:34.250007 systemd[1]: Started sshd@8-172.31.27.130:22-139.178.89.65:36164.service - OpenSSH per-connection server daemon (139.178.89.65:36164). Feb 13 19:03:34.434022 sshd[4478]: Accepted publickey for core from 139.178.89.65 port 36164 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:03:34.436561 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:34.445355 systemd-logind[1938]: New session 9 of user core. 
Feb 13 19:03:34.454754 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:03:34.699747 sshd[4480]: Connection closed by 139.178.89.65 port 36164 Feb 13 19:03:34.701846 sshd-session[4478]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:34.707934 systemd[1]: sshd@8-172.31.27.130:22-139.178.89.65:36164.service: Deactivated successfully. Feb 13 19:03:34.712088 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:03:34.715556 systemd-logind[1938]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:03:34.717339 systemd-logind[1938]: Removed session 9. Feb 13 19:03:39.741016 systemd[1]: Started sshd@9-172.31.27.130:22-139.178.89.65:59032.service - OpenSSH per-connection server daemon (139.178.89.65:59032). Feb 13 19:03:39.931275 sshd[4515]: Accepted publickey for core from 139.178.89.65 port 59032 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:03:39.933729 sshd-session[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:39.942859 systemd-logind[1938]: New session 10 of user core. Feb 13 19:03:39.947768 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:03:40.191665 sshd[4517]: Connection closed by 139.178.89.65 port 59032 Feb 13 19:03:40.192642 sshd-session[4515]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:40.199214 systemd[1]: sshd@9-172.31.27.130:22-139.178.89.65:59032.service: Deactivated successfully. Feb 13 19:03:40.204733 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:03:40.207027 systemd-logind[1938]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:03:40.208744 systemd-logind[1938]: Removed session 10. Feb 13 19:03:40.239034 systemd[1]: Started sshd@10-172.31.27.130:22-139.178.89.65:59048.service - OpenSSH per-connection server daemon (139.178.89.65:59048). 
Feb 13 19:03:40.430752 sshd[4530]: Accepted publickey for core from 139.178.89.65 port 59048 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:03:40.433206 sshd-session[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:40.441615 systemd-logind[1938]: New session 11 of user core. Feb 13 19:03:40.448820 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:03:40.777604 sshd[4532]: Connection closed by 139.178.89.65 port 59048 Feb 13 19:03:40.778065 sshd-session[4530]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:40.789863 systemd-logind[1938]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:03:40.792206 systemd[1]: sshd@10-172.31.27.130:22-139.178.89.65:59048.service: Deactivated successfully. Feb 13 19:03:40.801030 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:03:40.822004 systemd-logind[1938]: Removed session 11. Feb 13 19:03:40.834032 systemd[1]: Started sshd@11-172.31.27.130:22-139.178.89.65:59056.service - OpenSSH per-connection server daemon (139.178.89.65:59056). Feb 13 19:03:41.035617 sshd[4541]: Accepted publickey for core from 139.178.89.65 port 59056 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:03:41.037467 sshd-session[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:41.049055 systemd-logind[1938]: New session 12 of user core. Feb 13 19:03:41.062772 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:03:41.309666 sshd[4544]: Connection closed by 139.178.89.65 port 59056 Feb 13 19:03:41.310846 sshd-session[4541]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:41.317249 systemd[1]: sshd@11-172.31.27.130:22-139.178.89.65:59056.service: Deactivated successfully. Feb 13 19:03:41.322309 systemd[1]: session-12.scope: Deactivated successfully. 
Feb 13 19:03:41.326468 systemd-logind[1938]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:03:41.329308 systemd-logind[1938]: Removed session 12. Feb 13 19:03:46.352061 systemd[1]: Started sshd@12-172.31.27.130:22-139.178.89.65:42846.service - OpenSSH per-connection server daemon (139.178.89.65:42846). Feb 13 19:03:46.546898 sshd[4579]: Accepted publickey for core from 139.178.89.65 port 42846 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:03:46.549357 sshd-session[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:46.558739 systemd-logind[1938]: New session 13 of user core. Feb 13 19:03:46.566787 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:03:46.809394 sshd[4581]: Connection closed by 139.178.89.65 port 42846 Feb 13 19:03:46.810389 sshd-session[4579]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:46.816886 systemd[1]: sshd@12-172.31.27.130:22-139.178.89.65:42846.service: Deactivated successfully. Feb 13 19:03:46.821314 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:03:46.823228 systemd-logind[1938]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:03:46.825643 systemd-logind[1938]: Removed session 13. Feb 13 19:03:46.852061 systemd[1]: Started sshd@13-172.31.27.130:22-139.178.89.65:42854.service - OpenSSH per-connection server daemon (139.178.89.65:42854). Feb 13 19:03:47.042537 sshd[4593]: Accepted publickey for core from 139.178.89.65 port 42854 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:03:47.043671 sshd-session[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:47.051871 systemd-logind[1938]: New session 14 of user core. Feb 13 19:03:47.063797 systemd[1]: Started session-14.scope - Session 14 of User core. 
Feb 13 19:03:47.361711 sshd[4595]: Connection closed by 139.178.89.65 port 42854 Feb 13 19:03:47.362677 sshd-session[4593]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:47.369890 systemd[1]: sshd@13-172.31.27.130:22-139.178.89.65:42854.service: Deactivated successfully. Feb 13 19:03:47.374152 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:03:47.377066 systemd-logind[1938]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:03:47.379206 systemd-logind[1938]: Removed session 14. Feb 13 19:03:47.401011 systemd[1]: Started sshd@14-172.31.27.130:22-139.178.89.65:42862.service - OpenSSH per-connection server daemon (139.178.89.65:42862). Feb 13 19:03:47.590964 sshd[4605]: Accepted publickey for core from 139.178.89.65 port 42862 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:03:47.593434 sshd-session[4605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:47.603236 systemd-logind[1938]: New session 15 of user core. Feb 13 19:03:47.608727 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:03:49.930592 sshd[4607]: Connection closed by 139.178.89.65 port 42862 Feb 13 19:03:49.932564 sshd-session[4605]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:49.941371 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:03:49.944492 systemd[1]: sshd@14-172.31.27.130:22-139.178.89.65:42862.service: Deactivated successfully. Feb 13 19:03:49.951212 systemd-logind[1938]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:03:49.980255 systemd[1]: Started sshd@15-172.31.27.130:22-139.178.89.65:42870.service - OpenSSH per-connection server daemon (139.178.89.65:42870). Feb 13 19:03:49.986266 systemd-logind[1938]: Removed session 15. 
Feb 13 19:03:50.176535 sshd[4645]: Accepted publickey for core from 139.178.89.65 port 42870 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:03:50.179444 sshd-session[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:50.189900 systemd-logind[1938]: New session 16 of user core. Feb 13 19:03:50.201736 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:03:50.679491 sshd[4648]: Connection closed by 139.178.89.65 port 42870 Feb 13 19:03:50.679937 sshd-session[4645]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:50.687620 systemd-logind[1938]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:03:50.688300 systemd[1]: sshd@15-172.31.27.130:22-139.178.89.65:42870.service: Deactivated successfully. Feb 13 19:03:50.694025 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:03:50.697147 systemd-logind[1938]: Removed session 16. Feb 13 19:03:50.720050 systemd[1]: Started sshd@16-172.31.27.130:22-139.178.89.65:42878.service - OpenSSH per-connection server daemon (139.178.89.65:42878). Feb 13 19:03:50.901515 sshd[4660]: Accepted publickey for core from 139.178.89.65 port 42878 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:03:50.904029 sshd-session[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:50.914859 systemd-logind[1938]: New session 17 of user core. Feb 13 19:03:50.922792 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:03:51.166576 sshd[4662]: Connection closed by 139.178.89.65 port 42878 Feb 13 19:03:51.167435 sshd-session[4660]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:51.173712 systemd[1]: sshd@16-172.31.27.130:22-139.178.89.65:42878.service: Deactivated successfully. Feb 13 19:03:51.178232 systemd[1]: session-17.scope: Deactivated successfully. 
Feb 13 19:03:51.181185 systemd-logind[1938]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:03:51.184219 systemd-logind[1938]: Removed session 17. Feb 13 19:03:56.209031 systemd[1]: Started sshd@17-172.31.27.130:22-139.178.89.65:39078.service - OpenSSH per-connection server daemon (139.178.89.65:39078). Feb 13 19:03:56.398555 sshd[4694]: Accepted publickey for core from 139.178.89.65 port 39078 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:03:56.401093 sshd-session[4694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:56.409195 systemd-logind[1938]: New session 18 of user core. Feb 13 19:03:56.417777 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:03:56.655911 sshd[4696]: Connection closed by 139.178.89.65 port 39078 Feb 13 19:03:56.656894 sshd-session[4694]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:56.670459 systemd[1]: sshd@17-172.31.27.130:22-139.178.89.65:39078.service: Deactivated successfully. Feb 13 19:03:56.677739 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:03:56.680707 systemd-logind[1938]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:03:56.684350 systemd-logind[1938]: Removed session 18. Feb 13 19:04:01.703000 systemd[1]: Started sshd@18-172.31.27.130:22-139.178.89.65:39088.service - OpenSSH per-connection server daemon (139.178.89.65:39088). Feb 13 19:04:01.887574 sshd[4732]: Accepted publickey for core from 139.178.89.65 port 39088 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:01.890336 sshd-session[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:01.902797 systemd-logind[1938]: New session 19 of user core. Feb 13 19:04:01.911223 systemd[1]: Started session-19.scope - Session 19 of User core. 
Feb 13 19:04:02.184530 sshd[4734]: Connection closed by 139.178.89.65 port 39088 Feb 13 19:04:02.183478 sshd-session[4732]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:02.189332 systemd[1]: sshd@18-172.31.27.130:22-139.178.89.65:39088.service: Deactivated successfully. Feb 13 19:04:02.189568 systemd-logind[1938]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:04:02.193198 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:04:02.198438 systemd-logind[1938]: Removed session 19. Feb 13 19:04:07.225052 systemd[1]: Started sshd@19-172.31.27.130:22-139.178.89.65:56158.service - OpenSSH per-connection server daemon (139.178.89.65:56158). Feb 13 19:04:07.419950 sshd[4768]: Accepted publickey for core from 139.178.89.65 port 56158 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:07.423022 sshd-session[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:07.432027 systemd-logind[1938]: New session 20 of user core. Feb 13 19:04:07.437810 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:04:07.676243 sshd[4770]: Connection closed by 139.178.89.65 port 56158 Feb 13 19:04:07.676870 sshd-session[4768]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:07.683674 systemd[1]: sshd@19-172.31.27.130:22-139.178.89.65:56158.service: Deactivated successfully. Feb 13 19:04:07.687856 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:04:07.690235 systemd-logind[1938]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:04:07.692695 systemd-logind[1938]: Removed session 20. Feb 13 19:04:12.719083 systemd[1]: Started sshd@20-172.31.27.130:22-139.178.89.65:56172.service - OpenSSH per-connection server daemon (139.178.89.65:56172). 
Feb 13 19:04:12.916133 sshd[4802]: Accepted publickey for core from 139.178.89.65 port 56172 ssh2: RSA SHA256:N5jzFAPw/VkUdyH7hxgwbv5n548nUQy18zKQaYF7hgg Feb 13 19:04:12.918571 sshd-session[4802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:04:12.927711 systemd-logind[1938]: New session 21 of user core. Feb 13 19:04:12.934843 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:04:13.182535 sshd[4804]: Connection closed by 139.178.89.65 port 56172 Feb 13 19:04:13.183035 sshd-session[4802]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:13.190259 systemd[1]: sshd@20-172.31.27.130:22-139.178.89.65:56172.service: Deactivated successfully. Feb 13 19:04:13.194983 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:04:13.196595 systemd-logind[1938]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:04:13.199274 systemd-logind[1938]: Removed session 21. Feb 13 19:04:26.920344 kubelet[3458]: E0213 19:04:26.919849 3458 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-130?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 19:04:27.933323 systemd[1]: cri-containerd-1cd1ca50b53c3973ee814e7b823f8c942669c34c1cf6db9e9a977ce2d64b1889.scope: Deactivated successfully. Feb 13 19:04:27.933941 systemd[1]: cri-containerd-1cd1ca50b53c3973ee814e7b823f8c942669c34c1cf6db9e9a977ce2d64b1889.scope: Consumed 3.547s CPU time, 51.6M memory peak. Feb 13 19:04:27.974861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cd1ca50b53c3973ee814e7b823f8c942669c34c1cf6db9e9a977ce2d64b1889-rootfs.mount: Deactivated successfully. 
Feb 13 19:04:27.979878 containerd[1967]: time="2025-02-13T19:04:27.979764484Z" level=info msg="shim disconnected" id=1cd1ca50b53c3973ee814e7b823f8c942669c34c1cf6db9e9a977ce2d64b1889 namespace=k8s.io Feb 13 19:04:27.980549 containerd[1967]: time="2025-02-13T19:04:27.979878472Z" level=warning msg="cleaning up after shim disconnected" id=1cd1ca50b53c3973ee814e7b823f8c942669c34c1cf6db9e9a977ce2d64b1889 namespace=k8s.io Feb 13 19:04:27.980549 containerd[1967]: time="2025-02-13T19:04:27.979904548Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:28.071167 kubelet[3458]: I0213 19:04:28.070891 3458 scope.go:117] "RemoveContainer" containerID="1cd1ca50b53c3973ee814e7b823f8c942669c34c1cf6db9e9a977ce2d64b1889" Feb 13 19:04:28.075551 containerd[1967]: time="2025-02-13T19:04:28.075411865Z" level=info msg="CreateContainer within sandbox \"e49a2308ca23561b2640694d22b8471e91b09abfaf1a88e48ef01d1ce846b452\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 19:04:28.098407 containerd[1967]: time="2025-02-13T19:04:28.098244697Z" level=info msg="CreateContainer within sandbox \"e49a2308ca23561b2640694d22b8471e91b09abfaf1a88e48ef01d1ce846b452\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"32566a7218ffb7e4fcdd13e3349f1f6a634d6d8ef6f944999ac92f789f2238dd\"" Feb 13 19:04:28.099225 containerd[1967]: time="2025-02-13T19:04:28.099178009Z" level=info msg="StartContainer for \"32566a7218ffb7e4fcdd13e3349f1f6a634d6d8ef6f944999ac92f789f2238dd\"" Feb 13 19:04:28.156796 systemd[1]: Started cri-containerd-32566a7218ffb7e4fcdd13e3349f1f6a634d6d8ef6f944999ac92f789f2238dd.scope - libcontainer container 32566a7218ffb7e4fcdd13e3349f1f6a634d6d8ef6f944999ac92f789f2238dd. 
Feb 13 19:04:28.229469 containerd[1967]: time="2025-02-13T19:04:28.228439813Z" level=info msg="StartContainer for \"32566a7218ffb7e4fcdd13e3349f1f6a634d6d8ef6f944999ac92f789f2238dd\" returns successfully" Feb 13 19:04:33.306646 systemd[1]: cri-containerd-e1289283615d63b66aac50f9a9421c4c5603f1dfdfebbc82cd5c12371322357a.scope: Deactivated successfully. Feb 13 19:04:33.307926 systemd[1]: cri-containerd-e1289283615d63b66aac50f9a9421c4c5603f1dfdfebbc82cd5c12371322357a.scope: Consumed 3.194s CPU time, 18.6M memory peak. Feb 13 19:04:33.350612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1289283615d63b66aac50f9a9421c4c5603f1dfdfebbc82cd5c12371322357a-rootfs.mount: Deactivated successfully. Feb 13 19:04:33.353982 containerd[1967]: time="2025-02-13T19:04:33.353274427Z" level=info msg="shim disconnected" id=e1289283615d63b66aac50f9a9421c4c5603f1dfdfebbc82cd5c12371322357a namespace=k8s.io Feb 13 19:04:33.353982 containerd[1967]: time="2025-02-13T19:04:33.353354851Z" level=warning msg="cleaning up after shim disconnected" id=e1289283615d63b66aac50f9a9421c4c5603f1dfdfebbc82cd5c12371322357a namespace=k8s.io Feb 13 19:04:33.353982 containerd[1967]: time="2025-02-13T19:04:33.353376211Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:34.096364 kubelet[3458]: I0213 19:04:34.096313 3458 scope.go:117] "RemoveContainer" containerID="e1289283615d63b66aac50f9a9421c4c5603f1dfdfebbc82cd5c12371322357a" Feb 13 19:04:34.099709 containerd[1967]: time="2025-02-13T19:04:34.099542886Z" level=info msg="CreateContainer within sandbox \"6b5f7cadd3e3ca8cd76d2ef7e1041b7f3500d5bbca9eee99a77b32e2c4de718a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 13 19:04:34.121966 containerd[1967]: time="2025-02-13T19:04:34.121835383Z" level=info msg="CreateContainer within sandbox \"6b5f7cadd3e3ca8cd76d2ef7e1041b7f3500d5bbca9eee99a77b32e2c4de718a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id 
\"04e5f73b2e1af2f7524315fafa52a4296a915db8940bd304f7a7f879afe014f0\"" Feb 13 19:04:34.122752 containerd[1967]: time="2025-02-13T19:04:34.122714203Z" level=info msg="StartContainer for \"04e5f73b2e1af2f7524315fafa52a4296a915db8940bd304f7a7f879afe014f0\"" Feb 13 19:04:34.175851 systemd[1]: Started cri-containerd-04e5f73b2e1af2f7524315fafa52a4296a915db8940bd304f7a7f879afe014f0.scope - libcontainer container 04e5f73b2e1af2f7524315fafa52a4296a915db8940bd304f7a7f879afe014f0. Feb 13 19:04:34.239275 containerd[1967]: time="2025-02-13T19:04:34.239209039Z" level=info msg="StartContainer for \"04e5f73b2e1af2f7524315fafa52a4296a915db8940bd304f7a7f879afe014f0\" returns successfully" Feb 13 19:04:36.920708 kubelet[3458]: E0213 19:04:36.920283 3458 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-130?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"