Feb 13 19:48:37.222810 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Feb 13 19:48:37.222856 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025 Feb 13 19:48:37.222881 kernel: KASLR disabled due to lack of seed Feb 13 19:48:37.222898 kernel: efi: EFI v2.7 by EDK II Feb 13 19:48:37.222913 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18 Feb 13 19:48:37.222929 kernel: ACPI: Early table checksum verification disabled Feb 13 19:48:37.222947 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Feb 13 19:48:37.222962 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Feb 13 19:48:37.222978 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 13 19:48:37.222993 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Feb 13 19:48:37.223014 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 13 19:48:37.223030 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Feb 13 19:48:37.223045 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Feb 13 19:48:37.223061 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Feb 13 19:48:37.223079 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 13 19:48:37.223101 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Feb 13 19:48:37.223118 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Feb 13 19:48:37.223163 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Feb 13 19:48:37.223181 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Feb 13 19:48:37.223198 kernel: printk: bootconsole [uart0] enabled Feb 13 19:48:37.223215 kernel: NUMA: Failed to initialise from firmware Feb 13 19:48:37.223233 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Feb 13 19:48:37.223250 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Feb 13 19:48:37.223267 kernel: Zone ranges: Feb 13 19:48:37.223285 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Feb 13 19:48:37.223302 kernel: DMA32 empty Feb 13 19:48:37.223326 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Feb 13 19:48:37.223343 kernel: Movable zone start for each node Feb 13 19:48:37.223359 kernel: Early memory node ranges Feb 13 19:48:37.223375 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Feb 13 19:48:37.223391 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Feb 13 19:48:37.223408 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Feb 13 19:48:37.223423 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Feb 13 19:48:37.223440 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Feb 13 19:48:37.223456 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Feb 13 19:48:37.223473 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Feb 13 19:48:37.223489 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Feb 13 19:48:37.223505 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Feb 13 19:48:37.223542 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Feb 13 19:48:37.223565 kernel: psci: probing for conduit method from ACPI. Feb 13 19:48:37.223591 kernel: psci: PSCIv1.0 detected in firmware. Feb 13 19:48:37.223608 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 19:48:37.223626 kernel: psci: Trusted OS migration not required Feb 13 19:48:37.223647 kernel: psci: SMC Calling Convention v1.1 Feb 13 19:48:37.223665 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 19:48:37.223683 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 19:48:37.223700 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 19:48:37.223717 kernel: Detected PIPT I-cache on CPU0 Feb 13 19:48:37.223735 kernel: CPU features: detected: GIC system register CPU interface Feb 13 19:48:37.223752 kernel: CPU features: detected: Spectre-v2 Feb 13 19:48:37.223769 kernel: CPU features: detected: Spectre-v3a Feb 13 19:48:37.223786 kernel: CPU features: detected: Spectre-BHB Feb 13 19:48:37.223803 kernel: CPU features: detected: ARM erratum 1742098 Feb 13 19:48:37.223821 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Feb 13 19:48:37.223842 kernel: alternatives: applying boot alternatives Feb 13 19:48:37.223862 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 19:48:37.223881 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:48:37.223898 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:48:37.223915 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:48:37.223932 kernel: Fallback order for Node 0: 0 Feb 13 19:48:37.223950 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Feb 13 19:48:37.223967 kernel: Policy zone: Normal Feb 13 19:48:37.223984 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:48:37.224001 kernel: software IO TLB: area num 2. Feb 13 19:48:37.224018 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Feb 13 19:48:37.224040 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved) Feb 13 19:48:37.224058 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 19:48:37.224075 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:48:37.224093 kernel: rcu: RCU event tracing is enabled. Feb 13 19:48:37.224111 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 19:48:37.226182 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:48:37.226213 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:48:37.226231 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 19:48:37.226249 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 19:48:37.226267 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 19:48:37.226284 kernel: GICv3: 96 SPIs implemented Feb 13 19:48:37.226311 kernel: GICv3: 0 Extended SPIs implemented Feb 13 19:48:37.226329 kernel: Root IRQ handler: gic_handle_irq Feb 13 19:48:37.226346 kernel: GICv3: GICv3 features: 16 PPIs Feb 13 19:48:37.226364 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Feb 13 19:48:37.226382 kernel: ITS [mem 0x10080000-0x1009ffff] Feb 13 19:48:37.226400 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 19:48:37.226419 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Feb 13 19:48:37.226439 kernel: GICv3: using LPI property table @0x00000004000d0000 Feb 13 19:48:37.226457 kernel: ITS: Using hypervisor restricted LPI range [128] Feb 13 19:48:37.226475 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Feb 13 19:48:37.226494 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:48:37.226512 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Feb 13 19:48:37.226537 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Feb 13 19:48:37.226557 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Feb 13 19:48:37.226575 kernel: Console: colour dummy device 80x25 Feb 13 19:48:37.226594 kernel: printk: console [tty1] enabled Feb 13 19:48:37.226615 kernel: ACPI: Core revision 20230628 Feb 13 19:48:37.226633 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Feb 13 19:48:37.226651 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:48:37.226670 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:48:37.226688 kernel: landlock: Up and running. Feb 13 19:48:37.226711 kernel: SELinux: Initializing. Feb 13 19:48:37.226730 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:48:37.226749 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:48:37.226769 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:48:37.226789 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:48:37.226807 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:48:37.226826 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:48:37.226847 kernel: Platform MSI: ITS@0x10080000 domain created Feb 13 19:48:37.226865 kernel: PCI/MSI: ITS@0x10080000 domain created Feb 13 19:48:37.226888 kernel: Remapping and enabling EFI services. Feb 13 19:48:37.226906 kernel: smp: Bringing up secondary CPUs ... Feb 13 19:48:37.226929 kernel: Detected PIPT I-cache on CPU1 Feb 13 19:48:37.226947 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Feb 13 19:48:37.226965 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Feb 13 19:48:37.226984 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Feb 13 19:48:37.227002 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 19:48:37.227020 kernel: SMP: Total of 2 processors activated. 
Feb 13 19:48:37.227038 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 19:48:37.227060 kernel: CPU features: detected: 32-bit EL1 Support Feb 13 19:48:37.227078 kernel: CPU features: detected: CRC32 instructions Feb 13 19:48:37.227097 kernel: CPU: All CPU(s) started at EL1 Feb 13 19:48:37.227164 kernel: alternatives: applying system-wide alternatives Feb 13 19:48:37.227196 kernel: devtmpfs: initialized Feb 13 19:48:37.227216 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:48:37.227235 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 19:48:37.227254 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:48:37.227273 kernel: SMBIOS 3.0.0 present. Feb 13 19:48:37.227292 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Feb 13 19:48:37.227316 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:48:37.227334 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 19:48:37.227354 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 19:48:37.227372 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 19:48:37.227391 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:48:37.227409 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1 Feb 13 19:48:37.227428 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:48:37.227451 kernel: cpuidle: using governor menu Feb 13 19:48:37.227470 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 19:48:37.227489 kernel: ASID allocator initialised with 65536 entries Feb 13 19:48:37.227508 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:48:37.227545 kernel: Serial: AMBA PL011 UART driver Feb 13 19:48:37.227569 kernel: Modules: 17520 pages in range for non-PLT usage Feb 13 19:48:37.227588 kernel: Modules: 509040 pages in range for PLT usage Feb 13 19:48:37.227606 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:48:37.227625 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:48:37.227650 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 19:48:37.227669 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 19:48:37.227688 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:48:37.227706 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:48:37.227725 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 19:48:37.227743 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 19:48:37.227762 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:48:37.227781 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:48:37.227799 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:48:37.227823 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:48:37.227843 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:48:37.227861 kernel: ACPI: Interpreter enabled Feb 13 19:48:37.227880 kernel: ACPI: Using GIC for interrupt routing Feb 13 19:48:37.227899 kernel: ACPI: MCFG table detected, 1 entries Feb 13 19:48:37.227918 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Feb 13 19:48:37.230324 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:48:37.230577 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 19:48:37.230789 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 19:48:37.230991 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Feb 13 19:48:37.231250 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Feb 13 19:48:37.231281 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Feb 13 19:48:37.231301 kernel: acpiphp: Slot [1] registered Feb 13 19:48:37.231320 kernel: acpiphp: Slot [2] registered Feb 13 19:48:37.231338 kernel: acpiphp: Slot [3] registered Feb 13 19:48:37.231356 kernel: acpiphp: Slot [4] registered Feb 13 19:48:37.231383 kernel: acpiphp: Slot [5] registered Feb 13 19:48:37.231402 kernel: acpiphp: Slot [6] registered Feb 13 19:48:37.231421 kernel: acpiphp: Slot [7] registered Feb 13 19:48:37.231439 kernel: acpiphp: Slot [8] registered Feb 13 19:48:37.231457 kernel: acpiphp: Slot [9] registered Feb 13 19:48:37.231475 kernel: acpiphp: Slot [10] registered Feb 13 19:48:37.231493 kernel: acpiphp: Slot [11] registered Feb 13 19:48:37.231512 kernel: acpiphp: Slot [12] registered Feb 13 19:48:37.231549 kernel: acpiphp: Slot [13] registered Feb 13 19:48:37.234215 kernel: acpiphp: Slot [14] registered Feb 13 19:48:37.234252 kernel: acpiphp: Slot [15] registered Feb 13 19:48:37.234271 kernel: acpiphp: Slot [16] registered Feb 13 19:48:37.234290 kernel: acpiphp: Slot [17] registered Feb 13 19:48:37.234308 kernel: acpiphp: Slot [18] registered Feb 13 19:48:37.234327 kernel: acpiphp: Slot [19] registered Feb 13 19:48:37.234345 kernel: acpiphp: Slot [20] registered Feb 13 19:48:37.234364 kernel: acpiphp: Slot [21] registered Feb 13 19:48:37.234382 kernel: acpiphp: Slot [22] registered Feb 13 19:48:37.234401 kernel: acpiphp: Slot [23] registered Feb 13 19:48:37.234425 kernel: acpiphp: Slot [24] registered Feb 13 19:48:37.234443 kernel: acpiphp: Slot [25] registered Feb 13 19:48:37.234462 kernel: acpiphp: Slot [26] registered Feb 13 19:48:37.234480 kernel: acpiphp: Slot [27] registered Feb 13 19:48:37.234499 kernel: acpiphp: Slot [28] registered Feb 13 19:48:37.234517 kernel: acpiphp: Slot [29] registered Feb 13 19:48:37.234536 kernel: acpiphp: Slot [30] registered Feb 13 19:48:37.234554 kernel: acpiphp: Slot [31] registered Feb 13 19:48:37.234573 kernel: PCI host bridge to bus 0000:00 Feb 13 19:48:37.234827 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Feb 13 19:48:37.235021 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 19:48:37.238299 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Feb 13 19:48:37.238548 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Feb 13 19:48:37.238792 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Feb 13 19:48:37.239046 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Feb 13 19:48:37.239318 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Feb 13 19:48:37.246258 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 13 19:48:37.246522 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Feb 13 19:48:37.246734 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 19:48:37.246959 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 13 19:48:37.247199 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Feb 13 19:48:37.247415 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Feb 13 19:48:37.247679 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Feb 13 19:48:37.247902 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 19:48:37.248117 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Feb 13 19:48:37.250912 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Feb 13 19:48:37.251167 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Feb 13 19:48:37.251390 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Feb 13 19:48:37.251632 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Feb 13 19:48:37.251848 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Feb 13 19:48:37.252040 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 19:48:37.253493 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Feb 13 19:48:37.253535 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 19:48:37.253555 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 19:48:37.253574 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 19:48:37.253593 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 19:48:37.253611 kernel: iommu: Default domain type: Translated Feb 13 19:48:37.253630 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 19:48:37.253660 kernel: efivars: Registered efivars operations Feb 13 19:48:37.253678 kernel: vgaarb: loaded Feb 13 19:48:37.253697 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 19:48:37.253716 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:48:37.253735 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:48:37.253753 kernel: pnp: PnP ACPI init Feb 13 19:48:37.253980 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Feb 13 19:48:37.254009 kernel: pnp: PnP ACPI: found 1 devices Feb 13 19:48:37.254034 kernel: NET: Registered PF_INET protocol family Feb 13 19:48:37.254053 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:48:37.254072 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 19:48:37.254091 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:48:37.254110 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:48:37.254154 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 19:48:37.254175 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 19:48:37.254194 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:48:37.254212 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:48:37.254238 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:48:37.254257 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:48:37.254276 kernel: kvm [1]: HYP mode not available Feb 13 19:48:37.254294 kernel: Initialise system trusted keyrings Feb 13 19:48:37.254314 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 19:48:37.254332 kernel: Key type asymmetric registered Feb 13 19:48:37.254352 kernel: Asymmetric key parser 'x509' registered Feb 13 19:48:37.254371 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 19:48:37.254391 kernel: io scheduler mq-deadline registered Feb 13 
19:48:37.254416 kernel: io scheduler kyber registered Feb 13 19:48:37.254435 kernel: io scheduler bfq registered Feb 13 19:48:37.254713 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Feb 13 19:48:37.254748 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 19:48:37.254769 kernel: ACPI: button: Power Button [PWRB] Feb 13 19:48:37.254788 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Feb 13 19:48:37.254807 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 19:48:37.254826 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:48:37.254855 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Feb 13 19:48:37.255116 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Feb 13 19:48:37.272627 kernel: printk: console [ttyS0] disabled Feb 13 19:48:37.272649 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Feb 13 19:48:37.272668 kernel: printk: console [ttyS0] enabled Feb 13 19:48:37.272687 kernel: printk: bootconsole [uart0] disabled Feb 13 19:48:37.272706 kernel: thunder_xcv, ver 1.0 Feb 13 19:48:37.272724 kernel: thunder_bgx, ver 1.0 Feb 13 19:48:37.272742 kernel: nicpf, ver 1.0 Feb 13 19:48:37.272771 kernel: nicvf, ver 1.0 Feb 13 19:48:37.273047 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 19:48:37.273390 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:48:36 UTC (1739476116) Feb 13 19:48:37.273427 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 19:48:37.273447 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Feb 13 19:48:37.273469 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 19:48:37.273489 kernel: watchdog: Hard watchdog permanently disabled Feb 13 19:48:37.273510 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:48:37.273543 kernel: Segment Routing with IPv6 Feb 13 19:48:37.273564 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:48:37.273584 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:48:37.273605 kernel: Key type dns_resolver registered Feb 13 19:48:37.273625 kernel: registered taskstats version 1 Feb 13 19:48:37.273645 kernel: Loading compiled-in X.509 certificates Feb 13 19:48:37.273665 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec' Feb 13 19:48:37.273684 kernel: Key type .fscrypt registered Feb 13 19:48:37.273705 kernel: Key type fscrypt-provisioning registered Feb 13 19:48:37.273732 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 19:48:37.273751 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:48:37.273770 kernel: ima: No architecture policies found Feb 13 19:48:37.273789 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 19:48:37.273809 kernel: clk: Disabling unused clocks Feb 13 19:48:37.273827 kernel: Freeing unused kernel memory: 39360K Feb 13 19:48:37.273846 kernel: Run /init as init process Feb 13 19:48:37.273866 kernel: with arguments: Feb 13 19:48:37.273886 kernel: /init Feb 13 19:48:37.273904 kernel: with environment: Feb 13 19:48:37.273928 kernel: HOME=/ Feb 13 19:48:37.273947 kernel: TERM=linux Feb 13 19:48:37.273965 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:48:37.273989 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:48:37.274013 systemd[1]: Detected virtualization amazon. Feb 13 19:48:37.274034 systemd[1]: Detected architecture arm64. Feb 13 19:48:37.274055 systemd[1]: Running in initrd. Feb 13 19:48:37.274081 systemd[1]: No hostname configured, using default hostname. Feb 13 19:48:37.274102 systemd[1]: Hostname set to . Feb 13 19:48:37.274180 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:48:37.274208 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:48:37.274259 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:48:37.274281 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:48:37.274303 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:48:37.274324 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:48:37.274353 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:48:37.274374 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:48:37.274398 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:48:37.274419 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:48:37.274439 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:48:37.274460 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:48:37.274481 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:48:37.274506 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:48:37.274527 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:48:37.274547 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:48:37.274567 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:48:37.274587 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:48:37.274608 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:48:37.274628 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:48:37.274648 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 19:48:37.274669 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:48:37.274694 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:48:37.274714 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:48:37.274734 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:48:37.274755 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:48:37.274776 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:48:37.274796 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:48:37.274816 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:48:37.274837 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:48:37.274862 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:48:37.274884 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:48:37.274904 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:48:37.274925 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:48:37.274946 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:48:37.275013 systemd-journald[250]: Collecting audit messages is disabled. Feb 13 19:48:37.275061 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:37.275083 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:48:37.275109 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:48:37.275664 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:48:37.275694 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:48:37.275714 kernel: Bridge firewalling registered Feb 13 19:48:37.275734 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:48:37.275758 systemd-journald[250]: Journal started Feb 13 19:48:37.275815 systemd-journald[250]: Runtime Journal (/run/log/journal/ec258e9f74d37d31d9a45b91432b1b08) is 8.0M, max 75.3M, 67.3M free. Feb 13 19:48:37.193530 systemd-modules-load[251]: Inserted module 'overlay' Feb 13 19:48:37.255260 systemd-modules-load[251]: Inserted module 'br_netfilter' Feb 13 19:48:37.285534 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:48:37.299731 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:48:37.309433 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:48:37.321109 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:48:37.335388 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:48:37.342579 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:48:37.353518 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:48:37.362777 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:48:37.375666 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 19:48:37.394103 dracut-cmdline[286]: dracut-dracut-053 Feb 13 19:48:37.402850 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 19:48:37.467552 systemd-resolved[290]: Positive Trust Anchors: Feb 13 19:48:37.467587 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:48:37.467650 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:48:37.553163 kernel: SCSI subsystem initialized Feb 13 19:48:37.561159 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:48:37.574175 kernel: iscsi: registered transport (tcp) Feb 13 19:48:37.596400 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:48:37.596470 kernel: QLogic iSCSI HBA Driver Feb 13 19:48:37.696502 kernel: random: crng init done Feb 13 19:48:37.696534 systemd-resolved[290]: Defaulting to hostname 'linux'. Feb 13 19:48:37.699945 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:48:37.703911 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:48:37.728194 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:48:37.737522 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:48:37.771355 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:48:37.771431 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:48:37.771459 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:48:37.841168 kernel: raid6: neonx8 gen() 6675 MB/s Feb 13 19:48:37.858157 kernel: raid6: neonx4 gen() 6486 MB/s Feb 13 19:48:37.875157 kernel: raid6: neonx2 gen() 5399 MB/s Feb 13 19:48:37.892156 kernel: raid6: neonx1 gen() 3940 MB/s Feb 13 19:48:37.909156 kernel: raid6: int64x8 gen() 3801 MB/s Feb 13 19:48:37.926156 kernel: raid6: int64x4 gen() 3688 MB/s Feb 13 19:48:37.943156 kernel: raid6: int64x2 gen() 3558 MB/s Feb 13 19:48:37.960904 kernel: raid6: int64x1 gen() 2771 MB/s Feb 13 19:48:37.960938 kernel: raid6: using algorithm neonx8 gen() 6675 MB/s Feb 13 19:48:37.978892 kernel: raid6: .... 
xor() 4875 MB/s, rmw enabled Feb 13 19:48:37.978946 kernel: raid6: using neon recovery algorithm Feb 13 19:48:37.987364 kernel: xor: measuring software checksum speed Feb 13 19:48:37.987422 kernel: 8regs : 10957 MB/sec Feb 13 19:48:37.988470 kernel: 32regs : 11494 MB/sec Feb 13 19:48:37.989628 kernel: arm64_neon : 9581 MB/sec Feb 13 19:48:37.989660 kernel: xor: using function: 32regs (11494 MB/sec) Feb 13 19:48:38.073175 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:48:38.092271 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:48:38.102472 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:48:38.148087 systemd-udevd[470]: Using default interface naming scheme 'v255'. Feb 13 19:48:38.156924 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:48:38.186432 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:48:38.213579 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation Feb 13 19:48:38.279290 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:48:38.294541 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:48:38.410257 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:48:38.418717 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:48:38.469052 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:48:38.473568 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:48:38.474025 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:48:38.474098 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:48:38.496618 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:48:38.533711 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:48:38.628631 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 19:48:38.628695 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 13 19:48:38.672670 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 13 19:48:38.672956 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 13 19:48:38.673343 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 19:48:38.673374 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 13 19:48:38.673648 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:4a:f8:8f:24:67 Feb 13 19:48:38.640430 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:48:38.640962 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:48:38.643646 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:48:38.645781 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:48:38.646046 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:38.648236 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:48:38.663820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:48:38.704894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 19:48:38.717292 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 13 19:48:38.719642 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:48:38.728797 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:48:38.728860 kernel: GPT:9289727 != 16777215 Feb 13 19:48:38.730703 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:48:38.730746 kernel: GPT:9289727 != 16777215 Feb 13 19:48:38.731768 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:48:38.731805 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:48:38.737904 (udev-worker)[544]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:48:38.766187 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:48:38.859556 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Feb 13 19:48:38.870168 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (517) Feb 13 19:48:38.901193 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (526) Feb 13 19:48:38.983578 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Feb 13 19:48:39.001437 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:48:39.027743 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Feb 13 19:48:39.030246 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Feb 13 19:48:39.044640 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:48:39.063619 disk-uuid[661]: Primary Header is updated. Feb 13 19:48:39.063619 disk-uuid[661]: Secondary Entries is updated. Feb 13 19:48:39.063619 disk-uuid[661]: Secondary Header is updated. Feb 13 19:48:39.074213 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:48:39.085175 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:48:40.093151 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 19:48:40.094606 disk-uuid[662]: The operation has completed successfully. Feb 13 19:48:40.284359 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:48:40.286462 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:48:40.351385 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:48:40.359490 sh[921]: Success Feb 13 19:48:40.389186 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 19:48:40.501889 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:48:40.519382 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:48:40.524416 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 19:48:40.562562 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 Feb 13 19:48:40.562652 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:48:40.562680 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:48:40.565450 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:48:40.565484 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:48:40.739183 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 19:48:40.777370 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:48:40.783623 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:48:40.799394 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:48:40.805573 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:48:40.824153 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:48:40.824221 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:48:40.824263 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:48:40.831186 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:48:40.848835 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:48:40.853146 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:48:40.884200 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:48:40.900876 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:48:41.007217 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:48:41.028563 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:48:41.086551 systemd-networkd[1113]: lo: Link UP Feb 13 19:48:41.086575 systemd-networkd[1113]: lo: Gained carrier Feb 13 19:48:41.090366 systemd-networkd[1113]: Enumeration completed Feb 13 19:48:41.092750 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:48:41.094216 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:48:41.094222 systemd-networkd[1113]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:48:41.102053 systemd[1]: Reached target network.target - Network. Feb 13 19:48:41.108509 systemd-networkd[1113]: eth0: Link UP Feb 13 19:48:41.108526 systemd-networkd[1113]: eth0: Gained carrier Feb 13 19:48:41.108547 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 19:48:41.132348 systemd-networkd[1113]: eth0: DHCPv4 address 172.31.20.134/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:48:41.710335 ignition[1032]: Ignition 2.19.0 Feb 13 19:48:41.710826 ignition[1032]: Stage: fetch-offline Feb 13 19:48:41.711987 ignition[1032]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:41.712010 ignition[1032]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:48:41.712509 ignition[1032]: Ignition finished successfully Feb 13 19:48:41.721296 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:48:41.728402 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 19:48:41.765426 ignition[1122]: Ignition 2.19.0 Feb 13 19:48:41.765454 ignition[1122]: Stage: fetch Feb 13 19:48:41.766494 ignition[1122]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:41.766527 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:48:41.766762 ignition[1122]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:48:41.789565 ignition[1122]: PUT result: OK Feb 13 19:48:41.792597 ignition[1122]: parsed url from cmdline: "" Feb 13 19:48:41.792613 ignition[1122]: no config URL provided Feb 13 19:48:41.792629 ignition[1122]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:48:41.792654 ignition[1122]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:48:41.792685 ignition[1122]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:48:41.794492 ignition[1122]: PUT result: OK Feb 13 19:48:41.796503 ignition[1122]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 13 19:48:41.800631 ignition[1122]: GET result: OK Feb 13 19:48:41.800883 ignition[1122]: parsing config with SHA512: f5e995dcbaf8e22eec86f3a33309d397abeeb2d90feb026a55cd8b399a410380fde24436fab55eef8949e9dd75512fb47a4c87c6835e52b816e4727de2c50b14 Feb 13 19:48:41.814246 unknown[1122]: fetched base config from "system" Feb 13 19:48:41.814492 unknown[1122]: fetched base config from "system" Feb 13 19:48:41.815299 ignition[1122]: fetch: fetch complete Feb 13 19:48:41.814506 unknown[1122]: fetched user config from "aws" Feb 13 19:48:41.815311 ignition[1122]: fetch: fetch passed Feb 13 19:48:41.815432 ignition[1122]: Ignition finished successfully Feb 13 19:48:41.824620 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 19:48:41.846537 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:48:41.871538 ignition[1128]: Ignition 2.19.0 Feb 13 19:48:41.871572 ignition[1128]: Stage: kargs Feb 13 19:48:41.873390 ignition[1128]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:41.873972 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:48:41.874764 ignition[1128]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:48:41.880855 ignition[1128]: PUT result: OK Feb 13 19:48:41.886816 ignition[1128]: kargs: kargs passed Feb 13 19:48:41.886970 ignition[1128]: Ignition finished successfully Feb 13 19:48:41.891896 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:48:41.905497 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 19:48:41.932198 ignition[1134]: Ignition 2.19.0 Feb 13 19:48:41.932220 ignition[1134]: Stage: disks Feb 13 19:48:41.932930 ignition[1134]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:41.932955 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:48:41.933108 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:48:41.935619 ignition[1134]: PUT result: OK Feb 13 19:48:41.945541 ignition[1134]: disks: disks passed Feb 13 19:48:41.945662 ignition[1134]: Ignition finished successfully Feb 13 19:48:41.949423 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:48:41.954232 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:48:41.956646 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:48:41.960867 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:48:41.962833 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:48:41.964797 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:48:41.984629 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:48:42.048918 systemd-fsck[1142]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:48:42.060011 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:48:42.072433 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:48:42.162664 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none. Feb 13 19:48:42.163719 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:48:42.167398 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:48:42.201356 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:48:42.207454 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:48:42.212481 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:48:42.212598 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:48:42.212652 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:48:42.236172 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1161) Feb 13 19:48:42.241240 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:48:42.241394 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:48:42.241423 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:48:42.239760 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:48:42.251424 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:48:42.257170 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:48:42.266317 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:48:42.455311 systemd-networkd[1113]: eth0: Gained IPv6LL Feb 13 19:48:42.860032 initrd-setup-root[1185]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:48:42.883478 initrd-setup-root[1192]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:48:42.908896 initrd-setup-root[1199]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:48:42.918107 initrd-setup-root[1206]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:48:43.240530 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:48:43.248305 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:48:43.258564 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:48:43.280723 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:48:43.285223 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:48:43.314272 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:48:43.334869 ignition[1275]: INFO : Ignition 2.19.0 Feb 13 19:48:43.334869 ignition[1275]: INFO : Stage: mount Feb 13 19:48:43.334869 ignition[1275]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:43.334869 ignition[1275]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:48:43.343177 ignition[1275]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:48:43.343177 ignition[1275]: INFO : PUT result: OK Feb 13 19:48:43.348798 ignition[1275]: INFO : mount: mount passed Feb 13 19:48:43.351564 ignition[1275]: INFO : Ignition finished successfully Feb 13 19:48:43.353507 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:48:43.370499 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:48:43.395542 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:48:43.415156 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1285) Feb 13 19:48:43.420029 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 19:48:43.420080 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 19:48:43.420107 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 19:48:43.425180 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 19:48:43.429365 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:48:43.474357 ignition[1302]: INFO : Ignition 2.19.0 Feb 13 19:48:43.474357 ignition[1302]: INFO : Stage: files Feb 13 19:48:43.477635 ignition[1302]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:43.477635 ignition[1302]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:48:43.477635 ignition[1302]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:48:43.484167 ignition[1302]: INFO : PUT result: OK Feb 13 19:48:43.488928 ignition[1302]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:48:43.491788 ignition[1302]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:48:43.491788 ignition[1302]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:48:43.527676 ignition[1302]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:48:43.530565 ignition[1302]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:48:43.533411 unknown[1302]: wrote ssh authorized keys file for user: core Feb 13 19:48:43.535694 ignition[1302]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:48:43.551431 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:48:43.555261 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 19:48:43.633586 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:48:43.797207 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 19:48:43.797207 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:48:43.804646 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:48:43.804646 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:48:43.804646 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:48:43.804646 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:48:43.804646 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:48:43.804646 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:48:43.804646 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:48:43.804646 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:48:43.804646 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:48:43.804646 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:48:43.804646 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:48:43.804646 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:48:43.804646 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Feb 13 19:48:44.310071 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 19:48:44.702287 ignition[1302]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 19:48:44.702287 ignition[1302]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 19:48:44.709605 ignition[1302]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:48:44.709605 ignition[1302]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:48:44.709605 ignition[1302]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 19:48:44.709605 ignition[1302]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:48:44.709605 ignition[1302]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:48:44.709605 ignition[1302]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:48:44.709605 ignition[1302]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:48:44.709605 ignition[1302]: INFO : files: files passed Feb 13 19:48:44.709605 ignition[1302]: INFO : Ignition finished successfully Feb 13 19:48:44.732795 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:48:44.750560 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:48:44.759395 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:48:44.766746 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:48:44.766996 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:48:44.795849 initrd-setup-root-after-ignition[1330]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:48:44.795849 initrd-setup-root-after-ignition[1330]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:48:44.805170 initrd-setup-root-after-ignition[1334]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:48:44.809183 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:48:44.819250 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:48:44.829412 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Feb 13 19:48:44.897449 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:48:44.898031 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:48:44.905821 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:48:44.908193 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:48:44.910316 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:48:44.917435 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:48:44.961869 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:48:44.971420 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:48:45.002206 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:48:45.006734 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:48:45.011184 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:48:45.013946 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:48:45.014230 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:48:45.021747 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:48:45.023888 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:48:45.025781 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:48:45.028015 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:48:45.030412 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:48:45.032744 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:48:45.034811 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:48:45.037303 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:48:45.039442 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:48:45.040370 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:48:45.040613 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:48:45.040844 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:48:45.041572 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:48:45.041908 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:48:45.042166 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:48:45.048451 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:48:45.048987 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:48:45.049239 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:48:45.050019 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:48:45.050523 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:48:45.081141 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:48:45.085522 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:48:45.113815 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Feb 13 19:48:45.117359 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:48:45.121196 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:48:45.129529 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:48:45.132333 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:48:45.135360 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:48:45.138634 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:48:45.170431 ignition[1354]: INFO : Ignition 2.19.0 Feb 13 19:48:45.170431 ignition[1354]: INFO : Stage: umount Feb 13 19:48:45.170431 ignition[1354]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:48:45.170431 ignition[1354]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 19:48:45.170431 ignition[1354]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 19:48:45.139888 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:48:45.186214 ignition[1354]: INFO : PUT result: OK Feb 13 19:48:45.169983 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:48:45.173438 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:48:45.191768 ignition[1354]: INFO : umount: umount passed Feb 13 19:48:45.191768 ignition[1354]: INFO : Ignition finished successfully Feb 13 19:48:45.193805 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:48:45.194086 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:48:45.212010 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:48:45.212187 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:48:45.214488 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:48:45.214612 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:48:45.219502 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:48:45.221911 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:48:45.224675 systemd[1]: Stopped target network.target - Network. Feb 13 19:48:45.226451 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:48:45.226582 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:48:45.235662 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:48:45.239419 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:48:45.248018 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:48:45.250568 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:48:45.279859 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:48:45.282533 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:48:45.282616 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:48:45.284587 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:48:45.284663 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:48:45.286849 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:48:45.286939 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:48:45.288877 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Feb 13 19:48:45.288958 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:48:45.291678 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:48:45.293949 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:48:45.303972 systemd-networkd[1113]: eth0: DHCPv6 lease lost Feb 13 19:48:45.306600 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:48:45.308608 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:48:45.308789 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:48:45.319322 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:48:45.319540 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:48:45.326660 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:48:45.328231 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:48:45.338833 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:48:45.338966 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:48:45.352867 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:48:45.352984 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:48:45.365350 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:48:45.372198 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:48:45.372544 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:48:45.379824 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:48:45.379923 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:48:45.382406 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:48:45.382509 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:48:45.384730 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:48:45.384846 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:48:45.399825 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:48:45.424735 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:48:45.425039 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:48:45.429979 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:48:45.430068 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:48:45.439008 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:48:45.439086 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:48:45.441081 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:48:45.441188 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:48:45.443342 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:48:45.443425 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:48:45.445552 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:48:45.445633 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:48:45.458549 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Feb 13 19:48:45.474550 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:48:45.474663 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:48:45.477155 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:48:45.477239 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:45.483412 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:48:45.483620 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:48:45.502510 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:48:45.502716 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:48:45.510726 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:48:45.525114 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:48:45.545088 systemd[1]: Switching root. Feb 13 19:48:45.592897 systemd-journald[250]: Journal stopped Feb 13 19:48:48.753703 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Feb 13 19:48:48.753906 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:48:48.753965 kernel: SELinux: policy capability open_perms=1 Feb 13 19:48:48.753996 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:48:48.754027 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:48:48.754060 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:48:48.754091 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:48:48.754142 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:48:48.754180 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:48:48.754230 kernel: audit: type=1403 audit(1739476126.688:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:48:48.754276 systemd[1]: Successfully loaded SELinux policy in 81.100ms. Feb 13 19:48:48.754324 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.599ms. Feb 13 19:48:48.754359 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:48:48.754390 systemd[1]: Detected virtualization amazon. Feb 13 19:48:48.754424 systemd[1]: Detected architecture arm64. Feb 13 19:48:48.754453 systemd[1]: Detected first boot. Feb 13 19:48:48.754486 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:48:48.754519 zram_generator::config[1396]: No configuration found. Feb 13 19:48:48.754557 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:48:48.754592 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:48:48.754624 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:48:48.754658 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:48:48.754695 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:48:48.754729 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:48:48.754761 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:48:48.754793 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Feb 13 19:48:48.754825 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:48:48.754861 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:48:48.754894 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:48:48.754927 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:48:48.754956 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:48:48.754987 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:48:48.755017 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:48:48.755058 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:48:48.755090 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:48:48.756115 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:48:48.756237 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:48:48.756272 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:48:48.756305 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:48:48.756347 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:48:48.756390 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:48:48.756420 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:48:48.756450 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:48:48.756482 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:48:48.756519 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:48:48.756550 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:48:48.756582 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:48:48.756614 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:48:48.756647 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:48:48.756677 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:48:48.756708 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:48:48.756740 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:48:48.756770 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:48:48.756805 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:48:48.756837 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:48:48.756870 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:48:48.756902 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:48:48.756932 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:48:48.756963 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:48:48.756993 systemd[1]: Reached target machines.target - Containers. 
Feb 13 19:48:48.757022 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:48:48.757054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:48:48.757090 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:48:48.757119 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:48:48.757199 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:48:48.757233 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:48:48.757264 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:48:48.757294 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:48:48.757324 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:48:48.757360 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:48:48.757401 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:48:48.757432 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:48:48.764656 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:48:48.764738 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:48:48.764772 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:48:48.764804 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:48:48.764834 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:48:48.764874 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:48:48.764906 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:48:48.764954 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:48:48.764991 systemd[1]: Stopped verity-setup.service. Feb 13 19:48:48.765022 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:48:48.765054 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:48:48.765086 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:48:48.770178 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:48:48.770251 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:48:48.770283 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:48:48.770314 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:48:48.770344 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:48:48.770374 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:48:48.770407 kernel: ACPI: bus type drm_connector registered Feb 13 19:48:48.770437 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:48:48.770469 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:48:48.770510 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:48:48.770543 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Feb 13 19:48:48.770571 kernel: loop: module loaded Feb 13 19:48:48.770600 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:48:48.770630 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:48:48.770662 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:48:48.770692 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:48:48.770722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:48:48.770758 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:48:48.770791 kernel: fuse: init (API version 7.39) Feb 13 19:48:48.770821 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:48:48.770854 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:48:48.770887 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:48:48.770917 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:48:48.770954 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:48:48.771025 systemd-journald[1481]: Collecting audit messages is disabled. Feb 13 19:48:48.771077 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:48:48.771108 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:48:48.771158 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:48:48.771194 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:48:48.771224 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:48:48.771260 systemd-journald[1481]: Journal started Feb 13 19:48:48.771308 systemd-journald[1481]: Runtime Journal (/run/log/journal/ec258e9f74d37d31d9a45b91432b1b08) is 8.0M, max 75.3M, 67.3M free. Feb 13 19:48:48.001998 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:48:48.781623 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:48:48.781736 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:48:48.101794 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 19:48:48.102725 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:48:48.802843 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:48:48.812044 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:48:48.812120 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:48:48.812183 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:48:48.830244 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:48:48.870073 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:48:48.870317 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:48:48.851204 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Feb 13 19:48:48.853870 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:48:48.862961 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:48:48.871014 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:48:48.875760 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:48:48.940585 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:48:48.954473 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:48:48.961602 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:48:48.974689 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:48:48.980214 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:48:48.985162 kernel: loop0: detected capacity change from 0 to 114432 Feb 13 19:48:49.003669 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:48:49.017831 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:48:49.039316 systemd-journald[1481]: Time spent on flushing to /var/log/journal/ec258e9f74d37d31d9a45b91432b1b08 is 80.119ms for 912 entries. Feb 13 19:48:49.039316 systemd-journald[1481]: System Journal (/var/log/journal/ec258e9f74d37d31d9a45b91432b1b08) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:48:49.137696 systemd-journald[1481]: Received client request to flush runtime journal. Feb 13 19:48:49.137785 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:48:49.043374 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:48:49.048280 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:48:49.100550 udevadm[1536]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:48:49.137871 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:48:49.151762 kernel: loop1: detected capacity change from 0 to 114328 Feb 13 19:48:49.151561 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:48:49.156668 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:48:49.213678 systemd-tmpfiles[1543]: ACLs are not supported, ignoring. Feb 13 19:48:49.214580 systemd-tmpfiles[1543]: ACLs are not supported, ignoring. Feb 13 19:48:49.234418 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:48:49.286235 kernel: loop2: detected capacity change from 0 to 189592 Feb 13 19:48:49.352210 kernel: loop3: detected capacity change from 0 to 52536 Feb 13 19:48:49.575152 kernel: loop4: detected capacity change from 0 to 114432 Feb 13 19:48:49.587187 kernel: loop5: detected capacity change from 0 to 114328 Feb 13 19:48:49.604175 kernel: loop6: detected capacity change from 0 to 189592 Feb 13 19:48:49.629521 kernel: loop7: detected capacity change from 0 to 52536 Feb 13 19:48:49.639039 (sd-merge)[1551]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 19:48:49.644230 (sd-merge)[1551]: Merged extensions into '/usr'. 
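The "Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'" and "Merged extensions into '/usr'" lines are systemd-sysext combining the extension images' /usr trees with the base image, normally as an overlayfs mount on /usr. A small sketch of one way to observe the merged state from userspace by reading /proc/mounts (an illustrative check, not something the log itself runs):

    def usr_overlay(mounts: str = "/proc/mounts"):
        """Return the overlay mount options for /usr, or None if /usr is not merged."""
        with open(mounts) as fh:
            for line in fh:
                fields = line.split()
                if len(fields) >= 4 and fields[1] == "/usr" and fields[2] == "overlay":
                    return fields[3]   # options include one lowerdir= layer per extension
        return None

    if __name__ == "__main__":
        print(usr_overlay() or "/usr is not an overlay mount")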
Feb 13 19:48:49.651694 systemd[1]: Reloading requested from client PID 1507 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:48:49.651886 systemd[1]: Reloading... Feb 13 19:48:49.864269 zram_generator::config[1578]: No configuration found. Feb 13 19:48:50.202683 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:48:50.318212 systemd[1]: Reloading finished in 665 ms. Feb 13 19:48:50.362242 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:48:50.378558 systemd[1]: Starting ensure-sysext.service... Feb 13 19:48:50.389495 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:48:50.412285 systemd[1]: Reloading requested from client PID 1629 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:48:50.412323 systemd[1]: Reloading... Feb 13 19:48:50.474924 systemd-tmpfiles[1630]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:48:50.475681 systemd-tmpfiles[1630]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:48:50.478326 systemd-tmpfiles[1630]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:48:50.479031 systemd-tmpfiles[1630]: ACLs are not supported, ignoring. Feb 13 19:48:50.479377 systemd-tmpfiles[1630]: ACLs are not supported, ignoring. Feb 13 19:48:50.488680 systemd-tmpfiles[1630]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:48:50.488703 systemd-tmpfiles[1630]: Skipping /boot Feb 13 19:48:50.515153 systemd-tmpfiles[1630]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:48:50.515353 systemd-tmpfiles[1630]: Skipping /boot Feb 13 19:48:50.582223 zram_generator::config[1657]: No configuration found. Feb 13 19:48:50.850875 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:48:50.964946 systemd[1]: Reloading finished in 551 ms. Feb 13 19:48:51.003097 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:48:51.011992 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:48:51.036642 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 19:48:51.047460 ldconfig[1503]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:48:51.061619 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:48:51.067626 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:48:51.089891 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:48:51.096494 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:48:51.103896 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:48:51.107769 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
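The "Duplicate line for path "/root", ignoring" style warnings above mean that two tmpfiles.d fragments declare the same path, and systemd-tmpfiles keeps only the first declaration it parses. A rough Python sketch of that duplicate check (a simplification for illustration: the real parser also handles /etc and /run fragments, basename overrides, and specifiers that this ignores):

    import glob

    def duplicate_tmpfiles_paths(pattern: str = "/usr/lib/tmpfiles.d/*.conf"):
        """Report paths declared by more than one tmpfiles.d line."""
        seen = {}       # path -> "file:lineno" of the first declaration
        dupes = []
        for conf in sorted(glob.glob(pattern)):
            with open(conf) as fh:
                for lineno, raw in enumerate(fh, start=1):
                    line = raw.strip()
                    if not line or line.startswith("#"):
                        continue
                    fields = line.split()
                    if len(fields) < 2:
                        continue
                    path, where = fields[1], f"{conf}:{lineno}"   # column 2 is the path
                    if path in seen:
                        dupes.append((path, seen[path], where))
                    else:
                        seen[path] = where
        return dupes

    if __name__ == "__main__":
        for path, first, dup in duplicate_tmpfiles_paths():
            print(f"{dup}: duplicate line for path {path!r} (first seen at {first})")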
Feb 13 19:48:51.122640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:48:51.132748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:48:51.141888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:48:51.160674 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:48:51.162814 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:48:51.179863 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:48:51.188457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:48:51.188858 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:48:51.197393 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:48:51.202062 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:48:51.204312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:48:51.204739 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:48:51.217263 systemd[1]: Finished ensure-sysext.service. Feb 13 19:48:51.219972 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:48:51.220346 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:48:51.228746 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:48:51.231227 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:48:51.234277 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:48:51.290501 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:48:51.296321 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:48:51.299990 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:48:51.301561 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:48:51.309060 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:48:51.322392 systemd-udevd[1722]: Using default interface naming scheme 'v255'. Feb 13 19:48:51.322621 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:48:51.327194 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:48:51.327607 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:48:51.365837 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:48:51.373288 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:48:51.376965 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 13 19:48:51.385750 augenrules[1749]: No rules Feb 13 19:48:51.391242 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 19:48:51.411346 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:48:51.426414 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:48:51.454534 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:48:51.634031 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:48:51.637260 systemd-networkd[1758]: lo: Link UP Feb 13 19:48:51.637283 systemd-networkd[1758]: lo: Gained carrier Feb 13 19:48:51.638398 systemd-networkd[1758]: Enumeration completed Feb 13 19:48:51.638585 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:48:51.656590 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:48:51.729910 (udev-worker)[1770]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:48:51.789584 systemd-resolved[1716]: Positive Trust Anchors: Feb 13 19:48:51.789615 systemd-resolved[1716]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:48:51.789679 systemd-resolved[1716]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:48:51.804666 systemd-resolved[1716]: Defaulting to hostname 'linux'. Feb 13 19:48:51.810193 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:48:51.812779 systemd[1]: Reached target network.target - Network. Feb 13 19:48:51.815091 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:48:51.851481 systemd-networkd[1758]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:48:51.851518 systemd-networkd[1758]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:48:51.856406 systemd-networkd[1758]: eth0: Link UP Feb 13 19:48:51.857472 systemd-networkd[1758]: eth0: Gained carrier Feb 13 19:48:51.857520 systemd-networkd[1758]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:48:51.862192 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1763) Feb 13 19:48:51.885289 systemd-networkd[1758]: eth0: DHCPv4 address 172.31.20.134/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:48:52.055242 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:48:52.142904 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:48:52.153741 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:48:52.158234 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
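The "DHCPv4 address 172.31.20.134/20, gateway 172.31.16.1 acquired from 172.31.16.1" line places the instance in the 172.31.16.0/20 subnet (4096 addresses), so the gateway is on-link. A quick check of that arithmetic with Python's ipaddress module, using only the values from the log:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.20.134/20")    # address/prefix from the lease
    net = iface.network
    print(net)                                            # 172.31.16.0/20
    print(net.num_addresses)                              # 4096
    print(ipaddress.ip_address("172.31.16.1") in net)     # True: the gateway is on-link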
Feb 13 19:48:52.173048 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:48:52.198102 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:48:52.208055 lvm[1883]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:48:52.218241 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:48:52.252264 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:48:52.255357 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:48:52.257576 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:48:52.259908 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:48:52.263841 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:48:52.266602 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:48:52.268906 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:48:52.271546 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:48:52.273981 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:48:52.274050 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:48:52.275878 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:48:52.278911 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:48:52.284770 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:48:52.299341 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:48:52.303728 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:48:52.307036 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:48:52.309490 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:48:52.311422 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:48:52.314298 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:48:52.314370 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:48:52.325327 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:48:52.332458 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:48:52.345114 lvm[1891]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:48:52.348480 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:48:52.355280 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:48:52.361527 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:48:52.365663 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:48:52.377586 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:48:52.400556 systemd[1]: Started ntpd.service - Network Time Service. 
Feb 13 19:48:52.425275 jq[1895]: false Feb 13 19:48:52.417831 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:48:52.439297 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:48:52.447537 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:48:52.457414 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:48:52.471076 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:48:52.471349 dbus-daemon[1894]: [system] SELinux support is enabled Feb 13 19:48:52.475679 dbus-daemon[1894]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1758 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:48:52.476726 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:48:52.479245 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:48:52.484717 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:48:52.492928 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:48:52.526696 extend-filesystems[1896]: Found loop4 Feb 13 19:48:52.526696 extend-filesystems[1896]: Found loop5 Feb 13 19:48:52.526696 extend-filesystems[1896]: Found loop6 Feb 13 19:48:52.526696 extend-filesystems[1896]: Found loop7 Feb 13 19:48:52.526696 extend-filesystems[1896]: Found nvme0n1 Feb 13 19:48:52.526696 extend-filesystems[1896]: Found nvme0n1p1 Feb 13 19:48:52.526696 extend-filesystems[1896]: Found nvme0n1p2 Feb 13 19:48:52.526696 extend-filesystems[1896]: Found nvme0n1p3 Feb 13 19:48:52.526696 extend-filesystems[1896]: Found usr Feb 13 19:48:52.526696 extend-filesystems[1896]: Found nvme0n1p4 Feb 13 19:48:52.526696 extend-filesystems[1896]: Found nvme0n1p6 Feb 13 19:48:52.526696 extend-filesystems[1896]: Found nvme0n1p7 Feb 13 19:48:52.526696 extend-filesystems[1896]: Found nvme0n1p9 Feb 13 19:48:52.526696 extend-filesystems[1896]: Checking size of /dev/nvme0n1p9 Feb 13 19:48:52.604944 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:48:52.497680 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:48:52.538791 dbus-daemon[1894]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:48:52.607639 extend-filesystems[1896]: Resized partition /dev/nvme0n1p9 Feb 13 19:48:52.509592 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:48:52.611856 extend-filesystems[1920]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:48:52.523584 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:48:52.524164 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:48:52.536749 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:48:52.536807 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Feb 13 19:48:52.556548 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:48:52.556590 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:48:52.598179 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:48:52.624806 ntpd[1898]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:35:09 UTC 2025 (1): Starting Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: ---------------------------------------------------- Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: corporation. Support and training for ntp-4 are Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: available at https://www.nwtime.org/support Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: ---------------------------------------------------- Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: proto: precision = 0.108 usec (-23) Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: basedate set to 2025-02-01 Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: gps base set to 2025-02-02 (week 2352) Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: Listen normally on 3 eth0 172.31.20.134:123 Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: Listen normally on 4 lo [::1]:123 Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: bind(21) AF_INET6 fe80::44a:f8ff:fe8f:2467%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: unable to create socket on eth0 (5) for fe80::44a:f8ff:fe8f:2467%2#123 Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: failed to init interface for address fe80::44a:f8ff:fe8f:2467%2 Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: Listening on routing socket on fd #21 for interface updates Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:48:52.648825 ntpd[1898]: 13 Feb 19:48:52 ntpd[1898]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:48:52.643775 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:48:52.624865 ntpd[1898]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:48:52.644182 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:48:52.624887 ntpd[1898]: ---------------------------------------------------- Feb 13 19:48:52.624906 ntpd[1898]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:48:52.624927 ntpd[1898]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:48:52.624950 ntpd[1898]: corporation. Support and training for ntp-4 are Feb 13 19:48:52.624973 ntpd[1898]: available at https://www.nwtime.org/support Feb 13 19:48:52.624993 ntpd[1898]: ---------------------------------------------------- Feb 13 19:48:52.627470 ntpd[1898]: proto: precision = 0.108 usec (-23) Feb 13 19:48:52.628382 ntpd[1898]: basedate set to 2025-02-01 Feb 13 19:48:52.628414 ntpd[1898]: gps base set to 2025-02-02 (week 2352) Feb 13 19:48:52.631087 ntpd[1898]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:48:52.631180 ntpd[1898]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:48:52.631524 ntpd[1898]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:48:52.631605 ntpd[1898]: Listen normally on 3 eth0 172.31.20.134:123 Feb 13 19:48:52.631676 ntpd[1898]: Listen normally on 4 lo [::1]:123 Feb 13 19:48:52.631769 ntpd[1898]: bind(21) AF_INET6 fe80::44a:f8ff:fe8f:2467%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:48:52.631815 ntpd[1898]: unable to create socket on eth0 (5) for fe80::44a:f8ff:fe8f:2467%2#123 Feb 13 19:48:52.631843 ntpd[1898]: failed to init interface for address fe80::44a:f8ff:fe8f:2467%2 Feb 13 19:48:52.631912 ntpd[1898]: Listening on routing socket on fd #21 for interface updates Feb 13 19:48:52.637707 ntpd[1898]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:48:52.637756 ntpd[1898]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:48:52.693003 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:48:52.693093 jq[1911]: true Feb 13 19:48:52.704503 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:48:52.704897 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:48:52.716686 extend-filesystems[1920]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:48:52.716686 extend-filesystems[1920]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:48:52.716686 extend-filesystems[1920]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 19:48:52.739320 extend-filesystems[1896]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:48:52.724066 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:48:52.725574 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:48:52.777367 tar[1926]: linux-arm64/helm Feb 13 19:48:52.813483 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:48:52.825830 (ntainerd)[1939]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:48:52.847780 jq[1938]: true Feb 13 19:48:52.848000 update_engine[1909]: I20250213 19:48:52.846822 1909 main.cc:92] Flatcar Update Engine starting Feb 13 19:48:52.872542 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:48:52.876670 update_engine[1909]: I20250213 19:48:52.873772 1909 update_check_scheduler.cc:74] Next update check in 11m22s Feb 13 19:48:52.878437 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:48:52.886945 systemd[1]: Finished setup-oem.service - Setup OEM. 
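The kernel and resize2fs lines above record an online grow of the root filesystem on /dev/nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 5.7 GiB. The arithmetic, using only the numbers from the log:

    BLOCK = 4096                           # "(4k) blocks" per the resize2fs output
    for label, blocks in (("before", 553_472), ("after", 1_489_915)):
        size = blocks * BLOCK
        print(f"{label}: {size} bytes ≈ {size / 2**30:.2f} GiB")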
Feb 13 19:48:52.976645 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1772) Feb 13 19:48:53.052602 coreos-metadata[1893]: Feb 13 19:48:53.052 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:48:53.058375 coreos-metadata[1893]: Feb 13 19:48:53.058 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:48:53.061592 coreos-metadata[1893]: Feb 13 19:48:53.061 INFO Fetch successful Feb 13 19:48:53.061881 coreos-metadata[1893]: Feb 13 19:48:53.061 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:48:53.067841 coreos-metadata[1893]: Feb 13 19:48:53.067 INFO Fetch successful Feb 13 19:48:53.067841 coreos-metadata[1893]: Feb 13 19:48:53.067 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:48:53.068005 bash[1978]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:48:53.071051 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:48:53.077619 coreos-metadata[1893]: Feb 13 19:48:53.070 INFO Fetch successful Feb 13 19:48:53.077619 coreos-metadata[1893]: Feb 13 19:48:53.070 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:48:53.078887 coreos-metadata[1893]: Feb 13 19:48:53.078 INFO Fetch successful Feb 13 19:48:53.078887 coreos-metadata[1893]: Feb 13 19:48:53.078 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:48:53.079513 coreos-metadata[1893]: Feb 13 19:48:53.079 INFO Fetch failed with 404: resource not found Feb 13 19:48:53.079513 coreos-metadata[1893]: Feb 13 19:48:53.079 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:48:53.080897 coreos-metadata[1893]: Feb 13 19:48:53.080 INFO Fetch successful Feb 13 19:48:53.080897 coreos-metadata[1893]: Feb 13 19:48:53.080 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:48:53.087277 coreos-metadata[1893]: Feb 13 19:48:53.087 INFO Fetch successful Feb 13 19:48:53.087277 coreos-metadata[1893]: Feb 13 19:48:53.087 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:48:53.092530 coreos-metadata[1893]: Feb 13 19:48:53.092 INFO Fetch successful Feb 13 19:48:53.092530 coreos-metadata[1893]: Feb 13 19:48:53.092 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:48:53.095185 coreos-metadata[1893]: Feb 13 19:48:53.094 INFO Fetch successful Feb 13 19:48:53.095185 coreos-metadata[1893]: Feb 13 19:48:53.094 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:48:53.095861 coreos-metadata[1893]: Feb 13 19:48:53.095 INFO Fetch successful Feb 13 19:48:53.098173 systemd[1]: Starting sshkeys.service... Feb 13 19:48:53.120636 dbus-daemon[1894]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:48:53.120948 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:48:53.131567 dbus-daemon[1894]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=1931 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:48:53.138448 systemd[1]: Starting polkit.service - Authorization Manager... 
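The coreos-metadata entries above follow the IMDSv2 pattern that also appears in the Ignition stages: a PUT to http://169.254.169.254/latest/api/token first obtains a session token, and the subsequent GETs against dated paths such as /2021-01-03/meta-data/instance-id carry that token in a header. A stdlib-only Python sketch of that handshake (the header names are the documented IMDSv2 ones; the 6-hour TTL is an illustrative assumption, and the calls only succeed from inside an EC2 instance):

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds: int = 21600) -> str:
        """PUT /latest/api/token to start an IMDSv2 session."""
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str) -> str:
        """GET a metadata path with the session token attached."""
        req = urllib.request.Request(
            f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        token = imds_token()
        print(imds_get("/2021-01-03/meta-data/instance-id", token))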
Feb 13 19:48:53.171314 systemd-logind[1908]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:48:53.171364 systemd-logind[1908]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 19:48:53.177835 systemd-logind[1908]: New seat seat0. Feb 13 19:48:53.182070 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:48:53.225863 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:48:53.229270 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:48:53.247142 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:48:53.252460 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:48:53.284933 polkitd[1998]: Started polkitd version 121 Feb 13 19:48:53.325848 polkitd[1998]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:48:53.326081 polkitd[1998]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:48:53.336535 polkitd[1998]: Finished loading, compiling and executing 2 rules Feb 13 19:48:53.342336 dbus-daemon[1894]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:48:53.346836 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:48:53.353211 polkitd[1998]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:48:53.394485 systemd-hostnamed[1931]: Hostname set to (transient) Feb 13 19:48:53.394519 systemd-resolved[1716]: System hostname changed to 'ip-172-31-20-134'. Feb 13 19:48:53.527436 systemd-networkd[1758]: eth0: Gained IPv6LL Feb 13 19:48:53.538041 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:48:53.545811 containerd[1939]: time="2025-02-13T19:48:53.534209160Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 19:48:53.559580 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:48:53.596814 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:48:53.609530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:48:53.618860 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:48:53.629663 locksmithd[1958]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:48:53.691107 amazon-ssm-agent[2067]: Initializing new seelog logger Feb 13 19:48:53.691107 amazon-ssm-agent[2067]: New Seelog Logger Creation Complete Feb 13 19:48:53.691107 amazon-ssm-agent[2067]: 2025/02/13 19:48:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:53.691107 amazon-ssm-agent[2067]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:53.691107 amazon-ssm-agent[2067]: 2025/02/13 19:48:53 processing appconfig overrides Feb 13 19:48:53.691107 amazon-ssm-agent[2067]: 2025/02/13 19:48:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:53.691107 amazon-ssm-agent[2067]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:53.691107 amazon-ssm-agent[2067]: 2025/02/13 19:48:53 processing appconfig overrides Feb 13 19:48:53.691107 amazon-ssm-agent[2067]: 2025/02/13 19:48:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
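The repeated "Found config file at /etc/amazon/ssm/amazon-ssm-agent.json" / "Applying config override" / "processing appconfig overrides" lines show the agent layering that JSON file over its built-in defaults. The sketch below illustrates the general override pattern with a recursive dictionary merge; the default keys and values are hypothetical examples, not taken from the agent:

    import json
    from copy import deepcopy

    def merge(defaults: dict, override: dict) -> dict:
        """Recursively overlay `override` onto `defaults`; override values win."""
        out = deepcopy(defaults)
        for key, value in override.items():
            if isinstance(value, dict) and isinstance(out.get(key), dict):
                out[key] = merge(out[key], value)
            else:
                out[key] = value
        return out

    def load_config(path: str = "/etc/amazon/ssm/amazon-ssm-agent.json") -> dict:
        defaults = {"Agent": {"Region": "", "LogLevel": "info"}}   # hypothetical defaults
        try:
            with open(path) as fh:
                return merge(defaults, json.load(fh))
        except FileNotFoundError:
            return defaults

    if __name__ == "__main__":
        print(load_config())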
Feb 13 19:48:53.691107 amazon-ssm-agent[2067]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:53.691107 amazon-ssm-agent[2067]: 2025/02/13 19:48:53 processing appconfig overrides Feb 13 19:48:53.743652 amazon-ssm-agent[2067]: 2025-02-13 19:48:53 INFO Proxy environment variables: Feb 13 19:48:53.743652 amazon-ssm-agent[2067]: 2025/02/13 19:48:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:53.743652 amazon-ssm-agent[2067]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:48:53.743652 amazon-ssm-agent[2067]: 2025/02/13 19:48:53 processing appconfig overrides Feb 13 19:48:53.715665 unknown[2010]: wrote ssh authorized keys file for user: core Feb 13 19:48:53.750272 coreos-metadata[2010]: Feb 13 19:48:53.703 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:48:53.750272 coreos-metadata[2010]: Feb 13 19:48:53.706 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:48:53.750272 coreos-metadata[2010]: Feb 13 19:48:53.711 INFO Fetch successful Feb 13 19:48:53.750272 coreos-metadata[2010]: Feb 13 19:48:53.711 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:48:53.750272 coreos-metadata[2010]: Feb 13 19:48:53.711 INFO Fetch successful Feb 13 19:48:53.796933 amazon-ssm-agent[2067]: 2025-02-13 19:48:53 INFO https_proxy: Feb 13 19:48:53.877226 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:48:53.888370 containerd[1939]: time="2025-02-13T19:48:53.888249794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:53.897335 amazon-ssm-agent[2067]: 2025-02-13 19:48:53 INFO http_proxy: Feb 13 19:48:53.902168 containerd[1939]: time="2025-02-13T19:48:53.898448462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:53.902168 containerd[1939]: time="2025-02-13T19:48:53.898853834Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:48:53.902168 containerd[1939]: time="2025-02-13T19:48:53.899023418Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:48:53.908714 containerd[1939]: time="2025-02-13T19:48:53.908465294Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:48:53.908714 containerd[1939]: time="2025-02-13T19:48:53.908538026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:53.908714 containerd[1939]: time="2025-02-13T19:48:53.908676662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:53.908714 containerd[1939]: time="2025-02-13T19:48:53.908710874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:53.909238 containerd[1939]: time="2025-02-13T19:48:53.909019970Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:53.909238 containerd[1939]: time="2025-02-13T19:48:53.909064754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:53.909238 containerd[1939]: time="2025-02-13T19:48:53.909098306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:53.909238 containerd[1939]: time="2025-02-13T19:48:53.909147002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:53.909447 containerd[1939]: time="2025-02-13T19:48:53.909327686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:53.915235 containerd[1939]: time="2025-02-13T19:48:53.912949514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:48:53.915235 containerd[1939]: time="2025-02-13T19:48:53.913239230Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:48:53.915235 containerd[1939]: time="2025-02-13T19:48:53.913273550Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:48:53.915235 containerd[1939]: time="2025-02-13T19:48:53.913485302Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:48:53.915235 containerd[1939]: time="2025-02-13T19:48:53.913583294Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:48:53.915582 update-ssh-keys[2092]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:48:53.918310 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:48:53.930754 containerd[1939]: time="2025-02-13T19:48:53.929739710Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:48:53.930754 containerd[1939]: time="2025-02-13T19:48:53.929882042Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:48:53.930754 containerd[1939]: time="2025-02-13T19:48:53.930039650Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:48:53.930754 containerd[1939]: time="2025-02-13T19:48:53.930106154Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:48:53.930754 containerd[1939]: time="2025-02-13T19:48:53.930185198Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:48:53.923915 systemd[1]: Finished sshkeys.service. Feb 13 19:48:53.931205 containerd[1939]: time="2025-02-13T19:48:53.930737498Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:48:53.937309 containerd[1939]: time="2025-02-13T19:48:53.934313786Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 13 19:48:53.937309 containerd[1939]: time="2025-02-13T19:48:53.935891930Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:48:53.937309 containerd[1939]: time="2025-02-13T19:48:53.935935394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:48:53.937309 containerd[1939]: time="2025-02-13T19:48:53.935966210Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:48:53.937309 containerd[1939]: time="2025-02-13T19:48:53.935999006Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:48:53.937309 containerd[1939]: time="2025-02-13T19:48:53.936030386Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:48:53.937309 containerd[1939]: time="2025-02-13T19:48:53.936061838Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:48:53.937309 containerd[1939]: time="2025-02-13T19:48:53.936093794Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:48:53.937309 containerd[1939]: time="2025-02-13T19:48:53.936153026Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:48:53.937309 containerd[1939]: time="2025-02-13T19:48:53.936189302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:48:53.937309 containerd[1939]: time="2025-02-13T19:48:53.936219578Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:48:53.937309 containerd[1939]: time="2025-02-13T19:48:53.936253454Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:48:53.937309 containerd[1939]: time="2025-02-13T19:48:53.936297530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:48:53.937309 containerd[1939]: time="2025-02-13T19:48:53.936349262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:48:53.937947 containerd[1939]: time="2025-02-13T19:48:53.936380498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:48:53.937947 containerd[1939]: time="2025-02-13T19:48:53.936414746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:48:53.937947 containerd[1939]: time="2025-02-13T19:48:53.936457010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:48:53.937947 containerd[1939]: time="2025-02-13T19:48:53.936494810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:48:53.937947 containerd[1939]: time="2025-02-13T19:48:53.936525362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:48:53.937947 containerd[1939]: time="2025-02-13T19:48:53.936559046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 13 19:48:53.937947 containerd[1939]: time="2025-02-13T19:48:53.936592634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:48:53.937947 containerd[1939]: time="2025-02-13T19:48:53.936632258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:48:53.937947 containerd[1939]: time="2025-02-13T19:48:53.936664778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:48:53.937947 containerd[1939]: time="2025-02-13T19:48:53.936697226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:48:53.940506 containerd[1939]: time="2025-02-13T19:48:53.938727842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:48:53.940506 containerd[1939]: time="2025-02-13T19:48:53.938805842Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:48:53.947495 containerd[1939]: time="2025-02-13T19:48:53.941790374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:48:53.947495 containerd[1939]: time="2025-02-13T19:48:53.941864246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:48:53.947495 containerd[1939]: time="2025-02-13T19:48:53.941895782Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:48:53.947495 containerd[1939]: time="2025-02-13T19:48:53.942070154Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:48:53.947495 containerd[1939]: time="2025-02-13T19:48:53.942268106Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:48:53.947495 containerd[1939]: time="2025-02-13T19:48:53.942300878Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:48:53.947495 containerd[1939]: time="2025-02-13T19:48:53.942331670Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:48:53.947495 containerd[1939]: time="2025-02-13T19:48:53.942357518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:48:53.947495 containerd[1939]: time="2025-02-13T19:48:53.942387218Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:48:53.947495 containerd[1939]: time="2025-02-13T19:48:53.942411266Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:48:53.947495 containerd[1939]: time="2025-02-13T19:48:53.942437018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:48:53.948190 containerd[1939]: time="2025-02-13T19:48:53.942929258Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:48:53.948190 containerd[1939]: time="2025-02-13T19:48:53.943039598Z" level=info msg="Connect containerd service" Feb 13 19:48:53.948190 containerd[1939]: time="2025-02-13T19:48:53.943098590Z" level=info msg="using legacy CRI server" Feb 13 19:48:53.948190 containerd[1939]: time="2025-02-13T19:48:53.943118006Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:48:53.948190 containerd[1939]: time="2025-02-13T19:48:53.944989634Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:48:53.952549 containerd[1939]: time="2025-02-13T19:48:53.951919334Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:48:53.957166 
containerd[1939]: time="2025-02-13T19:48:53.952699466Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:48:53.957166 containerd[1939]: time="2025-02-13T19:48:53.952812590Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:48:53.957166 containerd[1939]: time="2025-02-13T19:48:53.952903550Z" level=info msg="Start subscribing containerd event" Feb 13 19:48:53.957166 containerd[1939]: time="2025-02-13T19:48:53.952968014Z" level=info msg="Start recovering state" Feb 13 19:48:53.965224 containerd[1939]: time="2025-02-13T19:48:53.964971098Z" level=info msg="Start event monitor" Feb 13 19:48:53.988380 containerd[1939]: time="2025-02-13T19:48:53.984389918Z" level=info msg="Start snapshots syncer" Feb 13 19:48:53.988380 containerd[1939]: time="2025-02-13T19:48:53.984480002Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:48:53.988380 containerd[1939]: time="2025-02-13T19:48:53.984529142Z" level=info msg="Start streaming server" Feb 13 19:48:53.988380 containerd[1939]: time="2025-02-13T19:48:53.986362226Z" level=info msg="containerd successfully booted in 0.457077s" Feb 13 19:48:53.993252 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:48:53.998344 amazon-ssm-agent[2067]: 2025-02-13 19:48:53 INFO no_proxy: Feb 13 19:48:54.099233 amazon-ssm-agent[2067]: 2025-02-13 19:48:53 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:48:54.203832 amazon-ssm-agent[2067]: 2025-02-13 19:48:53 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:48:54.304995 amazon-ssm-agent[2067]: 2025-02-13 19:48:54 INFO Agent will take identity from EC2 Feb 13 19:48:54.405296 amazon-ssm-agent[2067]: 2025-02-13 19:48:54 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:48:54.506097 amazon-ssm-agent[2067]: 2025-02-13 19:48:54 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:48:54.604591 amazon-ssm-agent[2067]: 2025-02-13 19:48:54 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:48:54.703798 amazon-ssm-agent[2067]: 2025-02-13 19:48:54 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:48:54.805140 amazon-ssm-agent[2067]: 2025-02-13 19:48:54 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:48:54.907191 amazon-ssm-agent[2067]: 2025-02-13 19:48:54 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:48:55.009246 amazon-ssm-agent[2067]: 2025-02-13 19:48:54 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:48:55.059518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:48:55.076746 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:48:55.111084 amazon-ssm-agent[2067]: 2025-02-13 19:48:54 INFO [Registrar] Starting registrar module Feb 13 19:48:55.207658 tar[1926]: linux-arm64/LICENSE Feb 13 19:48:55.207658 tar[1926]: linux-arm64/README.md Feb 13 19:48:55.211050 amazon-ssm-agent[2067]: 2025-02-13 19:48:54 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:48:55.252298 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:48:55.311651 amazon-ssm-agent[2067]: 2025-02-13 19:48:55 INFO [EC2Identity] EC2 registration was successful. 
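The earlier CRI warning "no network config found in /etc/cni/net.d: cni plugin not initialized" is expected at this stage: the directory is empty until a network add-on installs a conflist. Purely as an illustration of the file shape containerd's loader expects (the bridge/host-local/portmap plugins and the 10.88.0.0/16 subnet below are generic examples, not what this node will actually use):

```python
# Illustrative only: write a minimal CNI .conflist of the shape the CRI plugin
# looks for under /etc/cni/net.d (bridge + host-local + portmap as examples).
import json
from pathlib import Path

conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = Path("/etc/cni/net.d/10-example.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conflist, indent=2) + "\n")
print("wrote", path)
```

containerd picks up new conflists from that directory without a restart via the "cni network conf syncer" it started just above.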
Feb 13 19:48:55.329287 amazon-ssm-agent[2067]: 2025-02-13 19:48:55 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:48:55.329287 amazon-ssm-agent[2067]: 2025-02-13 19:48:55 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:48:55.329287 amazon-ssm-agent[2067]: 2025-02-13 19:48:55 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:48:55.413524 amazon-ssm-agent[2067]: 2025-02-13 19:48:55 INFO [CredentialRefresher] Next credential rotation will be in 30.8999927926 minutes Feb 13 19:48:55.467585 sshd_keygen[1932]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:48:55.522045 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:48:55.534870 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:48:55.540835 systemd[1]: Started sshd@0-172.31.20.134:22-139.178.89.65:50186.service - OpenSSH per-connection server daemon (139.178.89.65:50186). Feb 13 19:48:55.560696 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:48:55.562343 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:48:55.572314 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:48:55.626173 ntpd[1898]: 13 Feb 19:48:55 ntpd[1898]: Listen normally on 6 eth0 [fe80::44a:f8ff:fe8f:2467%2]:123 Feb 13 19:48:55.625795 ntpd[1898]: Listen normally on 6 eth0 [fe80::44a:f8ff:fe8f:2467%2]:123 Feb 13 19:48:55.630188 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:48:55.643242 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:48:55.653337 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:48:55.656928 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:48:55.659072 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:48:55.663390 systemd[1]: Startup finished in 1.240s (kernel) + 9.858s (initrd) + 9.053s (userspace) = 20.152s. Feb 13 19:48:55.797282 sshd[2140]: Accepted publickey for core from 139.178.89.65 port 50186 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:48:55.799119 sshd[2140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:55.817314 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:48:55.825858 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:48:55.834845 systemd-logind[1908]: New session 1 of user core. Feb 13 19:48:55.871451 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:48:55.884264 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:48:55.905092 (systemd)[2156]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:48:55.924589 kubelet[2123]: E0213 19:48:55.924514 2123 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:48:55.929778 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:48:55.930646 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:48:55.939254 systemd[1]: kubelet.service: Consumed 1.300s CPU time. 
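The kubelet exits because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is written during bootstrap, so the unit simply keeps restarting until then (the same failure repeats twice more below). For illustration only, a minimal KubeletConfiguration of the kind expected at that path, written out with the standard library; field values here are generic, except staticPodPath, which matches the path the kubelet later logs:

```python
# Illustrative only: the shape of the file the kubelet is looking for at
# /var/lib/kubelet/config.yaml (a kubelet.config.k8s.io/v1beta1 document).
from pathlib import Path

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
"""

path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(MINIMAL_KUBELET_CONFIG)
print("wrote", path)
```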
Feb 13 19:48:56.140218 systemd[2156]: Queued start job for default target default.target. Feb 13 19:48:56.152076 systemd[2156]: Created slice app.slice - User Application Slice. Feb 13 19:48:56.152171 systemd[2156]: Reached target paths.target - Paths. Feb 13 19:48:56.152206 systemd[2156]: Reached target timers.target - Timers. Feb 13 19:48:56.154935 systemd[2156]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:48:56.185603 systemd[2156]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:48:56.185840 systemd[2156]: Reached target sockets.target - Sockets. Feb 13 19:48:56.185874 systemd[2156]: Reached target basic.target - Basic System. Feb 13 19:48:56.185981 systemd[2156]: Reached target default.target - Main User Target. Feb 13 19:48:56.186057 systemd[2156]: Startup finished in 266ms. Feb 13 19:48:56.186150 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:48:56.194436 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:48:56.358645 systemd[1]: Started sshd@1-172.31.20.134:22-139.178.89.65:56786.service - OpenSSH per-connection server daemon (139.178.89.65:56786). Feb 13 19:48:56.380250 amazon-ssm-agent[2067]: 2025-02-13 19:48:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:48:56.479795 amazon-ssm-agent[2067]: 2025-02-13 19:48:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2170) started Feb 13 19:48:56.558502 sshd[2169]: Accepted publickey for core from 139.178.89.65 port 56786 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:48:56.562477 sshd[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:56.570744 systemd-logind[1908]: New session 2 of user core. Feb 13 19:48:56.580469 amazon-ssm-agent[2067]: 2025-02-13 19:48:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:48:56.582169 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:48:56.716465 sshd[2169]: pam_unix(sshd:session): session closed for user core Feb 13 19:48:56.721577 systemd[1]: sshd@1-172.31.20.134:22-139.178.89.65:56786.service: Deactivated successfully. Feb 13 19:48:56.726231 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:48:56.728850 systemd-logind[1908]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:48:56.731793 systemd-logind[1908]: Removed session 2. Feb 13 19:48:56.755753 systemd[1]: Started sshd@2-172.31.20.134:22-139.178.89.65:56802.service - OpenSSH per-connection server daemon (139.178.89.65:56802). Feb 13 19:48:56.933924 sshd[2185]: Accepted publickey for core from 139.178.89.65 port 56802 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:48:56.936881 sshd[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:56.946466 systemd-logind[1908]: New session 3 of user core. Feb 13 19:48:56.953466 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:48:57.071426 sshd[2185]: pam_unix(sshd:session): session closed for user core Feb 13 19:48:57.078885 systemd[1]: sshd@2-172.31.20.134:22-139.178.89.65:56802.service: Deactivated successfully. Feb 13 19:48:57.082374 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:48:57.083539 systemd-logind[1908]: Session 3 logged out. Waiting for processes to exit. 
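Each accepted login above logs the key's SHA256 fingerprint (SHA256:H27J0U/...). That can be mapped back to an entry in the authorized_keys file update-ssh-keys wrote earlier by comparing against `ssh-keygen -lf` output; a small sketch, assuming OpenSSH's ssh-keygen is on PATH and using the file path from the log:

```python
# Match a fingerprint from an "Accepted publickey" log line against the keys
# in /home/core/.ssh/authorized_keys using `ssh-keygen -lf`.
import subprocess

AUTHORIZED_KEYS = "/home/core/.ssh/authorized_keys"
WANTED = "SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4"

def fingerprints(path):
    # One line per key: "<bits> <fingerprint> <comment> (<type>)"
    out = subprocess.run(
        ["ssh-keygen", "-lf", path], capture_output=True, text=True, check=True
    ).stdout
    for line in out.splitlines():
        yield line.split()[1], line

if __name__ == "__main__":
    for fp, line in fingerprints(AUTHORIZED_KEYS):
        marker = "<== matches sshd log" if fp == WANTED else ""
        print(line, marker)
```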
Feb 13 19:48:57.085434 systemd-logind[1908]: Removed session 3. Feb 13 19:48:57.113702 systemd[1]: Started sshd@3-172.31.20.134:22-139.178.89.65:56810.service - OpenSSH per-connection server daemon (139.178.89.65:56810). Feb 13 19:48:57.289925 sshd[2192]: Accepted publickey for core from 139.178.89.65 port 56810 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:48:57.292914 sshd[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:57.301897 systemd-logind[1908]: New session 4 of user core. Feb 13 19:48:57.305445 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:48:57.431900 sshd[2192]: pam_unix(sshd:session): session closed for user core Feb 13 19:48:57.439982 systemd[1]: sshd@3-172.31.20.134:22-139.178.89.65:56810.service: Deactivated successfully. Feb 13 19:48:57.444040 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:48:57.445982 systemd-logind[1908]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:48:57.448457 systemd-logind[1908]: Removed session 4. Feb 13 19:48:57.481057 systemd[1]: Started sshd@4-172.31.20.134:22-139.178.89.65:56824.service - OpenSSH per-connection server daemon (139.178.89.65:56824). Feb 13 19:48:57.660545 sshd[2199]: Accepted publickey for core from 139.178.89.65 port 56824 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:48:57.663173 sshd[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:48:57.671645 systemd-logind[1908]: New session 5 of user core. Feb 13 19:48:57.678421 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:48:57.793915 sudo[2202]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:48:57.794660 sudo[2202]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:48:58.324732 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:48:58.335659 (dockerd)[2218]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:48:58.747613 dockerd[2218]: time="2025-02-13T19:48:58.747512178Z" level=info msg="Starting up" Feb 13 19:48:58.893659 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2095843810-merged.mount: Deactivated successfully. Feb 13 19:48:59.354338 dockerd[2218]: time="2025-02-13T19:48:59.353682545Z" level=info msg="Loading containers: start." Feb 13 19:48:59.540281 kernel: Initializing XFRM netlink socket Feb 13 19:48:59.573157 (udev-worker)[2242]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:48:59.258412 systemd-resolved[1716]: Clock change detected. Flushing caches. Feb 13 19:48:59.268463 systemd-journald[1481]: Time jumped backwards, rotating. Feb 13 19:48:59.320929 systemd-networkd[1758]: docker0: Link UP Feb 13 19:48:59.349347 dockerd[2218]: time="2025-02-13T19:48:59.349276928Z" level=info msg="Loading containers: done." Feb 13 19:48:59.374686 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3256579840-merged.mount: Deactivated successfully. 
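Note that the timestamps run backwards across the "Clock change detected. Flushing caches." / "Time jumped backwards, rotating." entries (19:48:59.573157 is followed by 19:48:59.258412): the system clock was stepped while Docker was starting, so wall-clock order briefly disagrees with event order. The apparent jump, computed from the two logged timestamps:

```python
# The wall-clock step visible above: the entry after the clock change carries
# an *earlier* timestamp than the entry before it.
from datetime import datetime

before = datetime.strptime("19:48:59.573157", "%H:%M:%S.%f")  # udev-worker entry
after = datetime.strptime("19:48:59.258412", "%H:%M:%S.%f")   # systemd-resolved entry

delta = after - before
print(f"apparent jump: {delta.total_seconds() * 1000:.1f} ms")  # about -315 ms
```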
Feb 13 19:48:59.380143 dockerd[2218]: time="2025-02-13T19:48:59.380039540Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:48:59.381079 dockerd[2218]: time="2025-02-13T19:48:59.380291348Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 19:48:59.381079 dockerd[2218]: time="2025-02-13T19:48:59.380943224Z" level=info msg="Daemon has completed initialization" Feb 13 19:48:59.443900 dockerd[2218]: time="2025-02-13T19:48:59.443808212Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:48:59.444892 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:49:00.419024 containerd[1939]: time="2025-02-13T19:49:00.418929057Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 19:49:01.106582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925055918.mount: Deactivated successfully. Feb 13 19:49:03.362621 containerd[1939]: time="2025-02-13T19:49:03.362192916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:03.364444 containerd[1939]: time="2025-02-13T19:49:03.364388856Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375" Feb 13 19:49:03.365730 containerd[1939]: time="2025-02-13T19:49:03.365642964Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:03.376599 containerd[1939]: time="2025-02-13T19:49:03.374960892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:03.378133 containerd[1939]: time="2025-02-13T19:49:03.378029136Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 2.958995331s" Feb 13 19:49:03.378338 containerd[1939]: time="2025-02-13T19:49:03.378133068Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 19:49:03.381064 containerd[1939]: time="2025-02-13T19:49:03.381011676Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 19:49:05.610100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:49:05.619468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
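containerd logs both the bytes read and the wall time for each pull (25,620,375 bytes in 2.958995331s for kube-apiserver:v1.31.6 above), so an effective transfer rate falls straight out of the log; the same arithmetic applies to the later pulls:

```python
# Effective pull rate for the kube-apiserver image, using the numbers that
# containerd logged above ("bytes read" and the "in ...s" duration).
bytes_read = 25_620_375          # "active requests=0, bytes read=25620375"
seconds = 2.958995331            # "... in 2.958995331s"

rate = bytes_read / seconds
print(f"{rate / 1e6:.2f} MB/s ({rate / 2**20:.2f} MiB/s)")
```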
Feb 13 19:49:06.074181 containerd[1939]: time="2025-02-13T19:49:06.073982905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:06.092421 containerd[1939]: time="2025-02-13T19:49:06.092338969Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773" Feb 13 19:49:06.110820 containerd[1939]: time="2025-02-13T19:49:06.109209074Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:06.131354 containerd[1939]: time="2025-02-13T19:49:06.131240366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:06.133946 containerd[1939]: time="2025-02-13T19:49:06.133875554Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 2.75263421s" Feb 13 19:49:06.134045 containerd[1939]: time="2025-02-13T19:49:06.133941926Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 19:49:06.134045 containerd[1939]: time="2025-02-13T19:49:06.134889590Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 19:49:06.257359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:06.276675 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:49:06.361102 kubelet[2426]: E0213 19:49:06.361006 2426 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:49:06.369291 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:49:06.369699 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
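The recurring "Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS" notice is harmless: the unit references those variables but nothing defines them yet. If extra flags were wanted, one standard way is a systemd drop-in that sets the variable the unit already references; a sketch only (the drop-in name, location, and the example --node-labels value are my choices, not from this log):

```python
# Sketch: define KUBELET_EXTRA_ARGS via a systemd drop-in so the existing
# reference in kubelet.service resolves to something non-empty.
# (Drop-in path/name and the label value are chosen for illustration.)
from pathlib import Path

dropin = Path("/etc/systemd/system/kubelet.service.d/20-extra-args.conf")
dropin.parent.mkdir(parents=True, exist_ok=True)
dropin.write_text(
    "[Service]\n"
    'Environment="KUBELET_EXTRA_ARGS=--node-labels=example.com/role=worker"\n'
)
print("wrote", dropin, "- run: systemctl daemon-reload && systemctl restart kubelet")
```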
Feb 13 19:49:08.181402 containerd[1939]: time="2025-02-13T19:49:08.181015012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:08.183327 containerd[1939]: time="2025-02-13T19:49:08.183247360Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540" Feb 13 19:49:08.184810 containerd[1939]: time="2025-02-13T19:49:08.184741432Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:08.190686 containerd[1939]: time="2025-02-13T19:49:08.190595668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:08.193244 containerd[1939]: time="2025-02-13T19:49:08.193015300Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 2.058058726s" Feb 13 19:49:08.193244 containerd[1939]: time="2025-02-13T19:49:08.193073776Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 19:49:08.194106 containerd[1939]: time="2025-02-13T19:49:08.194020672Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:49:09.927635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1114602019.mount: Deactivated successfully. 
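Unit names like var-lib-containerd-tmpmounts-containerd\x2dmount1114602019.mount use systemd's path escaping: "/" becomes "-" and a literal "-" inside a path component becomes \x2d (what `systemd-escape --unescape --path` reverses). A small decoder for the names seen in this log, as a sketch of that rule:

```python
# Decode systemd-escaped mount/slice names back to paths: path separators are
# "-", and escaped bytes appear as \xNN (e.g. \x2d for a literal "-").
import re

def unescape_unit_path(name):
    # Strip the unit suffix, split on "-" (the escaped "/"), then expand \xNN.
    stem = name.rsplit(".", 1)[0]
    parts = [
        re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), part)
        for part in stem.split("-")
    ]
    return "/" + "/".join(parts)

if __name__ == "__main__":
    for unit in (
        r"var-lib-containerd-tmpmounts-containerd\x2dmount1114602019.mount",
        r"system-coreos\x2dmetadata\x2dsshkeys.slice",
    ):
        print(unit, "->", unescape_unit_path(unit))
```

Decoding the slice created earlier gives /system/coreos-metadata-sshkeys, matching the description systemd printed for it.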
Feb 13 19:49:10.450512 containerd[1939]: time="2025-02-13T19:49:10.450059131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:10.452252 containerd[1939]: time="2025-02-13T19:49:10.451887403Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256" Feb 13 19:49:10.453721 containerd[1939]: time="2025-02-13T19:49:10.453663607Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:10.458082 containerd[1939]: time="2025-02-13T19:49:10.457998919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:10.459673 containerd[1939]: time="2025-02-13T19:49:10.459608203Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 2.265267911s" Feb 13 19:49:10.459768 containerd[1939]: time="2025-02-13T19:49:10.459668995Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 19:49:10.460397 containerd[1939]: time="2025-02-13T19:49:10.460335643Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:49:11.103717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1682689241.mount: Deactivated successfully. 
Feb 13 19:49:12.646616 containerd[1939]: time="2025-02-13T19:49:12.646526758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.649177 containerd[1939]: time="2025-02-13T19:49:12.649101766Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 19:49:12.649455 containerd[1939]: time="2025-02-13T19:49:12.649403890Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.655498 containerd[1939]: time="2025-02-13T19:49:12.655432774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:12.658281 containerd[1939]: time="2025-02-13T19:49:12.658230718Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.197831703s" Feb 13 19:49:12.658421 containerd[1939]: time="2025-02-13T19:49:12.658391266Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:49:12.659301 containerd[1939]: time="2025-02-13T19:49:12.659138722Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:49:13.211674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896822718.mount: Deactivated successfully. 
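Each pull ends with a repo digest (registry.k8s.io/coredns/coredns@sha256:1eeb4c73... above). Registry digests are content-addressed, the SHA-256 of the referenced manifest bytes, so they can be re-checked offline. A sketch, assuming the manifest has been saved to a local file (the coredns-manifest.json path is hypothetical):

```python
# Verify an OCI/registry content digest: "sha256:<hex>" is simply the SHA-256
# of the referenced bytes (here, an image manifest saved to a local file).
import hashlib
from pathlib import Path

EXPECTED = "sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"
MANIFEST = Path("coredns-manifest.json")   # hypothetical local copy of the manifest

digest = "sha256:" + hashlib.sha256(MANIFEST.read_bytes()).hexdigest()
print("computed:", digest)
print("match" if digest == EXPECTED else "MISMATCH")
```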
Feb 13 19:49:13.221404 containerd[1939]: time="2025-02-13T19:49:13.221274465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:13.223117 containerd[1939]: time="2025-02-13T19:49:13.223053789Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Feb 13 19:49:13.225572 containerd[1939]: time="2025-02-13T19:49:13.225440085Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:13.231750 containerd[1939]: time="2025-02-13T19:49:13.231616077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:13.233698 containerd[1939]: time="2025-02-13T19:49:13.233326473Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 573.860223ms" Feb 13 19:49:13.233698 containerd[1939]: time="2025-02-13T19:49:13.233427237Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 19:49:13.235281 containerd[1939]: time="2025-02-13T19:49:13.234982401Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 19:49:13.860267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3946112915.mount: Deactivated successfully. Feb 13 19:49:16.609997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:49:16.621797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:17.124117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:17.137321 (kubelet)[2554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:49:17.234666 kubelet[2554]: E0213 19:49:17.234216 2554 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:49:17.242037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:49:17.242374 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 19:49:17.517340 containerd[1939]: time="2025-02-13T19:49:17.517103870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:17.520882 containerd[1939]: time="2025-02-13T19:49:17.520218146Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Feb 13 19:49:17.527414 containerd[1939]: time="2025-02-13T19:49:17.526043438Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:17.559142 containerd[1939]: time="2025-02-13T19:49:17.559066934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:17.562137 containerd[1939]: time="2025-02-13T19:49:17.562057226Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.327019349s" Feb 13 19:49:17.562137 containerd[1939]: time="2025-02-13T19:49:17.562123970Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 19:49:23.064412 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 19:49:26.623915 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:26.637238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:26.708811 systemd[1]: Reloading requested from client PID 2592 ('systemctl') (unit session-5.scope)... Feb 13 19:49:26.708837 systemd[1]: Reloading... Feb 13 19:49:26.962634 zram_generator::config[2635]: No configuration found. Feb 13 19:49:27.228444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:49:27.400498 systemd[1]: Reloading finished in 691 ms. Feb 13 19:49:27.499285 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:27.505940 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:49:27.506435 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:27.513225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:27.888073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:27.904124 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:49:27.980225 kubelet[2697]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:27.980225 kubelet[2697]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 19:49:27.980225 kubelet[2697]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:27.983048 kubelet[2697]: I0213 19:49:27.980434 2697 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:49:29.077997 kubelet[2697]: I0213 19:49:29.077946 2697 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:49:29.078740 kubelet[2697]: I0213 19:49:29.078691 2697 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:49:29.079365 kubelet[2697]: I0213 19:49:29.079316 2697 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:49:29.130384 kubelet[2697]: E0213 19:49:29.130281 2697 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.20.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:29.132931 kubelet[2697]: I0213 19:49:29.132653 2697 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:49:29.155779 kubelet[2697]: E0213 19:49:29.155727 2697 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:49:29.156287 kubelet[2697]: I0213 19:49:29.156051 2697 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:49:29.165351 kubelet[2697]: I0213 19:49:29.163621 2697 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:49:29.165351 kubelet[2697]: I0213 19:49:29.163879 2697 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:49:29.165351 kubelet[2697]: I0213 19:49:29.164175 2697 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:49:29.165351 kubelet[2697]: I0213 19:49:29.164215 2697 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-134","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:49:29.165823 kubelet[2697]: I0213 19:49:29.164613 2697 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:49:29.165823 kubelet[2697]: I0213 19:49:29.164635 2697 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:49:29.165823 kubelet[2697]: I0213 19:49:29.164843 2697 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:29.169333 kubelet[2697]: I0213 19:49:29.168755 2697 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:49:29.169333 kubelet[2697]: I0213 19:49:29.168808 2697 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:49:29.169333 kubelet[2697]: I0213 19:49:29.168846 2697 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:49:29.169333 kubelet[2697]: I0213 19:49:29.168866 2697 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:49:29.173318 kubelet[2697]: W0213 19:49:29.173228 2697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-134&limit=500&resourceVersion=0": dial tcp 172.31.20.134:6443: connect: connection refused Feb 13 19:49:29.173437 kubelet[2697]: E0213 19:49:29.173334 2697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.20.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-134&limit=500&resourceVersion=0\": dial tcp 172.31.20.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:29.177337 kubelet[2697]: W0213 19:49:29.176847 2697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.134:6443: connect: connection refused Feb 13 19:49:29.177337 kubelet[2697]: E0213 19:49:29.176951 2697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:29.177337 kubelet[2697]: I0213 19:49:29.177122 2697 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:49:29.181228 kubelet[2697]: I0213 19:49:29.180919 2697 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:49:29.182645 kubelet[2697]: W0213 19:49:29.182332 2697 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:49:29.186608 kubelet[2697]: I0213 19:49:29.186413 2697 server.go:1269] "Started kubelet" Feb 13 19:49:29.187026 kubelet[2697]: I0213 19:49:29.186639 2697 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:49:29.189103 kubelet[2697]: I0213 19:49:29.189024 2697 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:49:29.192611 kubelet[2697]: I0213 19:49:29.191812 2697 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:49:29.192611 kubelet[2697]: I0213 19:49:29.192386 2697 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:49:29.194324 kubelet[2697]: I0213 19:49:29.194248 2697 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:49:29.196615 kubelet[2697]: E0213 19:49:29.192974 2697 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.134:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.134:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-134.1823dc5c179c751c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-134,UID:ip-172-31-20-134,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-134,},FirstTimestamp:2025-02-13 19:49:29.18637494 +0000 UTC m=+1.275318883,LastTimestamp:2025-02-13 19:49:29.18637494 +0000 UTC m=+1.275318883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-134,}" Feb 13 19:49:29.197506 kubelet[2697]: I0213 19:49:29.197467 2697 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:49:29.202437 kubelet[2697]: E0213 19:49:29.202396 2697 kubelet_node_status.go:453] "Error getting the 
current node from lister" err="node \"ip-172-31-20-134\" not found" Feb 13 19:49:29.203609 kubelet[2697]: I0213 19:49:29.203528 2697 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:49:29.204390 kubelet[2697]: I0213 19:49:29.204356 2697 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:49:29.204666 kubelet[2697]: I0213 19:49:29.204645 2697 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:49:29.206230 kubelet[2697]: W0213 19:49:29.206051 2697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.134:6443: connect: connection refused Feb 13 19:49:29.206230 kubelet[2697]: E0213 19:49:29.206146 2697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:29.207231 kubelet[2697]: I0213 19:49:29.206961 2697 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:49:29.207231 kubelet[2697]: I0213 19:49:29.207130 2697 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:49:29.210954 kubelet[2697]: E0213 19:49:29.210674 2697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-134?timeout=10s\": dial tcp 172.31.20.134:6443: connect: connection refused" interval="200ms" Feb 13 19:49:29.212795 kubelet[2697]: E0213 19:49:29.212315 2697 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:49:29.213379 kubelet[2697]: I0213 19:49:29.213344 2697 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:49:29.234923 kubelet[2697]: I0213 19:49:29.234807 2697 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:49:29.238165 kubelet[2697]: I0213 19:49:29.238079 2697 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:49:29.238165 kubelet[2697]: I0213 19:49:29.238139 2697 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:49:29.238165 kubelet[2697]: I0213 19:49:29.238176 2697 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:49:29.238398 kubelet[2697]: E0213 19:49:29.238267 2697 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:49:29.256504 kubelet[2697]: W0213 19:49:29.256404 2697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.134:6443: connect: connection refused Feb 13 19:49:29.257619 kubelet[2697]: E0213 19:49:29.256511 2697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:29.266274 kubelet[2697]: I0213 19:49:29.266239 2697 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:49:29.266625 kubelet[2697]: I0213 19:49:29.266601 2697 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:49:29.266760 kubelet[2697]: I0213 19:49:29.266740 2697 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:29.269982 kubelet[2697]: I0213 19:49:29.269952 2697 policy_none.go:49] "None policy: Start" Feb 13 19:49:29.271478 kubelet[2697]: I0213 19:49:29.271419 2697 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:49:29.271629 kubelet[2697]: I0213 19:49:29.271489 2697 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:49:29.286236 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:49:29.303617 kubelet[2697]: E0213 19:49:29.303580 2697 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-20-134\" not found" Feb 13 19:49:29.304699 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:49:29.310997 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:49:29.324227 kubelet[2697]: I0213 19:49:29.324152 2697 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:49:29.324635 kubelet[2697]: I0213 19:49:29.324454 2697 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:49:29.324635 kubelet[2697]: I0213 19:49:29.324486 2697 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:49:29.329489 kubelet[2697]: I0213 19:49:29.328457 2697 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:49:29.333870 kubelet[2697]: E0213 19:49:29.333724 2697 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-134\" not found" Feb 13 19:49:29.359382 systemd[1]: Created slice kubepods-burstable-pod3402de113a228d4177b3bb375c1ef25d.slice - libcontainer container kubepods-burstable-pod3402de113a228d4177b3bb375c1ef25d.slice. 
Feb 13 19:49:29.388333 systemd[1]: Created slice kubepods-burstable-pod4f638f113812fff74ed102452ac41ebb.slice - libcontainer container kubepods-burstable-pod4f638f113812fff74ed102452ac41ebb.slice. Feb 13 19:49:29.405488 kubelet[2697]: I0213 19:49:29.405182 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3402de113a228d4177b3bb375c1ef25d-ca-certs\") pod \"kube-apiserver-ip-172-31-20-134\" (UID: \"3402de113a228d4177b3bb375c1ef25d\") " pod="kube-system/kube-apiserver-ip-172-31-20-134" Feb 13 19:49:29.407482 systemd[1]: Created slice kubepods-burstable-pode7aa04c99a04f8b57fe6e22f656b3e24.slice - libcontainer container kubepods-burstable-pode7aa04c99a04f8b57fe6e22f656b3e24.slice. Feb 13 19:49:29.412081 kubelet[2697]: E0213 19:49:29.411831 2697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-134?timeout=10s\": dial tcp 172.31.20.134:6443: connect: connection refused" interval="400ms" Feb 13 19:49:29.428514 kubelet[2697]: I0213 19:49:29.428461 2697 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-134" Feb 13 19:49:29.429766 kubelet[2697]: E0213 19:49:29.429644 2697 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.134:6443/api/v1/nodes\": dial tcp 172.31.20.134:6443: connect: connection refused" node="ip-172-31-20-134" Feb 13 19:49:29.506140 kubelet[2697]: I0213 19:49:29.506063 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3402de113a228d4177b3bb375c1ef25d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-134\" (UID: \"3402de113a228d4177b3bb375c1ef25d\") " pod="kube-system/kube-apiserver-ip-172-31-20-134" Feb 13 19:49:29.506354 kubelet[2697]: I0213 19:49:29.506153 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f638f113812fff74ed102452ac41ebb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-134\" (UID: \"4f638f113812fff74ed102452ac41ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-20-134" Feb 13 19:49:29.506354 kubelet[2697]: I0213 19:49:29.506196 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7aa04c99a04f8b57fe6e22f656b3e24-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-134\" (UID: \"e7aa04c99a04f8b57fe6e22f656b3e24\") " pod="kube-system/kube-scheduler-ip-172-31-20-134" Feb 13 19:49:29.506354 kubelet[2697]: I0213 19:49:29.506287 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3402de113a228d4177b3bb375c1ef25d-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-134\" (UID: \"3402de113a228d4177b3bb375c1ef25d\") " pod="kube-system/kube-apiserver-ip-172-31-20-134" Feb 13 19:49:29.506533 kubelet[2697]: I0213 19:49:29.506460 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f638f113812fff74ed102452ac41ebb-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-134\" (UID: \"4f638f113812fff74ed102452ac41ebb\") " 
pod="kube-system/kube-controller-manager-ip-172-31-20-134" Feb 13 19:49:29.506618 kubelet[2697]: I0213 19:49:29.506540 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f638f113812fff74ed102452ac41ebb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-134\" (UID: \"4f638f113812fff74ed102452ac41ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-20-134" Feb 13 19:49:29.506618 kubelet[2697]: I0213 19:49:29.506605 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f638f113812fff74ed102452ac41ebb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-134\" (UID: \"4f638f113812fff74ed102452ac41ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-20-134" Feb 13 19:49:29.506735 kubelet[2697]: I0213 19:49:29.506641 2697 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f638f113812fff74ed102452ac41ebb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-134\" (UID: \"4f638f113812fff74ed102452ac41ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-20-134" Feb 13 19:49:29.632479 kubelet[2697]: I0213 19:49:29.632435 2697 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-134" Feb 13 19:49:29.632961 kubelet[2697]: E0213 19:49:29.632909 2697 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.134:6443/api/v1/nodes\": dial tcp 172.31.20.134:6443: connect: connection refused" node="ip-172-31-20-134" Feb 13 19:49:29.685589 containerd[1939]: time="2025-02-13T19:49:29.685185891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-134,Uid:3402de113a228d4177b3bb375c1ef25d,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:29.701380 containerd[1939]: time="2025-02-13T19:49:29.701316999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-134,Uid:4f638f113812fff74ed102452ac41ebb,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:29.714488 containerd[1939]: time="2025-02-13T19:49:29.714227187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-134,Uid:e7aa04c99a04f8b57fe6e22f656b3e24,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:29.813661 kubelet[2697]: E0213 19:49:29.813085 2697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-134?timeout=10s\": dial tcp 172.31.20.134:6443: connect: connection refused" interval="800ms" Feb 13 19:49:30.036928 kubelet[2697]: I0213 19:49:30.036681 2697 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-134" Feb 13 19:49:30.038064 kubelet[2697]: E0213 19:49:30.037395 2697 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.134:6443/api/v1/nodes\": dial tcp 172.31.20.134:6443: connect: connection refused" node="ip-172-31-20-134" Feb 13 19:49:30.116367 kubelet[2697]: W0213 19:49:30.116283 2697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.134:6443: connect: connection refused Feb 13 
19:49:30.117020 kubelet[2697]: E0213 19:49:30.116381 2697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:30.214141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3799190855.mount: Deactivated successfully. Feb 13 19:49:30.228825 containerd[1939]: time="2025-02-13T19:49:30.228351397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:30.230289 containerd[1939]: time="2025-02-13T19:49:30.230182585Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:49:30.231895 containerd[1939]: time="2025-02-13T19:49:30.231839821Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:30.234350 containerd[1939]: time="2025-02-13T19:49:30.234261085Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:30.235337 containerd[1939]: time="2025-02-13T19:49:30.235127485Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:49:30.236038 containerd[1939]: time="2025-02-13T19:49:30.235982869Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:30.237253 containerd[1939]: time="2025-02-13T19:49:30.237130729Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:49:30.241092 containerd[1939]: time="2025-02-13T19:49:30.240873361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:49:30.246465 containerd[1939]: time="2025-02-13T19:49:30.246382789Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 561.070286ms" Feb 13 19:49:30.251364 containerd[1939]: time="2025-02-13T19:49:30.250888753Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 536.54039ms" Feb 13 19:49:30.272345 containerd[1939]: time="2025-02-13T19:49:30.271787654Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 570.348531ms" Feb 13 19:49:30.352493 kubelet[2697]: W0213 19:49:30.351918 2697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.134:6443: connect: connection refused Feb 13 19:49:30.352493 kubelet[2697]: E0213 19:49:30.351993 2697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:30.363121 kubelet[2697]: W0213 19:49:30.363030 2697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.134:6443: connect: connection refused Feb 13 19:49:30.363290 kubelet[2697]: E0213 19:49:30.363134 2697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:30.406457 kubelet[2697]: W0213 19:49:30.406305 2697 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-134&limit=500&resourceVersion=0": dial tcp 172.31.20.134:6443: connect: connection refused Feb 13 19:49:30.406457 kubelet[2697]: E0213 19:49:30.406405 2697 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-134&limit=500&resourceVersion=0\": dial tcp 172.31.20.134:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:49:30.472363 containerd[1939]: time="2025-02-13T19:49:30.471159339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:30.472363 containerd[1939]: time="2025-02-13T19:49:30.471264003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:30.472363 containerd[1939]: time="2025-02-13T19:49:30.471310455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:30.475252 containerd[1939]: time="2025-02-13T19:49:30.474973047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:30.476943 containerd[1939]: time="2025-02-13T19:49:30.476760795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:30.477700 containerd[1939]: time="2025-02-13T19:49:30.477509799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:30.479872 containerd[1939]: time="2025-02-13T19:49:30.479693667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:30.479872 containerd[1939]: time="2025-02-13T19:49:30.479799531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:30.480689 containerd[1939]: time="2025-02-13T19:49:30.480586059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:30.481016 containerd[1939]: time="2025-02-13T19:49:30.480943287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:30.481768 containerd[1939]: time="2025-02-13T19:49:30.481666599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:30.482459 containerd[1939]: time="2025-02-13T19:49:30.482361399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:30.526302 systemd[1]: Started cri-containerd-8da9d2d5b0ed0b31abac8b54b491d8f4e88a7258b2886d719795e4268c5fad30.scope - libcontainer container 8da9d2d5b0ed0b31abac8b54b491d8f4e88a7258b2886d719795e4268c5fad30. Feb 13 19:49:30.537599 systemd[1]: Started cri-containerd-e908bb71f3091a3670afab5e57355892429f4f13137758f70150853b19a4f3f4.scope - libcontainer container e908bb71f3091a3670afab5e57355892429f4f13137758f70150853b19a4f3f4. Feb 13 19:49:30.553868 systemd[1]: Started cri-containerd-9876929367db8cd582336d0380fbd66ffe6f601085399be6d96f7008fb449a30.scope - libcontainer container 9876929367db8cd582336d0380fbd66ffe6f601085399be6d96f7008fb449a30. 
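
Each `Started cri-containerd-<id>.scope` unit above wraps one CRI sandbox or container: the 64-character hex ID in the unit name is the same ID containerd reports in its `returns sandbox id` / `returns container id` messages elsewhere in this log (compare the 8da9d2d5... scope here with the kube-scheduler sandbox id returned just below). A rough cross-referencing sketch over a saved journal; the file name boot.log and the one-entry-per-line layout are assumptions:

    import re
    from collections import defaultdict

    # Collect the IDs systemd wrapped in cri-containerd-<id>.scope units and
    # label each one with what containerd said it was (sandbox or container).
    scopes, kinds = set(), defaultdict(list)
    with open("boot.log") as f:
        for line in f:
            m = re.search(r"Started cri-containerd-([0-9a-f]{64})\.scope", line)
            if m:
                scopes.add(m.group(1))
            m = re.search(r'returns (sandbox|container) id \\?"([0-9a-f]{64})\\?"', line)
            if m:
                kinds[m.group(2)].append(m.group(1))

    for cid in sorted(scopes):
        print(f'{cid[:12]}...  {",".join(kinds.get(cid, ["unknown"]))}')
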
Feb 13 19:49:30.615299 kubelet[2697]: E0213 19:49:30.615115 2697 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-134?timeout=10s\": dial tcp 172.31.20.134:6443: connect: connection refused" interval="1.6s" Feb 13 19:49:30.682387 containerd[1939]: time="2025-02-13T19:49:30.681901168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-134,Uid:3402de113a228d4177b3bb375c1ef25d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e908bb71f3091a3670afab5e57355892429f4f13137758f70150853b19a4f3f4\"" Feb 13 19:49:30.696651 containerd[1939]: time="2025-02-13T19:49:30.696039976Z" level=info msg="CreateContainer within sandbox \"e908bb71f3091a3670afab5e57355892429f4f13137758f70150853b19a4f3f4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:49:30.705497 containerd[1939]: time="2025-02-13T19:49:30.704698504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-134,Uid:4f638f113812fff74ed102452ac41ebb,Namespace:kube-system,Attempt:0,} returns sandbox id \"9876929367db8cd582336d0380fbd66ffe6f601085399be6d96f7008fb449a30\"" Feb 13 19:49:30.711038 containerd[1939]: time="2025-02-13T19:49:30.710824108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-134,Uid:e7aa04c99a04f8b57fe6e22f656b3e24,Namespace:kube-system,Attempt:0,} returns sandbox id \"8da9d2d5b0ed0b31abac8b54b491d8f4e88a7258b2886d719795e4268c5fad30\"" Feb 13 19:49:30.716397 containerd[1939]: time="2025-02-13T19:49:30.716205532Z" level=info msg="CreateContainer within sandbox \"9876929367db8cd582336d0380fbd66ffe6f601085399be6d96f7008fb449a30\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:49:30.717948 containerd[1939]: time="2025-02-13T19:49:30.717894760Z" level=info msg="CreateContainer within sandbox \"8da9d2d5b0ed0b31abac8b54b491d8f4e88a7258b2886d719795e4268c5fad30\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:49:30.744457 containerd[1939]: time="2025-02-13T19:49:30.744339268Z" level=info msg="CreateContainer within sandbox \"e908bb71f3091a3670afab5e57355892429f4f13137758f70150853b19a4f3f4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4d6e5c367733e1ac7aad989a1a4d7d83b97fd6b960946fbc497f8892f1c5f652\"" Feb 13 19:49:30.748700 containerd[1939]: time="2025-02-13T19:49:30.748536424Z" level=info msg="StartContainer for \"4d6e5c367733e1ac7aad989a1a4d7d83b97fd6b960946fbc497f8892f1c5f652\"" Feb 13 19:49:30.764296 containerd[1939]: time="2025-02-13T19:49:30.764188540Z" level=info msg="CreateContainer within sandbox \"8da9d2d5b0ed0b31abac8b54b491d8f4e88a7258b2886d719795e4268c5fad30\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"735d58c88c7903c8f5fa90aab77560970519e87fc4701ece53c3133888655196\"" Feb 13 19:49:30.765668 containerd[1939]: time="2025-02-13T19:49:30.765567508Z" level=info msg="StartContainer for \"735d58c88c7903c8f5fa90aab77560970519e87fc4701ece53c3133888655196\"" Feb 13 19:49:30.765999 containerd[1939]: time="2025-02-13T19:49:30.765592840Z" level=info msg="CreateContainer within sandbox \"9876929367db8cd582336d0380fbd66ffe6f601085399be6d96f7008fb449a30\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"654fcd507060851b652c54b85e22f7d269a295792ed6af3eef30041044e5acba\"" Feb 13 19:49:30.766876 
containerd[1939]: time="2025-02-13T19:49:30.766797256Z" level=info msg="StartContainer for \"654fcd507060851b652c54b85e22f7d269a295792ed6af3eef30041044e5acba\"" Feb 13 19:49:30.823907 systemd[1]: Started cri-containerd-4d6e5c367733e1ac7aad989a1a4d7d83b97fd6b960946fbc497f8892f1c5f652.scope - libcontainer container 4d6e5c367733e1ac7aad989a1a4d7d83b97fd6b960946fbc497f8892f1c5f652. Feb 13 19:49:30.845959 kubelet[2697]: I0213 19:49:30.845785 2697 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-134" Feb 13 19:49:30.846924 kubelet[2697]: E0213 19:49:30.846823 2697 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.20.134:6443/api/v1/nodes\": dial tcp 172.31.20.134:6443: connect: connection refused" node="ip-172-31-20-134" Feb 13 19:49:30.865784 systemd[1]: Started cri-containerd-654fcd507060851b652c54b85e22f7d269a295792ed6af3eef30041044e5acba.scope - libcontainer container 654fcd507060851b652c54b85e22f7d269a295792ed6af3eef30041044e5acba. Feb 13 19:49:30.880169 systemd[1]: Started cri-containerd-735d58c88c7903c8f5fa90aab77560970519e87fc4701ece53c3133888655196.scope - libcontainer container 735d58c88c7903c8f5fa90aab77560970519e87fc4701ece53c3133888655196. Feb 13 19:49:30.985133 containerd[1939]: time="2025-02-13T19:49:30.984934613Z" level=info msg="StartContainer for \"4d6e5c367733e1ac7aad989a1a4d7d83b97fd6b960946fbc497f8892f1c5f652\" returns successfully" Feb 13 19:49:31.023695 containerd[1939]: time="2025-02-13T19:49:31.023261053Z" level=info msg="StartContainer for \"654fcd507060851b652c54b85e22f7d269a295792ed6af3eef30041044e5acba\" returns successfully" Feb 13 19:49:31.043272 containerd[1939]: time="2025-02-13T19:49:31.042414817Z" level=info msg="StartContainer for \"735d58c88c7903c8f5fa90aab77560970519e87fc4701ece53c3133888655196\" returns successfully" Feb 13 19:49:32.452617 kubelet[2697]: I0213 19:49:32.450994 2697 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-134" Feb 13 19:49:34.177794 kubelet[2697]: I0213 19:49:34.177745 2697 apiserver.go:52] "Watching apiserver" Feb 13 19:49:34.405123 kubelet[2697]: I0213 19:49:34.404950 2697 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:49:34.441864 kubelet[2697]: E0213 19:49:34.441713 2697 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-134\" not found" node="ip-172-31-20-134" Feb 13 19:49:34.559652 kubelet[2697]: E0213 19:49:34.559281 2697 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-134.1823dc5c179c751c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-134,UID:ip-172-31-20-134,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-134,},FirstTimestamp:2025-02-13 19:49:29.18637494 +0000 UTC m=+1.275318883,LastTimestamp:2025-02-13 19:49:29.18637494 +0000 UTC m=+1.275318883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-134,}" Feb 13 19:49:34.583012 kubelet[2697]: I0213 19:49:34.582943 2697 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-20-134" Feb 13 19:49:34.665007 kubelet[2697]: E0213 19:49:34.664841 2697 event.go:359] "Server rejected event (will not 
retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-134.1823dc5c1927af10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-134,UID:ip-172-31-20-134,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-20-134,},FirstTimestamp:2025-02-13 19:49:29.212276496 +0000 UTC m=+1.301220439,LastTimestamp:2025-02-13 19:49:29.212276496 +0000 UTC m=+1.301220439,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-134,}" Feb 13 19:49:34.794766 kubelet[2697]: E0213 19:49:34.794291 2697 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-134.1823dc5c1c48c86d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-134,UID:ip-172-31-20-134,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-20-134 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-20-134,},FirstTimestamp:2025-02-13 19:49:29.264777325 +0000 UTC m=+1.353721256,LastTimestamp:2025-02-13 19:49:29.264777325 +0000 UTC m=+1.353721256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-134,}" Feb 13 19:49:36.384616 systemd[1]: Reloading requested from client PID 2973 ('systemctl') (unit session-5.scope)... Feb 13 19:49:36.384649 systemd[1]: Reloading... Feb 13 19:49:36.604672 zram_generator::config[3019]: No configuration found. Feb 13 19:49:36.882443 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:49:37.124625 systemd[1]: Reloading finished in 739 ms. Feb 13 19:49:37.210175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:37.229650 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:49:37.230094 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:37.230178 systemd[1]: kubelet.service: Consumed 2.019s CPU time, 113.0M memory peak, 0B memory swap peak. Feb 13 19:49:37.238232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:49:37.608328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:49:37.631435 (kubelet)[3075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:49:37.763279 kubelet[3075]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:37.767579 kubelet[3075]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:49:37.767579 kubelet[3075]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:49:37.767579 kubelet[3075]: I0213 19:49:37.766945 3075 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:49:37.785638 kubelet[3075]: I0213 19:49:37.785412 3075 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:49:37.785638 kubelet[3075]: I0213 19:49:37.785462 3075 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:49:37.786880 kubelet[3075]: I0213 19:49:37.786822 3075 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:49:37.790736 kubelet[3075]: I0213 19:49:37.790460 3075 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:49:37.796373 kubelet[3075]: I0213 19:49:37.796322 3075 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:49:37.805643 kubelet[3075]: E0213 19:49:37.804399 3075 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:49:37.805643 kubelet[3075]: I0213 19:49:37.804469 3075 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:49:37.811334 kubelet[3075]: I0213 19:49:37.811280 3075 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:49:37.811886 kubelet[3075]: I0213 19:49:37.811847 3075 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:49:37.812435 kubelet[3075]: I0213 19:49:37.812383 3075 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:49:37.812961 kubelet[3075]: I0213 19:49:37.812604 3075 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-20-134","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:49:37.813304 kubelet[3075]: I0213 19:49:37.813266 3075 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:49:37.813448 kubelet[3075]: I0213 19:49:37.813427 3075 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:49:37.813647 kubelet[3075]: I0213 19:49:37.813625 3075 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:37.813970 kubelet[3075]: I0213 19:49:37.813938 3075 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:49:37.815842 kubelet[3075]: I0213 19:49:37.814671 3075 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:49:37.815842 kubelet[3075]: I0213 19:49:37.814732 3075 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:49:37.815842 kubelet[3075]: I0213 19:49:37.814763 3075 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:49:37.826050 kubelet[3075]: I0213 19:49:37.824901 3075 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 19:49:37.829173 kubelet[3075]: I0213 19:49:37.828381 3075 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:49:37.836613 kubelet[3075]: I0213 19:49:37.833343 3075 server.go:1269] "Started kubelet" Feb 13 19:49:37.847338 kubelet[3075]: I0213 19:49:37.847249 3075 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:49:37.852103 kubelet[3075]: I0213 19:49:37.852042 3075 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:49:37.859688 kubelet[3075]: I0213 19:49:37.859069 3075 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:49:37.859688 kubelet[3075]: I0213 19:49:37.859463 3075 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:49:37.865336 kubelet[3075]: I0213 19:49:37.865025 3075 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:49:37.872662 kubelet[3075]: I0213 19:49:37.872596 3075 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:49:37.873681 kubelet[3075]: I0213 19:49:37.873623 3075 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:49:37.874308 kubelet[3075]: E0213 19:49:37.874240 3075 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-20-134\" not found" Feb 13 19:49:37.877163 kubelet[3075]: I0213 19:49:37.876450 3075 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:49:37.877163 kubelet[3075]: I0213 19:49:37.876822 3075 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:49:37.913930 kubelet[3075]: I0213 19:49:37.913895 3075 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:49:37.915618 kubelet[3075]: I0213 19:49:37.915596 3075 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:49:37.915952 kubelet[3075]: I0213 19:49:37.915917 3075 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:49:37.955191 kubelet[3075]: I0213 19:49:37.954896 3075 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:49:37.959072 kubelet[3075]: I0213 19:49:37.959029 3075 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:49:37.959420 kubelet[3075]: I0213 19:49:37.959364 3075 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:49:37.959666 kubelet[3075]: I0213 19:49:37.959590 3075 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:49:37.959915 kubelet[3075]: E0213 19:49:37.959779 3075 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:49:38.056378 kubelet[3075]: I0213 19:49:38.055892 3075 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:49:38.056378 kubelet[3075]: I0213 19:49:38.055921 3075 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:49:38.056378 kubelet[3075]: I0213 19:49:38.055961 3075 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:49:38.056378 kubelet[3075]: I0213 19:49:38.056202 3075 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:49:38.056378 kubelet[3075]: I0213 19:49:38.056223 3075 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:49:38.056378 kubelet[3075]: I0213 19:49:38.056256 3075 policy_none.go:49] "None policy: Start" Feb 13 19:49:38.058251 kubelet[3075]: I0213 19:49:38.058166 3075 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:49:38.058251 kubelet[3075]: I0213 19:49:38.058215 3075 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:49:38.058615 kubelet[3075]: I0213 19:49:38.058533 3075 state_mem.go:75] "Updated machine memory state" Feb 13 19:49:38.059907 kubelet[3075]: E0213 19:49:38.059857 3075 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:49:38.073604 kubelet[3075]: I0213 19:49:38.072303 3075 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" 
err="checkpoint is not found" Feb 13 19:49:38.073604 kubelet[3075]: I0213 19:49:38.072609 3075 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:49:38.073604 kubelet[3075]: I0213 19:49:38.072634 3075 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:49:38.073604 kubelet[3075]: I0213 19:49:38.073209 3075 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:49:38.145628 update_engine[1909]: I20250213 19:49:38.143409 1909 update_attempter.cc:509] Updating boot flags... Feb 13 19:49:38.217304 kubelet[3075]: I0213 19:49:38.216815 3075 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-20-134" Feb 13 19:49:38.271527 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3130) Feb 13 19:49:38.275022 kubelet[3075]: I0213 19:49:38.274197 3075 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-20-134" Feb 13 19:49:38.275022 kubelet[3075]: I0213 19:49:38.274314 3075 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-20-134" Feb 13 19:49:38.334745 kubelet[3075]: E0213 19:49:38.333871 3075 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-134\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-134" Feb 13 19:49:38.385730 kubelet[3075]: I0213 19:49:38.384616 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3402de113a228d4177b3bb375c1ef25d-ca-certs\") pod \"kube-apiserver-ip-172-31-20-134\" (UID: \"3402de113a228d4177b3bb375c1ef25d\") " pod="kube-system/kube-apiserver-ip-172-31-20-134" Feb 13 19:49:38.385730 kubelet[3075]: I0213 19:49:38.384678 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3402de113a228d4177b3bb375c1ef25d-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-134\" (UID: \"3402de113a228d4177b3bb375c1ef25d\") " pod="kube-system/kube-apiserver-ip-172-31-20-134" Feb 13 19:49:38.385730 kubelet[3075]: I0213 19:49:38.384715 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3402de113a228d4177b3bb375c1ef25d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-134\" (UID: \"3402de113a228d4177b3bb375c1ef25d\") " pod="kube-system/kube-apiserver-ip-172-31-20-134" Feb 13 19:49:38.385730 kubelet[3075]: I0213 19:49:38.384760 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f638f113812fff74ed102452ac41ebb-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-134\" (UID: \"4f638f113812fff74ed102452ac41ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-20-134" Feb 13 19:49:38.385730 kubelet[3075]: I0213 19:49:38.384798 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f638f113812fff74ed102452ac41ebb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-134\" (UID: \"4f638f113812fff74ed102452ac41ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-20-134" Feb 13 19:49:38.386116 kubelet[3075]: I0213 19:49:38.384834 3075 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f638f113812fff74ed102452ac41ebb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-134\" (UID: \"4f638f113812fff74ed102452ac41ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-20-134" Feb 13 19:49:38.386116 kubelet[3075]: I0213 19:49:38.384868 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f638f113812fff74ed102452ac41ebb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-134\" (UID: \"4f638f113812fff74ed102452ac41ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-20-134" Feb 13 19:49:38.386116 kubelet[3075]: I0213 19:49:38.384913 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f638f113812fff74ed102452ac41ebb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-134\" (UID: \"4f638f113812fff74ed102452ac41ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-20-134" Feb 13 19:49:38.386116 kubelet[3075]: I0213 19:49:38.384956 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7aa04c99a04f8b57fe6e22f656b3e24-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-134\" (UID: \"e7aa04c99a04f8b57fe6e22f656b3e24\") " pod="kube-system/kube-scheduler-ip-172-31-20-134" Feb 13 19:49:38.759629 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3134) Feb 13 19:49:38.818200 kubelet[3075]: I0213 19:49:38.816492 3075 apiserver.go:52] "Watching apiserver" Feb 13 19:49:38.877875 kubelet[3075]: I0213 19:49:38.877712 3075 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:49:39.196303 kubelet[3075]: I0213 19:49:39.195150 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-134" podStartSLOduration=3.195128062 podStartE2EDuration="3.195128062s" podCreationTimestamp="2025-02-13 19:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:39.192075658 +0000 UTC m=+1.548206109" watchObservedRunningTime="2025-02-13 19:49:39.195128062 +0000 UTC m=+1.551258345" Feb 13 19:49:39.250730 kubelet[3075]: I0213 19:49:39.250627 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-134" podStartSLOduration=1.250599574 podStartE2EDuration="1.250599574s" podCreationTimestamp="2025-02-13 19:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:39.219507874 +0000 UTC m=+1.575638169" watchObservedRunningTime="2025-02-13 19:49:39.250599574 +0000 UTC m=+1.606729857" Feb 13 19:49:39.273621 kubelet[3075]: I0213 19:49:39.273446 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-134" podStartSLOduration=1.27342385 podStartE2EDuration="1.27342385s" podCreationTimestamp="2025-02-13 19:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 
19:49:39.251743594 +0000 UTC m=+1.607873877" watchObservedRunningTime="2025-02-13 19:49:39.27342385 +0000 UTC m=+1.629554133" Feb 13 19:49:40.046892 sudo[2202]: pam_unix(sudo:session): session closed for user root Feb 13 19:49:40.071502 sshd[2199]: pam_unix(sshd:session): session closed for user core Feb 13 19:49:40.079030 systemd[1]: sshd@4-172.31.20.134:22-139.178.89.65:56824.service: Deactivated successfully. Feb 13 19:49:40.084040 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:49:40.085747 systemd[1]: session-5.scope: Consumed 11.052s CPU time, 153.1M memory peak, 0B memory swap peak. Feb 13 19:49:40.089874 systemd-logind[1908]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:49:40.092725 systemd-logind[1908]: Removed session 5. Feb 13 19:49:41.486592 kubelet[3075]: I0213 19:49:41.484960 3075 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:49:41.487578 containerd[1939]: time="2025-02-13T19:49:41.487505389Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:49:41.488977 kubelet[3075]: I0213 19:49:41.488873 3075 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:49:42.428271 systemd[1]: Created slice kubepods-besteffort-podaada2647_f09c_447b_ba5e_901605226e37.slice - libcontainer container kubepods-besteffort-podaada2647_f09c_447b_ba5e_901605226e37.slice. Feb 13 19:49:42.470157 systemd[1]: Created slice kubepods-burstable-pod188322f4_7980_4ee4_af4d_e771f6cf6c50.slice - libcontainer container kubepods-burstable-pod188322f4_7980_4ee4_af4d_e771f6cf6c50.slice. Feb 13 19:49:42.515134 kubelet[3075]: I0213 19:49:42.515054 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgfxh\" (UniqueName: \"kubernetes.io/projected/aada2647-f09c-447b-ba5e-901605226e37-kube-api-access-dgfxh\") pod \"kube-proxy-ttnvp\" (UID: \"aada2647-f09c-447b-ba5e-901605226e37\") " pod="kube-system/kube-proxy-ttnvp" Feb 13 19:49:42.516052 kubelet[3075]: I0213 19:49:42.515826 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aada2647-f09c-447b-ba5e-901605226e37-lib-modules\") pod \"kube-proxy-ttnvp\" (UID: \"aada2647-f09c-447b-ba5e-901605226e37\") " pod="kube-system/kube-proxy-ttnvp" Feb 13 19:49:42.516052 kubelet[3075]: I0213 19:49:42.515929 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aada2647-f09c-447b-ba5e-901605226e37-kube-proxy\") pod \"kube-proxy-ttnvp\" (UID: \"aada2647-f09c-447b-ba5e-901605226e37\") " pod="kube-system/kube-proxy-ttnvp" Feb 13 19:49:42.516052 kubelet[3075]: I0213 19:49:42.515979 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aada2647-f09c-447b-ba5e-901605226e37-xtables-lock\") pod \"kube-proxy-ttnvp\" (UID: \"aada2647-f09c-447b-ba5e-901605226e37\") " pod="kube-system/kube-proxy-ttnvp" Feb 13 19:49:42.617310 kubelet[3075]: I0213 19:49:42.617175 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/188322f4-7980-4ee4-af4d-e771f6cf6c50-cni\") pod \"kube-flannel-ds-fgxmc\" (UID: \"188322f4-7980-4ee4-af4d-e771f6cf6c50\") " 
pod="kube-flannel/kube-flannel-ds-fgxmc" Feb 13 19:49:42.617595 kubelet[3075]: I0213 19:49:42.617357 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/188322f4-7980-4ee4-af4d-e771f6cf6c50-run\") pod \"kube-flannel-ds-fgxmc\" (UID: \"188322f4-7980-4ee4-af4d-e771f6cf6c50\") " pod="kube-flannel/kube-flannel-ds-fgxmc" Feb 13 19:49:42.617595 kubelet[3075]: I0213 19:49:42.617420 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/188322f4-7980-4ee4-af4d-e771f6cf6c50-flannel-cfg\") pod \"kube-flannel-ds-fgxmc\" (UID: \"188322f4-7980-4ee4-af4d-e771f6cf6c50\") " pod="kube-flannel/kube-flannel-ds-fgxmc" Feb 13 19:49:42.617595 kubelet[3075]: I0213 19:49:42.617464 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/188322f4-7980-4ee4-af4d-e771f6cf6c50-xtables-lock\") pod \"kube-flannel-ds-fgxmc\" (UID: \"188322f4-7980-4ee4-af4d-e771f6cf6c50\") " pod="kube-flannel/kube-flannel-ds-fgxmc" Feb 13 19:49:42.617595 kubelet[3075]: I0213 19:49:42.617510 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/188322f4-7980-4ee4-af4d-e771f6cf6c50-cni-plugin\") pod \"kube-flannel-ds-fgxmc\" (UID: \"188322f4-7980-4ee4-af4d-e771f6cf6c50\") " pod="kube-flannel/kube-flannel-ds-fgxmc" Feb 13 19:49:42.617595 kubelet[3075]: I0213 19:49:42.617573 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76gtr\" (UniqueName: \"kubernetes.io/projected/188322f4-7980-4ee4-af4d-e771f6cf6c50-kube-api-access-76gtr\") pod \"kube-flannel-ds-fgxmc\" (UID: \"188322f4-7980-4ee4-af4d-e771f6cf6c50\") " pod="kube-flannel/kube-flannel-ds-fgxmc" Feb 13 19:49:42.747403 containerd[1939]: time="2025-02-13T19:49:42.747086572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ttnvp,Uid:aada2647-f09c-447b-ba5e-901605226e37,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:42.784135 containerd[1939]: time="2025-02-13T19:49:42.782465548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-fgxmc,Uid:188322f4-7980-4ee4-af4d-e771f6cf6c50,Namespace:kube-flannel,Attempt:0,}" Feb 13 19:49:42.820940 containerd[1939]: time="2025-02-13T19:49:42.820787392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:42.820940 containerd[1939]: time="2025-02-13T19:49:42.820882816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:42.821218 containerd[1939]: time="2025-02-13T19:49:42.820910572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:42.821477 containerd[1939]: time="2025-02-13T19:49:42.821403676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:42.852523 containerd[1939]: time="2025-02-13T19:49:42.852267988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:49:42.853367 containerd[1939]: time="2025-02-13T19:49:42.853296064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:49:42.853654 containerd[1939]: time="2025-02-13T19:49:42.853581604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:42.855820 containerd[1939]: time="2025-02-13T19:49:42.855723208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:49:42.868173 systemd[1]: Started cri-containerd-c8b8841dd2c23451efe3d590a33145124be8261ea28843bcc5e72ff4f1a7c26f.scope - libcontainer container c8b8841dd2c23451efe3d590a33145124be8261ea28843bcc5e72ff4f1a7c26f. Feb 13 19:49:42.905935 systemd[1]: Started cri-containerd-2aef24b0498d9de0e8f881c9e0647e45c3734a925b1580d52093513c017930b6.scope - libcontainer container 2aef24b0498d9de0e8f881c9e0647e45c3734a925b1580d52093513c017930b6. Feb 13 19:49:42.958348 containerd[1939]: time="2025-02-13T19:49:42.958068269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ttnvp,Uid:aada2647-f09c-447b-ba5e-901605226e37,Namespace:kube-system,Attempt:0,} returns sandbox id \"c8b8841dd2c23451efe3d590a33145124be8261ea28843bcc5e72ff4f1a7c26f\"" Feb 13 19:49:42.973902 containerd[1939]: time="2025-02-13T19:49:42.973686821Z" level=info msg="CreateContainer within sandbox \"c8b8841dd2c23451efe3d590a33145124be8261ea28843bcc5e72ff4f1a7c26f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:49:43.012744 containerd[1939]: time="2025-02-13T19:49:43.012422269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-fgxmc,Uid:188322f4-7980-4ee4-af4d-e771f6cf6c50,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"2aef24b0498d9de0e8f881c9e0647e45c3734a925b1580d52093513c017930b6\"" Feb 13 19:49:43.017633 containerd[1939]: time="2025-02-13T19:49:43.017410573Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 19:49:43.019197 containerd[1939]: time="2025-02-13T19:49:43.019095097Z" level=info msg="CreateContainer within sandbox \"c8b8841dd2c23451efe3d590a33145124be8261ea28843bcc5e72ff4f1a7c26f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"897f79bcd6123886b56f25f702b792ce25f14aadbc9cbcf0c7f4ca7d8cf71c7a\"" Feb 13 19:49:43.022963 containerd[1939]: time="2025-02-13T19:49:43.022267525Z" level=info msg="StartContainer for \"897f79bcd6123886b56f25f702b792ce25f14aadbc9cbcf0c7f4ca7d8cf71c7a\"" Feb 13 19:49:43.082226 systemd[1]: Started cri-containerd-897f79bcd6123886b56f25f702b792ce25f14aadbc9cbcf0c7f4ca7d8cf71c7a.scope - libcontainer container 897f79bcd6123886b56f25f702b792ce25f14aadbc9cbcf0c7f4ca7d8cf71c7a. Feb 13 19:49:43.141460 containerd[1939]: time="2025-02-13T19:49:43.141355033Z" level=info msg="StartContainer for \"897f79bcd6123886b56f25f702b792ce25f14aadbc9cbcf0c7f4ca7d8cf71c7a\" returns successfully" Feb 13 19:49:45.115364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3809273397.mount: Deactivated successfully. 
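
The reconciler's `VerifyControllerAttachedVolume started for volume ...` entries above spell out every host-path, ConfigMap, and projected volume the kube-proxy and kube-flannel pods (and, earlier, the static control-plane pods) mount, one log line per volume. A small sketch, again assuming the journal is saved one entry per line as kubelet.log, that groups those lines into a per-pod mount list:

    import re
    from collections import defaultdict

    # Group "VerifyControllerAttachedVolume started for volume ..." entries by
    # the pod="<namespace>/<name>" field at the end of each log line.
    pat = re.compile(r'started for volume \\"([^"\\]+)\\" .*? pod="([^"]+)"')
    mounts = defaultdict(set)
    with open("kubelet.log") as f:
        for line in f:
            m = pat.search(line)
            if m:
                mounts[m.group(2)].add(m.group(1))

    for pod, vols in sorted(mounts.items()):
        print(pod)
        for v in sorted(vols):
            print(f"  - {v}")

For this boot that would list, for example, cni, cni-plugin, flannel-cfg, run, xtables-lock and the kube-api-access token volume under kube-flannel/kube-flannel-ds-fgxmc.
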
Feb 13 19:49:45.174505 containerd[1939]: time="2025-02-13T19:49:45.174286240Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:45.177829 containerd[1939]: time="2025-02-13T19:49:45.177711100Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 19:49:45.179180 containerd[1939]: time="2025-02-13T19:49:45.179076928Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:45.186603 containerd[1939]: time="2025-02-13T19:49:45.186425752Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:45.188881 containerd[1939]: time="2025-02-13T19:49:45.188412808Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.170934231s" Feb 13 19:49:45.188881 containerd[1939]: time="2025-02-13T19:49:45.188591524Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 19:49:45.194307 containerd[1939]: time="2025-02-13T19:49:45.194129296Z" level=info msg="CreateContainer within sandbox \"2aef24b0498d9de0e8f881c9e0647e45c3734a925b1580d52093513c017930b6\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 19:49:45.218816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount859571032.mount: Deactivated successfully. Feb 13 19:49:45.231480 containerd[1939]: time="2025-02-13T19:49:45.231401584Z" level=info msg="CreateContainer within sandbox \"2aef24b0498d9de0e8f881c9e0647e45c3734a925b1580d52093513c017930b6\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"e9b28bb2297cbd4a19954574ff4690064fae4f3fccb24260e983560716824ef0\"" Feb 13 19:49:45.233918 containerd[1939]: time="2025-02-13T19:49:45.232372096Z" level=info msg="StartContainer for \"e9b28bb2297cbd4a19954574ff4690064fae4f3fccb24260e983560716824ef0\"" Feb 13 19:49:45.274931 systemd[1]: Started cri-containerd-e9b28bb2297cbd4a19954574ff4690064fae4f3fccb24260e983560716824ef0.scope - libcontainer container e9b28bb2297cbd4a19954574ff4690064fae4f3fccb24260e983560716824ef0. Feb 13 19:49:45.323605 containerd[1939]: time="2025-02-13T19:49:45.323478184Z" level=info msg="StartContainer for \"e9b28bb2297cbd4a19954574ff4690064fae4f3fccb24260e983560716824ef0\" returns successfully" Feb 13 19:49:45.327359 systemd[1]: cri-containerd-e9b28bb2297cbd4a19954574ff4690064fae4f3fccb24260e983560716824ef0.scope: Deactivated successfully. 
Feb 13 19:49:45.368183 kubelet[3075]: I0213 19:49:45.367978 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ttnvp" podStartSLOduration=3.367952609 podStartE2EDuration="3.367952609s" podCreationTimestamp="2025-02-13 19:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:49:44.077138918 +0000 UTC m=+6.433269213" watchObservedRunningTime="2025-02-13 19:49:45.367952609 +0000 UTC m=+7.724082904" Feb 13 19:49:45.421659 containerd[1939]: time="2025-02-13T19:49:45.421454417Z" level=info msg="shim disconnected" id=e9b28bb2297cbd4a19954574ff4690064fae4f3fccb24260e983560716824ef0 namespace=k8s.io Feb 13 19:49:45.422119 containerd[1939]: time="2025-02-13T19:49:45.421649213Z" level=warning msg="cleaning up after shim disconnected" id=e9b28bb2297cbd4a19954574ff4690064fae4f3fccb24260e983560716824ef0 namespace=k8s.io Feb 13 19:49:45.422119 containerd[1939]: time="2025-02-13T19:49:45.421705925Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:49:46.071812 containerd[1939]: time="2025-02-13T19:49:46.071337112Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 19:49:48.340512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount267394940.mount: Deactivated successfully. Feb 13 19:49:49.557760 containerd[1939]: time="2025-02-13T19:49:49.557676657Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:49.560328 containerd[1939]: time="2025-02-13T19:49:49.560280945Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Feb 13 19:49:49.562376 containerd[1939]: time="2025-02-13T19:49:49.562327437Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:49.569143 containerd[1939]: time="2025-02-13T19:49:49.569087985Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:49:49.571533 containerd[1939]: time="2025-02-13T19:49:49.571482573Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.500063117s" Feb 13 19:49:49.571837 containerd[1939]: time="2025-02-13T19:49:49.571796397Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 19:49:49.579161 containerd[1939]: time="2025-02-13T19:49:49.579089913Z" level=info msg="CreateContainer within sandbox \"2aef24b0498d9de0e8f881c9e0647e45c3734a925b1580d52093513c017930b6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:49:49.603576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount573115499.mount: Deactivated successfully. 
Feb 13 19:49:49.608001 containerd[1939]: time="2025-02-13T19:49:49.607864786Z" level=info msg="CreateContainer within sandbox \"2aef24b0498d9de0e8f881c9e0647e45c3734a925b1580d52093513c017930b6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dcdb22ac342501e8a1c6381abf976cfadf5acdf141067c107f2234430a2c10b0\"" Feb 13 19:49:49.609292 containerd[1939]: time="2025-02-13T19:49:49.608934022Z" level=info msg="StartContainer for \"dcdb22ac342501e8a1c6381abf976cfadf5acdf141067c107f2234430a2c10b0\"" Feb 13 19:49:49.671921 systemd[1]: Started cri-containerd-dcdb22ac342501e8a1c6381abf976cfadf5acdf141067c107f2234430a2c10b0.scope - libcontainer container dcdb22ac342501e8a1c6381abf976cfadf5acdf141067c107f2234430a2c10b0. Feb 13 19:49:49.719801 systemd[1]: cri-containerd-dcdb22ac342501e8a1c6381abf976cfadf5acdf141067c107f2234430a2c10b0.scope: Deactivated successfully. Feb 13 19:49:49.721606 containerd[1939]: time="2025-02-13T19:49:49.721313038Z" level=info msg="StartContainer for \"dcdb22ac342501e8a1c6381abf976cfadf5acdf141067c107f2234430a2c10b0\" returns successfully" Feb 13 19:49:49.759306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcdb22ac342501e8a1c6381abf976cfadf5acdf141067c107f2234430a2c10b0-rootfs.mount: Deactivated successfully. Feb 13 19:49:49.794888 kubelet[3075]: I0213 19:49:49.794528 3075 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:49:49.866990 kubelet[3075]: I0213 19:49:49.866927 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hhk4\" (UniqueName: \"kubernetes.io/projected/8d0b3db1-05f4-4066-9d89-032f4c2c8048-kube-api-access-6hhk4\") pod \"coredns-6f6b679f8f-2bhjv\" (UID: \"8d0b3db1-05f4-4066-9d89-032f4c2c8048\") " pod="kube-system/coredns-6f6b679f8f-2bhjv" Feb 13 19:49:49.867146 kubelet[3075]: I0213 19:49:49.866999 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03a32b38-1f84-403b-a43b-37c8f24015df-config-volume\") pod \"coredns-6f6b679f8f-tcdzx\" (UID: \"03a32b38-1f84-403b-a43b-37c8f24015df\") " pod="kube-system/coredns-6f6b679f8f-tcdzx" Feb 13 19:49:49.867146 kubelet[3075]: I0213 19:49:49.867037 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p8zl\" (UniqueName: \"kubernetes.io/projected/03a32b38-1f84-403b-a43b-37c8f24015df-kube-api-access-9p8zl\") pod \"coredns-6f6b679f8f-tcdzx\" (UID: \"03a32b38-1f84-403b-a43b-37c8f24015df\") " pod="kube-system/coredns-6f6b679f8f-tcdzx" Feb 13 19:49:49.867146 kubelet[3075]: I0213 19:49:49.867081 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d0b3db1-05f4-4066-9d89-032f4c2c8048-config-volume\") pod \"coredns-6f6b679f8f-2bhjv\" (UID: \"8d0b3db1-05f4-4066-9d89-032f4c2c8048\") " pod="kube-system/coredns-6f6b679f8f-2bhjv" Feb 13 19:49:49.877344 systemd[1]: Created slice kubepods-burstable-pod03a32b38_1f84_403b_a43b_37c8f24015df.slice - libcontainer container kubepods-burstable-pod03a32b38_1f84_403b_a43b_37c8f24015df.slice. Feb 13 19:49:49.896694 systemd[1]: Created slice kubepods-burstable-pod8d0b3db1_05f4_4066_9d89_032f4c2c8048.slice - libcontainer container kubepods-burstable-pod8d0b3db1_05f4_4066_9d89_032f4c2c8048.slice. 
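The install-cni-plugin and install-cni containers above follow the stock kube-flannel DaemonSet layout: the first copies the flannel CNI binary into /opt/cni/bin (the cni-plugin host-path volume mounted earlier), the second copies the CNI config from the flannel-cfg ConfigMap into /etc/cni/net.d. A typical conflist is sketched below; the name "cbr0" and the hairpinMode/isDefaultGateway flags match the delegate config logged further down, while the file name and the portmap entry are the usual defaults and should be treated as assumptions for this node.

    /etc/cni/net.d/10-flannel.conflist (illustrative, not taken from this log):
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        { "type": "flannel",
          "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
        { "type": "portmap",
          "capabilities": { "portMappings": true } }
      ]
    }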
Feb 13 19:49:49.974138 containerd[1939]: time="2025-02-13T19:49:49.972871019Z" level=info msg="shim disconnected" id=dcdb22ac342501e8a1c6381abf976cfadf5acdf141067c107f2234430a2c10b0 namespace=k8s.io Feb 13 19:49:49.974138 containerd[1939]: time="2025-02-13T19:49:49.972965003Z" level=warning msg="cleaning up after shim disconnected" id=dcdb22ac342501e8a1c6381abf976cfadf5acdf141067c107f2234430a2c10b0 namespace=k8s.io Feb 13 19:49:49.974138 containerd[1939]: time="2025-02-13T19:49:49.972987695Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:49:50.087343 containerd[1939]: time="2025-02-13T19:49:50.087266336Z" level=info msg="CreateContainer within sandbox \"2aef24b0498d9de0e8f881c9e0647e45c3734a925b1580d52093513c017930b6\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 19:49:50.110861 containerd[1939]: time="2025-02-13T19:49:50.110772716Z" level=info msg="CreateContainer within sandbox \"2aef24b0498d9de0e8f881c9e0647e45c3734a925b1580d52093513c017930b6\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"9450add73376b9d91a923e7742f99c8bb8127d380c36683f781cfd91e4482ca4\"" Feb 13 19:49:50.111902 containerd[1939]: time="2025-02-13T19:49:50.111839360Z" level=info msg="StartContainer for \"9450add73376b9d91a923e7742f99c8bb8127d380c36683f781cfd91e4482ca4\"" Feb 13 19:49:50.165871 systemd[1]: Started cri-containerd-9450add73376b9d91a923e7742f99c8bb8127d380c36683f781cfd91e4482ca4.scope - libcontainer container 9450add73376b9d91a923e7742f99c8bb8127d380c36683f781cfd91e4482ca4. Feb 13 19:49:50.190410 containerd[1939]: time="2025-02-13T19:49:50.190068440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tcdzx,Uid:03a32b38-1f84-403b-a43b-37c8f24015df,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:50.204599 containerd[1939]: time="2025-02-13T19:49:50.204194241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2bhjv,Uid:8d0b3db1-05f4-4066-9d89-032f4c2c8048,Namespace:kube-system,Attempt:0,}" Feb 13 19:49:50.227029 containerd[1939]: time="2025-02-13T19:49:50.226370133Z" level=info msg="StartContainer for \"9450add73376b9d91a923e7742f99c8bb8127d380c36683f781cfd91e4482ca4\" returns successfully" Feb 13 19:49:50.289218 containerd[1939]: time="2025-02-13T19:49:50.289147521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tcdzx,Uid:03a32b38-1f84-403b-a43b-37c8f24015df,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"65041bd3b232aac723d5b05d721612aa796f97382047d042ac04661775304699\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:49:50.290520 kubelet[3075]: E0213 19:49:50.289843 3075 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65041bd3b232aac723d5b05d721612aa796f97382047d042ac04661775304699\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:49:50.290520 kubelet[3075]: E0213 19:49:50.289927 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65041bd3b232aac723d5b05d721612aa796f97382047d042ac04661775304699\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" 
pod="kube-system/coredns-6f6b679f8f-tcdzx" Feb 13 19:49:50.290520 kubelet[3075]: E0213 19:49:50.289960 3075 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65041bd3b232aac723d5b05d721612aa796f97382047d042ac04661775304699\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-tcdzx" Feb 13 19:49:50.290520 kubelet[3075]: E0213 19:49:50.290072 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-tcdzx_kube-system(03a32b38-1f84-403b-a43b-37c8f24015df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-tcdzx_kube-system(03a32b38-1f84-403b-a43b-37c8f24015df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65041bd3b232aac723d5b05d721612aa796f97382047d042ac04661775304699\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-tcdzx" podUID="03a32b38-1f84-403b-a43b-37c8f24015df" Feb 13 19:49:50.304108 containerd[1939]: time="2025-02-13T19:49:50.303992733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2bhjv,Uid:8d0b3db1-05f4-4066-9d89-032f4c2c8048,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac6631eb91151572569a61c072cd5dc6d3148bbee8544ffdc5ea4e6d90c79d35\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:49:50.304676 kubelet[3075]: E0213 19:49:50.304478 3075 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac6631eb91151572569a61c072cd5dc6d3148bbee8544ffdc5ea4e6d90c79d35\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:49:50.306041 kubelet[3075]: E0213 19:49:50.305012 3075 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac6631eb91151572569a61c072cd5dc6d3148bbee8544ffdc5ea4e6d90c79d35\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-2bhjv" Feb 13 19:49:50.306041 kubelet[3075]: E0213 19:49:50.305056 3075 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac6631eb91151572569a61c072cd5dc6d3148bbee8544ffdc5ea4e6d90c79d35\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-2bhjv" Feb 13 19:49:50.306041 kubelet[3075]: E0213 19:49:50.305136 3075 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-2bhjv_kube-system(8d0b3db1-05f4-4066-9d89-032f4c2c8048)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-2bhjv_kube-system(8d0b3db1-05f4-4066-9d89-032f4c2c8048)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac6631eb91151572569a61c072cd5dc6d3148bbee8544ffdc5ea4e6d90c79d35\\\": plugin type=\\\"flannel\\\" failed (add): 
loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-2bhjv" podUID="8d0b3db1-05f4-4066-9d89-032f4c2c8048" Feb 13 19:49:51.110606 kubelet[3075]: I0213 19:49:51.110047 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-fgxmc" podStartSLOduration=2.552258689 podStartE2EDuration="9.110027517s" podCreationTimestamp="2025-02-13 19:49:42 +0000 UTC" firstStartedPulling="2025-02-13 19:49:43.016618753 +0000 UTC m=+5.372749036" lastFinishedPulling="2025-02-13 19:49:49.574387593 +0000 UTC m=+11.930517864" observedRunningTime="2025-02-13 19:49:51.109680885 +0000 UTC m=+13.465811204" watchObservedRunningTime="2025-02-13 19:49:51.110027517 +0000 UTC m=+13.466157788" Feb 13 19:49:51.359759 (udev-worker)[3804]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:49:51.397305 systemd-networkd[1758]: flannel.1: Link UP Feb 13 19:49:51.397327 systemd-networkd[1758]: flannel.1: Gained carrier Feb 13 19:49:53.127371 systemd-networkd[1758]: flannel.1: Gained IPv6LL Feb 13 19:49:55.256962 ntpd[1898]: Listen normally on 7 flannel.1 192.168.0.0:123 Feb 13 19:49:55.257096 ntpd[1898]: Listen normally on 8 flannel.1 [fe80::7852:5dff:fe9d:f1c9%4]:123 Feb 13 19:49:55.257863 ntpd[1898]: 13 Feb 19:49:55 ntpd[1898]: Listen normally on 7 flannel.1 192.168.0.0:123 Feb 13 19:49:55.257863 ntpd[1898]: 13 Feb 19:49:55 ntpd[1898]: Listen normally on 8 flannel.1 [fe80::7852:5dff:fe9d:f1c9%4]:123 Feb 13 19:50:01.961939 containerd[1939]: time="2025-02-13T19:50:01.961869887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2bhjv,Uid:8d0b3db1-05f4-4066-9d89-032f4c2c8048,Namespace:kube-system,Attempt:0,}" Feb 13 19:50:02.005635 systemd-networkd[1758]: cni0: Link UP Feb 13 19:50:02.005657 systemd-networkd[1758]: cni0: Gained carrier Feb 13 19:50:02.018441 systemd-networkd[1758]: veth6e69de23: Link UP Feb 13 19:50:02.022794 kernel: cni0: port 1(veth6e69de23) entered blocking state Feb 13 19:50:02.022951 kernel: cni0: port 1(veth6e69de23) entered disabled state Feb 13 19:50:02.019984 (udev-worker)[3942]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:50:02.023521 systemd-networkd[1758]: cni0: Lost carrier Feb 13 19:50:02.026615 kernel: veth6e69de23: entered allmulticast mode Feb 13 19:50:02.028320 kernel: veth6e69de23: entered promiscuous mode Feb 13 19:50:02.031465 (udev-worker)[3946]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:50:02.055813 kernel: cni0: port 1(veth6e69de23) entered blocking state Feb 13 19:50:02.055921 kernel: cni0: port 1(veth6e69de23) entered forwarding state Feb 13 19:50:02.065520 systemd-networkd[1758]: veth6e69de23: Gained carrier Feb 13 19:50:02.066720 systemd-networkd[1758]: cni0: Gained carrier Feb 13 19:50:02.068238 containerd[1939]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"} Feb 13 19:50:02.068238 containerd[1939]: delegateAdd: netconf sent to delegate plugin: Feb 13 19:50:02.099686 containerd[1939]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T19:50:02.098643872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:02.099686 containerd[1939]: time="2025-02-13T19:50:02.099505004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:02.099686 containerd[1939]: time="2025-02-13T19:50:02.099534704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:02.100111 containerd[1939]: time="2025-02-13T19:50:02.099748748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:02.147459 systemd[1]: run-containerd-runc-k8s.io-26c6116bcbe9b5a624c6cf58126324f127290378eb3b805ca4e2784747aee97d-runc.4KgSWe.mount: Deactivated successfully. Feb 13 19:50:02.166036 systemd[1]: Started cri-containerd-26c6116bcbe9b5a624c6cf58126324f127290378eb3b805ca4e2784747aee97d.scope - libcontainer container 26c6116bcbe9b5a624c6cf58126324f127290378eb3b805ca4e2784747aee97d. 
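The earlier RunPodSandbox failures for both coredns pods were caused by /run/flannel/subnet.env not existing yet; they stop once the kube-flannel container has written that file and delegation to the bridge plugin (the netconf just logged) succeeds. A minimal sketch of the file, with the network, per-node subnet and MTU inferred from that netconf (route to 192.168.0.0/17, subnet 192.168.0.0/24, mtu 8951) and from cni0 later taking 192.168.0.1; the IPMASQ value is an assumption:

    /run/flannel/subnet.env (illustrative):
    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=8951
    FLANNEL_IPMASQ=true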
Feb 13 19:50:02.237282 containerd[1939]: time="2025-02-13T19:50:02.236945228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2bhjv,Uid:8d0b3db1-05f4-4066-9d89-032f4c2c8048,Namespace:kube-system,Attempt:0,} returns sandbox id \"26c6116bcbe9b5a624c6cf58126324f127290378eb3b805ca4e2784747aee97d\"" Feb 13 19:50:02.245356 containerd[1939]: time="2025-02-13T19:50:02.244476020Z" level=info msg="CreateContainer within sandbox \"26c6116bcbe9b5a624c6cf58126324f127290378eb3b805ca4e2784747aee97d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:50:02.265149 containerd[1939]: time="2025-02-13T19:50:02.265062344Z" level=info msg="CreateContainer within sandbox \"26c6116bcbe9b5a624c6cf58126324f127290378eb3b805ca4e2784747aee97d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"226e21967d9529152b03123ae93fc439473eb8ebe6647839ffc997077f8a3089\"" Feb 13 19:50:02.270064 containerd[1939]: time="2025-02-13T19:50:02.267373388Z" level=info msg="StartContainer for \"226e21967d9529152b03123ae93fc439473eb8ebe6647839ffc997077f8a3089\"" Feb 13 19:50:02.335885 systemd[1]: Started cri-containerd-226e21967d9529152b03123ae93fc439473eb8ebe6647839ffc997077f8a3089.scope - libcontainer container 226e21967d9529152b03123ae93fc439473eb8ebe6647839ffc997077f8a3089. Feb 13 19:50:02.392613 containerd[1939]: time="2025-02-13T19:50:02.392294061Z" level=info msg="StartContainer for \"226e21967d9529152b03123ae93fc439473eb8ebe6647839ffc997077f8a3089\" returns successfully" Feb 13 19:50:02.960957 containerd[1939]: time="2025-02-13T19:50:02.960879924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tcdzx,Uid:03a32b38-1f84-403b-a43b-37c8f24015df,Namespace:kube-system,Attempt:0,}" Feb 13 19:50:03.001038 systemd-networkd[1758]: veth8b7e7667: Link UP Feb 13 19:50:03.006398 kernel: cni0: port 2(veth8b7e7667) entered blocking state Feb 13 19:50:03.006529 kernel: cni0: port 2(veth8b7e7667) entered disabled state Feb 13 19:50:03.006625 kernel: veth8b7e7667: entered allmulticast mode Feb 13 19:50:03.007959 kernel: veth8b7e7667: entered promiscuous mode Feb 13 19:50:03.011020 kernel: cni0: port 2(veth8b7e7667) entered blocking state Feb 13 19:50:03.011148 kernel: cni0: port 2(veth8b7e7667) entered forwarding state Feb 13 19:50:03.023255 systemd-networkd[1758]: veth8b7e7667: Gained carrier Feb 13 19:50:03.026731 containerd[1939]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000a48e8), "name":"cbr0", "type":"bridge"} Feb 13 19:50:03.026731 containerd[1939]: delegateAdd: netconf sent to delegate plugin: Feb 13 19:50:03.060835 containerd[1939]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T19:50:03.060590564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:50:03.060835 containerd[1939]: time="2025-02-13T19:50:03.060703220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:50:03.061366 containerd[1939]: time="2025-02-13T19:50:03.060765788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:03.061931 containerd[1939]: time="2025-02-13T19:50:03.061767464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:50:03.110916 systemd-networkd[1758]: veth6e69de23: Gained IPv6LL Feb 13 19:50:03.112862 systemd[1]: Started cri-containerd-6c1eb8ba4a963a0186fb880b76f6afecbdbfc6ed2b0ffa2b825082b16fe7d125.scope - libcontainer container 6c1eb8ba4a963a0186fb880b76f6afecbdbfc6ed2b0ffa2b825082b16fe7d125. Feb 13 19:50:03.193946 kubelet[3075]: I0213 19:50:03.193836 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2bhjv" podStartSLOduration=21.193810101 podStartE2EDuration="21.193810101s" podCreationTimestamp="2025-02-13 19:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:03.160475001 +0000 UTC m=+25.516605296" watchObservedRunningTime="2025-02-13 19:50:03.193810101 +0000 UTC m=+25.549940384" Feb 13 19:50:03.240699 containerd[1939]: time="2025-02-13T19:50:03.240097269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tcdzx,Uid:03a32b38-1f84-403b-a43b-37c8f24015df,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c1eb8ba4a963a0186fb880b76f6afecbdbfc6ed2b0ffa2b825082b16fe7d125\"" Feb 13 19:50:03.254004 containerd[1939]: time="2025-02-13T19:50:03.253891305Z" level=info msg="CreateContainer within sandbox \"6c1eb8ba4a963a0186fb880b76f6afecbdbfc6ed2b0ffa2b825082b16fe7d125\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:50:03.290466 containerd[1939]: time="2025-02-13T19:50:03.290352118Z" level=info msg="CreateContainer within sandbox \"6c1eb8ba4a963a0186fb880b76f6afecbdbfc6ed2b0ffa2b825082b16fe7d125\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce8c8fac6f6a328a91bc3d239425737da052edcfef8bfbe5e575d23ced2e27ad\"" Feb 13 19:50:03.291890 containerd[1939]: time="2025-02-13T19:50:03.291802210Z" level=info msg="StartContainer for \"ce8c8fac6f6a328a91bc3d239425737da052edcfef8bfbe5e575d23ced2e27ad\"" Feb 13 19:50:03.296264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount911969754.mount: Deactivated successfully. Feb 13 19:50:03.350232 systemd[1]: Started cri-containerd-ce8c8fac6f6a328a91bc3d239425737da052edcfef8bfbe5e575d23ced2e27ad.scope - libcontainer container ce8c8fac6f6a328a91bc3d239425737da052edcfef8bfbe5e575d23ced2e27ad. 
Feb 13 19:50:03.402765 containerd[1939]: time="2025-02-13T19:50:03.402695062Z" level=info msg="StartContainer for \"ce8c8fac6f6a328a91bc3d239425737da052edcfef8bfbe5e575d23ced2e27ad\" returns successfully" Feb 13 19:50:03.686734 systemd-networkd[1758]: cni0: Gained IPv6LL Feb 13 19:50:04.182339 kubelet[3075]: I0213 19:50:04.182171 3075 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-tcdzx" podStartSLOduration=22.18214669 podStartE2EDuration="22.18214669s" podCreationTimestamp="2025-02-13 19:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:50:04.155905498 +0000 UTC m=+26.512035889" watchObservedRunningTime="2025-02-13 19:50:04.18214669 +0000 UTC m=+26.538276973" Feb 13 19:50:05.030882 systemd-networkd[1758]: veth8b7e7667: Gained IPv6LL Feb 13 19:50:07.257052 ntpd[1898]: Listen normally on 9 cni0 192.168.0.1:123 Feb 13 19:50:07.257937 ntpd[1898]: 13 Feb 19:50:07 ntpd[1898]: Listen normally on 9 cni0 192.168.0.1:123 Feb 13 19:50:07.257937 ntpd[1898]: 13 Feb 19:50:07 ntpd[1898]: Listen normally on 10 cni0 [fe80::e0fd:5fff:fec4:e804%5]:123 Feb 13 19:50:07.257937 ntpd[1898]: 13 Feb 19:50:07 ntpd[1898]: Listen normally on 11 veth6e69de23 [fe80::3486:ccff:fe41:6a44%6]:123 Feb 13 19:50:07.257937 ntpd[1898]: 13 Feb 19:50:07 ntpd[1898]: Listen normally on 12 veth8b7e7667 [fe80::b4d9:4cff:febc:222d%7]:123 Feb 13 19:50:07.257192 ntpd[1898]: Listen normally on 10 cni0 [fe80::e0fd:5fff:fec4:e804%5]:123 Feb 13 19:50:07.257272 ntpd[1898]: Listen normally on 11 veth6e69de23 [fe80::3486:ccff:fe41:6a44%6]:123 Feb 13 19:50:07.257340 ntpd[1898]: Listen normally on 12 veth8b7e7667 [fe80::b4d9:4cff:febc:222d%7]:123 Feb 13 19:50:21.037158 systemd[1]: Started sshd@5-172.31.20.134:22-139.178.89.65:57888.service - OpenSSH per-connection server daemon (139.178.89.65:57888). Feb 13 19:50:21.218324 sshd[4218]: Accepted publickey for core from 139.178.89.65 port 57888 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:21.221599 sshd[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:21.231325 systemd-logind[1908]: New session 6 of user core. Feb 13 19:50:21.248113 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:50:21.526937 sshd[4218]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:21.536080 systemd[1]: sshd@5-172.31.20.134:22-139.178.89.65:57888.service: Deactivated successfully. Feb 13 19:50:21.539585 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:50:21.545108 systemd-logind[1908]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:50:21.547328 systemd-logind[1908]: Removed session 6. Feb 13 19:50:26.571513 systemd[1]: Started sshd@6-172.31.20.134:22-139.178.89.65:53470.service - OpenSSH per-connection server daemon (139.178.89.65:53470). Feb 13 19:50:26.745141 sshd[4258]: Accepted publickey for core from 139.178.89.65 port 53470 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:26.748012 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:26.757913 systemd-logind[1908]: New session 7 of user core. Feb 13 19:50:26.762845 systemd[1]: Started session-7.scope - Session 7 of User core. 
Feb 13 19:50:27.018593 sshd[4258]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:27.024511 systemd[1]: sshd@6-172.31.20.134:22-139.178.89.65:53470.service: Deactivated successfully. Feb 13 19:50:27.029615 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:50:27.031747 systemd-logind[1908]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:50:27.034405 systemd-logind[1908]: Removed session 7. Feb 13 19:50:32.060071 systemd[1]: Started sshd@7-172.31.20.134:22-139.178.89.65:53476.service - OpenSSH per-connection server daemon (139.178.89.65:53476). Feb 13 19:50:32.246786 sshd[4314]: Accepted publickey for core from 139.178.89.65 port 53476 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:32.249372 sshd[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:32.258185 systemd-logind[1908]: New session 8 of user core. Feb 13 19:50:32.264843 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:50:32.515990 sshd[4314]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:32.522915 systemd[1]: sshd@7-172.31.20.134:22-139.178.89.65:53476.service: Deactivated successfully. Feb 13 19:50:32.529267 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:50:32.531342 systemd-logind[1908]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:50:32.534219 systemd-logind[1908]: Removed session 8. Feb 13 19:50:37.556095 systemd[1]: Started sshd@8-172.31.20.134:22-139.178.89.65:50408.service - OpenSSH per-connection server daemon (139.178.89.65:50408). Feb 13 19:50:37.725652 sshd[4349]: Accepted publickey for core from 139.178.89.65 port 50408 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:37.728707 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:37.739829 systemd-logind[1908]: New session 9 of user core. Feb 13 19:50:37.745354 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:50:38.003528 sshd[4349]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:38.013678 systemd[1]: sshd@8-172.31.20.134:22-139.178.89.65:50408.service: Deactivated successfully. Feb 13 19:50:38.019043 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:50:38.020888 systemd-logind[1908]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:50:38.023633 systemd-logind[1908]: Removed session 9. Feb 13 19:50:38.047088 systemd[1]: Started sshd@9-172.31.20.134:22-139.178.89.65:50422.service - OpenSSH per-connection server daemon (139.178.89.65:50422). Feb 13 19:50:38.224452 sshd[4365]: Accepted publickey for core from 139.178.89.65 port 50422 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:38.227863 sshd[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:38.235859 systemd-logind[1908]: New session 10 of user core. Feb 13 19:50:38.239837 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:50:38.591914 sshd[4365]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:38.601749 systemd[1]: sshd@9-172.31.20.134:22-139.178.89.65:50422.service: Deactivated successfully. Feb 13 19:50:38.609460 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:50:38.613900 systemd-logind[1908]: Session 10 logged out. Waiting for processes to exit. 
Feb 13 19:50:38.654403 systemd[1]: Started sshd@10-172.31.20.134:22-139.178.89.65:50436.service - OpenSSH per-connection server daemon (139.178.89.65:50436). Feb 13 19:50:38.659488 systemd-logind[1908]: Removed session 10. Feb 13 19:50:38.853365 sshd[4376]: Accepted publickey for core from 139.178.89.65 port 50436 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:38.856817 sshd[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:38.866867 systemd-logind[1908]: New session 11 of user core. Feb 13 19:50:38.872990 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:50:39.119423 sshd[4376]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:39.125994 systemd[1]: sshd@10-172.31.20.134:22-139.178.89.65:50436.service: Deactivated successfully. Feb 13 19:50:39.130413 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:50:39.131943 systemd-logind[1908]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:50:39.133893 systemd-logind[1908]: Removed session 11. Feb 13 19:50:44.161187 systemd[1]: Started sshd@11-172.31.20.134:22-139.178.89.65:50448.service - OpenSSH per-connection server daemon (139.178.89.65:50448). Feb 13 19:50:44.347484 sshd[4411]: Accepted publickey for core from 139.178.89.65 port 50448 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:44.350762 sshd[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:44.360615 systemd-logind[1908]: New session 12 of user core. Feb 13 19:50:44.370994 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:50:44.637344 sshd[4411]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:44.643641 systemd[1]: sshd@11-172.31.20.134:22-139.178.89.65:50448.service: Deactivated successfully. Feb 13 19:50:44.647947 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:50:44.652497 systemd-logind[1908]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:50:44.655262 systemd-logind[1908]: Removed session 12. Feb 13 19:50:44.677109 systemd[1]: Started sshd@12-172.31.20.134:22-139.178.89.65:51574.service - OpenSSH per-connection server daemon (139.178.89.65:51574). Feb 13 19:50:44.857315 sshd[4424]: Accepted publickey for core from 139.178.89.65 port 51574 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:44.859970 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:44.868625 systemd-logind[1908]: New session 13 of user core. Feb 13 19:50:44.872813 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:50:45.179129 sshd[4424]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:45.185965 systemd[1]: sshd@12-172.31.20.134:22-139.178.89.65:51574.service: Deactivated successfully. Feb 13 19:50:45.189252 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:50:45.190429 systemd-logind[1908]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:50:45.192832 systemd-logind[1908]: Removed session 13. Feb 13 19:50:45.222106 systemd[1]: Started sshd@13-172.31.20.134:22-139.178.89.65:51590.service - OpenSSH per-connection server daemon (139.178.89.65:51590). 
Feb 13 19:50:45.402596 sshd[4435]: Accepted publickey for core from 139.178.89.65 port 51590 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:45.405872 sshd[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:45.414145 systemd-logind[1908]: New session 14 of user core. Feb 13 19:50:45.424855 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:50:47.928239 sshd[4435]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:47.939774 systemd[1]: sshd@13-172.31.20.134:22-139.178.89.65:51590.service: Deactivated successfully. Feb 13 19:50:47.945330 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:50:47.952911 systemd-logind[1908]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:50:47.990768 systemd[1]: Started sshd@14-172.31.20.134:22-139.178.89.65:51602.service - OpenSSH per-connection server daemon (139.178.89.65:51602). Feb 13 19:50:47.992907 systemd-logind[1908]: Removed session 14. Feb 13 19:50:48.167267 sshd[4474]: Accepted publickey for core from 139.178.89.65 port 51602 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:48.169993 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:48.178417 systemd-logind[1908]: New session 15 of user core. Feb 13 19:50:48.185828 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:50:48.708318 sshd[4474]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:48.716730 systemd[1]: sshd@14-172.31.20.134:22-139.178.89.65:51602.service: Deactivated successfully. Feb 13 19:50:48.720961 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:50:48.722363 systemd-logind[1908]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:50:48.724201 systemd-logind[1908]: Removed session 15. Feb 13 19:50:48.748078 systemd[1]: Started sshd@15-172.31.20.134:22-139.178.89.65:51604.service - OpenSSH per-connection server daemon (139.178.89.65:51604). Feb 13 19:50:48.932744 sshd[4485]: Accepted publickey for core from 139.178.89.65 port 51604 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:48.935378 sshd[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:48.943829 systemd-logind[1908]: New session 16 of user core. Feb 13 19:50:48.951824 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:50:49.193981 sshd[4485]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:49.199371 systemd-logind[1908]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:50:49.200258 systemd[1]: sshd@15-172.31.20.134:22-139.178.89.65:51604.service: Deactivated successfully. Feb 13 19:50:49.203933 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:50:49.208757 systemd-logind[1908]: Removed session 16. Feb 13 19:50:54.237092 systemd[1]: Started sshd@16-172.31.20.134:22-139.178.89.65:51610.service - OpenSSH per-connection server daemon (139.178.89.65:51610). Feb 13 19:50:54.421339 sshd[4519]: Accepted publickey for core from 139.178.89.65 port 51610 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:54.424591 sshd[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:54.432618 systemd-logind[1908]: New session 17 of user core. Feb 13 19:50:54.441045 systemd[1]: Started session-17.scope - Session 17 of User core. 
Feb 13 19:50:54.689863 sshd[4519]: pam_unix(sshd:session): session closed for user core Feb 13 19:50:54.696096 systemd[1]: sshd@16-172.31.20.134:22-139.178.89.65:51610.service: Deactivated successfully. Feb 13 19:50:54.700237 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:50:54.701753 systemd-logind[1908]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:50:54.704296 systemd-logind[1908]: Removed session 17. Feb 13 19:50:59.731525 systemd[1]: Started sshd@17-172.31.20.134:22-139.178.89.65:58664.service - OpenSSH per-connection server daemon (139.178.89.65:58664). Feb 13 19:50:59.913416 sshd[4557]: Accepted publickey for core from 139.178.89.65 port 58664 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:50:59.916423 sshd[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:50:59.924880 systemd-logind[1908]: New session 18 of user core. Feb 13 19:50:59.935984 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:51:00.183261 sshd[4557]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:00.189061 systemd-logind[1908]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:51:00.189628 systemd[1]: sshd@17-172.31.20.134:22-139.178.89.65:58664.service: Deactivated successfully. Feb 13 19:51:00.194359 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:51:00.198429 systemd-logind[1908]: Removed session 18. Feb 13 19:51:05.226096 systemd[1]: Started sshd@18-172.31.20.134:22-139.178.89.65:51286.service - OpenSSH per-connection server daemon (139.178.89.65:51286). Feb 13 19:51:05.410623 sshd[4592]: Accepted publickey for core from 139.178.89.65 port 51286 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:05.414699 sshd[4592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:05.425298 systemd-logind[1908]: New session 19 of user core. Feb 13 19:51:05.432955 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:51:05.679777 sshd[4592]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:05.687473 systemd[1]: sshd@18-172.31.20.134:22-139.178.89.65:51286.service: Deactivated successfully. Feb 13 19:51:05.692295 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:51:05.694214 systemd-logind[1908]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:51:05.696430 systemd-logind[1908]: Removed session 19. Feb 13 19:51:10.720110 systemd[1]: Started sshd@19-172.31.20.134:22-139.178.89.65:51290.service - OpenSSH per-connection server daemon (139.178.89.65:51290). Feb 13 19:51:10.887289 sshd[4626]: Accepted publickey for core from 139.178.89.65 port 51290 ssh2: RSA SHA256:H27J0U/EpkvOcUDI+hexgwVcKe7FsK9V5j851fkSvZ4 Feb 13 19:51:10.890008 sshd[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:10.897620 systemd-logind[1908]: New session 20 of user core. Feb 13 19:51:10.907792 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:51:11.149020 sshd[4626]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:11.155288 systemd[1]: sshd@19-172.31.20.134:22-139.178.89.65:51290.service: Deactivated successfully. Feb 13 19:51:11.160244 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:51:11.165281 systemd-logind[1908]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:51:11.167428 systemd-logind[1908]: Removed session 20. 
Feb 13 19:51:25.118615 systemd[1]: cri-containerd-654fcd507060851b652c54b85e22f7d269a295792ed6af3eef30041044e5acba.scope: Deactivated successfully. Feb 13 19:51:25.119122 systemd[1]: cri-containerd-654fcd507060851b652c54b85e22f7d269a295792ed6af3eef30041044e5acba.scope: Consumed 4.190s CPU time, 18.6M memory peak, 0B memory swap peak. Feb 13 19:51:25.166836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-654fcd507060851b652c54b85e22f7d269a295792ed6af3eef30041044e5acba-rootfs.mount: Deactivated successfully. Feb 13 19:51:25.170519 containerd[1939]: time="2025-02-13T19:51:25.170389600Z" level=info msg="shim disconnected" id=654fcd507060851b652c54b85e22f7d269a295792ed6af3eef30041044e5acba namespace=k8s.io Feb 13 19:51:25.170519 containerd[1939]: time="2025-02-13T19:51:25.170516584Z" level=warning msg="cleaning up after shim disconnected" id=654fcd507060851b652c54b85e22f7d269a295792ed6af3eef30041044e5acba namespace=k8s.io Feb 13 19:51:25.171437 containerd[1939]: time="2025-02-13T19:51:25.170539420Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:25.345705 kubelet[3075]: I0213 19:51:25.344206 3075 scope.go:117] "RemoveContainer" containerID="654fcd507060851b652c54b85e22f7d269a295792ed6af3eef30041044e5acba" Feb 13 19:51:25.349960 containerd[1939]: time="2025-02-13T19:51:25.349871225Z" level=info msg="CreateContainer within sandbox \"9876929367db8cd582336d0380fbd66ffe6f601085399be6d96f7008fb449a30\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 19:51:25.373590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1731735590.mount: Deactivated successfully. Feb 13 19:51:25.381794 containerd[1939]: time="2025-02-13T19:51:25.381728129Z" level=info msg="CreateContainer within sandbox \"9876929367db8cd582336d0380fbd66ffe6f601085399be6d96f7008fb449a30\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"92b919d45f46ac4ad991502d0aee0abba01f203e0e1ce55257b9211cf6ef4bb3\"" Feb 13 19:51:25.383486 containerd[1939]: time="2025-02-13T19:51:25.383068877Z" level=info msg="StartContainer for \"92b919d45f46ac4ad991502d0aee0abba01f203e0e1ce55257b9211cf6ef4bb3\"" Feb 13 19:51:25.443527 systemd[1]: Started cri-containerd-92b919d45f46ac4ad991502d0aee0abba01f203e0e1ce55257b9211cf6ef4bb3.scope - libcontainer container 92b919d45f46ac4ad991502d0aee0abba01f203e0e1ce55257b9211cf6ef4bb3. Feb 13 19:51:25.518639 containerd[1939]: time="2025-02-13T19:51:25.518331234Z" level=info msg="StartContainer for \"92b919d45f46ac4ad991502d0aee0abba01f203e0e1ce55257b9211cf6ef4bb3\" returns successfully" Feb 13 19:51:29.684010 systemd[1]: cri-containerd-735d58c88c7903c8f5fa90aab77560970519e87fc4701ece53c3133888655196.scope: Deactivated successfully. Feb 13 19:51:29.684836 systemd[1]: cri-containerd-735d58c88c7903c8f5fa90aab77560970519e87fc4701ece53c3133888655196.scope: Consumed 3.250s CPU time, 16.4M memory peak, 0B memory swap peak. Feb 13 19:51:29.727927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-735d58c88c7903c8f5fa90aab77560970519e87fc4701ece53c3133888655196-rootfs.mount: Deactivated successfully. 
Feb 13 19:51:29.738295 containerd[1939]: time="2025-02-13T19:51:29.738214643Z" level=info msg="shim disconnected" id=735d58c88c7903c8f5fa90aab77560970519e87fc4701ece53c3133888655196 namespace=k8s.io Feb 13 19:51:29.738295 containerd[1939]: time="2025-02-13T19:51:29.738290279Z" level=warning msg="cleaning up after shim disconnected" id=735d58c88c7903c8f5fa90aab77560970519e87fc4701ece53c3133888655196 namespace=k8s.io Feb 13 19:51:29.739022 containerd[1939]: time="2025-02-13T19:51:29.738312455Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:30.315743 kubelet[3075]: E0213 19:51:30.315396 3075 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-134?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 19:51:30.364426 kubelet[3075]: I0213 19:51:30.364357 3075 scope.go:117] "RemoveContainer" containerID="735d58c88c7903c8f5fa90aab77560970519e87fc4701ece53c3133888655196" Feb 13 19:51:30.367298 containerd[1939]: time="2025-02-13T19:51:30.367240882Z" level=info msg="CreateContainer within sandbox \"8da9d2d5b0ed0b31abac8b54b491d8f4e88a7258b2886d719795e4268c5fad30\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 13 19:51:30.394063 containerd[1939]: time="2025-02-13T19:51:30.393925294Z" level=info msg="CreateContainer within sandbox \"8da9d2d5b0ed0b31abac8b54b491d8f4e88a7258b2886d719795e4268c5fad30\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a6211e097f6cf4dca7efbc1e04531041af2e86c7d4588267c391d7f21ac92f7b\"" Feb 13 19:51:30.394833 containerd[1939]: time="2025-02-13T19:51:30.394795306Z" level=info msg="StartContainer for \"a6211e097f6cf4dca7efbc1e04531041af2e86c7d4588267c391d7f21ac92f7b\"" Feb 13 19:51:30.447872 systemd[1]: Started cri-containerd-a6211e097f6cf4dca7efbc1e04531041af2e86c7d4588267c391d7f21ac92f7b.scope - libcontainer container a6211e097f6cf4dca7efbc1e04531041af2e86c7d4588267c391d7f21ac92f7b. Feb 13 19:51:30.538192 containerd[1939]: time="2025-02-13T19:51:30.537879695Z" level=info msg="StartContainer for \"a6211e097f6cf4dca7efbc1e04531041af2e86c7d4588267c391d7f21ac92f7b\" returns successfully" Feb 13 19:51:40.315814 kubelet[3075]: E0213 19:51:40.315722 3075 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-134?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
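The two restarts above (kube-controller-manager and then kube-scheduler recreated as Attempt:1) coincide with the kubelet's repeated "Failed to update lease" client timeouts against https://172.31.20.134:6443, which suggests the API server was slow or briefly unreachable; whether that is cause or effect cannot be determined from this log alone. A hedged way to follow up on a node in this state, assuming crictl and kubectl are available and the exited container has not yet been garbage-collected; none of these commands appear in the log:

    # list and read the exited control-plane container (ID taken from the log above)
    crictl ps -a --name kube-controller-manager
    crictl logs 654fcd507060851b652c54b85e22f7d269a295792ed6af3eef30041044e5acba
    # inspect the node lease the kubelet failed to renew
    kubectl -n kube-node-lease get lease ip-172-31-20-134 -o yaml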