Feb 13 19:03:22.211156 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:03:22.211210 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025
Feb 13 19:03:22.211236 kernel: KASLR disabled due to lack of seed
Feb 13 19:03:22.211253 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:03:22.211269 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98
Feb 13 19:03:22.211284 kernel: secureboot: Secure boot disabled
Feb 13 19:03:22.211301 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:03:22.211316 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:03:22.211332 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:03:22.211348 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:03:22.211367 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:03:22.211383 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:03:22.211399 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:03:22.211414 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:03:22.211433 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:03:22.211453 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:03:22.211471 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:03:22.211488 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:03:22.211504 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:03:22.211557 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:03:22.211575 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:03:22.211591 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:03:22.211609 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:03:22.211625 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:03:22.211642 kernel: Zone ranges:
Feb 13 19:03:22.211658 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:03:22.211680 kernel: DMA32 empty
Feb 13 19:03:22.211697 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:03:22.211713 kernel: Movable zone start for each node
Feb 13 19:03:22.211729 kernel: Early memory node ranges
Feb 13 19:03:22.211746 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:03:22.211762 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:03:22.211778 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:03:22.211795 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:03:22.211811 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:03:22.211827 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:03:22.211843 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:03:22.211859 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:03:22.211880 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:03:22.211900 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:03:22.211924 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:03:22.211942 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:03:22.211959 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:03:22.211980 kernel: psci: Trusted OS migration not required
Feb 13 19:03:22.211998 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:03:22.212014 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:03:22.212031 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:03:22.212049 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:03:22.212088 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:03:22.212108 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:03:22.212125 kernel: CPU features: detected: Spectre-v2
Feb 13 19:03:22.212142 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:03:22.212159 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:03:22.212176 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:03:22.212193 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:03:22.212216 kernel: alternatives: applying boot alternatives
Feb 13 19:03:22.212235 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:03:22.212253 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:03:22.212271 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:03:22.212288 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:03:22.212305 kernel: Fallback order for Node 0: 0
Feb 13 19:03:22.212322 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:03:22.212338 kernel: Policy zone: Normal
Feb 13 19:03:22.212355 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:03:22.212372 kernel: software IO TLB: area num 2.
Feb 13 19:03:22.212393 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:03:22.212412 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved)
Feb 13 19:03:22.212429 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:03:22.212446 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:03:22.212464 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:03:22.212482 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:03:22.212499 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:03:22.212516 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:03:22.212533 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:03:22.212550 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:03:22.212568 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:03:22.212589 kernel: GICv3: 96 SPIs implemented
Feb 13 19:03:22.212606 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:03:22.212623 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:03:22.212641 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:03:22.212658 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:03:22.212676 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:03:22.212694 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:03:22.212711 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:03:22.212729 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:03:22.212747 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:03:22.212764 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:03:22.212782 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:03:22.212803 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:03:22.212820 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:03:22.212838 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:03:22.212857 kernel: Console: colour dummy device 80x25
Feb 13 19:03:22.212875 kernel: printk: console [tty1] enabled
Feb 13 19:03:22.212893 kernel: ACPI: Core revision 20230628
Feb 13 19:03:22.212911 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:03:22.212928 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:03:22.212947 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:03:22.212969 kernel: landlock: Up and running.
Feb 13 19:03:22.212986 kernel: SELinux: Initializing.
Feb 13 19:03:22.213004 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:03:22.213022 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:03:22.213040 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:03:22.213096 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:03:22.213121 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:03:22.213141 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:03:22.213159 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:03:22.213184 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:03:22.213203 kernel: Remapping and enabling EFI services.
Feb 13 19:03:22.213221 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:03:22.213239 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:03:22.213258 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:03:22.213276 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:03:22.213294 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:03:22.213311 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:03:22.213329 kernel: SMP: Total of 2 processors activated.
Feb 13 19:03:22.213351 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:03:22.213369 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:03:22.213387 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:03:22.213416 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:03:22.213439 kernel: alternatives: applying system-wide alternatives
Feb 13 19:03:22.213457 kernel: devtmpfs: initialized
Feb 13 19:03:22.213475 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:03:22.213494 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:03:22.213512 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:03:22.213548 kernel: SMBIOS 3.0.0 present.
Feb 13 19:03:22.213574 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:03:22.213593 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:03:22.213611 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:03:22.213630 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:03:22.213649 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:03:22.213667 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:03:22.213685 kernel: audit: type=2000 audit(0.283:1): state=initialized audit_enabled=0 res=1
Feb 13 19:03:22.213708 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:03:22.213726 kernel: cpuidle: using governor menu
Feb 13 19:03:22.213745 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:03:22.213764 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:03:22.213784 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:03:22.213802 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:03:22.213821 kernel: Modules: 17440 pages in range for non-PLT usage
Feb 13 19:03:22.213839 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 19:03:22.213858 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:03:22.213881 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:03:22.213900 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:03:22.213919 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:03:22.213938 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:03:22.213956 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:03:22.213975 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:03:22.213993 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:03:22.214012 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:03:22.214030 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:03:22.214053 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:03:22.216359 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:03:22.216380 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:03:22.216398 kernel: ACPI: Interpreter enabled
Feb 13 19:03:22.216417 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:03:22.216435 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:03:22.216454 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:03:22.216764 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:03:22.216981 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:03:22.217219 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:03:22.217425 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:03:22.217649 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:03:22.217675 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:03:22.217695 kernel: acpiphp: Slot [1] registered
Feb 13 19:03:22.217713 kernel: acpiphp: Slot [2] registered
Feb 13 19:03:22.217731 kernel: acpiphp: Slot [3] registered
Feb 13 19:03:22.217756 kernel: acpiphp: Slot [4] registered
Feb 13 19:03:22.217774 kernel: acpiphp: Slot [5] registered
Feb 13 19:03:22.217792 kernel: acpiphp: Slot [6] registered
Feb 13 19:03:22.217810 kernel: acpiphp: Slot [7] registered
Feb 13 19:03:22.217828 kernel: acpiphp: Slot [8] registered
Feb 13 19:03:22.217846 kernel: acpiphp: Slot [9] registered
Feb 13 19:03:22.217864 kernel: acpiphp: Slot [10] registered
Feb 13 19:03:22.217882 kernel: acpiphp: Slot [11] registered
Feb 13 19:03:22.217900 kernel: acpiphp: Slot [12] registered
Feb 13 19:03:22.217918 kernel: acpiphp: Slot [13] registered
Feb 13 19:03:22.217941 kernel: acpiphp: Slot [14] registered
Feb 13 19:03:22.217959 kernel: acpiphp: Slot [15] registered
Feb 13 19:03:22.217977 kernel: acpiphp: Slot [16] registered
Feb 13 19:03:22.217995 kernel: acpiphp: Slot [17] registered
Feb 13 19:03:22.218013 kernel: acpiphp: Slot [18] registered
Feb 13 19:03:22.218032 kernel: acpiphp: Slot [19] registered
Feb 13 19:03:22.218050 kernel: acpiphp: Slot [20] registered
Feb 13 19:03:22.218110 kernel: acpiphp: Slot [21] registered
Feb 13 19:03:22.218132 kernel: acpiphp: Slot [22] registered
Feb 13 19:03:22.218156 kernel: acpiphp: Slot [23] registered
Feb 13 19:03:22.218175 kernel: acpiphp: Slot [24] registered
Feb 13 19:03:22.218193 kernel: acpiphp: Slot [25] registered
Feb 13 19:03:22.218212 kernel: acpiphp: Slot [26] registered
Feb 13 19:03:22.218230 kernel: acpiphp: Slot [27] registered
Feb 13 19:03:22.218248 kernel: acpiphp: Slot [28] registered
Feb 13 19:03:22.218266 kernel: acpiphp: Slot [29] registered
Feb 13 19:03:22.218284 kernel: acpiphp: Slot [30] registered
Feb 13 19:03:22.218302 kernel: acpiphp: Slot [31] registered
Feb 13 19:03:22.218320 kernel: PCI host bridge to bus 0000:00
Feb 13 19:03:22.218535 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:03:22.218720 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:03:22.218904 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:03:22.219114 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:03:22.219366 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:03:22.219592 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:03:22.219813 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:03:22.220048 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:03:22.220311 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:03:22.220516 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:03:22.220733 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:03:22.220940 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:03:22.221237 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:03:22.221446 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:03:22.221668 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:03:22.221866 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:03:22.222080 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:03:22.222337 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:03:22.222545 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:03:22.223305 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:03:22.223537 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:03:22.223714 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:03:22.223891 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:03:22.223915 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:03:22.223935 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:03:22.223953 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:03:22.223972 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:03:22.223990 kernel: iommu: Default domain type: Translated
Feb 13 19:03:22.224014 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:03:22.224032 kernel: efivars: Registered efivars operations
Feb 13 19:03:22.224051 kernel: vgaarb: loaded
Feb 13 19:03:22.224088 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:03:22.224109 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:03:22.224127 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:03:22.224145 kernel: pnp: PnP ACPI init
Feb 13 19:03:22.228455 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:03:22.228516 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:03:22.228536 kernel: NET: Registered PF_INET protocol family
Feb 13 19:03:22.228555 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:03:22.228575 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:03:22.228594 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:03:22.228613 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:03:22.228632 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:03:22.228651 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:03:22.228670 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:03:22.228694 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:03:22.228713 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:03:22.228731 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:03:22.228750 kernel: kvm [1]: HYP mode not available
Feb 13 19:03:22.228770 kernel: Initialise system trusted keyrings
Feb 13 19:03:22.228789 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:03:22.228808 kernel: Key type asymmetric registered
Feb 13 19:03:22.228826 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:03:22.228844 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:03:22.228867 kernel: io scheduler mq-deadline registered
Feb 13 19:03:22.228885 kernel: io scheduler kyber registered
Feb 13 19:03:22.228904 kernel: io scheduler bfq registered
Feb 13 19:03:22.229890 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:03:22.229975 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:03:22.229999 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:03:22.230019 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:03:22.230039 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:03:22.230089 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:03:22.230111 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:03:22.230356 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:03:22.230385 kernel: printk: console [ttyS0] disabled
Feb 13 19:03:22.230405 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:03:22.230424 kernel: printk: console [ttyS0] enabled
Feb 13 19:03:22.230442 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:03:22.230461 kernel: thunder_xcv, ver 1.0
Feb 13 19:03:22.230480 kernel: thunder_bgx, ver 1.0
Feb 13 19:03:22.230508 kernel: nicpf, ver 1.0
Feb 13 19:03:22.230527 kernel: nicvf, ver 1.0
Feb 13 19:03:22.230751 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:03:22.230938 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:03:21 UTC (1739473401)
Feb 13 19:03:22.230963 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:03:22.230982 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:03:22.231001 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:03:22.231019 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:03:22.231043 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:03:22.231087 kernel: Segment Routing with IPv6
Feb 13 19:03:22.231109 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:03:22.231128 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:03:22.231146 kernel: Key type dns_resolver registered
Feb 13 19:03:22.231165 kernel: registered taskstats version 1
Feb 13 19:03:22.231183 kernel: Loading compiled-in X.509 certificates
Feb 13 19:03:22.231202 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936'
Feb 13 19:03:22.231221 kernel: Key type .fscrypt registered
Feb 13 19:03:22.231246 kernel: Key type fscrypt-provisioning registered
Feb 13 19:03:22.231264 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:03:22.231283 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:03:22.231302 kernel: ima: No architecture policies found
Feb 13 19:03:22.231321 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:03:22.231340 kernel: clk: Disabling unused clocks
Feb 13 19:03:22.231358 kernel: Freeing unused kernel memory: 39680K
Feb 13 19:03:22.231377 kernel: Run /init as init process
Feb 13 19:03:22.231396 kernel: with arguments:
Feb 13 19:03:22.231414 kernel: /init
Feb 13 19:03:22.231438 kernel: with environment:
Feb 13 19:03:22.231456 kernel: HOME=/
Feb 13 19:03:22.231475 kernel: TERM=linux
Feb 13 19:03:22.231493 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:03:22.231517 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:03:22.231542 systemd[1]: Detected virtualization amazon.
Feb 13 19:03:22.231563 systemd[1]: Detected architecture arm64.
Feb 13 19:03:22.231588 systemd[1]: Running in initrd.
Feb 13 19:03:22.231608 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:03:22.231627 systemd[1]: Hostname set to <localhost>.
Feb 13 19:03:22.231648 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:03:22.231668 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:03:22.231688 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:03:22.231709 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:03:22.231731 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:03:22.231758 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:03:22.231780 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:03:22.231801 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:03:22.231825 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:03:22.231846 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:03:22.231866 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:03:22.231886 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:03:22.231911 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:03:22.231932 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:03:22.231951 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:03:22.231971 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:03:22.231992 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:03:22.232012 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:03:22.232032 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:03:22.232052 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:03:22.232265 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:03:22.232295 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:03:22.232316 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:03:22.232336 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:03:22.232357 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:03:22.232377 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:03:22.232397 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:03:22.232417 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:03:22.232437 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:03:22.232461 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:03:22.232482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:03:22.232502 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:03:22.232566 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 19:03:22.235224 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:03:22.235253 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:03:22.235281 systemd-journald[251]: Journal started
Feb 13 19:03:22.235327 systemd-journald[251]: Runtime Journal (/run/log/journal/ec24674e53606fcadc32c42c865018d6) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:03:22.238529 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:03:22.226669 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 19:03:22.251096 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:03:22.261123 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:03:22.267113 kernel: Bridge firewalling registered
Feb 13 19:03:22.269243 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 19:03:22.271790 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:03:22.280154 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:03:22.285836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:03:22.293766 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:03:22.310502 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:03:22.324475 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:03:22.331335 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:03:22.339211 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:03:22.350395 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:03:22.360342 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:03:22.385026 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:03:22.398159 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:03:22.420368 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:03:22.441225 systemd-resolved[280]: Positive Trust Anchors:
Feb 13 19:03:22.443157 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:03:22.443222 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:03:22.469937 dracut-cmdline[290]: dracut-dracut-053
Feb 13 19:03:22.469937 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:03:22.627121 kernel: SCSI subsystem initialized
Feb 13 19:03:22.637095 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:03:22.648106 kernel: iscsi: registered transport (tcp)
Feb 13 19:03:22.672324 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:03:22.672497 kernel: QLogic iSCSI HBA Driver
Feb 13 19:03:22.724116 kernel: random: crng init done
Feb 13 19:03:22.724428 systemd-resolved[280]: Defaulting to hostname 'linux'.
Feb 13 19:03:22.728188 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:03:22.732379 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:03:22.766319 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:03:22.778369 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:03:22.820971 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:03:22.821047 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:03:22.823095 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:03:22.888106 kernel: raid6: neonx8 gen() 6710 MB/s
Feb 13 19:03:22.905131 kernel: raid6: neonx4 gen() 6457 MB/s
Feb 13 19:03:22.922099 kernel: raid6: neonx2 gen() 5370 MB/s
Feb 13 19:03:22.939101 kernel: raid6: neonx1 gen() 3927 MB/s
Feb 13 19:03:22.956118 kernel: raid6: int64x8 gen() 3801 MB/s
Feb 13 19:03:22.973099 kernel: raid6: int64x4 gen() 3687 MB/s
Feb 13 19:03:22.990108 kernel: raid6: int64x2 gen() 3568 MB/s
Feb 13 19:03:23.007912 kernel: raid6: int64x1 gen() 2761 MB/s
Feb 13 19:03:23.008015 kernel: raid6: using algorithm neonx8 gen() 6710 MB/s
Feb 13 19:03:23.025862 kernel: raid6: .... xor() 4899 MB/s, rmw enabled
Feb 13 19:03:23.025920 kernel: raid6: using neon recovery algorithm
Feb 13 19:03:23.034112 kernel: xor: measuring software checksum speed
Feb 13 19:03:23.036199 kernel: 8regs : 10058 MB/sec
Feb 13 19:03:23.036231 kernel: 32regs : 11923 MB/sec
Feb 13 19:03:23.037356 kernel: arm64_neon : 9517 MB/sec
Feb 13 19:03:23.037400 kernel: xor: using function: 32regs (11923 MB/sec)
Feb 13 19:03:23.124136 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:03:23.147481 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:03:23.158529 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:03:23.194123 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Feb 13 19:03:23.202908 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:03:23.222520 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:03:23.270778 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Feb 13 19:03:23.328423 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:03:23.339396 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:03:23.459696 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:03:23.471440 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:03:23.529011 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:03:23.536256 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:03:23.539606 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:03:23.543032 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:03:23.558373 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:03:23.602189 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:03:23.692200 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:03:23.692284 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 19:03:23.713223 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:03:23.713496 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:03:23.713776 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:87:b9:ba:1f:6b
Feb 13 19:03:23.695514 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:03:23.695733 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:03:23.701934 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:03:23.704133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:03:23.704460 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:03:23.707423 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:03:23.722745 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:03:23.732904 (udev-worker)[544]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:03:23.763723 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 19:03:23.763790 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:03:23.773618 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:03:23.781108 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:03:23.787121 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:03:23.787207 kernel: GPT:9289727 != 16777215
Feb 13 19:03:23.787233 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:03:23.790036 kernel: GPT:9289727 != 16777215
Feb 13 19:03:23.790119 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:03:23.790147 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:03:23.791220 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:03:23.822206 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:03:23.875159 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (542)
Feb 13 19:03:23.929132 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (544)
Feb 13 19:03:23.999101 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:03:24.021827 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:03:24.072269 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:03:24.084771 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:03:24.087479 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:03:24.104551 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:03:24.121406 disk-uuid[663]: Primary Header is updated.
Feb 13 19:03:24.121406 disk-uuid[663]: Secondary Entries is updated.
Feb 13 19:03:24.121406 disk-uuid[663]: Secondary Header is updated.
Feb 13 19:03:24.131083 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:03:24.141094 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:03:25.153934 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:03:25.154031 disk-uuid[664]: The operation has completed successfully.
Feb 13 19:03:25.375674 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:03:25.378268 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:03:25.445349 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:03:25.452318 sh[923]: Success
Feb 13 19:03:25.471128 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:03:25.614444 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:03:25.639835 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:03:25.652859 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:03:25.681680 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44
Feb 13 19:03:25.681783 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:03:25.681826 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:03:25.684782 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:03:25.684862 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:03:25.817127 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:03:25.853663 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:03:25.858570 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:03:25.868502 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:03:25.879420 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:03:25.907595 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:03:25.907689 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:03:25.909393 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:03:25.916585 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:03:25.937832 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:03:25.940650 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:03:25.954737 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:03:25.971691 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:03:26.119384 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:03:26.147467 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:03:26.212494 systemd-networkd[1117]: lo: Link UP
Feb 13 19:03:26.212520 systemd-networkd[1117]: lo: Gained carrier
Feb 13 19:03:26.222501 systemd-networkd[1117]: Enumeration completed
Feb 13 19:03:26.223335 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:03:26.223343 systemd-networkd[1117]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:03:26.225166 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:03:26.231825 systemd[1]: Reached target network.target - Network.
Feb 13 19:03:26.233258 systemd-networkd[1117]: eth0: Link UP
Feb 13 19:03:26.233273 systemd-networkd[1117]: eth0: Gained carrier
Feb 13 19:03:26.233291 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:03:26.260243 systemd-networkd[1117]: eth0: DHCPv4 address 172.31.26.128/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:03:26.334976 ignition[1024]: Ignition 2.20.0
Feb 13 19:03:26.335008 ignition[1024]: Stage: fetch-offline
Feb 13 19:03:26.335539 ignition[1024]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:26.336646 ignition[1024]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:26.338510 ignition[1024]: Ignition finished successfully
Feb 13 19:03:26.345781 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:03:26.357375 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:03:26.389767 ignition[1126]: Ignition 2.20.0
Feb 13 19:03:26.389803 ignition[1126]: Stage: fetch
Feb 13 19:03:26.391025 ignition[1126]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:26.391055 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:26.391353 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:26.416345 ignition[1126]: PUT result: OK
Feb 13 19:03:26.420027 ignition[1126]: parsed url from cmdline: ""
Feb 13 19:03:26.420049 ignition[1126]: no config URL provided
Feb 13 19:03:26.420097 ignition[1126]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:03:26.420128 ignition[1126]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:03:26.420166 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:26.422368 ignition[1126]: PUT result: OK
Feb 13 19:03:26.422475 ignition[1126]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:03:26.436092 unknown[1126]: fetched base config from "system"
Feb 13 19:03:26.424855 ignition[1126]: GET result: OK
Feb 13 19:03:26.436118 unknown[1126]: fetched base config from "system"
Feb 13 19:03:26.425012 ignition[1126]: parsing config with SHA512: 9d382ebf16fd904bced2408b98b94cb9b723aba1621949ffc676915594fc38a7760a3dfa70245aaa096340b6f8367b8a3bc63ee14e336a37a458085142a4545e
Feb 13 19:03:26.436133 unknown[1126]: fetched user config from "aws"
Feb 13 19:03:26.436780 ignition[1126]: fetch: fetch complete
Feb 13 19:03:26.447256 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:03:26.436797 ignition[1126]: fetch: fetch passed
Feb 13 19:03:26.436908 ignition[1126]: Ignition finished successfully
Feb 13 19:03:26.464638 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:03:26.512360 ignition[1132]: Ignition 2.20.0
Feb 13 19:03:26.512386 ignition[1132]: Stage: kargs
Feb 13 19:03:26.513297 ignition[1132]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:26.513325 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:26.513495 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:26.517734 ignition[1132]: PUT result: OK
Feb 13 19:03:26.526661 ignition[1132]: kargs: kargs passed
Feb 13 19:03:26.526812 ignition[1132]: Ignition finished successfully
Feb 13 19:03:26.532253 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:03:26.548412 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:03:26.576925 ignition[1138]: Ignition 2.20.0
Feb 13 19:03:26.577569 ignition[1138]: Stage: disks
Feb 13 19:03:26.579297 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:26.579329 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:26.579507 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:26.583967 ignition[1138]: PUT result: OK
Feb 13 19:03:26.593233 ignition[1138]: disks: disks passed
Feb 13 19:03:26.593392 ignition[1138]: Ignition finished successfully
Feb 13 19:03:26.597146 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:03:26.600778 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:03:26.603361 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:03:26.607118 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:03:26.609204 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:03:26.611717 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:03:26.638103 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:03:26.688320 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:03:26.696660 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:03:26.709340 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:03:26.818107 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none.
Feb 13 19:03:26.820329 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:03:26.823241 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:03:26.839300 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:03:26.844389 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:03:26.848231 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:03:26.848350 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:03:26.848409 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:03:26.879688 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:03:26.891389 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:03:26.900205 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1165)
Feb 13 19:03:26.906696 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:03:26.906841 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:03:26.909845 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:03:26.917159 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:03:26.920681 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:03:27.326251 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:03:27.336873 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:03:27.360124 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:03:27.369541 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:03:27.718253 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:03:27.728281 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:03:27.745377 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:03:27.763349 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:03:27.765705 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:03:27.805767 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:03:27.827258 ignition[1277]: INFO : Ignition 2.20.0
Feb 13 19:03:27.827258 ignition[1277]: INFO : Stage: mount
Feb 13 19:03:27.830767 ignition[1277]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:27.830767 ignition[1277]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:27.830767 ignition[1277]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:27.839266 ignition[1277]: INFO : PUT result: OK
Feb 13 19:03:27.843871 ignition[1277]: INFO : mount: mount passed
Feb 13 19:03:27.843871 ignition[1277]: INFO : Ignition finished successfully
Feb 13 19:03:27.849640 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:03:27.862389 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:03:27.895041 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:03:27.922130 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1290)
Feb 13 19:03:27.926458 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:03:27.926535 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:03:27.926562 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:03:27.933115 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:03:27.937025 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:03:27.975233 ignition[1307]: INFO : Ignition 2.20.0
Feb 13 19:03:27.975233 ignition[1307]: INFO : Stage: files
Feb 13 19:03:27.978729 ignition[1307]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:27.978729 ignition[1307]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:27.978729 ignition[1307]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:27.985669 ignition[1307]: INFO : PUT result: OK
Feb 13 19:03:27.990467 ignition[1307]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:03:27.994123 ignition[1307]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:03:27.994123 ignition[1307]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:03:28.026627 ignition[1307]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:03:28.029480 ignition[1307]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:03:28.034377 ignition[1307]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:03:28.034377 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:03:28.034377 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:03:28.030572 unknown[1307]: wrote ssh authorized keys file for user: core
Feb 13 19:03:28.148213 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:03:28.259025 systemd-networkd[1117]: eth0: Gained IPv6LL
Feb 13 19:03:28.312224 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:03:28.315908 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:03:28.315908 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:03:28.315908 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:03:28.315908 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:03:28.315908 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:03:28.332657 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:03:28.332657 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:03:28.332657 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:03:28.332657 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:03:28.332657 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:03:28.332657 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:03:28.332657 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:03:28.332657 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:03:28.332657 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 19:03:28.669412 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 19:03:29.075355 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 19:03:29.075355 ignition[1307]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 19:03:29.082389 ignition[1307]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:03:29.082389 ignition[1307]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:03:29.082389 ignition[1307]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 19:03:29.082389 ignition[1307]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:03:29.082389 ignition[1307]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:03:29.082389 ignition[1307]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:03:29.082389 ignition[1307]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:03:29.082389 ignition[1307]: INFO : files: files passed
Feb 13 19:03:29.082389 ignition[1307]: INFO : Ignition finished successfully
Feb 13 19:03:29.087595 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:03:29.118336 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:03:29.122541 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:03:29.135755 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:03:29.135975 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:03:29.172745 initrd-setup-root-after-ignition[1335]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:03:29.172745 initrd-setup-root-after-ignition[1335]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:03:29.183508 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:03:29.189452 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:03:29.194039 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:03:29.213589 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:03:29.264709 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:03:29.264941 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:03:29.269531 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:03:29.278045 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:03:29.280516 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:03:29.297029 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:03:29.326228 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:03:29.336483 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:03:29.378949 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:03:29.381971 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:03:29.389349 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:03:29.392519 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:03:29.392964 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:03:29.403930 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:03:29.406919 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:03:29.411463 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:03:29.414301 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:03:29.423701 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:03:29.426920 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:03:29.429545 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:03:29.439285 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:03:29.440327 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:03:29.440966 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:03:29.441881 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:03:29.442764 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:03:29.444750 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:03:29.446555 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:03:29.447326 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:03:29.456563 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:03:29.460227 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:03:29.460508 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:03:29.466899 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:03:29.467386 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:03:29.470336 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:03:29.470569 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:03:29.497653 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:03:29.508573 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:03:29.515345 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:03:29.516428 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:03:29.529261 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:03:29.531272 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:03:29.545846 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:03:29.546151 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:03:29.576612 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:03:29.585129 ignition[1359]: INFO : Ignition 2.20.0
Feb 13 19:03:29.585129 ignition[1359]: INFO : Stage: umount
Feb 13 19:03:29.585129 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:29.585129 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:29.585129 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:29.596968 ignition[1359]: INFO : PUT result: OK
Feb 13 19:03:29.601866 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:03:29.602887 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:03:29.611265 ignition[1359]: INFO : umount: umount passed
Feb 13 19:03:29.611265 ignition[1359]: INFO : Ignition finished successfully
Feb 13 19:03:29.616528 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:03:29.618661 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:03:29.623006 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:03:29.623323 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:03:29.627649 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:03:29.627779 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:03:29.631168 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:03:29.631655 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:03:29.635127 systemd[1]: Stopped target network.target - Network.
Feb 13 19:03:29.636950 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:03:29.637133 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:03:29.639897 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:03:29.648045 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:03:29.648248 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:03:29.650767 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:03:29.652753 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:03:29.654833 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:03:29.654932 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:03:29.658487 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:03:29.658618 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:03:29.662673 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:03:29.663237 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:03:29.683566 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:03:29.683685 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:03:29.686279 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:03:29.686390 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:03:29.689213 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:03:29.696152 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:03:29.700128 systemd-networkd[1117]: eth0: DHCPv6 lease lost Feb 13 19:03:29.712846 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:03:29.715302 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:03:29.722800 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:03:29.723113 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:03:29.742967 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:03:29.743130 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 19:03:29.755442 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:03:29.759637 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:03:29.759781 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:03:29.764373 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:03:29.764493 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:03:29.770339 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:03:29.770458 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:03:29.784370 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:03:29.784500 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:03:29.797853 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:03:29.827873 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:03:29.829230 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:03:29.837142 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:03:29.837312 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:03:29.841704 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:03:29.841814 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:03:29.844270 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:03:29.844392 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:03:29.846811 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:03:29.846926 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:03:29.849879 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 13 19:03:29.849998 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:03:29.866713 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:03:29.870892 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:03:29.871035 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:03:29.881266 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:03:29.881582 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:03:29.890234 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:03:29.890355 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:03:29.892780 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:03:29.892898 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:03:29.907548 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:03:29.910664 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:03:29.929021 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:03:29.931581 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:03:29.938313 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:03:29.953547 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:03:29.972967 systemd[1]: Switching root. Feb 13 19:03:30.037317 systemd-journald[251]: Journal stopped Feb 13 19:03:32.819831 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Feb 13 19:03:32.820024 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:03:32.824590 kernel: SELinux: policy capability open_perms=1 Feb 13 19:03:32.824648 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:03:32.824680 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:03:32.824710 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:03:32.824741 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:03:32.824770 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:03:32.824818 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:03:32.824850 kernel: audit: type=1403 audit(1739473410.636:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:03:32.824893 systemd[1]: Successfully loaded SELinux policy in 89.173ms. Feb 13 19:03:32.824932 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.319ms. Feb 13 19:03:32.824977 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:03:32.825009 systemd[1]: Detected virtualization amazon. Feb 13 19:03:32.825041 systemd[1]: Detected architecture arm64. Feb 13 19:03:32.825146 systemd[1]: Detected first boot. Feb 13 19:03:32.825230 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:03:32.825273 zram_generator::config[1401]: No configuration found. Feb 13 19:03:32.825319 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:03:32.825353 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:03:32.825386 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Feb 13 19:03:32.825420 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:03:32.825450 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:03:32.825483 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:03:32.825536 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:03:32.825579 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:03:32.825612 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:03:32.825644 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:03:32.825678 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:03:32.825708 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:03:32.825739 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:03:32.825769 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:03:32.825802 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:03:32.825832 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:03:32.825871 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:03:32.825908 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:03:32.825943 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:03:32.825978 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:03:32.826010 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Feb 13 19:03:32.826049 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:03:32.826113 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:03:32.826157 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:03:32.826189 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:03:32.826223 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:03:32.826254 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:03:32.826285 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:03:32.826315 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:03:32.826347 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:03:32.826377 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:03:32.826408 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:03:32.826440 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:03:32.826478 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:03:32.826510 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:03:32.826541 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:03:32.826573 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:03:32.826606 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:03:32.826638 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:03:32.826669 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Feb 13 19:03:32.826715 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:03:32.826755 systemd[1]: Reached target machines.target - Containers. Feb 13 19:03:32.826797 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:03:32.826830 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:03:32.826865 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:03:32.826896 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:03:32.826927 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:03:32.826960 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:03:32.826990 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:03:32.827020 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:03:32.832092 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:03:32.832184 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:03:32.832218 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:03:32.832248 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:03:32.832278 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:03:32.832307 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:03:32.832336 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:03:32.832366 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Feb 13 19:03:32.832395 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:03:32.832435 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:03:32.832468 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:03:32.832502 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:03:32.832533 systemd[1]: Stopped verity-setup.service. Feb 13 19:03:32.832564 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:03:32.832594 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:03:32.832624 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:03:32.832653 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:03:32.832683 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:03:32.832718 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:03:32.832748 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:03:32.832777 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:03:32.832808 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:03:32.832843 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:03:32.832873 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:03:32.832907 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:03:32.832938 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:03:32.832967 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:03:32.832997 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Feb 13 19:03:32.833029 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:03:32.835185 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:03:32.835285 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:03:32.835318 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:03:32.835351 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:03:32.835381 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:03:32.835410 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:03:32.835440 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:03:32.835479 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:03:32.835512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:03:32.835603 systemd-journald[1483]: Collecting audit messages is disabled. Feb 13 19:03:32.835657 kernel: loop: module loaded Feb 13 19:03:32.835691 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:03:32.835722 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:03:32.835751 kernel: fuse: init (API version 7.39) Feb 13 19:03:32.835786 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:03:32.835816 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Feb 13 19:03:32.835847 systemd-journald[1483]: Journal started Feb 13 19:03:32.835893 systemd-journald[1483]: Runtime Journal (/run/log/journal/ec24674e53606fcadc32c42c865018d6) is 8.0M, max 75.3M, 67.3M free. Feb 13 19:03:32.842160 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:03:32.111740 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:03:32.169109 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 19:03:32.169924 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:03:32.842922 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:03:32.845446 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:03:32.849106 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:03:32.849432 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:03:32.854258 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:03:32.858478 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:03:32.922792 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:03:32.941404 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:03:32.946498 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:03:32.958015 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:03:32.964125 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:03:32.977088 kernel: ACPI: bus type drm_connector registered Feb 13 19:03:32.980385 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:03:32.982173 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Feb 13 19:03:33.004112 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:03:33.007013 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:03:33.025754 kernel: loop0: detected capacity change from 0 to 113536 Feb 13 19:03:33.026374 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:03:33.036019 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:03:33.066275 systemd-journald[1483]: Time spent on flushing to /var/log/journal/ec24674e53606fcadc32c42c865018d6 is 137.233ms for 912 entries. Feb 13 19:03:33.066275 systemd-journald[1483]: System Journal (/var/log/journal/ec24674e53606fcadc32c42c865018d6) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:03:33.222393 systemd-journald[1483]: Received client request to flush runtime journal. Feb 13 19:03:33.222469 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:03:33.222504 kernel: loop1: detected capacity change from 0 to 53784 Feb 13 19:03:33.075434 systemd-tmpfiles[1496]: ACLs are not supported, ignoring. Feb 13 19:03:33.075461 systemd-tmpfiles[1496]: ACLs are not supported, ignoring. Feb 13 19:03:33.117626 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:03:33.133535 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:03:33.138783 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:03:33.147214 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:03:33.155134 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:03:33.225693 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Feb 13 19:03:33.267099 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 19:03:33.272414 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:03:33.286450 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:03:33.305915 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:03:33.323950 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:03:33.342661 udevadm[1553]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:03:33.383516 systemd-tmpfiles[1555]: ACLs are not supported, ignoring. Feb 13 19:03:33.384037 systemd-tmpfiles[1555]: ACLs are not supported, ignoring. Feb 13 19:03:33.392870 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:03:33.402144 kernel: loop3: detected capacity change from 0 to 116808 Feb 13 19:03:33.498641 kernel: loop4: detected capacity change from 0 to 113536 Feb 13 19:03:33.538729 kernel: loop5: detected capacity change from 0 to 53784 Feb 13 19:03:33.576140 kernel: loop6: detected capacity change from 0 to 194096 Feb 13 19:03:33.615109 kernel: loop7: detected capacity change from 0 to 116808 Feb 13 19:03:33.632113 (sd-merge)[1560]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 19:03:33.633234 (sd-merge)[1560]: Merged extensions into '/usr'. Feb 13 19:03:33.645618 systemd[1]: Reloading requested from client PID 1504 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:03:33.645659 systemd[1]: Reloading... Feb 13 19:03:33.903117 zram_generator::config[1592]: No configuration found. 
Feb 13 19:03:34.335968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:03:34.472317 systemd[1]: Reloading finished in 822 ms. Feb 13 19:03:34.522968 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:03:34.526248 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:03:34.541420 systemd[1]: Starting ensure-sysext.service... Feb 13 19:03:34.555592 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:03:34.575494 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:03:34.591399 systemd[1]: Reloading requested from client PID 1638 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:03:34.591489 systemd[1]: Reloading... Feb 13 19:03:34.649203 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:03:34.650000 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:03:34.655890 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:03:34.656705 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Feb 13 19:03:34.656893 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Feb 13 19:03:34.679949 systemd-tmpfiles[1639]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:03:34.679977 systemd-tmpfiles[1639]: Skipping /boot Feb 13 19:03:34.696396 systemd-udevd[1640]: Using default interface naming scheme 'v255'. Feb 13 19:03:34.742899 systemd-tmpfiles[1639]: Detected autofs mount point /boot during canonicalization of boot. 
Feb 13 19:03:34.742933 systemd-tmpfiles[1639]: Skipping /boot Feb 13 19:03:34.768125 ldconfig[1501]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:03:34.892097 zram_generator::config[1686]: No configuration found. Feb 13 19:03:35.018020 (udev-worker)[1665]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:03:35.305268 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:03:35.463186 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1721) Feb 13 19:03:35.486434 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:03:35.486697 systemd[1]: Reloading finished in 894 ms. Feb 13 19:03:35.524511 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:03:35.528577 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:03:35.532012 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:03:35.614889 systemd[1]: Finished ensure-sysext.service. Feb 13 19:03:35.658033 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:03:35.669680 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:03:35.672586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:03:35.682389 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:03:35.689404 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:03:35.697188 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Feb 13 19:03:35.703425 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:03:35.705766 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:03:35.710410 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:03:35.719451 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:03:35.729741 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:03:35.732313 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:03:35.741441 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:03:35.749351 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:03:35.754903 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:03:35.755566 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:03:35.799084 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 19:03:35.825436 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:03:35.845833 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:03:35.908128 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:03:35.955891 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:03:35.957027 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:03:35.963309 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:03:35.971663 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 19:03:35.973280 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:03:35.976587 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:03:35.977225 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:03:35.980797 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:03:35.990172 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:03:36.013404 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:03:36.022429 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:03:36.037360 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:03:36.053865 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:03:36.107531 augenrules[1881]: No rules Feb 13 19:03:36.118516 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:03:36.121204 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:03:36.128438 lvm[1877]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:03:36.144427 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:03:36.150012 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:03:36.163260 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:03:36.166213 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:03:36.179186 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 19:03:36.214985 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:03:36.218490 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:03:36.234772 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:03:36.272125 lvm[1898]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:03:36.322926 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:03:36.344217 systemd-networkd[1836]: lo: Link UP Feb 13 19:03:36.344237 systemd-networkd[1836]: lo: Gained carrier Feb 13 19:03:36.348056 systemd-networkd[1836]: Enumeration completed Feb 13 19:03:36.349199 systemd-networkd[1836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:03:36.349219 systemd-networkd[1836]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:03:36.351589 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:03:36.356224 systemd-networkd[1836]: eth0: Link UP Feb 13 19:03:36.357946 systemd-networkd[1836]: eth0: Gained carrier Feb 13 19:03:36.357997 systemd-networkd[1836]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:03:36.359782 systemd-resolved[1839]: Positive Trust Anchors: Feb 13 19:03:36.359850 systemd-resolved[1839]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:03:36.359913 systemd-resolved[1839]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:03:36.366500 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:03:36.370210 systemd-networkd[1836]: eth0: DHCPv4 address 172.31.26.128/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 19:03:36.381969 systemd-resolved[1839]: Defaulting to hostname 'linux'. Feb 13 19:03:36.392368 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:03:36.395623 systemd[1]: Reached target network.target - Network. Feb 13 19:03:36.397721 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:03:36.400444 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:03:36.407275 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:03:36.409928 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:03:36.413187 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:03:36.415779 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:03:36.418302 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Feb 13 19:03:36.420913 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:03:36.420977 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:03:36.423361 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:03:36.427653 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:03:36.433933 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:03:36.466591 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:03:36.469996 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:03:36.472697 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:03:36.475153 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:03:36.477272 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:03:36.477348 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:03:36.486299 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:03:36.492457 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:03:36.502551 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:03:36.509129 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:03:36.515865 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:03:36.519367 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:03:36.530521 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:03:36.548674 systemd[1]: Started ntpd.service - Network Time Service. 
Feb 13 19:03:36.556962 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:03:36.566272 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:03:36.575761 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:03:36.584571 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:03:36.619425 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:03:36.622514 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:03:36.623937 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:03:36.629595 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:03:36.639577 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:03:36.660994 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:03:36.686809 jq[1907]: false Feb 13 19:03:36.665762 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:03:36.794662 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:03:36.796325 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:03:36.862320 jq[1922]: true Feb 13 19:03:36.855742 (ntainerd)[1942]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:03:36.872643 update_engine[1920]: I20250213 19:03:36.863582 1920 main.cc:92] Flatcar Update Engine starting Feb 13 19:03:36.867220 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:03:36.867606 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 19:03:36.907910 extend-filesystems[1908]: Found loop4 Feb 13 19:03:36.917009 extend-filesystems[1908]: Found loop5 Feb 13 19:03:36.917009 extend-filesystems[1908]: Found loop6 Feb 13 19:03:36.917009 extend-filesystems[1908]: Found loop7 Feb 13 19:03:36.917009 extend-filesystems[1908]: Found nvme0n1 Feb 13 19:03:36.917009 extend-filesystems[1908]: Found nvme0n1p1 Feb 13 19:03:36.917009 extend-filesystems[1908]: Found nvme0n1p2 Feb 13 19:03:36.917009 extend-filesystems[1908]: Found nvme0n1p3 Feb 13 19:03:36.917009 extend-filesystems[1908]: Found usr Feb 13 19:03:36.917009 extend-filesystems[1908]: Found nvme0n1p4 Feb 13 19:03:36.917009 extend-filesystems[1908]: Found nvme0n1p6 Feb 13 19:03:36.917009 extend-filesystems[1908]: Found nvme0n1p7 Feb 13 19:03:36.914007 dbus-daemon[1906]: [system] SELinux support is enabled Feb 13 19:03:36.950453 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:03:36.984932 extend-filesystems[1908]: Found nvme0n1p9 Feb 13 19:03:36.984932 extend-filesystems[1908]: Checking size of /dev/nvme0n1p9 Feb 13 19:03:36.992346 update_engine[1920]: I20250213 19:03:36.940315 1920 update_check_scheduler.cc:74] Next update check in 4m7s Feb 13 19:03:36.992423 coreos-metadata[1905]: Feb 13 19:03:36.956 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:03:36.992423 coreos-metadata[1905]: Feb 13 19:03:36.964 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:03:36.992423 coreos-metadata[1905]: Feb 13 19:03:36.972 INFO Fetch successful Feb 13 19:03:36.992423 coreos-metadata[1905]: Feb 13 19:03:36.972 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:03:36.992423 coreos-metadata[1905]: Feb 13 19:03:36.978 INFO Fetch successful Feb 13 19:03:36.992423 coreos-metadata[1905]: Feb 13 19:03:36.978 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:03:36.992423 coreos-metadata[1905]: Feb 13 
19:03:36.979 INFO Fetch successful Feb 13 19:03:36.992423 coreos-metadata[1905]: Feb 13 19:03:36.979 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:03:36.992423 coreos-metadata[1905]: Feb 13 19:03:36.981 INFO Fetch successful Feb 13 19:03:36.992423 coreos-metadata[1905]: Feb 13 19:03:36.981 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:03:36.992423 coreos-metadata[1905]: Feb 13 19:03:36.985 INFO Fetch failed with 404: resource not found Feb 13 19:03:36.992423 coreos-metadata[1905]: Feb 13 19:03:36.985 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:03:36.935635 dbus-daemon[1906]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1836 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:03:37.003896 ntpd[1910]: 13 Feb 19:03:36 ntpd[1910]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:08:36 UTC 2025 (1): Starting Feb 13 19:03:37.003896 ntpd[1910]: 13 Feb 19:03:36 ntpd[1910]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:03:37.003896 ntpd[1910]: 13 Feb 19:03:36 ntpd[1910]: ---------------------------------------------------- Feb 13 19:03:37.003896 ntpd[1910]: 13 Feb 19:03:36 ntpd[1910]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:03:37.003896 ntpd[1910]: 13 Feb 19:03:36 ntpd[1910]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:03:37.003896 ntpd[1910]: 13 Feb 19:03:36 ntpd[1910]: corporation. 
Support and training for ntp-4 are Feb 13 19:03:37.003896 ntpd[1910]: 13 Feb 19:03:36 ntpd[1910]: available at https://www.nwtime.org/support Feb 13 19:03:37.003896 ntpd[1910]: 13 Feb 19:03:36 ntpd[1910]: ---------------------------------------------------- Feb 13 19:03:37.033382 coreos-metadata[1905]: Feb 13 19:03:36.995 INFO Fetch successful Feb 13 19:03:37.033382 coreos-metadata[1905]: Feb 13 19:03:36.995 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:03:37.033382 coreos-metadata[1905]: Feb 13 19:03:36.998 INFO Fetch successful Feb 13 19:03:37.033382 coreos-metadata[1905]: Feb 13 19:03:36.998 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:03:37.033382 coreos-metadata[1905]: Feb 13 19:03:37.012 INFO Fetch successful Feb 13 19:03:37.033382 coreos-metadata[1905]: Feb 13 19:03:37.012 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:03:37.033382 coreos-metadata[1905]: Feb 13 19:03:37.028 INFO Fetch successful Feb 13 19:03:37.033382 coreos-metadata[1905]: Feb 13 19:03:37.030 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:03:36.999808 ntpd[1910]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:08:36 UTC 2025 (1): Starting Feb 13 19:03:37.005345 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:03:37.065769 tar[1937]: linux-arm64/helm Feb 13 19:03:37.068613 coreos-metadata[1905]: Feb 13 19:03:37.045 INFO Fetch successful Feb 13 19:03:36.999885 ntpd[1910]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:03:37.068779 ntpd[1910]: 13 Feb 19:03:37 ntpd[1910]: proto: precision = 0.108 usec (-23) Feb 13 19:03:37.009921 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Feb 13 19:03:36.999907 ntpd[1910]: ---------------------------------------------------- Feb 13 19:03:37.077571 ntpd[1910]: 13 Feb 19:03:37 ntpd[1910]: basedate set to 2025-02-01 Feb 13 19:03:37.077571 ntpd[1910]: 13 Feb 19:03:37 ntpd[1910]: gps base set to 2025-02-02 (week 2352) Feb 13 19:03:37.009980 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:03:36.999926 ntpd[1910]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:03:37.016253 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:03:36.999945 ntpd[1910]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:03:37.016319 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:03:36.999969 ntpd[1910]: corporation. Support and training for ntp-4 are Feb 13 19:03:37.022658 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:03:36.999987 ntpd[1910]: available at https://www.nwtime.org/support Feb 13 19:03:37.049693 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:03:37.000006 ntpd[1910]: ---------------------------------------------------- Feb 13 19:03:37.061156 systemd-logind[1919]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:03:37.017787 dbus-daemon[1906]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:03:37.061226 systemd-logind[1919]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 19:03:37.061367 ntpd[1910]: proto: precision = 0.108 usec (-23) Feb 13 19:03:37.064449 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:03:37.074491 ntpd[1910]: basedate set to 2025-02-01 Feb 13 19:03:37.068984 systemd-logind[1919]: New seat seat0. 
Feb 13 19:03:37.074529 ntpd[1910]: gps base set to 2025-02-02 (week 2352) Feb 13 19:03:37.073277 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:03:37.088006 jq[1945]: true Feb 13 19:03:37.119004 ntpd[1910]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:03:37.119724 ntpd[1910]: 13 Feb 19:03:37 ntpd[1910]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:03:37.119724 ntpd[1910]: 13 Feb 19:03:37 ntpd[1910]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:03:37.119724 ntpd[1910]: 13 Feb 19:03:37 ntpd[1910]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:03:37.119724 ntpd[1910]: 13 Feb 19:03:37 ntpd[1910]: Listen normally on 3 eth0 172.31.26.128:123 Feb 13 19:03:37.119724 ntpd[1910]: 13 Feb 19:03:37 ntpd[1910]: Listen normally on 4 lo [::1]:123 Feb 13 19:03:37.119724 ntpd[1910]: 13 Feb 19:03:37 ntpd[1910]: bind(21) AF_INET6 fe80::487:b9ff:feba:1f6b%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:03:37.119142 ntpd[1910]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:03:37.120054 ntpd[1910]: 13 Feb 19:03:37 ntpd[1910]: unable to create socket on eth0 (5) for fe80::487:b9ff:feba:1f6b%2#123 Feb 13 19:03:37.120054 ntpd[1910]: 13 Feb 19:03:37 ntpd[1910]: failed to init interface for address fe80::487:b9ff:feba:1f6b%2 Feb 13 19:03:37.120054 ntpd[1910]: 13 Feb 19:03:37 ntpd[1910]: Listening on routing socket on fd #21 for interface updates Feb 13 19:03:37.119462 ntpd[1910]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:03:37.119535 ntpd[1910]: Listen normally on 3 eth0 172.31.26.128:123 Feb 13 19:03:37.119607 ntpd[1910]: Listen normally on 4 lo [::1]:123 Feb 13 19:03:37.119688 ntpd[1910]: bind(21) AF_INET6 fe80::487:b9ff:feba:1f6b%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:03:37.119730 ntpd[1910]: unable to create socket on eth0 (5) for fe80::487:b9ff:feba:1f6b%2#123 Feb 13 19:03:37.119759 ntpd[1910]: failed to init interface for address fe80::487:b9ff:feba:1f6b%2 Feb 13 
19:03:37.119817 ntpd[1910]: Listening on routing socket on fd #21 for interface updates Feb 13 19:03:37.137945 extend-filesystems[1908]: Resized partition /dev/nvme0n1p9 Feb 13 19:03:37.148644 extend-filesystems[1962]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:03:37.157699 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:03:37.164356 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:03:37.172965 ntpd[1910]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:03:37.173047 ntpd[1910]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:03:37.173547 ntpd[1910]: 13 Feb 19:03:37 ntpd[1910]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:03:37.185341 ntpd[1910]: 13 Feb 19:03:37 ntpd[1910]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:03:37.250102 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:03:37.271415 extend-filesystems[1962]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:03:37.271415 extend-filesystems[1962]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:03:37.271415 extend-filesystems[1962]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 19:03:37.287109 extend-filesystems[1908]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:03:37.291386 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:03:37.292242 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:03:37.307774 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:03:37.316816 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:03:37.395687 bash[1986]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:03:37.404843 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Feb 13 19:03:37.430947 systemd[1]: Starting sshkeys.service... Feb 13 19:03:37.524576 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:03:37.541798 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:03:37.599291 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1717) Feb 13 19:03:37.699678 dbus-daemon[1906]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:03:37.700387 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:03:37.712215 dbus-daemon[1906]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1953 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:03:37.758284 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:03:37.845528 polkitd[2027]: Started polkitd version 121 Feb 13 19:03:37.884197 polkitd[2027]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:03:37.884356 polkitd[2027]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:03:37.892671 locksmithd[1954]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:03:37.898165 polkitd[2027]: Finished loading, compiling and executing 2 rules Feb 13 19:03:37.899442 dbus-daemon[1906]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:03:37.902354 containerd[1942]: time="2025-02-13T19:03:37.899285461Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:03:37.899813 systemd[1]: Started polkit.service - Authorization Manager. 
Feb 13 19:03:37.906706 polkitd[2027]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:03:37.922391 systemd-networkd[1836]: eth0: Gained IPv6LL Feb 13 19:03:37.934968 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:03:37.945638 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:03:37.970443 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:03:37.986702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:03:38.012762 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:03:38.095392 systemd-hostnamed[1953]: Hostname set to <ip-172-31-26-128> (transient) Feb 13 19:03:38.099054 systemd-resolved[1839]: System hostname changed to 'ip-172-31-26-128'. Feb 13 19:03:38.106241 coreos-metadata[1996]: Feb 13 19:03:38.105 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:03:38.115652 coreos-metadata[1996]: Feb 13 19:03:38.112 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:03:38.115652 coreos-metadata[1996]: Feb 13 19:03:38.115 INFO Fetch successful Feb 13 19:03:38.115652 coreos-metadata[1996]: Feb 13 19:03:38.115 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:03:38.120781 coreos-metadata[1996]: Feb 13 19:03:38.120 INFO Fetch successful Feb 13 19:03:38.130570 unknown[1996]: wrote ssh authorized keys file for user: core Feb 13 19:03:38.203289 amazon-ssm-agent[2067]: Initializing new seelog logger Feb 13 19:03:38.203289 amazon-ssm-agent[2067]: New Seelog Logger Creation Complete Feb 13 19:03:38.203289 amazon-ssm-agent[2067]: 2025/02/13 19:03:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:38.203289 amazon-ssm-agent[2067]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 19:03:38.211461 amazon-ssm-agent[2067]: 2025/02/13 19:03:38 processing appconfig overrides Feb 13 19:03:38.211461 amazon-ssm-agent[2067]: 2025/02/13 19:03:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:38.211461 amazon-ssm-agent[2067]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:38.211461 amazon-ssm-agent[2067]: 2025/02/13 19:03:38 processing appconfig overrides Feb 13 19:03:38.211461 amazon-ssm-agent[2067]: 2025/02/13 19:03:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:38.211461 amazon-ssm-agent[2067]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:38.211461 amazon-ssm-agent[2067]: 2025/02/13 19:03:38 processing appconfig overrides Feb 13 19:03:38.213822 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO Proxy environment variables: Feb 13 19:03:38.221163 amazon-ssm-agent[2067]: 2025/02/13 19:03:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:38.221163 amazon-ssm-agent[2067]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:38.221163 amazon-ssm-agent[2067]: 2025/02/13 19:03:38 processing appconfig overrides Feb 13 19:03:38.240189 update-ssh-keys[2097]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:03:38.241236 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:03:38.251160 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:03:38.270851 containerd[1942]: time="2025-02-13T19:03:38.270588851Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:38.289043 containerd[1942]: time="2025-02-13T19:03:38.288912107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:38.289194 containerd[1942]: time="2025-02-13T19:03:38.289046687Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:03:38.289194 containerd[1942]: time="2025-02-13T19:03:38.289122983Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:03:38.296284 containerd[1942]: time="2025-02-13T19:03:38.295440107Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:03:38.296284 containerd[1942]: time="2025-02-13T19:03:38.295716947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:38.296284 containerd[1942]: time="2025-02-13T19:03:38.296104907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:38.296284 containerd[1942]: time="2025-02-13T19:03:38.296150183Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:38.300427 containerd[1942]: time="2025-02-13T19:03:38.300337775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:38.300427 containerd[1942]: time="2025-02-13T19:03:38.300414359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:03:38.300631 containerd[1942]: time="2025-02-13T19:03:38.300454475Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:38.300631 containerd[1942]: time="2025-02-13T19:03:38.300483623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:38.300740 containerd[1942]: time="2025-02-13T19:03:38.300718895Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:38.301298 containerd[1942]: time="2025-02-13T19:03:38.301232699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:38.301611 containerd[1942]: time="2025-02-13T19:03:38.301547087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:38.301611 containerd[1942]: time="2025-02-13T19:03:38.301604495Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:03:38.302501 containerd[1942]: time="2025-02-13T19:03:38.301864019Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:03:38.302501 containerd[1942]: time="2025-02-13T19:03:38.302028491Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:03:38.305177 systemd[1]: Finished sshkeys.service. Feb 13 19:03:38.314847 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO https_proxy: Feb 13 19:03:38.323304 containerd[1942]: time="2025-02-13T19:03:38.322667183Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Feb 13 19:03:38.323304 containerd[1942]: time="2025-02-13T19:03:38.322798823Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:03:38.323304 containerd[1942]: time="2025-02-13T19:03:38.322932479Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:03:38.323304 containerd[1942]: time="2025-02-13T19:03:38.322986827Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:03:38.323304 containerd[1942]: time="2025-02-13T19:03:38.323027639Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:03:38.323619 containerd[1942]: time="2025-02-13T19:03:38.323377427Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:03:38.325382 containerd[1942]: time="2025-02-13T19:03:38.325306523Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:03:38.325723 containerd[1942]: time="2025-02-13T19:03:38.325661351Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:03:38.325844 containerd[1942]: time="2025-02-13T19:03:38.325722503Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:03:38.325844 containerd[1942]: time="2025-02-13T19:03:38.325761767Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:03:38.325844 containerd[1942]: time="2025-02-13T19:03:38.325795607Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 13 19:03:38.325844 containerd[1942]: time="2025-02-13T19:03:38.325830803Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:03:38.326006 containerd[1942]: time="2025-02-13T19:03:38.325862915Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:03:38.326006 containerd[1942]: time="2025-02-13T19:03:38.325894127Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:03:38.326006 containerd[1942]: time="2025-02-13T19:03:38.325927127Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:03:38.326006 containerd[1942]: time="2025-02-13T19:03:38.325956983Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:03:38.326233 containerd[1942]: time="2025-02-13T19:03:38.326003087Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:03:38.326233 containerd[1942]: time="2025-02-13T19:03:38.326034971Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:03:38.329640 containerd[1942]: time="2025-02-13T19:03:38.329181527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.329640 containerd[1942]: time="2025-02-13T19:03:38.329266163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.329640 containerd[1942]: time="2025-02-13T19:03:38.329307179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Feb 13 19:03:38.329640 containerd[1942]: time="2025-02-13T19:03:38.329341451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.329640 containerd[1942]: time="2025-02-13T19:03:38.329371943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.329640 containerd[1942]: time="2025-02-13T19:03:38.329403359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.329640 containerd[1942]: time="2025-02-13T19:03:38.329433095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.329640 containerd[1942]: time="2025-02-13T19:03:38.329464715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.329640 containerd[1942]: time="2025-02-13T19:03:38.329517143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.329640 containerd[1942]: time="2025-02-13T19:03:38.329562431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.329640 containerd[1942]: time="2025-02-13T19:03:38.329594903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.329640 containerd[1942]: time="2025-02-13T19:03:38.329625827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.329640 containerd[1942]: time="2025-02-13T19:03:38.329657543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.330432 containerd[1942]: time="2025-02-13T19:03:38.329690411Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Feb 13 19:03:38.330432 containerd[1942]: time="2025-02-13T19:03:38.329743019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.330432 containerd[1942]: time="2025-02-13T19:03:38.329781023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.330432 containerd[1942]: time="2025-02-13T19:03:38.329813231Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:03:38.330432 containerd[1942]: time="2025-02-13T19:03:38.329965643Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:03:38.330432 containerd[1942]: time="2025-02-13T19:03:38.330007643Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:03:38.330432 containerd[1942]: time="2025-02-13T19:03:38.330032039Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:03:38.335003 containerd[1942]: time="2025-02-13T19:03:38.332135399Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:03:38.335003 containerd[1942]: time="2025-02-13T19:03:38.333210875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.335003 containerd[1942]: time="2025-02-13T19:03:38.333260747Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:03:38.335003 containerd[1942]: time="2025-02-13T19:03:38.333287339Z" level=info msg="NRI interface is disabled by configuration." 
Feb 13 19:03:38.335003 containerd[1942]: time="2025-02-13T19:03:38.333312899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:03:38.335394 containerd[1942]: time="2025-02-13T19:03:38.333894791Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:03:38.335394 containerd[1942]: time="2025-02-13T19:03:38.334003271Z" level=info msg="Connect containerd service" Feb 13 19:03:38.343369 containerd[1942]: time="2025-02-13T19:03:38.341710043Z" level=info msg="using legacy CRI server" Feb 13 19:03:38.343369 containerd[1942]: time="2025-02-13T19:03:38.341772527Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:03:38.343369 containerd[1942]: time="2025-02-13T19:03:38.342036983Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:03:38.347907 containerd[1942]: time="2025-02-13T19:03:38.347804051Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:03:38.349336 containerd[1942]: time="2025-02-13T19:03:38.348168299Z" level=info msg="Start subscribing containerd event" Feb 13 19:03:38.349336 containerd[1942]: time="2025-02-13T19:03:38.348293255Z" level=info msg="Start recovering state" Feb 13 19:03:38.349336 containerd[1942]: 
time="2025-02-13T19:03:38.348445463Z" level=info msg="Start event monitor" Feb 13 19:03:38.349336 containerd[1942]: time="2025-02-13T19:03:38.348476939Z" level=info msg="Start snapshots syncer" Feb 13 19:03:38.349336 containerd[1942]: time="2025-02-13T19:03:38.348500591Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:03:38.349336 containerd[1942]: time="2025-02-13T19:03:38.348520223Z" level=info msg="Start streaming server" Feb 13 19:03:38.359458 containerd[1942]: time="2025-02-13T19:03:38.359229107Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:03:38.359900 containerd[1942]: time="2025-02-13T19:03:38.359837243Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:03:38.361128 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:03:38.376735 containerd[1942]: time="2025-02-13T19:03:38.369223535Z" level=info msg="containerd successfully booted in 0.474375s" Feb 13 19:03:38.431103 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO http_proxy: Feb 13 19:03:38.525761 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO no_proxy: Feb 13 19:03:38.627197 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:03:38.727086 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:03:38.823359 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO Agent will take identity from EC2 Feb 13 19:03:38.923155 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:03:38.996380 tar[1937]: linux-arm64/LICENSE Feb 13 19:03:38.997206 tar[1937]: linux-arm64/README.md Feb 13 19:03:39.024089 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:03:39.041170 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Feb 13 19:03:39.121576 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:03:39.220869 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:03:39.321159 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 19:03:39.422854 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 19:03:39.523224 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 19:03:39.623468 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO [Registrar] Starting registrar module Feb 13 19:03:39.723840 amazon-ssm-agent[2067]: 2025-02-13 19:03:38 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 19:03:40.027381 ntpd[1910]: Listen normally on 6 eth0 [fe80::487:b9ff:feba:1f6b%2]:123 Feb 13 19:03:40.028704 ntpd[1910]: 13 Feb 19:03:40 ntpd[1910]: Listen normally on 6 eth0 [fe80::487:b9ff:feba:1f6b%2]:123 Feb 13 19:03:40.339667 sshd_keygen[1936]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:03:40.403366 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:03:40.426007 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:03:40.441585 systemd[1]: Started sshd@0-172.31.26.128:22-147.75.109.163:34150.service - OpenSSH per-connection server daemon (147.75.109.163:34150). Feb 13 19:03:40.481745 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:03:40.485184 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:03:40.508213 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:03:40.559569 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:03:40.578735 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Feb 13 19:03:40.594823 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:03:40.598684 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:03:40.641618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:03:40.646141 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:03:40.648586 systemd[1]: Startup finished in 1.270s (kernel) + 8.796s (initrd) + 10.099s (userspace) = 20.166s. Feb 13 19:03:40.657177 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:03:40.791801 amazon-ssm-agent[2067]: 2025-02-13 19:03:40 INFO [EC2Identity] EC2 registration was successful. Feb 13 19:03:40.797829 sshd[2143]: Accepted publickey for core from 147.75.109.163 port 34150 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:40.804584 sshd-session[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:40.818008 amazon-ssm-agent[2067]: 2025-02-13 19:03:40 INFO [CredentialRefresher] credentialRefresher has started Feb 13 19:03:40.818008 amazon-ssm-agent[2067]: 2025-02-13 19:03:40 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 19:03:40.818008 amazon-ssm-agent[2067]: 2025-02-13 19:03:40 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 19:03:40.829185 systemd-logind[1919]: New session 1 of user core. Feb 13 19:03:40.831617 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:03:40.844054 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:03:40.880381 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Feb 13 19:03:40.894732 amazon-ssm-agent[2067]: 2025-02-13 19:03:40 INFO [CredentialRefresher] Next credential rotation will be in 31.6999914139 minutes Feb 13 19:03:40.894090 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:03:40.910873 (systemd)[2164]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:03:41.146827 systemd[2164]: Queued start job for default target default.target. Feb 13 19:03:41.162517 systemd[2164]: Created slice app.slice - User Application Slice. Feb 13 19:03:41.162587 systemd[2164]: Reached target paths.target - Paths. Feb 13 19:03:41.162622 systemd[2164]: Reached target timers.target - Timers. Feb 13 19:03:41.165648 systemd[2164]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:03:41.210267 systemd[2164]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:03:41.211436 systemd[2164]: Reached target sockets.target - Sockets. Feb 13 19:03:41.211645 systemd[2164]: Reached target basic.target - Basic System. Feb 13 19:03:41.211839 systemd[2164]: Reached target default.target - Main User Target. Feb 13 19:03:41.211922 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:03:41.212126 systemd[2164]: Startup finished in 288ms. Feb 13 19:03:41.222354 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:03:41.388296 systemd[1]: Started sshd@1-172.31.26.128:22-147.75.109.163:36450.service - OpenSSH per-connection server daemon (147.75.109.163:36450). Feb 13 19:03:41.598780 sshd[2179]: Accepted publickey for core from 147.75.109.163 port 36450 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:41.602532 sshd-session[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:41.612406 systemd-logind[1919]: New session 2 of user core. Feb 13 19:03:41.617481 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 19:03:41.751082 sshd[2182]: Connection closed by 147.75.109.163 port 36450 Feb 13 19:03:41.752214 sshd-session[2179]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:41.759635 systemd[1]: sshd@1-172.31.26.128:22-147.75.109.163:36450.service: Deactivated successfully. Feb 13 19:03:41.764338 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:03:41.768769 systemd-logind[1919]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:03:41.783445 systemd-logind[1919]: Removed session 2. Feb 13 19:03:41.793823 systemd[1]: Started sshd@2-172.31.26.128:22-147.75.109.163:36460.service - OpenSSH per-connection server daemon (147.75.109.163:36460). Feb 13 19:03:41.868830 amazon-ssm-agent[2067]: 2025-02-13 19:03:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 19:03:41.970201 amazon-ssm-agent[2067]: 2025-02-13 19:03:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2190) started Feb 13 19:03:41.989103 sshd[2187]: Accepted publickey for core from 147.75.109.163 port 36460 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:41.992306 sshd-session[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:42.006489 systemd-logind[1919]: New session 3 of user core. Feb 13 19:03:42.014850 systemd[1]: Started session-3.scope - Session 3 of User core. 
Feb 13 19:03:42.070550 amazon-ssm-agent[2067]: 2025-02-13 19:03:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 19:03:42.100618 kubelet[2157]: E0213 19:03:42.100451 2157 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:03:42.107784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:03:42.108305 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:03:42.109340 systemd[1]: kubelet.service: Consumed 1.385s CPU time. Feb 13 19:03:42.142194 sshd[2197]: Connection closed by 147.75.109.163 port 36460 Feb 13 19:03:42.140892 sshd-session[2187]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:42.148870 systemd[1]: sshd@2-172.31.26.128:22-147.75.109.163:36460.service: Deactivated successfully. Feb 13 19:03:42.154357 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:03:42.156495 systemd-logind[1919]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:03:42.158432 systemd-logind[1919]: Removed session 3. Feb 13 19:03:42.187864 systemd[1]: Started sshd@3-172.31.26.128:22-147.75.109.163:36472.service - OpenSSH per-connection server daemon (147.75.109.163:36472). Feb 13 19:03:42.374973 sshd[2207]: Accepted publickey for core from 147.75.109.163 port 36472 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:42.378235 sshd-session[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:42.390568 systemd-logind[1919]: New session 4 of user core. Feb 13 19:03:42.399484 systemd[1]: Started session-4.scope - Session 4 of User core. 
Feb 13 19:03:42.532975 sshd[2209]: Connection closed by 147.75.109.163 port 36472 Feb 13 19:03:42.533899 sshd-session[2207]: pam_unix(sshd:session): session closed for user core Feb 13 19:03:42.539404 systemd[1]: sshd@3-172.31.26.128:22-147.75.109.163:36472.service: Deactivated successfully. Feb 13 19:03:42.543214 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:03:42.546025 systemd-logind[1919]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:03:42.549533 systemd-logind[1919]: Removed session 4. Feb 13 19:03:42.572625 systemd[1]: Started sshd@4-172.31.26.128:22-147.75.109.163:36478.service - OpenSSH per-connection server daemon (147.75.109.163:36478). Feb 13 19:03:42.766552 sshd[2214]: Accepted publickey for core from 147.75.109.163 port 36478 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU Feb 13 19:03:42.768555 sshd-session[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:03:42.776971 systemd-logind[1919]: New session 5 of user core. Feb 13 19:03:42.787330 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:03:42.904525 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:03:42.905305 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:03:43.633034 (dockerd)[2235]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:03:43.633257 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:03:44.042317 dockerd[2235]: time="2025-02-13T19:03:44.041343051Z" level=info msg="Starting up" Feb 13 19:03:44.172233 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4079383373-merged.mount: Deactivated successfully. Feb 13 19:03:44.277111 dockerd[2235]: time="2025-02-13T19:03:44.276679181Z" level=info msg="Loading containers: start." 
Feb 13 19:03:44.541091 kernel: Initializing XFRM netlink socket Feb 13 19:03:44.573709 (udev-worker)[2259]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:03:44.678689 systemd-networkd[1836]: docker0: Link UP Feb 13 19:03:44.725270 dockerd[2235]: time="2025-02-13T19:03:44.725114347Z" level=info msg="Loading containers: done." Feb 13 19:03:44.754908 dockerd[2235]: time="2025-02-13T19:03:44.754206751Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:03:44.754908 dockerd[2235]: time="2025-02-13T19:03:44.754350451Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 19:03:44.754908 dockerd[2235]: time="2025-02-13T19:03:44.754528891Z" level=info msg="Daemon has completed initialization" Feb 13 19:03:44.807682 dockerd[2235]: time="2025-02-13T19:03:44.807603463Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:03:44.808129 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:03:46.228633 containerd[1942]: time="2025-02-13T19:03:46.228435487Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:03:46.872501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount835338688.mount: Deactivated successfully. 
Feb 13 19:03:48.758657 containerd[1942]: time="2025-02-13T19:03:48.758585783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:48.762037 containerd[1942]: time="2025-02-13T19:03:48.761638316Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865207" Feb 13 19:03:48.763268 containerd[1942]: time="2025-02-13T19:03:48.762647191Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:48.770826 containerd[1942]: time="2025-02-13T19:03:48.770582290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:48.773708 containerd[1942]: time="2025-02-13T19:03:48.773482247Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.544952762s" Feb 13 19:03:48.773708 containerd[1942]: time="2025-02-13T19:03:48.773560364Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 19:03:48.819964 containerd[1942]: time="2025-02-13T19:03:48.819905876Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:03:50.734714 containerd[1942]: time="2025-02-13T19:03:50.734297595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:50.736696 containerd[1942]: time="2025-02-13T19:03:50.736586638Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898594" Feb 13 19:03:50.737744 containerd[1942]: time="2025-02-13T19:03:50.737171033Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:50.743761 containerd[1942]: time="2025-02-13T19:03:50.743642132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:50.746684 containerd[1942]: time="2025-02-13T19:03:50.746230869Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 1.926257635s" Feb 13 19:03:50.746684 containerd[1942]: time="2025-02-13T19:03:50.746306695Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 19:03:50.792306 containerd[1942]: time="2025-02-13T19:03:50.792233472Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:03:52.358590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:03:52.376778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:03:52.464116 containerd[1942]: time="2025-02-13T19:03:52.463577311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:52.466094 containerd[1942]: time="2025-02-13T19:03:52.465604573Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164934" Feb 13 19:03:52.467802 containerd[1942]: time="2025-02-13T19:03:52.467742180Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:52.476174 containerd[1942]: time="2025-02-13T19:03:52.476052223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:52.482390 containerd[1942]: time="2025-02-13T19:03:52.482172629Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.689866785s" Feb 13 19:03:52.482390 containerd[1942]: time="2025-02-13T19:03:52.482242158Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 19:03:52.550916 containerd[1942]: time="2025-02-13T19:03:52.550109392Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:03:52.746216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:03:52.764126 (kubelet)[2513]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:03:52.863293 kubelet[2513]: E0213 19:03:52.863107 2513 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:03:52.871200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:03:52.871592 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:03:53.829528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1776618707.mount: Deactivated successfully. Feb 13 19:03:54.304581 containerd[1942]: time="2025-02-13T19:03:54.304049828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:54.305914 containerd[1942]: time="2025-02-13T19:03:54.305837942Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370" Feb 13 19:03:54.307266 containerd[1942]: time="2025-02-13T19:03:54.307168484Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:54.313175 containerd[1942]: time="2025-02-13T19:03:54.312850558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:54.316747 containerd[1942]: time="2025-02-13T19:03:54.316372128Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id 
\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.765936019s" Feb 13 19:03:54.316747 containerd[1942]: time="2025-02-13T19:03:54.316438455Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 19:03:54.357612 containerd[1942]: time="2025-02-13T19:03:54.357543081Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:03:54.948536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994085747.mount: Deactivated successfully. Feb 13 19:03:56.056117 containerd[1942]: time="2025-02-13T19:03:56.056012022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:56.059477 containerd[1942]: time="2025-02-13T19:03:56.059402930Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 19:03:56.062585 containerd[1942]: time="2025-02-13T19:03:56.062515277Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:56.067489 containerd[1942]: time="2025-02-13T19:03:56.067399666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:56.070027 containerd[1942]: time="2025-02-13T19:03:56.069815929Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id 
\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.712207901s" Feb 13 19:03:56.070027 containerd[1942]: time="2025-02-13T19:03:56.069874220Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 19:03:56.111789 containerd[1942]: time="2025-02-13T19:03:56.111486863Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:03:56.637344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560396381.mount: Deactivated successfully. Feb 13 19:03:56.646678 containerd[1942]: time="2025-02-13T19:03:56.646431487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:56.648177 containerd[1942]: time="2025-02-13T19:03:56.648090101Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 19:03:56.648876 containerd[1942]: time="2025-02-13T19:03:56.648738784Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:56.656102 containerd[1942]: time="2025-02-13T19:03:56.654254249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:03:56.656102 containerd[1942]: time="2025-02-13T19:03:56.655884210Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 544.340699ms" Feb 13 19:03:56.656102 containerd[1942]: time="2025-02-13T19:03:56.655929763Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 19:03:56.700036 containerd[1942]: time="2025-02-13T19:03:56.699960721Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:03:57.240110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1937494999.mount: Deactivated successfully. Feb 13 19:04:00.147599 containerd[1942]: time="2025-02-13T19:04:00.147516571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:00.150391 containerd[1942]: time="2025-02-13T19:04:00.150308402Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Feb 13 19:04:00.152436 containerd[1942]: time="2025-02-13T19:04:00.152361367Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:00.160626 containerd[1942]: time="2025-02-13T19:04:00.160542654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:00.163251 containerd[1942]: time="2025-02-13T19:04:00.162846222Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.462819679s" Feb 13 
19:04:00.163251 containerd[1942]: time="2025-02-13T19:04:00.162906768Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 19:04:03.122001 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:04:03.132477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:04:03.443510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:03.453862 (kubelet)[2701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:04:03.548114 kubelet[2701]: E0213 19:04:03.546215 2701 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:04:03.551854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:04:03.553960 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:04:08.133550 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 19:04:09.416401 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:09.429546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:04:09.480247 systemd[1]: Reloading requested from client PID 2718 ('systemctl') (unit session-5.scope)... Feb 13 19:04:09.480288 systemd[1]: Reloading... Feb 13 19:04:09.764115 zram_generator::config[2761]: No configuration found. 
Feb 13 19:04:09.996541 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:04:10.161719 systemd[1]: Reloading finished in 680 ms. Feb 13 19:04:10.273568 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:04:10.273760 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:04:10.275161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:10.294891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:04:10.579408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:10.590753 (kubelet)[2822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:04:10.694354 kubelet[2822]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:04:10.695093 kubelet[2822]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:04:10.695093 kubelet[2822]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:04:10.695093 kubelet[2822]: I0213 19:04:10.694961 2822 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:04:11.351968 kubelet[2822]: I0213 19:04:11.351873 2822 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:04:11.354107 kubelet[2822]: I0213 19:04:11.352363 2822 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:04:11.354107 kubelet[2822]: I0213 19:04:11.352979 2822 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:04:11.384765 kubelet[2822]: E0213 19:04:11.384713 2822 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.26.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:11.385508 kubelet[2822]: I0213 19:04:11.385404 2822 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:04:11.401670 kubelet[2822]: I0213 19:04:11.401630 2822 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:04:11.402417 kubelet[2822]: I0213 19:04:11.402364 2822 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:04:11.402819 kubelet[2822]: I0213 19:04:11.402541 2822 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-128","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:04:11.403112 kubelet[2822]: I0213 19:04:11.403090 2822 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
19:04:11.403228 kubelet[2822]: I0213 19:04:11.403209 2822 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:04:11.403539 kubelet[2822]: I0213 19:04:11.403519 2822 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:04:11.405283 kubelet[2822]: I0213 19:04:11.405257 2822 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:04:11.405438 kubelet[2822]: I0213 19:04:11.405417 2822 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:04:11.405625 kubelet[2822]: I0213 19:04:11.405606 2822 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:04:11.405762 kubelet[2822]: I0213 19:04:11.405742 2822 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:04:11.407365 kubelet[2822]: W0213 19:04:11.407299 2822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.128:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:11.410128 kubelet[2822]: E0213 19:04:11.408295 2822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.26.128:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:11.410128 kubelet[2822]: W0213 19:04:11.408183 2822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.26.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-128&limit=500&resourceVersion=0": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:11.410128 kubelet[2822]: E0213 19:04:11.408368 2822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.26.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-128&limit=500&resourceVersion=0": dial tcp 
172.31.26.128:6443: connect: connection refused Feb 13 19:04:11.410128 kubelet[2822]: I0213 19:04:11.408577 2822 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:04:11.410128 kubelet[2822]: I0213 19:04:11.409036 2822 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:04:11.410128 kubelet[2822]: W0213 19:04:11.409177 2822 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:04:11.411864 kubelet[2822]: I0213 19:04:11.411823 2822 server.go:1264] "Started kubelet" Feb 13 19:04:11.422009 kubelet[2822]: I0213 19:04:11.421946 2822 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:04:11.425724 kubelet[2822]: I0213 19:04:11.425628 2822 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:04:11.426129 kubelet[2822]: I0213 19:04:11.426102 2822 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:04:11.426297 kubelet[2822]: I0213 19:04:11.426261 2822 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:04:11.430434 kubelet[2822]: I0213 19:04:11.430391 2822 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:04:11.433369 kubelet[2822]: E0213 19:04:11.433156 2822 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.26.128:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.128:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-128.1823d9e34f9f3722 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-128,UID:ip-172-31-26-128,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-128,},FirstTimestamp:2025-02-13 19:04:11.41178141 +0000 UTC m=+0.813447152,LastTimestamp:2025-02-13 19:04:11.41178141 +0000 UTC m=+0.813447152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-128,}" Feb 13 19:04:11.440556 kubelet[2822]: E0213 19:04:11.440495 2822 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:04:11.440885 kubelet[2822]: E0213 19:04:11.440822 2822 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-26-128\" not found" Feb 13 19:04:11.440980 kubelet[2822]: I0213 19:04:11.440911 2822 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:04:11.441170 kubelet[2822]: I0213 19:04:11.441136 2822 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:04:11.441317 kubelet[2822]: I0213 19:04:11.441284 2822 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:04:11.442177 kubelet[2822]: W0213 19:04:11.441853 2822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:11.442177 kubelet[2822]: E0213 19:04:11.441952 2822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.26.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:11.443793 kubelet[2822]: E0213 19:04:11.443708 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.26.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-128?timeout=10s\": dial tcp 172.31.26.128:6443: connect: connection refused" interval="200ms" Feb 13 19:04:11.447408 kubelet[2822]: I0213 19:04:11.447298 2822 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:04:11.447408 kubelet[2822]: I0213 19:04:11.447331 2822 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:04:11.447713 kubelet[2822]: I0213 19:04:11.447515 2822 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:04:11.482375 kubelet[2822]: I0213 19:04:11.481728 2822 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:04:11.482375 kubelet[2822]: I0213 19:04:11.481772 2822 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:04:11.482375 kubelet[2822]: I0213 19:04:11.481818 2822 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:04:11.484969 kubelet[2822]: I0213 19:04:11.484889 2822 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:04:11.487610 kubelet[2822]: I0213 19:04:11.487556 2822 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:04:11.487704 kubelet[2822]: I0213 19:04:11.487622 2822 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:04:11.487704 kubelet[2822]: I0213 19:04:11.487661 2822 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:04:11.487704 kubelet[2822]: E0213 19:04:11.487759 2822 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:04:11.490390 kubelet[2822]: I0213 19:04:11.490334 2822 policy_none.go:49] "None policy: Start" Feb 13 19:04:11.495359 kubelet[2822]: W0213 19:04:11.495313 2822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:11.495593 kubelet[2822]: E0213 19:04:11.495569 2822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.26.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:11.496018 kubelet[2822]: I0213 19:04:11.495993 2822 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:04:11.496238 kubelet[2822]: I0213 19:04:11.496207 2822 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:04:11.514857 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:04:11.536053 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:04:11.544276 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:04:11.545863 kubelet[2822]: I0213 19:04:11.545550 2822 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-128" Feb 13 19:04:11.546198 kubelet[2822]: E0213 19:04:11.546157 2822 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.128:6443/api/v1/nodes\": dial tcp 172.31.26.128:6443: connect: connection refused" node="ip-172-31-26-128" Feb 13 19:04:11.559110 kubelet[2822]: I0213 19:04:11.559042 2822 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:04:11.560755 kubelet[2822]: I0213 19:04:11.560663 2822 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:04:11.561859 kubelet[2822]: I0213 19:04:11.561010 2822 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:04:11.566483 kubelet[2822]: E0213 19:04:11.566440 2822 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-128\" not found" Feb 13 19:04:11.588154 kubelet[2822]: I0213 19:04:11.588038 2822 topology_manager.go:215] "Topology Admit Handler" podUID="10e63b9a9ec70888b25f343403546c01" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-26-128" Feb 13 19:04:11.590345 kubelet[2822]: I0213 19:04:11.590046 2822 topology_manager.go:215] "Topology Admit Handler" podUID="773b16d190ab37a1166250c126b61362" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-26-128" Feb 13 19:04:11.592441 kubelet[2822]: I0213 19:04:11.592387 2822 topology_manager.go:215] "Topology Admit Handler" podUID="437a4e2708d9e80aa1137cc65da85d9c" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-26-128" Feb 13 19:04:11.608619 systemd[1]: Created slice kubepods-burstable-pod10e63b9a9ec70888b25f343403546c01.slice - libcontainer container kubepods-burstable-pod10e63b9a9ec70888b25f343403546c01.slice. 
Feb 13 19:04:11.628286 systemd[1]: Created slice kubepods-burstable-pod773b16d190ab37a1166250c126b61362.slice - libcontainer container kubepods-burstable-pod773b16d190ab37a1166250c126b61362.slice. Feb 13 19:04:11.636192 systemd[1]: Created slice kubepods-burstable-pod437a4e2708d9e80aa1137cc65da85d9c.slice - libcontainer container kubepods-burstable-pod437a4e2708d9e80aa1137cc65da85d9c.slice. Feb 13 19:04:11.642732 kubelet[2822]: I0213 19:04:11.642297 2822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10e63b9a9ec70888b25f343403546c01-ca-certs\") pod \"kube-apiserver-ip-172-31-26-128\" (UID: \"10e63b9a9ec70888b25f343403546c01\") " pod="kube-system/kube-apiserver-ip-172-31-26-128" Feb 13 19:04:11.642732 kubelet[2822]: I0213 19:04:11.642354 2822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10e63b9a9ec70888b25f343403546c01-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-128\" (UID: \"10e63b9a9ec70888b25f343403546c01\") " pod="kube-system/kube-apiserver-ip-172-31-26-128" Feb 13 19:04:11.642732 kubelet[2822]: I0213 19:04:11.642392 2822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10e63b9a9ec70888b25f343403546c01-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-128\" (UID: \"10e63b9a9ec70888b25f343403546c01\") " pod="kube-system/kube-apiserver-ip-172-31-26-128" Feb 13 19:04:11.642732 kubelet[2822]: I0213 19:04:11.642429 2822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/773b16d190ab37a1166250c126b61362-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-128\" (UID: \"773b16d190ab37a1166250c126b61362\") " 
pod="kube-system/kube-controller-manager-ip-172-31-26-128" Feb 13 19:04:11.642732 kubelet[2822]: I0213 19:04:11.642464 2822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/773b16d190ab37a1166250c126b61362-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-128\" (UID: \"773b16d190ab37a1166250c126b61362\") " pod="kube-system/kube-controller-manager-ip-172-31-26-128" Feb 13 19:04:11.643102 kubelet[2822]: I0213 19:04:11.642498 2822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/773b16d190ab37a1166250c126b61362-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-128\" (UID: \"773b16d190ab37a1166250c126b61362\") " pod="kube-system/kube-controller-manager-ip-172-31-26-128" Feb 13 19:04:11.643102 kubelet[2822]: I0213 19:04:11.642532 2822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/773b16d190ab37a1166250c126b61362-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-128\" (UID: \"773b16d190ab37a1166250c126b61362\") " pod="kube-system/kube-controller-manager-ip-172-31-26-128" Feb 13 19:04:11.643102 kubelet[2822]: I0213 19:04:11.642567 2822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/773b16d190ab37a1166250c126b61362-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-128\" (UID: \"773b16d190ab37a1166250c126b61362\") " pod="kube-system/kube-controller-manager-ip-172-31-26-128" Feb 13 19:04:11.643102 kubelet[2822]: I0213 19:04:11.642600 2822 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/437a4e2708d9e80aa1137cc65da85d9c-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-128\" (UID: \"437a4e2708d9e80aa1137cc65da85d9c\") " pod="kube-system/kube-scheduler-ip-172-31-26-128" Feb 13 19:04:11.644526 kubelet[2822]: E0213 19:04:11.644470 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-128?timeout=10s\": dial tcp 172.31.26.128:6443: connect: connection refused" interval="400ms" Feb 13 19:04:11.749324 kubelet[2822]: I0213 19:04:11.749104 2822 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-128" Feb 13 19:04:11.750199 kubelet[2822]: E0213 19:04:11.750139 2822 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.128:6443/api/v1/nodes\": dial tcp 172.31.26.128:6443: connect: connection refused" node="ip-172-31-26-128" Feb 13 19:04:11.922358 containerd[1942]: time="2025-02-13T19:04:11.922197188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-128,Uid:10e63b9a9ec70888b25f343403546c01,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:11.934173 containerd[1942]: time="2025-02-13T19:04:11.934107746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-128,Uid:773b16d190ab37a1166250c126b61362,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:11.941595 containerd[1942]: time="2025-02-13T19:04:11.941523001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-128,Uid:437a4e2708d9e80aa1137cc65da85d9c,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:12.045752 kubelet[2822]: E0213 19:04:12.045683 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-128?timeout=10s\": dial tcp 172.31.26.128:6443: connect: 
connection refused" interval="800ms" Feb 13 19:04:12.153157 kubelet[2822]: I0213 19:04:12.152719 2822 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-128" Feb 13 19:04:12.153442 kubelet[2822]: E0213 19:04:12.153187 2822 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.128:6443/api/v1/nodes\": dial tcp 172.31.26.128:6443: connect: connection refused" node="ip-172-31-26-128" Feb 13 19:04:12.443853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175307725.mount: Deactivated successfully. Feb 13 19:04:12.459174 containerd[1942]: time="2025-02-13T19:04:12.458251928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:04:12.467135 containerd[1942]: time="2025-02-13T19:04:12.467012826Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:04:12.469112 containerd[1942]: time="2025-02-13T19:04:12.469035602Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:04:12.472123 containerd[1942]: time="2025-02-13T19:04:12.471883217Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:04:12.475718 containerd[1942]: time="2025-02-13T19:04:12.475643109Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:04:12.477918 containerd[1942]: time="2025-02-13T19:04:12.477851564Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:04:12.480190 containerd[1942]: time="2025-02-13T19:04:12.480106292Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:04:12.482707 containerd[1942]: time="2025-02-13T19:04:12.482564486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:04:12.486796 containerd[1942]: time="2025-02-13T19:04:12.486464119Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 544.830534ms" Feb 13 19:04:12.490973 containerd[1942]: time="2025-02-13T19:04:12.490575387Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 568.258066ms" Feb 13 19:04:12.498581 containerd[1942]: time="2025-02-13T19:04:12.498484518Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.249852ms" Feb 13 19:04:12.516415 kubelet[2822]: W0213 19:04:12.516267 2822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://172.31.26.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-128&limit=500&resourceVersion=0": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:12.516415 kubelet[2822]: E0213 19:04:12.516375 2822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.26.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-128&limit=500&resourceVersion=0": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:12.700473 kubelet[2822]: W0213 19:04:12.700195 2822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.26.128:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:12.700473 kubelet[2822]: E0213 19:04:12.700296 2822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.26.128:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:12.719192 containerd[1942]: time="2025-02-13T19:04:12.718790179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:12.719192 containerd[1942]: time="2025-02-13T19:04:12.718931036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:12.719192 containerd[1942]: time="2025-02-13T19:04:12.718970617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:12.720255 containerd[1942]: time="2025-02-13T19:04:12.719765591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:12.737467 containerd[1942]: time="2025-02-13T19:04:12.736695626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:12.737467 containerd[1942]: time="2025-02-13T19:04:12.736815302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:12.737467 containerd[1942]: time="2025-02-13T19:04:12.736846138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:12.737467 containerd[1942]: time="2025-02-13T19:04:12.737027740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:12.739177 containerd[1942]: time="2025-02-13T19:04:12.738827739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:12.739177 containerd[1942]: time="2025-02-13T19:04:12.738982450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:12.739177 containerd[1942]: time="2025-02-13T19:04:12.739037814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:12.742754 containerd[1942]: time="2025-02-13T19:04:12.742472620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:12.799394 systemd[1]: Started cri-containerd-197bd559bbc53504f7b017b77c31d6c38ae0953b614ccdbccefde42f3d3527ee.scope - libcontainer container 197bd559bbc53504f7b017b77c31d6c38ae0953b614ccdbccefde42f3d3527ee. 
Feb 13 19:04:12.823598 systemd[1]: Started cri-containerd-a4ae55c42eed74116581dbec11b8a7b5d54256f4d21d434792daf9449572d15d.scope - libcontainer container a4ae55c42eed74116581dbec11b8a7b5d54256f4d21d434792daf9449572d15d. Feb 13 19:04:12.831413 systemd[1]: Started cri-containerd-c66abdc1e09dcbadec38bf06db6da11f5836151a0148a8596673c56de7f3b9a7.scope - libcontainer container c66abdc1e09dcbadec38bf06db6da11f5836151a0148a8596673c56de7f3b9a7. Feb 13 19:04:12.847471 kubelet[2822]: E0213 19:04:12.847235 2822 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-128?timeout=10s\": dial tcp 172.31.26.128:6443: connect: connection refused" interval="1.6s" Feb 13 19:04:12.926407 containerd[1942]: time="2025-02-13T19:04:12.926338769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-128,Uid:773b16d190ab37a1166250c126b61362,Namespace:kube-system,Attempt:0,} returns sandbox id \"197bd559bbc53504f7b017b77c31d6c38ae0953b614ccdbccefde42f3d3527ee\"" Feb 13 19:04:12.943719 kubelet[2822]: W0213 19:04:12.942908 2822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.26.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:12.943719 kubelet[2822]: E0213 19:04:12.943007 2822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.26.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:12.948155 containerd[1942]: time="2025-02-13T19:04:12.947008557Z" level=info msg="CreateContainer within sandbox \"197bd559bbc53504f7b017b77c31d6c38ae0953b614ccdbccefde42f3d3527ee\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:04:12.953249 containerd[1942]: time="2025-02-13T19:04:12.952027914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-128,Uid:10e63b9a9ec70888b25f343403546c01,Namespace:kube-system,Attempt:0,} returns sandbox id \"c66abdc1e09dcbadec38bf06db6da11f5836151a0148a8596673c56de7f3b9a7\"" Feb 13 19:04:12.962271 kubelet[2822]: I0213 19:04:12.961645 2822 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-128" Feb 13 19:04:12.964176 kubelet[2822]: E0213 19:04:12.962812 2822 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.26.128:6443/api/v1/nodes\": dial tcp 172.31.26.128:6443: connect: connection refused" node="ip-172-31-26-128" Feb 13 19:04:12.973551 containerd[1942]: time="2025-02-13T19:04:12.973466254Z" level=info msg="CreateContainer within sandbox \"c66abdc1e09dcbadec38bf06db6da11f5836151a0148a8596673c56de7f3b9a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:04:12.988437 containerd[1942]: time="2025-02-13T19:04:12.988352667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-128,Uid:437a4e2708d9e80aa1137cc65da85d9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4ae55c42eed74116581dbec11b8a7b5d54256f4d21d434792daf9449572d15d\"" Feb 13 19:04:12.996466 containerd[1942]: time="2025-02-13T19:04:12.996305481Z" level=info msg="CreateContainer within sandbox \"a4ae55c42eed74116581dbec11b8a7b5d54256f4d21d434792daf9449572d15d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:04:13.024720 containerd[1942]: time="2025-02-13T19:04:13.024652604Z" level=info msg="CreateContainer within sandbox \"197bd559bbc53504f7b017b77c31d6c38ae0953b614ccdbccefde42f3d3527ee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"4005b6ea0ae1cc215126039956aba60663597f8a320add2e081ed828e17ed301\"" Feb 13 19:04:13.026192 containerd[1942]: time="2025-02-13T19:04:13.026132364Z" level=info msg="StartContainer for \"4005b6ea0ae1cc215126039956aba60663597f8a320add2e081ed828e17ed301\"" Feb 13 19:04:13.028512 kubelet[2822]: W0213 19:04:13.028376 2822 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.26.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:13.028512 kubelet[2822]: E0213 19:04:13.028472 2822 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.26.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.128:6443: connect: connection refused Feb 13 19:04:13.036422 containerd[1942]: time="2025-02-13T19:04:13.035873771Z" level=info msg="CreateContainer within sandbox \"c66abdc1e09dcbadec38bf06db6da11f5836151a0148a8596673c56de7f3b9a7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f9415b717addb1d9175abf128577cfd129f0f23677f137f4ba0e3845ffe161be\"" Feb 13 19:04:13.037167 containerd[1942]: time="2025-02-13T19:04:13.037094328Z" level=info msg="StartContainer for \"f9415b717addb1d9175abf128577cfd129f0f23677f137f4ba0e3845ffe161be\"" Feb 13 19:04:13.049412 containerd[1942]: time="2025-02-13T19:04:13.049267303Z" level=info msg="CreateContainer within sandbox \"a4ae55c42eed74116581dbec11b8a7b5d54256f4d21d434792daf9449572d15d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b8aac71603db1d5bbc5f847377b3a7c77aab07c5fffd4d36489abad4523859ed\"" Feb 13 19:04:13.050952 containerd[1942]: time="2025-02-13T19:04:13.050671056Z" level=info msg="StartContainer for \"b8aac71603db1d5bbc5f847377b3a7c77aab07c5fffd4d36489abad4523859ed\"" Feb 13 19:04:13.108261 systemd[1]: 
Started cri-containerd-4005b6ea0ae1cc215126039956aba60663597f8a320add2e081ed828e17ed301.scope - libcontainer container 4005b6ea0ae1cc215126039956aba60663597f8a320add2e081ed828e17ed301. Feb 13 19:04:13.122246 systemd[1]: Started cri-containerd-f9415b717addb1d9175abf128577cfd129f0f23677f137f4ba0e3845ffe161be.scope - libcontainer container f9415b717addb1d9175abf128577cfd129f0f23677f137f4ba0e3845ffe161be. Feb 13 19:04:13.138422 systemd[1]: Started cri-containerd-b8aac71603db1d5bbc5f847377b3a7c77aab07c5fffd4d36489abad4523859ed.scope - libcontainer container b8aac71603db1d5bbc5f847377b3a7c77aab07c5fffd4d36489abad4523859ed. Feb 13 19:04:13.263365 containerd[1942]: time="2025-02-13T19:04:13.262389277Z" level=info msg="StartContainer for \"f9415b717addb1d9175abf128577cfd129f0f23677f137f4ba0e3845ffe161be\" returns successfully" Feb 13 19:04:13.284091 containerd[1942]: time="2025-02-13T19:04:13.283599359Z" level=info msg="StartContainer for \"4005b6ea0ae1cc215126039956aba60663597f8a320add2e081ed828e17ed301\" returns successfully" Feb 13 19:04:13.294815 containerd[1942]: time="2025-02-13T19:04:13.294743068Z" level=info msg="StartContainer for \"b8aac71603db1d5bbc5f847377b3a7c77aab07c5fffd4d36489abad4523859ed\" returns successfully" Feb 13 19:04:14.567647 kubelet[2822]: I0213 19:04:14.567591 2822 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-128" Feb 13 19:04:17.334436 kubelet[2822]: E0213 19:04:17.334368 2822 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-128\" not found" node="ip-172-31-26-128" Feb 13 19:04:17.401509 kubelet[2822]: I0213 19:04:17.401164 2822 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-26-128" Feb 13 19:04:17.415168 kubelet[2822]: I0213 19:04:17.411805 2822 apiserver.go:52] "Watching apiserver" Feb 13 19:04:17.442184 kubelet[2822]: I0213 19:04:17.442144 2822 desired_state_of_world_populator.go:157] "Finished populating initial desired 
state of world" Feb 13 19:04:19.562000 systemd[1]: Reloading requested from client PID 3095 ('systemctl') (unit session-5.scope)... Feb 13 19:04:19.562035 systemd[1]: Reloading... Feb 13 19:04:19.834234 zram_generator::config[3141]: No configuration found. Feb 13 19:04:20.137169 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:04:20.351355 systemd[1]: Reloading finished in 788 ms. Feb 13 19:04:20.427616 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:04:20.445133 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:04:20.445773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:20.445872 systemd[1]: kubelet.service: Consumed 1.562s CPU time, 115.6M memory peak, 0B memory swap peak. Feb 13 19:04:20.455927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:04:20.782399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:20.788344 (kubelet)[3195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:04:20.894106 kubelet[3195]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:04:20.894106 kubelet[3195]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:04:20.894106 kubelet[3195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:04:20.894106 kubelet[3195]: I0213 19:04:20.891488 3195 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:04:20.900747 kubelet[3195]: I0213 19:04:20.900706 3195 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:04:20.901184 kubelet[3195]: I0213 19:04:20.900923 3195 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:04:20.901859 kubelet[3195]: I0213 19:04:20.901822 3195 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:04:20.905553 kubelet[3195]: I0213 19:04:20.905136 3195 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:04:20.907769 kubelet[3195]: I0213 19:04:20.907730 3195 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:04:20.927278 kubelet[3195]: I0213 19:04:20.926265 3195 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:04:20.927278 kubelet[3195]: I0213 19:04:20.926736 3195 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:04:20.930402 kubelet[3195]: I0213 19:04:20.926792 3195 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-128","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:04:20.930402 kubelet[3195]: I0213 19:04:20.927613 3195 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 
19:04:20.930402 kubelet[3195]: I0213 19:04:20.927637 3195 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:04:20.930402 kubelet[3195]: I0213 19:04:20.927715 3195 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:04:20.930402 kubelet[3195]: I0213 19:04:20.927887 3195 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:04:20.930870 kubelet[3195]: I0213 19:04:20.927910 3195 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:04:20.930870 kubelet[3195]: I0213 19:04:20.927974 3195 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:04:20.930870 kubelet[3195]: I0213 19:04:20.928011 3195 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:04:20.949018 kubelet[3195]: I0213 19:04:20.948923 3195 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:04:20.951542 kubelet[3195]: I0213 19:04:20.950214 3195 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:04:20.957964 kubelet[3195]: I0213 19:04:20.953253 3195 server.go:1264] "Started kubelet" Feb 13 19:04:20.969036 kubelet[3195]: I0213 19:04:20.968867 3195 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:04:20.973624 kubelet[3195]: I0213 19:04:20.973523 3195 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:04:20.978100 kubelet[3195]: I0213 19:04:20.976810 3195 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:04:20.979474 kubelet[3195]: I0213 19:04:20.978431 3195 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:04:20.980205 kubelet[3195]: I0213 19:04:20.980079 3195 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:04:20.998298 kubelet[3195]: I0213 19:04:20.997979 3195 
volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:04:21.001224 kubelet[3195]: I0213 19:04:21.001043 3195 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:04:21.007775 kubelet[3195]: I0213 19:04:21.005045 3195 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:04:21.018532 kubelet[3195]: I0213 19:04:21.018492 3195 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:04:21.018885 kubelet[3195]: I0213 19:04:21.018850 3195 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:04:21.034627 kubelet[3195]: I0213 19:04:21.034484 3195 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:04:21.038299 kubelet[3195]: I0213 19:04:21.038147 3195 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:04:21.047640 kubelet[3195]: I0213 19:04:21.047505 3195 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:04:21.047640 kubelet[3195]: I0213 19:04:21.047701 3195 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:04:21.047640 kubelet[3195]: I0213 19:04:21.047736 3195 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:04:21.047640 kubelet[3195]: E0213 19:04:21.047816 3195 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:04:21.122273 kubelet[3195]: I0213 19:04:21.121160 3195 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-26-128" Feb 13 19:04:21.151229 kubelet[3195]: I0213 19:04:21.150538 3195 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-26-128" Feb 13 19:04:21.156127 kubelet[3195]: I0213 19:04:21.155709 3195 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-26-128" Feb 13 19:04:21.160558 kubelet[3195]: E0213 19:04:21.152740 3195 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:04:21.239565 kubelet[3195]: I0213 19:04:21.238200 3195 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:04:21.239565 kubelet[3195]: I0213 19:04:21.238233 3195 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:04:21.239565 kubelet[3195]: I0213 19:04:21.238269 3195 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:04:21.239565 kubelet[3195]: I0213 19:04:21.238819 3195 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:04:21.239565 kubelet[3195]: I0213 19:04:21.238842 3195 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:04:21.239565 kubelet[3195]: I0213 19:04:21.238879 3195 policy_none.go:49] "None policy: Start" Feb 13 19:04:21.241221 kubelet[3195]: I0213 19:04:21.241178 3195 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:04:21.241351 kubelet[3195]: I0213 
19:04:21.241228 3195 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:04:21.242354 kubelet[3195]: I0213 19:04:21.242290 3195 state_mem.go:75] "Updated machine memory state" Feb 13 19:04:21.258586 kubelet[3195]: I0213 19:04:21.256997 3195 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:04:21.258586 kubelet[3195]: I0213 19:04:21.257297 3195 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:04:21.260437 kubelet[3195]: I0213 19:04:21.260332 3195 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:04:21.369001 kubelet[3195]: I0213 19:04:21.368557 3195 topology_manager.go:215] "Topology Admit Handler" podUID="10e63b9a9ec70888b25f343403546c01" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-26-128" Feb 13 19:04:21.369001 kubelet[3195]: I0213 19:04:21.368758 3195 topology_manager.go:215] "Topology Admit Handler" podUID="773b16d190ab37a1166250c126b61362" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-26-128" Feb 13 19:04:21.369001 kubelet[3195]: I0213 19:04:21.368857 3195 topology_manager.go:215] "Topology Admit Handler" podUID="437a4e2708d9e80aa1137cc65da85d9c" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-26-128" Feb 13 19:04:21.408471 kubelet[3195]: I0213 19:04:21.408402 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/773b16d190ab37a1166250c126b61362-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-128\" (UID: \"773b16d190ab37a1166250c126b61362\") " pod="kube-system/kube-controller-manager-ip-172-31-26-128" Feb 13 19:04:21.408768 kubelet[3195]: I0213 19:04:21.408479 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/773b16d190ab37a1166250c126b61362-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-128\" (UID: \"773b16d190ab37a1166250c126b61362\") " pod="kube-system/kube-controller-manager-ip-172-31-26-128" Feb 13 19:04:21.408768 kubelet[3195]: I0213 19:04:21.408555 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/437a4e2708d9e80aa1137cc65da85d9c-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-128\" (UID: \"437a4e2708d9e80aa1137cc65da85d9c\") " pod="kube-system/kube-scheduler-ip-172-31-26-128" Feb 13 19:04:21.408768 kubelet[3195]: I0213 19:04:21.408602 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/10e63b9a9ec70888b25f343403546c01-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-128\" (UID: \"10e63b9a9ec70888b25f343403546c01\") " pod="kube-system/kube-apiserver-ip-172-31-26-128" Feb 13 19:04:21.408768 kubelet[3195]: I0213 19:04:21.408651 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/773b16d190ab37a1166250c126b61362-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-128\" (UID: \"773b16d190ab37a1166250c126b61362\") " pod="kube-system/kube-controller-manager-ip-172-31-26-128" Feb 13 19:04:21.408768 kubelet[3195]: I0213 19:04:21.408695 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/773b16d190ab37a1166250c126b61362-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-128\" (UID: \"773b16d190ab37a1166250c126b61362\") " pod="kube-system/kube-controller-manager-ip-172-31-26-128" Feb 13 19:04:21.410253 kubelet[3195]: I0213 19:04:21.408751 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/773b16d190ab37a1166250c126b61362-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-128\" (UID: \"773b16d190ab37a1166250c126b61362\") " pod="kube-system/kube-controller-manager-ip-172-31-26-128" Feb 13 19:04:21.410253 kubelet[3195]: I0213 19:04:21.408791 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/10e63b9a9ec70888b25f343403546c01-ca-certs\") pod \"kube-apiserver-ip-172-31-26-128\" (UID: \"10e63b9a9ec70888b25f343403546c01\") " pod="kube-system/kube-apiserver-ip-172-31-26-128" Feb 13 19:04:21.410253 kubelet[3195]: I0213 19:04:21.409269 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/10e63b9a9ec70888b25f343403546c01-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-128\" (UID: \"10e63b9a9ec70888b25f343403546c01\") " pod="kube-system/kube-apiserver-ip-172-31-26-128" Feb 13 19:04:21.949817 kubelet[3195]: I0213 19:04:21.949686 3195 apiserver.go:52] "Watching apiserver" Feb 13 19:04:22.003122 kubelet[3195]: I0213 19:04:22.002350 3195 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:04:22.134629 kubelet[3195]: E0213 19:04:22.134544 3195 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-26-128\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-128" Feb 13 19:04:22.184470 kubelet[3195]: I0213 19:04:22.183265 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-128" podStartSLOduration=1.183242042 podStartE2EDuration="1.183242042s" podCreationTimestamp="2025-02-13 19:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:22.164107186 +0000 UTC m=+1.369063303" watchObservedRunningTime="2025-02-13 19:04:22.183242042 +0000 UTC m=+1.388198255" Feb 13 19:04:22.216466 kubelet[3195]: I0213 19:04:22.216270 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-128" podStartSLOduration=1.216247179 podStartE2EDuration="1.216247179s" podCreationTimestamp="2025-02-13 19:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:22.184905142 +0000 UTC m=+1.389861247" watchObservedRunningTime="2025-02-13 19:04:22.216247179 +0000 UTC m=+1.421203284" Feb 13 19:04:22.248940 kubelet[3195]: I0213 19:04:22.248817 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-128" podStartSLOduration=1.2487541580000001 podStartE2EDuration="1.248754158s" podCreationTimestamp="2025-02-13 19:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:22.219017853 +0000 UTC m=+1.423973994" watchObservedRunningTime="2025-02-13 19:04:22.248754158 +0000 UTC m=+1.453710310" Feb 13 19:04:22.484027 update_engine[1920]: I20250213 19:04:22.483214 1920 update_attempter.cc:509] Updating boot flags... Feb 13 19:04:22.603183 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3256) Feb 13 19:04:22.961139 sudo[2217]: pam_unix(sudo:session): session closed for user root Feb 13 19:04:22.985396 sshd[2216]: Connection closed by 147.75.109.163 port 36478 Feb 13 19:04:22.986778 sshd-session[2214]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:23.006476 systemd[1]: sshd@4-172.31.26.128:22-147.75.109.163:36478.service: Deactivated successfully. 
Feb 13 19:04:23.014563 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:04:23.019011 systemd[1]: session-5.scope: Consumed 11.831s CPU time, 191.4M memory peak, 0B memory swap peak. Feb 13 19:04:23.035184 systemd-logind[1919]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:04:23.052197 systemd-logind[1919]: Removed session 5. Feb 13 19:04:23.073190 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3246) Feb 13 19:04:33.252984 kubelet[3195]: I0213 19:04:33.252845 3195 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:04:33.253966 containerd[1942]: time="2025-02-13T19:04:33.253853760Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:04:33.254452 kubelet[3195]: I0213 19:04:33.254241 3195 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:04:34.191009 kubelet[3195]: I0213 19:04:34.190919 3195 topology_manager.go:215] "Topology Admit Handler" podUID="380452ad-cd01-4220-a854-35e6e0106a1f" podNamespace="kube-system" podName="kube-proxy-t4xc5" Feb 13 19:04:34.213437 systemd[1]: Created slice kubepods-besteffort-pod380452ad_cd01_4220_a854_35e6e0106a1f.slice - libcontainer container kubepods-besteffort-pod380452ad_cd01_4220_a854_35e6e0106a1f.slice. Feb 13 19:04:34.225954 kubelet[3195]: I0213 19:04:34.225875 3195 topology_manager.go:215] "Topology Admit Handler" podUID="4444a882-a1e5-4ff5-974f-22b8e36b5326" podNamespace="kube-flannel" podName="kube-flannel-ds-6gwdf" Feb 13 19:04:34.251842 systemd[1]: Created slice kubepods-burstable-pod4444a882_a1e5_4ff5_974f_22b8e36b5326.slice - libcontainer container kubepods-burstable-pod4444a882_a1e5_4ff5_974f_22b8e36b5326.slice. 
Feb 13 19:04:34.295777 kubelet[3195]: I0213 19:04:34.295709 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/380452ad-cd01-4220-a854-35e6e0106a1f-kube-proxy\") pod \"kube-proxy-t4xc5\" (UID: \"380452ad-cd01-4220-a854-35e6e0106a1f\") " pod="kube-system/kube-proxy-t4xc5" Feb 13 19:04:34.296383 kubelet[3195]: I0213 19:04:34.295780 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/380452ad-cd01-4220-a854-35e6e0106a1f-lib-modules\") pod \"kube-proxy-t4xc5\" (UID: \"380452ad-cd01-4220-a854-35e6e0106a1f\") " pod="kube-system/kube-proxy-t4xc5" Feb 13 19:04:34.296383 kubelet[3195]: I0213 19:04:34.295830 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/4444a882-a1e5-4ff5-974f-22b8e36b5326-cni-plugin\") pod \"kube-flannel-ds-6gwdf\" (UID: \"4444a882-a1e5-4ff5-974f-22b8e36b5326\") " pod="kube-flannel/kube-flannel-ds-6gwdf" Feb 13 19:04:34.296383 kubelet[3195]: I0213 19:04:34.295869 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/380452ad-cd01-4220-a854-35e6e0106a1f-xtables-lock\") pod \"kube-proxy-t4xc5\" (UID: \"380452ad-cd01-4220-a854-35e6e0106a1f\") " pod="kube-system/kube-proxy-t4xc5" Feb 13 19:04:34.296383 kubelet[3195]: I0213 19:04:34.295908 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfl86\" (UniqueName: \"kubernetes.io/projected/4444a882-a1e5-4ff5-974f-22b8e36b5326-kube-api-access-cfl86\") pod \"kube-flannel-ds-6gwdf\" (UID: \"4444a882-a1e5-4ff5-974f-22b8e36b5326\") " pod="kube-flannel/kube-flannel-ds-6gwdf" Feb 13 19:04:34.296383 kubelet[3195]: I0213 19:04:34.295948 3195 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvbrx\" (UniqueName: \"kubernetes.io/projected/380452ad-cd01-4220-a854-35e6e0106a1f-kube-api-access-wvbrx\") pod \"kube-proxy-t4xc5\" (UID: \"380452ad-cd01-4220-a854-35e6e0106a1f\") " pod="kube-system/kube-proxy-t4xc5" Feb 13 19:04:34.296650 kubelet[3195]: I0213 19:04:34.295988 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/4444a882-a1e5-4ff5-974f-22b8e36b5326-run\") pod \"kube-flannel-ds-6gwdf\" (UID: \"4444a882-a1e5-4ff5-974f-22b8e36b5326\") " pod="kube-flannel/kube-flannel-ds-6gwdf" Feb 13 19:04:34.296650 kubelet[3195]: I0213 19:04:34.296027 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/4444a882-a1e5-4ff5-974f-22b8e36b5326-flannel-cfg\") pod \"kube-flannel-ds-6gwdf\" (UID: \"4444a882-a1e5-4ff5-974f-22b8e36b5326\") " pod="kube-flannel/kube-flannel-ds-6gwdf" Feb 13 19:04:34.296650 kubelet[3195]: I0213 19:04:34.296084 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4444a882-a1e5-4ff5-974f-22b8e36b5326-xtables-lock\") pod \"kube-flannel-ds-6gwdf\" (UID: \"4444a882-a1e5-4ff5-974f-22b8e36b5326\") " pod="kube-flannel/kube-flannel-ds-6gwdf" Feb 13 19:04:34.296650 kubelet[3195]: I0213 19:04:34.296125 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/4444a882-a1e5-4ff5-974f-22b8e36b5326-cni\") pod \"kube-flannel-ds-6gwdf\" (UID: \"4444a882-a1e5-4ff5-974f-22b8e36b5326\") " pod="kube-flannel/kube-flannel-ds-6gwdf" Feb 13 19:04:34.533213 containerd[1942]: time="2025-02-13T19:04:34.533022451Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-t4xc5,Uid:380452ad-cd01-4220-a854-35e6e0106a1f,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:34.564949 containerd[1942]: time="2025-02-13T19:04:34.564900236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6gwdf,Uid:4444a882-a1e5-4ff5-974f-22b8e36b5326,Namespace:kube-flannel,Attempt:0,}" Feb 13 19:04:34.601736 containerd[1942]: time="2025-02-13T19:04:34.599334795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:34.601736 containerd[1942]: time="2025-02-13T19:04:34.599448342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:34.601736 containerd[1942]: time="2025-02-13T19:04:34.599477571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:34.601736 containerd[1942]: time="2025-02-13T19:04:34.601499232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:34.649286 containerd[1942]: time="2025-02-13T19:04:34.648034249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:34.649286 containerd[1942]: time="2025-02-13T19:04:34.648189355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:34.649286 containerd[1942]: time="2025-02-13T19:04:34.648227856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:34.651681 containerd[1942]: time="2025-02-13T19:04:34.650999921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:34.655690 systemd[1]: Started cri-containerd-feca896bd2a291f638be24407c7976beaf7860ca045cd1281be37965c1f47b3d.scope - libcontainer container feca896bd2a291f638be24407c7976beaf7860ca045cd1281be37965c1f47b3d. Feb 13 19:04:34.699468 systemd[1]: Started cri-containerd-858b85f4b7ce0b9553cb79cd1608f8bd716999fc80da4ac25bd8445cd3ac18c5.scope - libcontainer container 858b85f4b7ce0b9553cb79cd1608f8bd716999fc80da4ac25bd8445cd3ac18c5. Feb 13 19:04:34.740853 containerd[1942]: time="2025-02-13T19:04:34.740772865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t4xc5,Uid:380452ad-cd01-4220-a854-35e6e0106a1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"feca896bd2a291f638be24407c7976beaf7860ca045cd1281be37965c1f47b3d\"" Feb 13 19:04:34.754012 containerd[1942]: time="2025-02-13T19:04:34.753933701Z" level=info msg="CreateContainer within sandbox \"feca896bd2a291f638be24407c7976beaf7860ca045cd1281be37965c1f47b3d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:04:34.798125 containerd[1942]: time="2025-02-13T19:04:34.797931459Z" level=info msg="CreateContainer within sandbox \"feca896bd2a291f638be24407c7976beaf7860ca045cd1281be37965c1f47b3d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eddca812db2fc957a858bc9524278f75beda2ecf6c3bede008296f06c9d5d99a\"" Feb 13 19:04:34.799426 containerd[1942]: time="2025-02-13T19:04:34.798956658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-6gwdf,Uid:4444a882-a1e5-4ff5-974f-22b8e36b5326,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"858b85f4b7ce0b9553cb79cd1608f8bd716999fc80da4ac25bd8445cd3ac18c5\"" Feb 13 19:04:34.802561 containerd[1942]: time="2025-02-13T19:04:34.802179458Z" level=info msg="StartContainer for \"eddca812db2fc957a858bc9524278f75beda2ecf6c3bede008296f06c9d5d99a\"" Feb 13 19:04:34.813149 containerd[1942]: time="2025-02-13T19:04:34.810220044Z" 
level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 19:04:34.883442 systemd[1]: Started cri-containerd-eddca812db2fc957a858bc9524278f75beda2ecf6c3bede008296f06c9d5d99a.scope - libcontainer container eddca812db2fc957a858bc9524278f75beda2ecf6c3bede008296f06c9d5d99a. Feb 13 19:04:34.944474 containerd[1942]: time="2025-02-13T19:04:34.944260023Z" level=info msg="StartContainer for \"eddca812db2fc957a858bc9524278f75beda2ecf6c3bede008296f06c9d5d99a\" returns successfully" Feb 13 19:04:35.168025 kubelet[3195]: I0213 19:04:35.167843 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t4xc5" podStartSLOduration=1.167820927 podStartE2EDuration="1.167820927s" podCreationTimestamp="2025-02-13 19:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:35.166225149 +0000 UTC m=+14.371181290" watchObservedRunningTime="2025-02-13 19:04:35.167820927 +0000 UTC m=+14.372777032" Feb 13 19:04:36.977163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount933476686.mount: Deactivated successfully. 
Feb 13 19:04:37.060231 containerd[1942]: time="2025-02-13T19:04:37.060141443Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:37.062305 containerd[1942]: time="2025-02-13T19:04:37.062217808Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 19:04:37.067149 containerd[1942]: time="2025-02-13T19:04:37.065490623Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:37.071822 containerd[1942]: time="2025-02-13T19:04:37.071726340Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:37.074424 containerd[1942]: time="2025-02-13T19:04:37.073175647Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.262873132s" Feb 13 19:04:37.074424 containerd[1942]: time="2025-02-13T19:04:37.073232714Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 19:04:37.077365 containerd[1942]: time="2025-02-13T19:04:37.076972911Z" level=info msg="CreateContainer within sandbox \"858b85f4b7ce0b9553cb79cd1608f8bd716999fc80da4ac25bd8445cd3ac18c5\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 19:04:37.103472 containerd[1942]: 
time="2025-02-13T19:04:37.103401834Z" level=info msg="CreateContainer within sandbox \"858b85f4b7ce0b9553cb79cd1608f8bd716999fc80da4ac25bd8445cd3ac18c5\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"0e31d0f6d63e9b7aeffacec2b8e34af161325a6a519bb0f5127caa23dfd62d24\"" Feb 13 19:04:37.104921 containerd[1942]: time="2025-02-13T19:04:37.104874973Z" level=info msg="StartContainer for \"0e31d0f6d63e9b7aeffacec2b8e34af161325a6a519bb0f5127caa23dfd62d24\"" Feb 13 19:04:37.170389 systemd[1]: Started cri-containerd-0e31d0f6d63e9b7aeffacec2b8e34af161325a6a519bb0f5127caa23dfd62d24.scope - libcontainer container 0e31d0f6d63e9b7aeffacec2b8e34af161325a6a519bb0f5127caa23dfd62d24. Feb 13 19:04:37.218357 containerd[1942]: time="2025-02-13T19:04:37.218298933Z" level=info msg="StartContainer for \"0e31d0f6d63e9b7aeffacec2b8e34af161325a6a519bb0f5127caa23dfd62d24\" returns successfully" Feb 13 19:04:37.221310 systemd[1]: cri-containerd-0e31d0f6d63e9b7aeffacec2b8e34af161325a6a519bb0f5127caa23dfd62d24.scope: Deactivated successfully. Feb 13 19:04:37.296194 containerd[1942]: time="2025-02-13T19:04:37.295818530Z" level=info msg="shim disconnected" id=0e31d0f6d63e9b7aeffacec2b8e34af161325a6a519bb0f5127caa23dfd62d24 namespace=k8s.io Feb 13 19:04:37.296194 containerd[1942]: time="2025-02-13T19:04:37.295896096Z" level=warning msg="cleaning up after shim disconnected" id=0e31d0f6d63e9b7aeffacec2b8e34af161325a6a519bb0f5127caa23dfd62d24 namespace=k8s.io Feb 13 19:04:37.296194 containerd[1942]: time="2025-02-13T19:04:37.295917169Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:37.832141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e31d0f6d63e9b7aeffacec2b8e34af161325a6a519bb0f5127caa23dfd62d24-rootfs.mount: Deactivated successfully. 
Feb 13 19:04:38.173189 containerd[1942]: time="2025-02-13T19:04:38.172967891Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 19:04:40.379323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4126847422.mount: Deactivated successfully. Feb 13 19:04:41.711124 containerd[1942]: time="2025-02-13T19:04:41.710819803Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:41.713230 containerd[1942]: time="2025-02-13T19:04:41.713137283Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Feb 13 19:04:41.715885 containerd[1942]: time="2025-02-13T19:04:41.715804269Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:41.735122 containerd[1942]: time="2025-02-13T19:04:41.734157260Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:04:41.738528 containerd[1942]: time="2025-02-13T19:04:41.738451292Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.565059653s" Feb 13 19:04:41.738528 containerd[1942]: time="2025-02-13T19:04:41.738523424Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 19:04:41.745011 containerd[1942]: time="2025-02-13T19:04:41.744930247Z" level=info msg="CreateContainer 
within sandbox \"858b85f4b7ce0b9553cb79cd1608f8bd716999fc80da4ac25bd8445cd3ac18c5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:04:41.771837 containerd[1942]: time="2025-02-13T19:04:41.771775458Z" level=info msg="CreateContainer within sandbox \"858b85f4b7ce0b9553cb79cd1608f8bd716999fc80da4ac25bd8445cd3ac18c5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"93697c6daf1fd26ede33e24ac6584fddc51b54b977f09c8a59a31d69b40e9bbf\"" Feb 13 19:04:41.774538 containerd[1942]: time="2025-02-13T19:04:41.774394695Z" level=info msg="StartContainer for \"93697c6daf1fd26ede33e24ac6584fddc51b54b977f09c8a59a31d69b40e9bbf\"" Feb 13 19:04:41.836911 systemd[1]: run-containerd-runc-k8s.io-93697c6daf1fd26ede33e24ac6584fddc51b54b977f09c8a59a31d69b40e9bbf-runc.BkcxBu.mount: Deactivated successfully. Feb 13 19:04:41.851604 systemd[1]: Started cri-containerd-93697c6daf1fd26ede33e24ac6584fddc51b54b977f09c8a59a31d69b40e9bbf.scope - libcontainer container 93697c6daf1fd26ede33e24ac6584fddc51b54b977f09c8a59a31d69b40e9bbf. Feb 13 19:04:41.901232 systemd[1]: cri-containerd-93697c6daf1fd26ede33e24ac6584fddc51b54b977f09c8a59a31d69b40e9bbf.scope: Deactivated successfully. Feb 13 19:04:41.907113 containerd[1942]: time="2025-02-13T19:04:41.906824708Z" level=info msg="StartContainer for \"93697c6daf1fd26ede33e24ac6584fddc51b54b977f09c8a59a31d69b40e9bbf\" returns successfully" Feb 13 19:04:41.921215 kubelet[3195]: I0213 19:04:41.920847 3195 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:04:41.991482 kubelet[3195]: I0213 19:04:41.991290 3195 topology_manager.go:215] "Topology Admit Handler" podUID="1229ef1a-4166-41c8-8d8a-ad7da9674bb7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xrjt8" Feb 13 19:04:42.005884 systemd[1]: Created slice kubepods-burstable-pod1229ef1a_4166_41c8_8d8a_ad7da9674bb7.slice - libcontainer container kubepods-burstable-pod1229ef1a_4166_41c8_8d8a_ad7da9674bb7.slice. 
Feb 13 19:04:42.019962 kubelet[3195]: I0213 19:04:42.014111 3195 topology_manager.go:215] "Topology Admit Handler" podUID="7d42cee3-ce88-4812-8e4e-8950df7011a2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dzl52" Feb 13 19:04:42.034130 systemd[1]: Created slice kubepods-burstable-pod7d42cee3_ce88_4812_8e4e_8950df7011a2.slice - libcontainer container kubepods-burstable-pod7d42cee3_ce88_4812_8e4e_8950df7011a2.slice. Feb 13 19:04:42.066076 kubelet[3195]: I0213 19:04:42.065936 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2fm7\" (UniqueName: \"kubernetes.io/projected/7d42cee3-ce88-4812-8e4e-8950df7011a2-kube-api-access-h2fm7\") pod \"coredns-7db6d8ff4d-dzl52\" (UID: \"7d42cee3-ce88-4812-8e4e-8950df7011a2\") " pod="kube-system/coredns-7db6d8ff4d-dzl52" Feb 13 19:04:42.066292 kubelet[3195]: I0213 19:04:42.066114 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1229ef1a-4166-41c8-8d8a-ad7da9674bb7-config-volume\") pod \"coredns-7db6d8ff4d-xrjt8\" (UID: \"1229ef1a-4166-41c8-8d8a-ad7da9674bb7\") " pod="kube-system/coredns-7db6d8ff4d-xrjt8" Feb 13 19:04:42.067042 kubelet[3195]: I0213 19:04:42.066987 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d42cee3-ce88-4812-8e4e-8950df7011a2-config-volume\") pod \"coredns-7db6d8ff4d-dzl52\" (UID: \"7d42cee3-ce88-4812-8e4e-8950df7011a2\") " pod="kube-system/coredns-7db6d8ff4d-dzl52" Feb 13 19:04:42.067193 kubelet[3195]: I0213 19:04:42.067135 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvx5v\" (UniqueName: \"kubernetes.io/projected/1229ef1a-4166-41c8-8d8a-ad7da9674bb7-kube-api-access-hvx5v\") pod \"coredns-7db6d8ff4d-xrjt8\" (UID: 
\"1229ef1a-4166-41c8-8d8a-ad7da9674bb7\") " pod="kube-system/coredns-7db6d8ff4d-xrjt8" Feb 13 19:04:42.119916 containerd[1942]: time="2025-02-13T19:04:42.119494160Z" level=info msg="shim disconnected" id=93697c6daf1fd26ede33e24ac6584fddc51b54b977f09c8a59a31d69b40e9bbf namespace=k8s.io Feb 13 19:04:42.119916 containerd[1942]: time="2025-02-13T19:04:42.119588769Z" level=warning msg="cleaning up after shim disconnected" id=93697c6daf1fd26ede33e24ac6584fddc51b54b977f09c8a59a31d69b40e9bbf namespace=k8s.io Feb 13 19:04:42.119916 containerd[1942]: time="2025-02-13T19:04:42.119609663Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:04:42.196836 containerd[1942]: time="2025-02-13T19:04:42.196619790Z" level=info msg="CreateContainer within sandbox \"858b85f4b7ce0b9553cb79cd1608f8bd716999fc80da4ac25bd8445cd3ac18c5\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 19:04:42.227663 containerd[1942]: time="2025-02-13T19:04:42.227585744Z" level=info msg="CreateContainer within sandbox \"858b85f4b7ce0b9553cb79cd1608f8bd716999fc80da4ac25bd8445cd3ac18c5\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"da558d609e711b119e253f3a7528b5dbd438ac48f7b94b775e0719e7f82b2268\"" Feb 13 19:04:42.231318 containerd[1942]: time="2025-02-13T19:04:42.231251242Z" level=info msg="StartContainer for \"da558d609e711b119e253f3a7528b5dbd438ac48f7b94b775e0719e7f82b2268\"" Feb 13 19:04:42.293677 systemd[1]: Started cri-containerd-da558d609e711b119e253f3a7528b5dbd438ac48f7b94b775e0719e7f82b2268.scope - libcontainer container da558d609e711b119e253f3a7528b5dbd438ac48f7b94b775e0719e7f82b2268. 
Feb 13 19:04:42.330687 containerd[1942]: time="2025-02-13T19:04:42.330568855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xrjt8,Uid:1229ef1a-4166-41c8-8d8a-ad7da9674bb7,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:42.361303 containerd[1942]: time="2025-02-13T19:04:42.361162263Z" level=info msg="StartContainer for \"da558d609e711b119e253f3a7528b5dbd438ac48f7b94b775e0719e7f82b2268\" returns successfully" Feb 13 19:04:42.361689 containerd[1942]: time="2025-02-13T19:04:42.361530107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dzl52,Uid:7d42cee3-ce88-4812-8e4e-8950df7011a2,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:42.427242 containerd[1942]: time="2025-02-13T19:04:42.425850714Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xrjt8,Uid:1229ef1a-4166-41c8-8d8a-ad7da9674bb7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"416c69137874b50c4f7725394128ba2e00db3105ced9304bbc358b027e826bc3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:04:42.428665 kubelet[3195]: E0213 19:04:42.427254 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"416c69137874b50c4f7725394128ba2e00db3105ced9304bbc358b027e826bc3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:04:42.428665 kubelet[3195]: E0213 19:04:42.427355 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"416c69137874b50c4f7725394128ba2e00db3105ced9304bbc358b027e826bc3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" 
pod="kube-system/coredns-7db6d8ff4d-xrjt8" Feb 13 19:04:42.428665 kubelet[3195]: E0213 19:04:42.427388 3195 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"416c69137874b50c4f7725394128ba2e00db3105ced9304bbc358b027e826bc3\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-xrjt8" Feb 13 19:04:42.428665 kubelet[3195]: E0213 19:04:42.427457 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xrjt8_kube-system(1229ef1a-4166-41c8-8d8a-ad7da9674bb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xrjt8_kube-system(1229ef1a-4166-41c8-8d8a-ad7da9674bb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"416c69137874b50c4f7725394128ba2e00db3105ced9304bbc358b027e826bc3\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-xrjt8" podUID="1229ef1a-4166-41c8-8d8a-ad7da9674bb7" Feb 13 19:04:42.436237 containerd[1942]: time="2025-02-13T19:04:42.435977297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dzl52,Uid:7d42cee3-ce88-4812-8e4e-8950df7011a2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86c76abdaa31449c7f5c0a66896065a78e3f472862547036f6af75c583b5a789\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:04:42.438170 kubelet[3195]: E0213 19:04:42.436717 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86c76abdaa31449c7f5c0a66896065a78e3f472862547036f6af75c583b5a789\": plugin 
type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:04:42.438170 kubelet[3195]: E0213 19:04:42.436862 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86c76abdaa31449c7f5c0a66896065a78e3f472862547036f6af75c583b5a789\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-dzl52" Feb 13 19:04:42.438170 kubelet[3195]: E0213 19:04:42.436931 3195 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86c76abdaa31449c7f5c0a66896065a78e3f472862547036f6af75c583b5a789\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-dzl52" Feb 13 19:04:42.438170 kubelet[3195]: E0213 19:04:42.437012 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-dzl52_kube-system(7d42cee3-ce88-4812-8e4e-8950df7011a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-dzl52_kube-system(7d42cee3-ce88-4812-8e4e-8950df7011a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86c76abdaa31449c7f5c0a66896065a78e3f472862547036f6af75c583b5a789\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-dzl52" podUID="7d42cee3-ce88-4812-8e4e-8950df7011a2" Feb 13 19:04:42.769196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93697c6daf1fd26ede33e24ac6584fddc51b54b977f09c8a59a31d69b40e9bbf-rootfs.mount: Deactivated successfully. 
Feb 13 19:04:43.214502 kubelet[3195]: I0213 19:04:43.213514 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-6gwdf" podStartSLOduration=2.278631303 podStartE2EDuration="9.213488241s" podCreationTimestamp="2025-02-13 19:04:34 +0000 UTC" firstStartedPulling="2025-02-13 19:04:34.806248998 +0000 UTC m=+14.011205103" lastFinishedPulling="2025-02-13 19:04:41.741105936 +0000 UTC m=+20.946062041" observedRunningTime="2025-02-13 19:04:43.213319042 +0000 UTC m=+22.418275171" watchObservedRunningTime="2025-02-13 19:04:43.213488241 +0000 UTC m=+22.418444346" Feb 13 19:04:43.467967 (udev-worker)[3925]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:04:43.493650 systemd-networkd[1836]: flannel.1: Link UP Feb 13 19:04:43.493667 systemd-networkd[1836]: flannel.1: Gained carrier Feb 13 19:04:44.610647 systemd-networkd[1836]: flannel.1: Gained IPv6LL Feb 13 19:04:47.026341 ntpd[1910]: Listen normally on 7 flannel.1 192.168.0.0:123 Feb 13 19:04:47.026475 ntpd[1910]: Listen normally on 8 flannel.1 [fe80::34:5fff:fec2:d069%4]:123 Feb 13 19:04:47.027301 ntpd[1910]: 13 Feb 19:04:47 ntpd[1910]: Listen normally on 7 flannel.1 192.168.0.0:123 Feb 13 19:04:47.027301 ntpd[1910]: 13 Feb 19:04:47 ntpd[1910]: Listen normally on 8 flannel.1 [fe80::34:5fff:fec2:d069%4]:123 Feb 13 19:04:54.049513 containerd[1942]: time="2025-02-13T19:04:54.049359207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xrjt8,Uid:1229ef1a-4166-41c8-8d8a-ad7da9674bb7,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:54.085743 systemd-networkd[1836]: cni0: Link UP Feb 13 19:04:54.085770 systemd-networkd[1836]: cni0: Gained carrier Feb 13 19:04:54.093988 (udev-worker)[4064]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 19:04:54.094204 systemd-networkd[1836]: cni0: Lost carrier Feb 13 19:04:54.104107 kernel: cni0: port 1(veth7aaadfae) entered blocking state Feb 13 19:04:54.104229 kernel: cni0: port 1(veth7aaadfae) entered disabled state Feb 13 19:04:54.103763 systemd-networkd[1836]: veth7aaadfae: Link UP Feb 13 19:04:54.108354 kernel: veth7aaadfae: entered allmulticast mode Feb 13 19:04:54.108482 kernel: veth7aaadfae: entered promiscuous mode Feb 13 19:04:54.112581 kernel: cni0: port 1(veth7aaadfae) entered blocking state Feb 13 19:04:54.112660 kernel: cni0: port 1(veth7aaadfae) entered forwarding state Feb 13 19:04:54.117105 kernel: cni0: port 1(veth7aaadfae) entered disabled state Feb 13 19:04:54.119514 (udev-worker)[4068]: Network interface NamePolicy= disabled on kernel command line. Feb 13 19:04:54.129008 kernel: cni0: port 1(veth7aaadfae) entered blocking state Feb 13 19:04:54.129333 kernel: cni0: port 1(veth7aaadfae) entered forwarding state Feb 13 19:04:54.129019 systemd-networkd[1836]: veth7aaadfae: Gained carrier Feb 13 19:04:54.130461 systemd-networkd[1836]: cni0: Gained carrier Feb 13 19:04:54.136309 containerd[1942]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014938), "name":"cbr0", "type":"bridge"} Feb 13 19:04:54.136309 containerd[1942]: delegateAdd: netconf sent to delegate plugin: Feb 13 19:04:54.183545 containerd[1942]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T19:04:54.183344410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:54.183545 containerd[1942]: time="2025-02-13T19:04:54.183436476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:54.183545 containerd[1942]: time="2025-02-13T19:04:54.183492871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:54.184034 containerd[1942]: time="2025-02-13T19:04:54.183678371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:54.236383 systemd[1]: Started cri-containerd-c0271bbe855ca5d21733763fe8897e2616fca573260ad31626801d1c4819cb9e.scope - libcontainer container c0271bbe855ca5d21733763fe8897e2616fca573260ad31626801d1c4819cb9e. 
Feb 13 19:04:54.306597 containerd[1942]: time="2025-02-13T19:04:54.306283654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xrjt8,Uid:1229ef1a-4166-41c8-8d8a-ad7da9674bb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0271bbe855ca5d21733763fe8897e2616fca573260ad31626801d1c4819cb9e\"" Feb 13 19:04:54.317257 containerd[1942]: time="2025-02-13T19:04:54.316830518Z" level=info msg="CreateContainer within sandbox \"c0271bbe855ca5d21733763fe8897e2616fca573260ad31626801d1c4819cb9e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:04:54.346967 containerd[1942]: time="2025-02-13T19:04:54.346814798Z" level=info msg="CreateContainer within sandbox \"c0271bbe855ca5d21733763fe8897e2616fca573260ad31626801d1c4819cb9e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c6cfc650b94ac1c5320aef898e2fa9a9d0e0061fd220be664d7227d48c67a4f4\"" Feb 13 19:04:54.348973 containerd[1942]: time="2025-02-13T19:04:54.347594204Z" level=info msg="StartContainer for \"c6cfc650b94ac1c5320aef898e2fa9a9d0e0061fd220be664d7227d48c67a4f4\"" Feb 13 19:04:54.393383 systemd[1]: Started cri-containerd-c6cfc650b94ac1c5320aef898e2fa9a9d0e0061fd220be664d7227d48c67a4f4.scope - libcontainer container c6cfc650b94ac1c5320aef898e2fa9a9d0e0061fd220be664d7227d48c67a4f4. 
Feb 13 19:04:54.441161 containerd[1942]: time="2025-02-13T19:04:54.441049965Z" level=info msg="StartContainer for \"c6cfc650b94ac1c5320aef898e2fa9a9d0e0061fd220be664d7227d48c67a4f4\" returns successfully" Feb 13 19:04:55.274358 kubelet[3195]: I0213 19:04:55.274254 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xrjt8" podStartSLOduration=21.274230233 podStartE2EDuration="21.274230233s" podCreationTimestamp="2025-02-13 19:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:55.247865394 +0000 UTC m=+34.452821523" watchObservedRunningTime="2025-02-13 19:04:55.274230233 +0000 UTC m=+34.479186374" Feb 13 19:04:55.298337 systemd-networkd[1836]: cni0: Gained IPv6LL Feb 13 19:04:55.746418 systemd-networkd[1836]: veth7aaadfae: Gained IPv6LL Feb 13 19:04:56.049685 containerd[1942]: time="2025-02-13T19:04:56.049532063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dzl52,Uid:7d42cee3-ce88-4812-8e4e-8950df7011a2,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:56.096415 systemd-networkd[1836]: vethcea41470: Link UP Feb 13 19:04:56.100107 kernel: cni0: port 2(vethcea41470) entered blocking state Feb 13 19:04:56.100234 kernel: cni0: port 2(vethcea41470) entered disabled state Feb 13 19:04:56.100321 kernel: vethcea41470: entered allmulticast mode Feb 13 19:04:56.102309 kernel: vethcea41470: entered promiscuous mode Feb 13 19:04:56.116535 kernel: cni0: port 2(vethcea41470) entered blocking state Feb 13 19:04:56.116641 kernel: cni0: port 2(vethcea41470) entered forwarding state Feb 13 19:04:56.115463 systemd-networkd[1836]: vethcea41470: Gained carrier Feb 13 19:04:56.119870 containerd[1942]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface 
{}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000018938), "name":"cbr0", "type":"bridge"} Feb 13 19:04:56.119870 containerd[1942]: delegateAdd: netconf sent to delegate plugin: Feb 13 19:04:56.151755 containerd[1942]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T19:04:56.151497795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:56.151755 containerd[1942]: time="2025-02-13T19:04:56.151620145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:56.152162 containerd[1942]: time="2025-02-13T19:04:56.151705291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:56.152904 containerd[1942]: time="2025-02-13T19:04:56.152695599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:56.195367 systemd[1]: Started cri-containerd-1801ac128aeb807938c66ab288088d815c167efebbf3f9b77664a2dc3da6beac.scope - libcontainer container 1801ac128aeb807938c66ab288088d815c167efebbf3f9b77664a2dc3da6beac. 
Feb 13 19:04:56.277470 containerd[1942]: time="2025-02-13T19:04:56.277298315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dzl52,Uid:7d42cee3-ce88-4812-8e4e-8950df7011a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1801ac128aeb807938c66ab288088d815c167efebbf3f9b77664a2dc3da6beac\"" Feb 13 19:04:56.286514 containerd[1942]: time="2025-02-13T19:04:56.286442973Z" level=info msg="CreateContainer within sandbox \"1801ac128aeb807938c66ab288088d815c167efebbf3f9b77664a2dc3da6beac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:04:56.326323 containerd[1942]: time="2025-02-13T19:04:56.326156918Z" level=info msg="CreateContainer within sandbox \"1801ac128aeb807938c66ab288088d815c167efebbf3f9b77664a2dc3da6beac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"260febd9f1dabcd1dd1e1d4fc87a88b76df2724d1d6c85a2a792704db7318343\"" Feb 13 19:04:56.327798 containerd[1942]: time="2025-02-13T19:04:56.327525121Z" level=info msg="StartContainer for \"260febd9f1dabcd1dd1e1d4fc87a88b76df2724d1d6c85a2a792704db7318343\"" Feb 13 19:04:56.387620 systemd[1]: Started cri-containerd-260febd9f1dabcd1dd1e1d4fc87a88b76df2724d1d6c85a2a792704db7318343.scope - libcontainer container 260febd9f1dabcd1dd1e1d4fc87a88b76df2724d1d6c85a2a792704db7318343. 
Feb 13 19:04:56.442698 containerd[1942]: time="2025-02-13T19:04:56.442634094Z" level=info msg="StartContainer for \"260febd9f1dabcd1dd1e1d4fc87a88b76df2724d1d6c85a2a792704db7318343\" returns successfully" Feb 13 19:04:57.154315 systemd-networkd[1836]: vethcea41470: Gained IPv6LL Feb 13 19:04:57.261684 kubelet[3195]: I0213 19:04:57.261324 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dzl52" podStartSLOduration=23.261296825 podStartE2EDuration="23.261296825s" podCreationTimestamp="2025-02-13 19:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:57.261297664 +0000 UTC m=+36.466253769" watchObservedRunningTime="2025-02-13 19:04:57.261296825 +0000 UTC m=+36.466252930" Feb 13 19:05:00.026378 ntpd[1910]: Listen normally on 9 cni0 192.168.0.1:123 Feb 13 19:05:00.027324 ntpd[1910]: 13 Feb 19:05:00 ntpd[1910]: Listen normally on 9 cni0 192.168.0.1:123 Feb 13 19:05:00.027324 ntpd[1910]: 13 Feb 19:05:00 ntpd[1910]: Listen normally on 10 cni0 [fe80::f0a6:57ff:fed4:eaf8%5]:123 Feb 13 19:05:00.027324 ntpd[1910]: 13 Feb 19:05:00 ntpd[1910]: Listen normally on 11 veth7aaadfae [fe80::f423:b6ff:fe2b:40aa%6]:123 Feb 13 19:05:00.027324 ntpd[1910]: 13 Feb 19:05:00 ntpd[1910]: Listen normally on 12 vethcea41470 [fe80::874:7ff:feb9:b814%7]:123 Feb 13 19:05:00.026528 ntpd[1910]: Listen normally on 10 cni0 [fe80::f0a6:57ff:fed4:eaf8%5]:123 Feb 13 19:05:00.026613 ntpd[1910]: Listen normally on 11 veth7aaadfae [fe80::f423:b6ff:fe2b:40aa%6]:123 Feb 13 19:05:00.026683 ntpd[1910]: Listen normally on 12 vethcea41470 [fe80::874:7ff:feb9:b814%7]:123 Feb 13 19:05:06.061584 systemd[1]: Started sshd@5-172.31.26.128:22-147.75.109.163:34390.service - OpenSSH per-connection server daemon (147.75.109.163:34390). 
Feb 13 19:05:06.247110 sshd[4313]: Accepted publickey for core from 147.75.109.163 port 34390 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:06.249870 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:06.258742 systemd-logind[1919]: New session 6 of user core.
Feb 13 19:05:06.264927 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 19:05:06.544800 sshd[4315]: Connection closed by 147.75.109.163 port 34390
Feb 13 19:05:06.545701 sshd-session[4313]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:06.551910 systemd[1]: sshd@5-172.31.26.128:22-147.75.109.163:34390.service: Deactivated successfully.
Feb 13 19:05:06.557202 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 19:05:06.560666 systemd-logind[1919]: Session 6 logged out. Waiting for processes to exit.
Feb 13 19:05:06.562514 systemd-logind[1919]: Removed session 6.
Feb 13 19:05:11.584569 systemd[1]: Started sshd@6-172.31.26.128:22-147.75.109.163:45212.service - OpenSSH per-connection server daemon (147.75.109.163:45212).
Feb 13 19:05:11.767906 sshd[4349]: Accepted publickey for core from 147.75.109.163 port 45212 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:11.770699 sshd-session[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:11.780034 systemd-logind[1919]: New session 7 of user core.
Feb 13 19:05:11.787362 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 19:05:12.043942 sshd[4351]: Connection closed by 147.75.109.163 port 45212
Feb 13 19:05:12.044803 sshd-session[4349]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:12.052596 systemd[1]: sshd@6-172.31.26.128:22-147.75.109.163:45212.service: Deactivated successfully.
Feb 13 19:05:12.057330 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:05:12.059026 systemd-logind[1919]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:05:12.062931 systemd-logind[1919]: Removed session 7.
Feb 13 19:05:17.082045 systemd[1]: Started sshd@7-172.31.26.128:22-147.75.109.163:45218.service - OpenSSH per-connection server daemon (147.75.109.163:45218).
Feb 13 19:05:17.290860 sshd[4385]: Accepted publickey for core from 147.75.109.163 port 45218 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:17.297604 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:17.307427 systemd-logind[1919]: New session 8 of user core.
Feb 13 19:05:17.316452 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 19:05:17.598276 sshd[4387]: Connection closed by 147.75.109.163 port 45218
Feb 13 19:05:17.600031 sshd-session[4385]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:17.608161 systemd-logind[1919]: Session 8 logged out. Waiting for processes to exit.
Feb 13 19:05:17.609484 systemd[1]: sshd@7-172.31.26.128:22-147.75.109.163:45218.service: Deactivated successfully.
Feb 13 19:05:17.614808 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 19:05:17.636795 systemd-logind[1919]: Removed session 8.
Feb 13 19:05:17.643594 systemd[1]: Started sshd@8-172.31.26.128:22-147.75.109.163:45232.service - OpenSSH per-connection server daemon (147.75.109.163:45232).
Feb 13 19:05:17.844042 sshd[4398]: Accepted publickey for core from 147.75.109.163 port 45232 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:17.848243 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:17.860818 systemd-logind[1919]: New session 9 of user core.
Feb 13 19:05:17.872434 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 19:05:18.212048 sshd[4400]: Connection closed by 147.75.109.163 port 45232
Feb 13 19:05:18.213562 sshd-session[4398]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:18.225104 systemd-logind[1919]: Session 9 logged out. Waiting for processes to exit.
Feb 13 19:05:18.227507 systemd[1]: sshd@8-172.31.26.128:22-147.75.109.163:45232.service: Deactivated successfully.
Feb 13 19:05:18.238497 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 19:05:18.259353 systemd-logind[1919]: Removed session 9.
Feb 13 19:05:18.269172 systemd[1]: Started sshd@9-172.31.26.128:22-147.75.109.163:45246.service - OpenSSH per-connection server daemon (147.75.109.163:45246).
Feb 13 19:05:18.470264 sshd[4409]: Accepted publickey for core from 147.75.109.163 port 45246 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:18.473026 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:18.481372 systemd-logind[1919]: New session 10 of user core.
Feb 13 19:05:18.488507 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 19:05:18.745572 sshd[4411]: Connection closed by 147.75.109.163 port 45246
Feb 13 19:05:18.744535 sshd-session[4409]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:18.753451 systemd[1]: sshd@9-172.31.26.128:22-147.75.109.163:45246.service: Deactivated successfully.
Feb 13 19:05:18.759564 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 19:05:18.761670 systemd-logind[1919]: Session 10 logged out. Waiting for processes to exit.
Feb 13 19:05:18.764115 systemd-logind[1919]: Removed session 10.
Feb 13 19:05:23.784937 systemd[1]: Started sshd@10-172.31.26.128:22-147.75.109.163:46752.service - OpenSSH per-connection server daemon (147.75.109.163:46752).
Feb 13 19:05:23.985995 sshd[4451]: Accepted publickey for core from 147.75.109.163 port 46752 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:23.988970 sshd-session[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:23.996629 systemd-logind[1919]: New session 11 of user core.
Feb 13 19:05:24.005419 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:05:24.250226 sshd[4468]: Connection closed by 147.75.109.163 port 46752
Feb 13 19:05:24.251047 sshd-session[4451]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:24.259163 systemd[1]: sshd@10-172.31.26.128:22-147.75.109.163:46752.service: Deactivated successfully.
Feb 13 19:05:24.265730 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 19:05:24.268041 systemd-logind[1919]: Session 11 logged out. Waiting for processes to exit.
Feb 13 19:05:24.270006 systemd-logind[1919]: Removed session 11.
Feb 13 19:05:29.299555 systemd[1]: Started sshd@11-172.31.26.128:22-147.75.109.163:46756.service - OpenSSH per-connection server daemon (147.75.109.163:46756).
Feb 13 19:05:29.481551 sshd[4500]: Accepted publickey for core from 147.75.109.163 port 46756 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:29.484799 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:29.494670 systemd-logind[1919]: New session 12 of user core.
Feb 13 19:05:29.504362 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 19:05:29.744181 sshd[4502]: Connection closed by 147.75.109.163 port 46756
Feb 13 19:05:29.745496 sshd-session[4500]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:29.752821 systemd[1]: sshd@11-172.31.26.128:22-147.75.109.163:46756.service: Deactivated successfully.
Feb 13 19:05:29.758396 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:05:29.760395 systemd-logind[1919]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:05:29.762383 systemd-logind[1919]: Removed session 12.
Feb 13 19:05:34.785962 systemd[1]: Started sshd@12-172.31.26.128:22-147.75.109.163:35126.service - OpenSSH per-connection server daemon (147.75.109.163:35126).
Feb 13 19:05:34.986272 sshd[4533]: Accepted publickey for core from 147.75.109.163 port 35126 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:34.989287 sshd-session[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:34.999420 systemd-logind[1919]: New session 13 of user core.
Feb 13 19:05:35.005415 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:05:35.254412 sshd[4535]: Connection closed by 147.75.109.163 port 35126
Feb 13 19:05:35.255485 sshd-session[4533]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:35.264896 systemd[1]: sshd@12-172.31.26.128:22-147.75.109.163:35126.service: Deactivated successfully.
Feb 13 19:05:35.269412 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 19:05:35.273750 systemd-logind[1919]: Session 13 logged out. Waiting for processes to exit.
Feb 13 19:05:35.275991 systemd-logind[1919]: Removed session 13.
Feb 13 19:05:40.300347 systemd[1]: Started sshd@13-172.31.26.128:22-147.75.109.163:46382.service - OpenSSH per-connection server daemon (147.75.109.163:46382).
Feb 13 19:05:40.482125 sshd[4569]: Accepted publickey for core from 147.75.109.163 port 46382 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:40.484637 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:40.495837 systemd-logind[1919]: New session 14 of user core.
Feb 13 19:05:40.500365 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:05:40.751398 sshd[4571]: Connection closed by 147.75.109.163 port 46382
Feb 13 19:05:40.751190 sshd-session[4569]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:40.758619 systemd[1]: sshd@13-172.31.26.128:22-147.75.109.163:46382.service: Deactivated successfully.
Feb 13 19:05:40.763580 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:05:40.768781 systemd-logind[1919]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:05:40.787718 systemd-logind[1919]: Removed session 14.
Feb 13 19:05:40.796798 systemd[1]: Started sshd@14-172.31.26.128:22-147.75.109.163:46396.service - OpenSSH per-connection server daemon (147.75.109.163:46396).
Feb 13 19:05:40.989243 sshd[4582]: Accepted publickey for core from 147.75.109.163 port 46396 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:40.991734 sshd-session[4582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:41.000606 systemd-logind[1919]: New session 15 of user core.
Feb 13 19:05:41.007371 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:05:41.305944 sshd[4584]: Connection closed by 147.75.109.163 port 46396
Feb 13 19:05:41.307299 sshd-session[4582]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:41.315462 systemd[1]: sshd@14-172.31.26.128:22-147.75.109.163:46396.service: Deactivated successfully.
Feb 13 19:05:41.320847 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:05:41.322641 systemd-logind[1919]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:05:41.324677 systemd-logind[1919]: Removed session 15.
Feb 13 19:05:41.341002 systemd[1]: Started sshd@15-172.31.26.128:22-147.75.109.163:46412.service - OpenSSH per-connection server daemon (147.75.109.163:46412).
Feb 13 19:05:41.548456 sshd[4593]: Accepted publickey for core from 147.75.109.163 port 46412 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:41.552389 sshd-session[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:41.563756 systemd-logind[1919]: New session 16 of user core.
Feb 13 19:05:41.573080 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:05:44.188771 sshd[4595]: Connection closed by 147.75.109.163 port 46412
Feb 13 19:05:44.189272 sshd-session[4593]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:44.199937 systemd[1]: sshd@15-172.31.26.128:22-147.75.109.163:46412.service: Deactivated successfully.
Feb 13 19:05:44.209695 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:05:44.214868 systemd-logind[1919]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:05:44.248802 systemd[1]: Started sshd@16-172.31.26.128:22-147.75.109.163:46418.service - OpenSSH per-connection server daemon (147.75.109.163:46418).
Feb 13 19:05:44.253213 systemd-logind[1919]: Removed session 16.
Feb 13 19:05:44.462866 sshd[4630]: Accepted publickey for core from 147.75.109.163 port 46418 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:44.465493 sshd-session[4630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:44.474498 systemd-logind[1919]: New session 17 of user core.
Feb 13 19:05:44.482357 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:05:44.973658 sshd[4635]: Connection closed by 147.75.109.163 port 46418
Feb 13 19:05:44.974704 sshd-session[4630]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:44.983013 systemd[1]: sshd@16-172.31.26.128:22-147.75.109.163:46418.service: Deactivated successfully.
Feb 13 19:05:44.987226 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:05:44.991901 systemd-logind[1919]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:05:44.994033 systemd-logind[1919]: Removed session 17.
Feb 13 19:05:45.013165 systemd[1]: Started sshd@17-172.31.26.128:22-147.75.109.163:46420.service - OpenSSH per-connection server daemon (147.75.109.163:46420).
Feb 13 19:05:45.209965 sshd[4644]: Accepted publickey for core from 147.75.109.163 port 46420 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:45.213708 sshd-session[4644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:45.221398 systemd-logind[1919]: New session 18 of user core.
Feb 13 19:05:45.229378 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:05:45.478650 sshd[4646]: Connection closed by 147.75.109.163 port 46420
Feb 13 19:05:45.479863 sshd-session[4644]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:45.486471 systemd[1]: sshd@17-172.31.26.128:22-147.75.109.163:46420.service: Deactivated successfully.
Feb 13 19:05:45.494051 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:05:45.498148 systemd-logind[1919]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:05:45.500357 systemd-logind[1919]: Removed session 18.
Feb 13 19:05:50.520523 systemd[1]: Started sshd@18-172.31.26.128:22-147.75.109.163:50968.service - OpenSSH per-connection server daemon (147.75.109.163:50968).
Feb 13 19:05:50.704185 sshd[4677]: Accepted publickey for core from 147.75.109.163 port 50968 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:50.707292 sshd-session[4677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:50.716972 systemd-logind[1919]: New session 19 of user core.
Feb 13 19:05:50.723345 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:05:50.971574 sshd[4679]: Connection closed by 147.75.109.163 port 50968
Feb 13 19:05:50.972139 sshd-session[4677]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:50.979605 systemd[1]: sshd@18-172.31.26.128:22-147.75.109.163:50968.service: Deactivated successfully.
Feb 13 19:05:50.984433 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:05:50.986659 systemd-logind[1919]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:05:50.989498 systemd-logind[1919]: Removed session 19.
Feb 13 19:05:56.011596 systemd[1]: Started sshd@19-172.31.26.128:22-147.75.109.163:50976.service - OpenSSH per-connection server daemon (147.75.109.163:50976).
Feb 13 19:05:56.205112 sshd[4714]: Accepted publickey for core from 147.75.109.163 port 50976 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:56.208216 sshd-session[4714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:56.217235 systemd-logind[1919]: New session 20 of user core.
Feb 13 19:05:56.225334 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:05:56.477563 sshd[4718]: Connection closed by 147.75.109.163 port 50976
Feb 13 19:05:56.479413 sshd-session[4714]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:56.487624 systemd[1]: sshd@19-172.31.26.128:22-147.75.109.163:50976.service: Deactivated successfully.
Feb 13 19:05:56.497597 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:05:56.499682 systemd-logind[1919]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:05:56.504570 systemd-logind[1919]: Removed session 20.
Feb 13 19:06:01.520646 systemd[1]: Started sshd@20-172.31.26.128:22-147.75.109.163:46406.service - OpenSSH per-connection server daemon (147.75.109.163:46406).
Feb 13 19:06:01.717735 sshd[4750]: Accepted publickey for core from 147.75.109.163 port 46406 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:01.720357 sshd-session[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:01.728801 systemd-logind[1919]: New session 21 of user core.
Feb 13 19:06:01.735347 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:06:01.977280 sshd[4752]: Connection closed by 147.75.109.163 port 46406
Feb 13 19:06:01.978319 sshd-session[4750]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:01.983488 systemd[1]: sshd@20-172.31.26.128:22-147.75.109.163:46406.service: Deactivated successfully.
Feb 13 19:06:01.988526 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:06:01.993954 systemd-logind[1919]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:06:01.996361 systemd-logind[1919]: Removed session 21.
Feb 13 19:06:07.022727 systemd[1]: Started sshd@21-172.31.26.128:22-147.75.109.163:46422.service - OpenSSH per-connection server daemon (147.75.109.163:46422).
Feb 13 19:06:07.219202 sshd[4786]: Accepted publickey for core from 147.75.109.163 port 46422 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:07.222299 sshd-session[4786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:07.231823 systemd-logind[1919]: New session 22 of user core.
Feb 13 19:06:07.237313 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:06:07.492738 sshd[4788]: Connection closed by 147.75.109.163 port 46422
Feb 13 19:06:07.494190 sshd-session[4786]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:07.503778 systemd[1]: sshd@21-172.31.26.128:22-147.75.109.163:46422.service: Deactivated successfully.
Feb 13 19:06:07.509992 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:06:07.511511 systemd-logind[1919]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:06:07.513764 systemd-logind[1919]: Removed session 22.
Feb 13 19:06:22.282511 systemd[1]: cri-containerd-4005b6ea0ae1cc215126039956aba60663597f8a320add2e081ed828e17ed301.scope: Deactivated successfully.
Feb 13 19:06:22.283039 systemd[1]: cri-containerd-4005b6ea0ae1cc215126039956aba60663597f8a320add2e081ed828e17ed301.scope: Consumed 3.905s CPU time, 22.3M memory peak, 0B memory swap peak.
Feb 13 19:06:22.323464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4005b6ea0ae1cc215126039956aba60663597f8a320add2e081ed828e17ed301-rootfs.mount: Deactivated successfully.
Feb 13 19:06:22.338479 containerd[1942]: time="2025-02-13T19:06:22.338395947Z" level=info msg="shim disconnected" id=4005b6ea0ae1cc215126039956aba60663597f8a320add2e081ed828e17ed301 namespace=k8s.io
Feb 13 19:06:22.339170 containerd[1942]: time="2025-02-13T19:06:22.339091911Z" level=warning msg="cleaning up after shim disconnected" id=4005b6ea0ae1cc215126039956aba60663597f8a320add2e081ed828e17ed301 namespace=k8s.io
Feb 13 19:06:22.339170 containerd[1942]: time="2025-02-13T19:06:22.339125451Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:22.362304 containerd[1942]: time="2025-02-13T19:06:22.361970739Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:06:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:06:22.479753 kubelet[3195]: I0213 19:06:22.478950 3195 scope.go:117] "RemoveContainer" containerID="4005b6ea0ae1cc215126039956aba60663597f8a320add2e081ed828e17ed301"
Feb 13 19:06:22.484573 containerd[1942]: time="2025-02-13T19:06:22.484521616Z" level=info msg="CreateContainer within sandbox \"197bd559bbc53504f7b017b77c31d6c38ae0953b614ccdbccefde42f3d3527ee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 19:06:22.515789 containerd[1942]: time="2025-02-13T19:06:22.515732692Z" level=info msg="CreateContainer within sandbox \"197bd559bbc53504f7b017b77c31d6c38ae0953b614ccdbccefde42f3d3527ee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8be27c613a5ba123fad616628aa1c6c7172877a15564a81b92e5ec538d7610c9\""
Feb 13 19:06:22.516848 containerd[1942]: time="2025-02-13T19:06:22.516788428Z" level=info msg="StartContainer for \"8be27c613a5ba123fad616628aa1c6c7172877a15564a81b92e5ec538d7610c9\""
Feb 13 19:06:22.568382 systemd[1]: Started cri-containerd-8be27c613a5ba123fad616628aa1c6c7172877a15564a81b92e5ec538d7610c9.scope - libcontainer container 8be27c613a5ba123fad616628aa1c6c7172877a15564a81b92e5ec538d7610c9.
Feb 13 19:06:22.642882 containerd[1942]: time="2025-02-13T19:06:22.642347909Z" level=info msg="StartContainer for \"8be27c613a5ba123fad616628aa1c6c7172877a15564a81b92e5ec538d7610c9\" returns successfully"
Feb 13 19:06:22.963210 kubelet[3195]: E0213 19:06:22.962875 3195 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-128?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:06:27.029977 systemd[1]: cri-containerd-b8aac71603db1d5bbc5f847377b3a7c77aab07c5fffd4d36489abad4523859ed.scope: Deactivated successfully.
Feb 13 19:06:27.030454 systemd[1]: cri-containerd-b8aac71603db1d5bbc5f847377b3a7c77aab07c5fffd4d36489abad4523859ed.scope: Consumed 2.404s CPU time, 16.1M memory peak, 0B memory swap peak.
Feb 13 19:06:27.074718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8aac71603db1d5bbc5f847377b3a7c77aab07c5fffd4d36489abad4523859ed-rootfs.mount: Deactivated successfully.
Feb 13 19:06:27.089149 containerd[1942]: time="2025-02-13T19:06:27.089032999Z" level=info msg="shim disconnected" id=b8aac71603db1d5bbc5f847377b3a7c77aab07c5fffd4d36489abad4523859ed namespace=k8s.io
Feb 13 19:06:27.090155 containerd[1942]: time="2025-02-13T19:06:27.089133043Z" level=warning msg="cleaning up after shim disconnected" id=b8aac71603db1d5bbc5f847377b3a7c77aab07c5fffd4d36489abad4523859ed namespace=k8s.io
Feb 13 19:06:27.090155 containerd[1942]: time="2025-02-13T19:06:27.089172919Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:27.501942 kubelet[3195]: I0213 19:06:27.501867 3195 scope.go:117] "RemoveContainer" containerID="b8aac71603db1d5bbc5f847377b3a7c77aab07c5fffd4d36489abad4523859ed"
Feb 13 19:06:27.505688 containerd[1942]: time="2025-02-13T19:06:27.505616049Z" level=info msg="CreateContainer within sandbox \"a4ae55c42eed74116581dbec11b8a7b5d54256f4d21d434792daf9449572d15d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 19:06:27.538669 containerd[1942]: time="2025-02-13T19:06:27.538603245Z" level=info msg="CreateContainer within sandbox \"a4ae55c42eed74116581dbec11b8a7b5d54256f4d21d434792daf9449572d15d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e927caa109d79b37e6df9d59a898b1cb20738b942e2b0ebdb8a9f66c06bd059d\""
Feb 13 19:06:27.540324 containerd[1942]: time="2025-02-13T19:06:27.539437305Z" level=info msg="StartContainer for \"e927caa109d79b37e6df9d59a898b1cb20738b942e2b0ebdb8a9f66c06bd059d\""
Feb 13 19:06:27.597786 systemd[1]: Started cri-containerd-e927caa109d79b37e6df9d59a898b1cb20738b942e2b0ebdb8a9f66c06bd059d.scope - libcontainer container e927caa109d79b37e6df9d59a898b1cb20738b942e2b0ebdb8a9f66c06bd059d.
Feb 13 19:06:27.676288 containerd[1942]: time="2025-02-13T19:06:27.676167862Z" level=info msg="StartContainer for \"e927caa109d79b37e6df9d59a898b1cb20738b942e2b0ebdb8a9f66c06bd059d\" returns successfully"
Feb 13 19:06:32.964525 kubelet[3195]: E0213 19:06:32.963965 3195 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-128?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"