Feb 13 19:03:39.238754 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 19:03:39.238812 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:46:24 -00 2025
Feb 13 19:03:39.238839 kernel: KASLR disabled due to lack of seed
Feb 13 19:03:39.238857 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:03:39.238875 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98
Feb 13 19:03:39.238892 kernel: secureboot: Secure boot disabled
Feb 13 19:03:39.238910 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:03:39.238926 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 19:03:39.238943 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 19:03:39.238960 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:03:39.238982 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 19:03:39.238998 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:03:39.239014 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 19:03:39.239031 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 19:03:39.239050 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 19:03:39.239071 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:03:39.239088 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 19:03:39.239105 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 19:03:39.239121 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 19:03:39.239138 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 19:03:39.239155 kernel: printk: bootconsole [uart0] enabled
Feb 13 19:03:39.239171 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:03:39.239188 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:03:39.239205 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 19:03:39.239222 kernel: Zone ranges:
Feb 13 19:03:39.239241 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 19:03:39.239265 kernel: DMA32 empty
Feb 13 19:03:39.239282 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 19:03:39.239298 kernel: Movable zone start for each node
Feb 13 19:03:39.239315 kernel: Early memory node ranges
Feb 13 19:03:39.239334 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 19:03:39.239351 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 19:03:39.239369 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 19:03:39.239386 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 19:03:39.239403 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 19:03:39.239420 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 19:03:39.239437 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 19:03:39.239453 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 19:03:39.239475 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 19:03:39.239493 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 19:03:39.241619 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:03:39.241643 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 19:03:39.241661 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:03:39.241684 kernel: psci: Trusted OS migration not required
Feb 13 19:03:39.241702 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:03:39.241719 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:03:39.241737 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:03:39.241755 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 19:03:39.241773 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:03:39.241809 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:03:39.241830 kernel: CPU features: detected: Spectre-v2
Feb 13 19:03:39.241848 kernel: CPU features: detected: Spectre-v3a
Feb 13 19:03:39.241866 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:03:39.241883 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 19:03:39.241900 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 19:03:39.241923 kernel: alternatives: applying boot alternatives
Feb 13 19:03:39.241943 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:03:39.241962 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:03:39.241979 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:03:39.241997 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:03:39.242014 kernel: Fallback order for Node 0: 0
Feb 13 19:03:39.242032 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 19:03:39.242049 kernel: Policy zone: Normal
Feb 13 19:03:39.242066 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:03:39.242083 kernel: software IO TLB: area num 2.
Feb 13 19:03:39.242105 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 19:03:39.242123 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved)
Feb 13 19:03:39.242140 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:03:39.242158 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:03:39.242175 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:03:39.242193 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:03:39.242211 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:03:39.242228 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:03:39.242245 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:03:39.242263 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:03:39.242280 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:03:39.242302 kernel: GICv3: 96 SPIs implemented
Feb 13 19:03:39.242320 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:03:39.242337 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:03:39.242354 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 19:03:39.242371 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 19:03:39.242388 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 19:03:39.242407 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:03:39.242424 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:03:39.242442 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 19:03:39.242460 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 19:03:39.242478 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 19:03:39.242496 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:03:39.242543 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 19:03:39.242562 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 19:03:39.242579 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 19:03:39.242596 kernel: Console: colour dummy device 80x25
Feb 13 19:03:39.242614 kernel: printk: console [tty1] enabled
Feb 13 19:03:39.242632 kernel: ACPI: Core revision 20230628
Feb 13 19:03:39.242650 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 19:03:39.242668 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:03:39.242686 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:03:39.242704 kernel: landlock: Up and running.
Feb 13 19:03:39.242729 kernel: SELinux: Initializing.
Feb 13 19:03:39.242747 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:03:39.242765 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:03:39.242783 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:03:39.242801 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:03:39.242819 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:03:39.242837 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:03:39.242855 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 19:03:39.242880 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 19:03:39.242898 kernel: Remapping and enabling EFI services.
Feb 13 19:03:39.242916 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:03:39.242933 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:03:39.242951 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 19:03:39.242968 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 19:03:39.242986 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 19:03:39.243004 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:03:39.243022 kernel: SMP: Total of 2 processors activated.
Feb 13 19:03:39.243040 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:03:39.243062 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 19:03:39.243080 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:03:39.243109 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:03:39.243132 kernel: alternatives: applying system-wide alternatives
Feb 13 19:03:39.243150 kernel: devtmpfs: initialized
Feb 13 19:03:39.243168 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:03:39.243186 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:03:39.243204 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:03:39.243223 kernel: SMBIOS 3.0.0 present.
Feb 13 19:03:39.243244 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 19:03:39.243263 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:03:39.243281 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:03:39.243299 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:03:39.243318 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:03:39.243337 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:03:39.243355 kernel: audit: type=2000 audit(0.229:1): state=initialized audit_enabled=0 res=1
Feb 13 19:03:39.243377 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:03:39.243396 kernel: cpuidle: using governor menu
Feb 13 19:03:39.243414 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:03:39.243433 kernel: ASID allocator initialised with 65536 entries
Feb 13 19:03:39.243451 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:03:39.243469 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:03:39.243487 kernel: Modules: 17440 pages in range for non-PLT usage
Feb 13 19:03:39.243616 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 19:03:39.243637 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:03:39.243662 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:03:39.243681 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:03:39.243699 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:03:39.243717 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:03:39.243735 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:03:39.243753 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:03:39.243772 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:03:39.243790 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:03:39.246603 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:03:39.246642 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:03:39.246662 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:03:39.246681 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:03:39.246699 kernel: ACPI: Interpreter enabled
Feb 13 19:03:39.246717 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:03:39.246735 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:03:39.246753 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 19:03:39.247110 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:03:39.247323 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:03:39.247559 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:03:39.247776 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 19:03:39.247980 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 19:03:39.248007 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 19:03:39.248026 kernel: acpiphp: Slot [1] registered
Feb 13 19:03:39.248044 kernel: acpiphp: Slot [2] registered
Feb 13 19:03:39.248062 kernel: acpiphp: Slot [3] registered
Feb 13 19:03:39.248089 kernel: acpiphp: Slot [4] registered
Feb 13 19:03:39.248108 kernel: acpiphp: Slot [5] registered
Feb 13 19:03:39.248126 kernel: acpiphp: Slot [6] registered
Feb 13 19:03:39.248145 kernel: acpiphp: Slot [7] registered
Feb 13 19:03:39.248163 kernel: acpiphp: Slot [8] registered
Feb 13 19:03:39.248181 kernel: acpiphp: Slot [9] registered
Feb 13 19:03:39.248199 kernel: acpiphp: Slot [10] registered
Feb 13 19:03:39.248220 kernel: acpiphp: Slot [11] registered
Feb 13 19:03:39.248238 kernel: acpiphp: Slot [12] registered
Feb 13 19:03:39.248256 kernel: acpiphp: Slot [13] registered
Feb 13 19:03:39.248280 kernel: acpiphp: Slot [14] registered
Feb 13 19:03:39.248298 kernel: acpiphp: Slot [15] registered
Feb 13 19:03:39.248316 kernel: acpiphp: Slot [16] registered
Feb 13 19:03:39.248334 kernel: acpiphp: Slot [17] registered
Feb 13 19:03:39.248352 kernel: acpiphp: Slot [18] registered
Feb 13 19:03:39.248370 kernel: acpiphp: Slot [19] registered
Feb 13 19:03:39.248388 kernel: acpiphp: Slot [20] registered
Feb 13 19:03:39.248407 kernel: acpiphp: Slot [21] registered
Feb 13 19:03:39.248425 kernel: acpiphp: Slot [22] registered
Feb 13 19:03:39.248447 kernel: acpiphp: Slot [23] registered
Feb 13 19:03:39.248466 kernel: acpiphp: Slot [24] registered
Feb 13 19:03:39.248484 kernel: acpiphp: Slot [25] registered
Feb 13 19:03:39.250565 kernel: acpiphp: Slot [26] registered
Feb 13 19:03:39.250596 kernel: acpiphp: Slot [27] registered
Feb 13 19:03:39.250615 kernel: acpiphp: Slot [28] registered
Feb 13 19:03:39.250633 kernel: acpiphp: Slot [29] registered
Feb 13 19:03:39.250651 kernel: acpiphp: Slot [30] registered
Feb 13 19:03:39.250669 kernel: acpiphp: Slot [31] registered
Feb 13 19:03:39.250687 kernel: PCI host bridge to bus 0000:00
Feb 13 19:03:39.250934 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 19:03:39.251117 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:03:39.251314 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:03:39.251548 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 19:03:39.254696 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 19:03:39.254947 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 19:03:39.255158 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 19:03:39.255372 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:03:39.257758 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 19:03:39.258035 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:03:39.258264 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:03:39.258471 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 19:03:39.258700 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 19:03:39.258917 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 19:03:39.259139 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 19:03:39.259353 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 19:03:39.260207 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 19:03:39.260458 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 19:03:39.261632 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 19:03:39.261908 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 19:03:39.262174 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 19:03:39.262523 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:03:39.262771 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 19:03:39.262800 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:03:39.262820 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:03:39.262839 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:03:39.262857 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:03:39.262876 kernel: iommu: Default domain type: Translated
Feb 13 19:03:39.262904 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:03:39.262922 kernel: efivars: Registered efivars operations
Feb 13 19:03:39.262940 kernel: vgaarb: loaded
Feb 13 19:03:39.262958 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:03:39.262976 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:03:39.262995 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:03:39.263013 kernel: pnp: PnP ACPI init
Feb 13 19:03:39.263242 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 19:03:39.263274 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:03:39.263293 kernel: NET: Registered PF_INET protocol family
Feb 13 19:03:39.263312 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:03:39.263331 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:03:39.263349 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:03:39.263368 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:03:39.263386 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:03:39.263404 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:03:39.263422 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:03:39.263446 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:03:39.263464 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:03:39.263482 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:03:39.263517 kernel: kvm [1]: HYP mode not available
Feb 13 19:03:39.263569 kernel: Initialise system trusted keyrings
Feb 13 19:03:39.263589 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:03:39.263608 kernel: Key type asymmetric registered
Feb 13 19:03:39.263626 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:03:39.263646 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:03:39.263675 kernel: io scheduler mq-deadline registered
Feb 13 19:03:39.263694 kernel: io scheduler kyber registered
Feb 13 19:03:39.263712 kernel: io scheduler bfq registered
Feb 13 19:03:39.263959 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 19:03:39.263989 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:03:39.264009 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:03:39.264028 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 19:03:39.264047 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 19:03:39.264073 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:03:39.264093 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 19:03:39.264320 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 19:03:39.264349 kernel: printk: console [ttyS0] disabled
Feb 13 19:03:39.264368 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 19:03:39.264388 kernel: printk: console [ttyS0] enabled
Feb 13 19:03:39.264408 kernel: printk: bootconsole [uart0] disabled
Feb 13 19:03:39.264426 kernel: thunder_xcv, ver 1.0
Feb 13 19:03:39.264445 kernel: thunder_bgx, ver 1.0
Feb 13 19:03:39.264465 kernel: nicpf, ver 1.0
Feb 13 19:03:39.264491 kernel: nicvf, ver 1.0
Feb 13 19:03:39.264894 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:03:39.265091 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:03:38 UTC (1739473418)
Feb 13 19:03:39.265117 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:03:39.265137 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 19:03:39.265156 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:03:39.265174 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:03:39.265200 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:03:39.265219 kernel: Segment Routing with IPv6
Feb 13 19:03:39.265238 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:03:39.265256 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:03:39.265275 kernel: Key type dns_resolver registered
Feb 13 19:03:39.265294 kernel: registered taskstats version 1
Feb 13 19:03:39.265313 kernel: Loading compiled-in X.509 certificates
Feb 13 19:03:39.265332 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 916055ad16f0ba578cce640a9ac58627fd43c936'
Feb 13 19:03:39.265350 kernel: Key type .fscrypt registered
Feb 13 19:03:39.265368 kernel: Key type fscrypt-provisioning registered
Feb 13 19:03:39.265391 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:03:39.265409 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:03:39.265427 kernel: ima: No architecture policies found
Feb 13 19:03:39.265446 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:03:39.265464 kernel: clk: Disabling unused clocks
Feb 13 19:03:39.265482 kernel: Freeing unused kernel memory: 39680K
Feb 13 19:03:39.265521 kernel: Run /init as init process
Feb 13 19:03:39.265543 kernel: with arguments:
Feb 13 19:03:39.265561 kernel: /init
Feb 13 19:03:39.265585 kernel: with environment:
Feb 13 19:03:39.265603 kernel: HOME=/
Feb 13 19:03:39.265621 kernel: TERM=linux
Feb 13 19:03:39.265639 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:03:39.265661 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:03:39.265684 systemd[1]: Detected virtualization amazon.
Feb 13 19:03:39.265704 systemd[1]: Detected architecture arm64.
Feb 13 19:03:39.265728 systemd[1]: Running in initrd.
Feb 13 19:03:39.265747 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:03:39.265766 systemd[1]: Hostname set to .
Feb 13 19:03:39.265787 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:03:39.265827 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:03:39.265847 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:03:39.265868 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:03:39.265889 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:03:39.265915 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:03:39.265936 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:03:39.265957 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:03:39.265980 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:03:39.266001 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:03:39.266022 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:03:39.266042 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:03:39.266069 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:03:39.266090 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:03:39.266110 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:03:39.266130 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:03:39.266151 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:03:39.266171 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:03:39.266191 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:03:39.266212 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:03:39.266233 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:03:39.266260 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:03:39.266280 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:03:39.266300 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:03:39.266320 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:03:39.266340 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:03:39.266360 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:03:39.266380 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:03:39.266400 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:03:39.266424 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:03:39.266444 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:03:39.266466 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:03:39.266488 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:03:39.267782 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:03:39.267867 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 19:03:39.267922 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:03:39.267944 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:03:39.267964 kernel: Bridge firewalling registered
Feb 13 19:03:39.267989 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:03:39.268010 systemd-journald[251]: Journal started
Feb 13 19:03:39.268048 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2699bb8aa148b05d4fe4b6dd5d92c7) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:03:39.227583 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 19:03:39.263601 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 19:03:39.289193 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:03:39.281610 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:03:39.294827 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:03:39.308474 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:03:39.314090 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:03:39.324712 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:03:39.351588 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:03:39.365941 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:03:39.375201 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:03:39.395328 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:03:39.400916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:03:39.424853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:03:39.429041 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:03:39.438013 dracut-cmdline[280]: dracut-dracut-053
Feb 13 19:03:39.444229 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5785d28b783f64f8b8d29b6ea80baf9f88b0129b21e0dd81447612b348e04e7a
Feb 13 19:03:39.515267 systemd-resolved[290]: Positive Trust Anchors:
Feb 13 19:03:39.515327 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:03:39.515390 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:03:39.621579 kernel: SCSI subsystem initialized
Feb 13 19:03:39.632577 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:03:39.644559 kernel: iscsi: registered transport (tcp)
Feb 13 19:03:39.668766 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:03:39.668858 kernel: QLogic iSCSI HBA Driver
Feb 13 19:03:39.739558 kernel: random: crng init done
Feb 13 19:03:39.739934 systemd-resolved[290]: Defaulting to hostname 'linux'.
Feb 13 19:03:39.744328 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:03:39.762324 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:03:39.778599 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:03:39.790014 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:03:39.823552 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:03:39.823631 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:03:39.826539 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:03:39.894551 kernel: raid6: neonx8 gen() 6751 MB/s
Feb 13 19:03:39.911538 kernel: raid6: neonx4 gen() 6569 MB/s
Feb 13 19:03:39.928541 kernel: raid6: neonx2 gen() 5462 MB/s
Feb 13 19:03:39.945542 kernel: raid6: neonx1 gen() 3949 MB/s
Feb 13 19:03:39.962545 kernel: raid6: int64x8 gen() 3820 MB/s
Feb 13 19:03:39.979553 kernel: raid6: int64x4 gen() 3716 MB/s
Feb 13 19:03:39.996558 kernel: raid6: int64x2 gen() 3613 MB/s
Feb 13 19:03:40.014339 kernel: raid6: int64x1 gen() 2765 MB/s
Feb 13 19:03:40.014416 kernel: raid6: using algorithm neonx8 gen() 6751 MB/s
Feb 13 19:03:40.032293 kernel: raid6: .... xor() 4880 MB/s, rmw enabled
Feb 13 19:03:40.032358 kernel: raid6: using neon recovery algorithm
Feb 13 19:03:40.040548 kernel: xor: measuring software checksum speed
Feb 13 19:03:40.041537 kernel: 8regs : 10062 MB/sec
Feb 13 19:03:40.043731 kernel: 32regs : 10837 MB/sec
Feb 13 19:03:40.043768 kernel: arm64_neon : 9544 MB/sec
Feb 13 19:03:40.043793 kernel: xor: using function: 32regs (10837 MB/sec)
Feb 13 19:03:40.130560 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:03:40.153409 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:03:40.165958 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:03:40.211048 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Feb 13 19:03:40.219485 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:03:40.233957 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:03:40.276993 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Feb 13 19:03:40.333612 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:03:40.343884 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:03:40.476219 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:03:40.491213 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:03:40.535885 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:03:40.540892 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:03:40.547438 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:03:40.552669 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:03:40.580343 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:03:40.608629 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:03:40.704735 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:03:40.704802 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 19:03:40.732223 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:03:40.732571 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:03:40.732810 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:96:cf:c3:d3:51
Feb 13 19:03:40.723219 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:03:40.723491 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:03:40.729838 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:03:40.733173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:03:40.733384 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:03:40.737098 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:03:40.745695 (udev-worker)[542]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:03:40.766948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:03:40.775539 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 19:03:40.778554 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:03:40.789550 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:03:40.802542 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:03:40.802696 kernel: GPT:9289727 != 16777215
Feb 13 19:03:40.802722 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:03:40.806185 kernel: GPT:9289727 != 16777215
Feb 13 19:03:40.806243 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:03:40.808570 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:03:40.816829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:03:40.833835 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:03:40.885298 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:03:40.898785 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (539)
Feb 13 19:03:40.915534 kernel: BTRFS: device fsid 44fbcf53-fa5f-4fd4-b434-f067731b9a44 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (515)
Feb 13 19:03:40.995352 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:03:41.038863 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:03:41.064236 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:03:41.069963 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:03:41.084468 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:03:41.100947 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:03:41.122571 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:03:41.123620 disk-uuid[660]: Primary Header is updated.
Feb 13 19:03:41.123620 disk-uuid[660]: Secondary Entries is updated.
Feb 13 19:03:41.123620 disk-uuid[660]: Secondary Header is updated.
Feb 13 19:03:41.153577 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:03:42.163559 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:03:42.164819 disk-uuid[662]: The operation has completed successfully.
Feb 13 19:03:42.369439 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:03:42.369679 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:03:42.424749 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:03:42.434874 sh[921]: Success
Feb 13 19:03:42.456000 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:03:42.612258 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:03:42.623924 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:03:42.631961 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:03:42.693215 kernel: BTRFS info (device dm-0): first mount of filesystem 44fbcf53-fa5f-4fd4-b434-f067731b9a44
Feb 13 19:03:42.693314 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:03:42.693342 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:03:42.696243 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:03:42.696290 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:03:42.762571 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:03:42.796244 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:03:42.800588 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:03:42.813861 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:03:42.827060 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:03:42.850213 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:03:42.850297 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:03:42.851485 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:03:42.864200 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:03:42.882243 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:03:42.885733 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:03:42.902614 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:03:42.912964 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:03:43.039397 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:03:43.050865 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:03:43.109542 systemd-networkd[1114]: lo: Link UP
Feb 13 19:03:43.109564 systemd-networkd[1114]: lo: Gained carrier
Feb 13 19:03:43.114075 systemd-networkd[1114]: Enumeration completed
Feb 13 19:03:43.116080 systemd-networkd[1114]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:03:43.116091 systemd-networkd[1114]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:03:43.121398 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:03:43.127903 systemd[1]: Reached target network.target - Network.
Feb 13 19:03:43.128484 systemd-networkd[1114]: eth0: Link UP
Feb 13 19:03:43.128493 systemd-networkd[1114]: eth0: Gained carrier
Feb 13 19:03:43.128537 systemd-networkd[1114]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:03:43.167621 systemd-networkd[1114]: eth0: DHCPv4 address 172.31.22.68/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:03:43.292745 ignition[1020]: Ignition 2.20.0
Feb 13 19:03:43.292767 ignition[1020]: Stage: fetch-offline
Feb 13 19:03:43.293192 ignition[1020]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:43.293215 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:43.296788 ignition[1020]: Ignition finished successfully
Feb 13 19:03:43.303961 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:03:43.323964 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:03:43.351651 ignition[1122]: Ignition 2.20.0
Feb 13 19:03:43.351673 ignition[1122]: Stage: fetch
Feb 13 19:03:43.352335 ignition[1122]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:43.352476 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:43.353157 ignition[1122]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:43.363022 ignition[1122]: PUT result: OK
Feb 13 19:03:43.366484 ignition[1122]: parsed url from cmdline: ""
Feb 13 19:03:43.366526 ignition[1122]: no config URL provided
Feb 13 19:03:43.366544 ignition[1122]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:03:43.366575 ignition[1122]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:03:43.366611 ignition[1122]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:43.368381 ignition[1122]: PUT result: OK
Feb 13 19:03:43.369294 ignition[1122]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:03:43.377736 ignition[1122]: GET result: OK
Feb 13 19:03:43.377889 ignition[1122]: parsing config with SHA512: b57fd7e85b8a34d8be295cd1e6a1ad7b2baf59f31bcc90516e696a8dbf205377873465d17711e41fa795884ff4e2858f8c7fb6c64c0fb12c0e53d79b6866c537
Feb 13 19:03:43.388393 unknown[1122]: fetched base config from "system"
Feb 13 19:03:43.388426 unknown[1122]: fetched base config from "system"
Feb 13 19:03:43.388441 unknown[1122]: fetched user config from "aws"
Feb 13 19:03:43.390238 ignition[1122]: fetch: fetch complete
Feb 13 19:03:43.390252 ignition[1122]: fetch: fetch passed
Feb 13 19:03:43.390428 ignition[1122]: Ignition finished successfully
Feb 13 19:03:43.399557 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:03:43.414839 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:03:43.440535 ignition[1129]: Ignition 2.20.0
Feb 13 19:03:43.440561 ignition[1129]: Stage: kargs
Feb 13 19:03:43.442396 ignition[1129]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:43.442423 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:43.442674 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:43.446079 ignition[1129]: PUT result: OK
Feb 13 19:03:43.455894 ignition[1129]: kargs: kargs passed
Feb 13 19:03:43.456090 ignition[1129]: Ignition finished successfully
Feb 13 19:03:43.461706 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:03:43.478823 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:03:43.508910 ignition[1135]: Ignition 2.20.0
Feb 13 19:03:43.508945 ignition[1135]: Stage: disks
Feb 13 19:03:43.510208 ignition[1135]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:43.510246 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:43.510579 ignition[1135]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:43.513557 ignition[1135]: PUT result: OK
Feb 13 19:03:43.522850 ignition[1135]: disks: disks passed
Feb 13 19:03:43.523298 ignition[1135]: Ignition finished successfully
Feb 13 19:03:43.530580 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:03:43.534130 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:03:43.536355 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:03:43.538772 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:03:43.540651 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:03:43.542572 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:03:43.553787 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:03:43.604629 systemd-fsck[1143]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:03:43.611020 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:03:43.624681 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:03:43.707570 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e24df12d-6575-4a90-bef9-33573b9d63e7 r/w with ordered data mode. Quota mode: none.
Feb 13 19:03:43.708549 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:03:43.714579 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:03:43.725905 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:03:43.734809 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:03:43.742905 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:03:43.743431 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:03:43.743482 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:03:43.770532 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1162)
Feb 13 19:03:43.770772 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:03:43.777831 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:03:43.777871 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:03:43.779437 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:03:43.783943 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:03:43.792546 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:03:43.797216 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:03:44.181143 initrd-setup-root[1186]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:03:44.191421 initrd-setup-root[1193]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:03:44.214077 initrd-setup-root[1200]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:03:44.222551 initrd-setup-root[1207]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:03:44.548900 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:03:44.559692 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:03:44.566803 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:03:44.585958 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:03:44.588203 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:03:44.610368 systemd-networkd[1114]: eth0: Gained IPv6LL
Feb 13 19:03:44.637024 ignition[1275]: INFO : Ignition 2.20.0
Feb 13 19:03:44.637024 ignition[1275]: INFO : Stage: mount
Feb 13 19:03:44.640961 ignition[1275]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:44.640961 ignition[1275]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:44.640961 ignition[1275]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:44.646320 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:03:44.651298 ignition[1275]: INFO : PUT result: OK
Feb 13 19:03:44.655313 ignition[1275]: INFO : mount: mount passed
Feb 13 19:03:44.656924 ignition[1275]: INFO : Ignition finished successfully
Feb 13 19:03:44.660998 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:03:44.668800 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:03:44.721087 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:03:44.748245 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1288)
Feb 13 19:03:44.748314 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 76ff7707-a10f-40e5-bc71-1b3a44c2c51f
Feb 13 19:03:44.749894 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:03:44.749935 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:03:44.756539 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:03:44.760593 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:03:44.793163 ignition[1305]: INFO : Ignition 2.20.0
Feb 13 19:03:44.793163 ignition[1305]: INFO : Stage: files
Feb 13 19:03:44.797678 ignition[1305]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:44.797678 ignition[1305]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:44.797678 ignition[1305]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:44.797678 ignition[1305]: INFO : PUT result: OK
Feb 13 19:03:44.809206 ignition[1305]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:03:44.813624 ignition[1305]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:03:44.813624 ignition[1305]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:03:44.835991 ignition[1305]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:03:44.841431 ignition[1305]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:03:44.844342 unknown[1305]: wrote ssh authorized keys file for user: core
Feb 13 19:03:44.846735 ignition[1305]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:03:44.851560 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:03:44.851560 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 19:03:44.918708 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:03:45.072928 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 19:03:45.077080 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:03:45.077080 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:03:45.077080 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:03:45.077080 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:03:45.077080 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:03:45.077080 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:03:45.077080 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:03:45.100837 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:03:45.100837 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:03:45.100837 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:03:45.100837 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:03:45.100837 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:03:45.100837 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:03:45.100837 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 19:03:45.588380 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 19:03:45.994629 ignition[1305]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 19:03:45.994629 ignition[1305]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 19:03:46.015896 ignition[1305]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:03:46.015896 ignition[1305]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:03:46.015896 ignition[1305]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 19:03:46.015896 ignition[1305]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:03:46.015896 ignition[1305]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:03:46.015896 ignition[1305]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:03:46.015896 ignition[1305]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:03:46.015896 ignition[1305]: INFO : files: files passed
Feb 13 19:03:46.015896 ignition[1305]: INFO : Ignition finished successfully
Feb 13 19:03:46.004584 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:03:46.039578 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:03:46.050349 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:03:46.060052 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:03:46.062691 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:03:46.101711 initrd-setup-root-after-ignition[1333]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:03:46.101711 initrd-setup-root-after-ignition[1333]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:03:46.109141 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:03:46.115354 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:03:46.120290 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:03:46.139020 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:03:46.195695 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:03:46.196446 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:03:46.203688 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:03:46.205681 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:03:46.207908 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:03:46.227487 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:03:46.262814 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:03:46.278387 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:03:46.315208 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:03:46.319488 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:03:46.322884 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:03:46.324962 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:03:46.325259 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:03:46.328195 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:03:46.331298 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:03:46.342364 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:03:46.347481 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:03:46.353275 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:03:46.357772 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:03:46.360120 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:03:46.363216 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:03:46.372325 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:03:46.374423 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:03:46.378920 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:03:46.379172 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:03:46.385910 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:03:46.388109 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:03:46.390680 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:03:46.393164 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:03:46.396495 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:03:46.396790 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:03:46.406159 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:03:46.406479 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:03:46.417893 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:03:46.418388 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:03:46.433967 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:03:46.442725 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:03:46.448132 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:03:46.448452 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:03:46.454131 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:03:46.456342 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:03:46.483642 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:03:46.484044 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:03:46.495600 ignition[1357]: INFO : Ignition 2.20.0
Feb 13 19:03:46.498285 ignition[1357]: INFO : Stage: umount
Feb 13 19:03:46.498285 ignition[1357]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:03:46.498285 ignition[1357]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:03:46.498285 ignition[1357]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:03:46.508353 ignition[1357]: INFO : PUT result: OK
Feb 13 19:03:46.513152 ignition[1357]: INFO : umount: umount passed
Feb 13 19:03:46.518726 ignition[1357]: INFO : Ignition finished successfully
Feb 13 19:03:46.517773 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:03:46.519873 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:03:46.522447 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:03:46.536706 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:03:46.536948 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:03:46.540884 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:03:46.541334 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:03:46.547377 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:03:46.547549 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:03:46.549574 systemd[1]: Stopped target network.target - Network.
Feb 13 19:03:46.551238 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:03:46.551326 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:03:46.553597 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:03:46.556960 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:03:46.563987 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:03:46.567002 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:03:46.568729 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:03:46.570891 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:03:46.571016 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:03:46.574449 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:03:46.574542 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:03:46.576450 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:03:46.576565 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:03:46.578446 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:03:46.578546 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:03:46.580826 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:03:46.583066 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:03:46.587338 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:03:46.587546 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:03:46.591624 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:03:46.591785 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:03:46.595131 systemd-networkd[1114]: eth0: DHCPv6 lease lost
Feb 13 19:03:46.602246 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:03:46.602563 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:03:46.612174 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:03:46.613089 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:03:46.620141 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:03:46.620281 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:03:46.632023 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:03:46.635683 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:03:46.635820 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:03:46.638704 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:03:46.638813 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:03:46.640991 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:03:46.641089 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:03:46.643374 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:03:46.643459 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:03:46.673790 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:03:46.712345 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:03:46.713093 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:03:46.721444 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:03:46.723328 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:03:46.726565 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:03:46.726742 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:03:46.729089 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:03:46.729171 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:03:46.734250 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:03:46.734346 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:03:46.737120 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:03:46.737215 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:03:46.743070 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:03:46.743176 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:03:46.759962 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:03:46.770636 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:03:46.770764 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:03:46.773160 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:03:46.773252 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:03:46.775660 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:03:46.775739 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:03:46.778079 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:03:46.778162 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:03:46.802417 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:03:46.803940 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:03:46.812184 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:03:46.831785 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:03:46.849119 systemd[1]: Switching root.
Feb 13 19:03:46.908154 systemd-journald[251]: Journal stopped
Feb 13 19:03:49.469165 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:03:49.469444 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:03:49.476557 kernel: SELinux: policy capability open_perms=1
Feb 13 19:03:49.476658 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:03:49.476691 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:03:49.476721 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:03:49.476752 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:03:49.476789 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:03:49.476820 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:03:49.476849 kernel: audit: type=1403 audit(1739473427.419:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:03:49.476893 systemd[1]: Successfully loaded SELinux policy in 91.119ms.
Feb 13 19:03:49.476940 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.960ms.
Feb 13 19:03:49.476986 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:03:49.477020 systemd[1]: Detected virtualization amazon.
Feb 13 19:03:49.477051 systemd[1]: Detected architecture arm64.
Feb 13 19:03:49.477082 systemd[1]: Detected first boot.
Feb 13 19:03:49.477113 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:03:49.477146 zram_generator::config[1400]: No configuration found.
Feb 13 19:03:49.477183 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:03:49.477212 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:03:49.477243 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:03:49.477277 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:03:49.477307 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:03:49.477337 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:03:49.477369 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:03:49.477400 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:03:49.477434 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:03:49.477466 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:03:49.477495 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:03:49.477552 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:03:49.477589 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:03:49.477620 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:03:49.477655 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:03:49.477688 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:03:49.477726 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:03:49.477797 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:03:49.477848 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:03:49.477883 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:03:49.477917 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:03:49.477969 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:03:49.478010 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:03:49.478044 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:03:49.478079 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:03:49.478120 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:03:49.478158 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:03:49.478194 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:03:49.478225 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:03:49.478257 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:03:49.478291 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:03:49.478322 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:03:49.478363 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:03:49.478395 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:03:49.478440 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:03:49.478476 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:03:49.480013 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:03:49.480070 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:03:49.480104 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:03:49.480144 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:03:49.480180 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:03:49.480212 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:03:49.480245 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:03:49.480288 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:03:49.480318 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:03:49.480347 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:03:49.480377 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:03:49.480408 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:03:49.480445 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:03:49.480476 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:03:49.480554 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:03:49.480604 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:03:49.480635 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:03:49.480675 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:03:49.480711 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:03:49.480749 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:03:49.480780 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:03:49.480816 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:03:49.480846 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:03:49.480882 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:03:49.480921 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:03:49.480954 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:03:49.480986 systemd[1]: Stopped verity-setup.service.
Feb 13 19:03:49.481020 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:03:49.481055 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:03:49.481085 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:03:49.481114 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:03:49.481147 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:03:49.481179 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:03:49.481216 kernel: fuse: init (API version 7.39)
Feb 13 19:03:49.481249 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:03:49.481279 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:03:49.481307 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:03:49.481335 kernel: loop: module loaded
Feb 13 19:03:49.481367 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:03:49.481397 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:03:49.481426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:03:49.481455 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:03:49.481484 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:03:49.481544 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:03:49.481576 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:03:49.481608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:03:49.481644 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:03:49.481675 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:03:49.481705 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:03:49.481733 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:03:49.481832 systemd-journald[1485]: Collecting audit messages is disabled.
Feb 13 19:03:49.481889 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:03:49.481919 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:03:49.481949 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:03:49.481979 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:03:49.482010 systemd-journald[1485]: Journal started
Feb 13 19:03:49.482059 systemd-journald[1485]: Runtime Journal (/run/log/journal/ec2699bb8aa148b05d4fe4b6dd5d92c7) is 8.0M, max 75.3M, 67.3M free.
Feb 13 19:03:48.778070 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:03:48.836554 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 19:03:48.837447 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:03:49.500606 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:03:49.508972 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:03:49.512752 kernel: ACPI: bus type drm_connector registered
Feb 13 19:03:49.527892 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:03:49.531557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:03:49.549527 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:03:49.549631 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:03:49.572250 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:03:49.574648 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:03:49.584597 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:03:49.596802 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:03:49.623681 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:03:49.630062 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:03:49.638689 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:03:49.641729 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:03:49.642149 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:03:49.645064 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:03:49.648785 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:03:49.652422 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:03:49.657652 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:03:49.754658 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:03:49.768389 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:03:49.781565 kernel: loop0: detected capacity change from 0 to 53784
Feb 13 19:03:49.786141 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:03:49.793876 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:03:49.801844 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:03:49.852801 systemd-tmpfiles[1512]: ACLs are not supported, ignoring.
Feb 13 19:03:49.852839 systemd-tmpfiles[1512]: ACLs are not supported, ignoring.
Feb 13 19:03:49.858639 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:03:49.865417 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:03:49.873309 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:03:49.880944 udevadm[1539]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:03:49.892414 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:03:49.898232 systemd-journald[1485]: Time spent on flushing to /var/log/journal/ec2699bb8aa148b05d4fe4b6dd5d92c7 is 41.284ms for 921 entries.
Feb 13 19:03:49.898232 systemd-journald[1485]: System Journal (/var/log/journal/ec2699bb8aa148b05d4fe4b6dd5d92c7) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:03:49.955790 systemd-journald[1485]: Received client request to flush runtime journal.
Feb 13 19:03:49.955891 kernel: loop1: detected capacity change from 0 to 189592
Feb 13 19:03:49.900171 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:03:49.916271 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:03:49.965681 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:03:50.029592 kernel: loop2: detected capacity change from 0 to 116808
Feb 13 19:03:50.033607 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:03:50.047986 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:03:50.112433 systemd-tmpfiles[1552]: ACLs are not supported, ignoring.
Feb 13 19:03:50.112476 systemd-tmpfiles[1552]: ACLs are not supported, ignoring.
Feb 13 19:03:50.125059 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:03:50.152245 kernel: loop3: detected capacity change from 0 to 113536
Feb 13 19:03:50.252656 kernel: loop4: detected capacity change from 0 to 53784
Feb 13 19:03:50.279579 kernel: loop5: detected capacity change from 0 to 189592
Feb 13 19:03:50.322765 kernel: loop6: detected capacity change from 0 to 116808
Feb 13 19:03:50.342612 kernel: loop7: detected capacity change from 0 to 113536
Feb 13 19:03:50.364882 (sd-merge)[1557]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 19:03:50.369023 (sd-merge)[1557]: Merged extensions into '/usr'.
Feb 13 19:03:50.376132 systemd[1]: Reloading requested from client PID 1511 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:03:50.376172 systemd[1]: Reloading...
Feb 13 19:03:50.573589 zram_generator::config[1583]: No configuration found.
Feb 13 19:03:50.927875 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:03:51.050235 systemd[1]: Reloading finished in 673 ms.
Feb 13 19:03:51.089838 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:03:51.094181 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:03:51.115704 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:03:51.123154 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:03:51.138125 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:03:51.156132 systemd[1]: Reloading requested from client PID 1635 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:03:51.156159 systemd[1]: Reloading...
Feb 13 19:03:51.215608 systemd-tmpfiles[1636]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:03:51.216265 systemd-tmpfiles[1636]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:03:51.219487 systemd-tmpfiles[1636]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:03:51.221354 systemd-tmpfiles[1636]: ACLs are not supported, ignoring.
Feb 13 19:03:51.221521 systemd-tmpfiles[1636]: ACLs are not supported, ignoring.
Feb 13 19:03:51.232082 systemd-tmpfiles[1636]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:03:51.232122 systemd-tmpfiles[1636]: Skipping /boot
Feb 13 19:03:51.273459 systemd-udevd[1637]: Using default interface naming scheme 'v255'.
Feb 13 19:03:51.275022 systemd-tmpfiles[1636]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:03:51.277042 systemd-tmpfiles[1636]: Skipping /boot
Feb 13 19:03:51.417542 zram_generator::config[1668]: No configuration found.
Feb 13 19:03:51.463788 ldconfig[1507]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:03:51.623746 (udev-worker)[1707]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:03:51.824287 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:03:51.889620 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1714)
Feb 13 19:03:52.001833 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:03:52.003273 systemd[1]: Reloading finished in 846 ms.
Feb 13 19:03:52.063287 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:03:52.067642 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:03:52.078423 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:03:52.198858 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:03:52.219543 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:03:52.232055 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:03:52.237382 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:03:52.240206 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:03:52.245743 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:03:52.256053 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:03:52.265693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:03:52.283818 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:03:52.291033 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:03:52.293219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:03:52.302042 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:03:52.308930 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:03:52.317701 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:03:52.333066 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:03:52.335322 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:03:52.347103 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:03:52.354310 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:03:52.373558 lvm[1835]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:03:52.365054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:03:52.367615 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:03:52.385918 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:03:52.387809 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:03:52.391923 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:03:52.394649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:03:52.403109 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:03:52.431030 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:03:52.433666 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:03:52.446187 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:03:52.446870 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:03:52.456142 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:03:52.468653 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:03:52.491144 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:03:52.500713 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:03:52.504263 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:03:52.516177 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:03:52.577140 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:03:52.582641 lvm[1872]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:03:52.592913 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:03:52.617847 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:03:52.621220 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:03:52.623891 augenrules[1880]: No rules
Feb 13 19:03:52.630221 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:03:52.634490 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:03:52.644361 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:03:52.691901 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:03:52.696872 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:03:52.782815 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:03:52.810771 systemd-networkd[1848]: lo: Link UP
Feb 13 19:03:52.810790 systemd-networkd[1848]: lo: Gained carrier
Feb 13 19:03:52.814289 systemd-networkd[1848]: Enumeration completed
Feb 13 19:03:52.814716 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:03:52.820643 systemd-networkd[1848]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:03:52.820664 systemd-networkd[1848]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:03:52.824819 systemd-networkd[1848]: eth0: Link UP
Feb 13 19:03:52.825345 systemd-networkd[1848]: eth0: Gained carrier
Feb 13 19:03:52.825534 systemd-networkd[1848]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:03:52.830421 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:03:52.833829 systemd-networkd[1848]: eth0: DHCPv4 address 172.31.22.68/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:03:52.834979 systemd-resolved[1849]: Positive Trust Anchors:
Feb 13 19:03:52.835040 systemd-resolved[1849]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:03:52.835104 systemd-resolved[1849]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:03:52.849624 systemd-resolved[1849]: Defaulting to hostname 'linux'.
Feb 13 19:03:52.853906 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:03:52.857193 systemd[1]: Reached target network.target - Network.
Feb 13 19:03:52.859100 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:03:52.861425 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:03:52.863952 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:03:52.866386 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:03:52.869111 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:03:52.871582 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:03:52.873905 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:03:52.876206 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:03:52.876260 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:03:52.877935 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:03:52.884568 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:03:52.890689 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:03:52.917936 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:03:52.921045 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:03:52.923369 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:03:52.925197 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:03:52.927400 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:03:52.927458 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:03:52.933753 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:03:52.947359 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:03:52.952962 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:03:52.961844 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:03:52.976124 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:03:52.978151 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:03:52.984137 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:03:52.993286 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:03:53.001797 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:03:53.013805 systemd[1]: Starting setup-oem.service - Setup OEM... 
Feb 13 19:03:53.020139 jq[1905]: false Feb 13 19:03:53.020816 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:03:53.036944 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:03:53.064045 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:03:53.068349 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:03:53.069196 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:03:53.074833 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:03:53.085274 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:03:53.095322 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:03:53.098725 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:03:53.116185 dbus-daemon[1904]: [system] SELinux support is enabled Feb 13 19:03:53.139344 dbus-daemon[1904]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1848 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:03:53.123364 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:03:53.153954 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:03:53.154999 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Feb 13 19:03:53.160018 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:03:53.160157 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:03:53.163892 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:03:53.163954 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:03:53.202939 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:03:53.265533 tar[1927]: linux-arm64/helm Feb 13 19:03:53.274450 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:03:53.275374 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 19:03:53.300180 jq[1918]: true Feb 13 19:03:53.309494 extend-filesystems[1906]: Found loop4 Feb 13 19:03:53.309494 extend-filesystems[1906]: Found loop5 Feb 13 19:03:53.309494 extend-filesystems[1906]: Found loop6 Feb 13 19:03:53.309494 extend-filesystems[1906]: Found loop7 Feb 13 19:03:53.309494 extend-filesystems[1906]: Found nvme0n1 Feb 13 19:03:53.309494 extend-filesystems[1906]: Found nvme0n1p1 Feb 13 19:03:53.309494 extend-filesystems[1906]: Found nvme0n1p2 Feb 13 19:03:53.327776 extend-filesystems[1906]: Found nvme0n1p3 Feb 13 19:03:53.327776 extend-filesystems[1906]: Found usr Feb 13 19:03:53.327776 extend-filesystems[1906]: Found nvme0n1p4 Feb 13 19:03:53.327776 extend-filesystems[1906]: Found nvme0n1p6 Feb 13 19:03:53.327776 extend-filesystems[1906]: Found nvme0n1p7 Feb 13 19:03:53.327776 extend-filesystems[1906]: Found nvme0n1p9 Feb 13 19:03:53.327776 extend-filesystems[1906]: Checking size of /dev/nvme0n1p9 Feb 13 19:03:53.363040 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:08:36 UTC 2025 (1): Starting Feb 13 19:03:53.363040 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:03:53.363040 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: ---------------------------------------------------- Feb 13 19:03:53.363040 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:03:53.363040 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:03:53.363040 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: corporation. 
Support and training for ntp-4 are Feb 13 19:03:53.363040 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: available at https://www.nwtime.org/support Feb 13 19:03:53.363040 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: ---------------------------------------------------- Feb 13 19:03:53.349114 ntpd[1908]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:08:36 UTC 2025 (1): Starting Feb 13 19:03:53.364292 coreos-metadata[1903]: Feb 13 19:03:53.341 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:03:53.364292 coreos-metadata[1903]: Feb 13 19:03:53.350 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:03:53.364292 coreos-metadata[1903]: Feb 13 19:03:53.352 INFO Fetch successful Feb 13 19:03:53.364292 coreos-metadata[1903]: Feb 13 19:03:53.354 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:03:53.337280 (ntainerd)[1943]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:03:53.349179 ntpd[1908]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:03:53.366911 coreos-metadata[1903]: Feb 13 19:03:53.364 INFO Fetch successful Feb 13 19:03:53.366911 coreos-metadata[1903]: Feb 13 19:03:53.364 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:03:53.349200 ntpd[1908]: ---------------------------------------------------- Feb 13 19:03:53.349218 ntpd[1908]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:03:53.349243 ntpd[1908]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:03:53.349266 ntpd[1908]: corporation. 
Support and training for ntp-4 are Feb 13 19:03:53.349284 ntpd[1908]: available at https://www.nwtime.org/support Feb 13 19:03:53.349304 ntpd[1908]: ---------------------------------------------------- Feb 13 19:03:53.370119 coreos-metadata[1903]: Feb 13 19:03:53.369 INFO Fetch successful Feb 13 19:03:53.370119 coreos-metadata[1903]: Feb 13 19:03:53.370 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:03:53.378110 coreos-metadata[1903]: Feb 13 19:03:53.378 INFO Fetch successful Feb 13 19:03:53.378110 coreos-metadata[1903]: Feb 13 19:03:53.378 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:03:53.380157 ntpd[1908]: proto: precision = 0.096 usec (-23) Feb 13 19:03:53.380314 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: proto: precision = 0.096 usec (-23) Feb 13 19:03:53.381732 coreos-metadata[1903]: Feb 13 19:03:53.381 INFO Fetch failed with 404: resource not found Feb 13 19:03:53.381732 coreos-metadata[1903]: Feb 13 19:03:53.381 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:03:53.386651 ntpd[1908]: basedate set to 2025-02-01 Feb 13 19:03:53.386700 ntpd[1908]: gps base set to 2025-02-02 (week 2352) Feb 13 19:03:53.386861 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: basedate set to 2025-02-01 Feb 13 19:03:53.386861 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: gps base set to 2025-02-02 (week 2352) Feb 13 19:03:53.389230 coreos-metadata[1903]: Feb 13 19:03:53.389 INFO Fetch successful Feb 13 19:03:53.389230 coreos-metadata[1903]: Feb 13 19:03:53.389 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:03:53.392880 coreos-metadata[1903]: Feb 13 19:03:53.392 INFO Fetch successful Feb 13 19:03:53.392880 coreos-metadata[1903]: Feb 13 19:03:53.392 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:03:53.400090 systemd[1]: motdgen.service: Deactivated 
successfully. Feb 13 19:03:53.402840 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:03:53.405888 coreos-metadata[1903]: Feb 13 19:03:53.403 INFO Fetch successful Feb 13 19:03:53.405888 coreos-metadata[1903]: Feb 13 19:03:53.405 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:03:53.405888 coreos-metadata[1903]: Feb 13 19:03:53.405 INFO Fetch successful Feb 13 19:03:53.405888 coreos-metadata[1903]: Feb 13 19:03:53.405 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:03:53.407739 ntpd[1908]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:03:53.409643 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:03:53.409643 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:03:53.409643 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:03:53.409643 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: Listen normally on 3 eth0 172.31.22.68:123 Feb 13 19:03:53.409643 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: Listen normally on 4 lo [::1]:123 Feb 13 19:03:53.409643 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: bind(21) AF_INET6 fe80::496:cfff:fec3:d351%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:03:53.409643 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: unable to create socket on eth0 (5) for fe80::496:cfff:fec3:d351%2#123 Feb 13 19:03:53.409643 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: failed to init interface for address fe80::496:cfff:fec3:d351%2 Feb 13 19:03:53.409643 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: Listening on routing socket on fd #21 for interface updates Feb 13 19:03:53.407828 ntpd[1908]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:03:53.408073 ntpd[1908]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:03:53.408132 ntpd[1908]: Listen normally on 3 eth0 172.31.22.68:123 Feb 13 19:03:53.408198 ntpd[1908]: 
Listen normally on 4 lo [::1]:123 Feb 13 19:03:53.408268 ntpd[1908]: bind(21) AF_INET6 fe80::496:cfff:fec3:d351%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:03:53.408306 ntpd[1908]: unable to create socket on eth0 (5) for fe80::496:cfff:fec3:d351%2#123 Feb 13 19:03:53.408334 ntpd[1908]: failed to init interface for address fe80::496:cfff:fec3:d351%2 Feb 13 19:03:53.408384 ntpd[1908]: Listening on routing socket on fd #21 for interface updates Feb 13 19:03:53.414875 systemd-logind[1915]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:03:53.414920 systemd-logind[1915]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 19:03:53.417579 systemd-logind[1915]: New seat seat0. Feb 13 19:03:53.421380 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:03:53.421587 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:03:53.424886 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:03:53.424886 ntpd[1908]: 13 Feb 19:03:53 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:03:53.421453 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:03:53.433660 coreos-metadata[1903]: Feb 13 19:03:53.416 INFO Fetch successful Feb 13 19:03:53.442149 extend-filesystems[1906]: Resized partition /dev/nvme0n1p9 Feb 13 19:03:53.447166 extend-filesystems[1956]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:03:53.476536 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:03:53.477190 update_engine[1917]: I20250213 19:03:53.476982 1917 main.cc:92] Flatcar Update Engine starting Feb 13 19:03:53.503493 systemd[1]: Started update-engine.service - Update Engine. 
Feb 13 19:03:53.511131 update_engine[1917]: I20250213 19:03:53.505925 1917 update_check_scheduler.cc:74] Next update check in 7m18s Feb 13 19:03:53.511482 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:03:53.530940 jq[1946]: true Feb 13 19:03:53.519620 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:03:53.656564 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:03:53.640844 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:03:53.649346 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:03:53.688269 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1675) Feb 13 19:03:53.688433 extend-filesystems[1956]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:03:53.688433 extend-filesystems[1956]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:03:53.688433 extend-filesystems[1956]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 19:03:53.712687 extend-filesystems[1906]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:03:53.690399 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:03:53.693082 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:03:53.747166 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:03:53.747467 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:03:53.754493 dbus-daemon[1904]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1932 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:03:53.794818 systemd[1]: Starting polkit.service - Authorization Manager... 
Feb 13 19:03:53.868858 polkitd[2002]: Started polkitd version 121 Feb 13 19:03:53.872373 bash[2015]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:03:53.894037 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:03:53.924289 polkitd[2002]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:03:53.924402 polkitd[2002]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:03:53.935037 polkitd[2002]: Finished loading, compiling and executing 2 rules Feb 13 19:03:53.936239 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:03:53.936852 polkitd[2002]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:03:53.948773 locksmithd[1960]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:03:53.953049 systemd[1]: Starting sshkeys.service... Feb 13 19:03:53.954958 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:03:53.989286 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:03:54.031627 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:03:54.055453 systemd-hostnamed[1932]: Hostname set to (transient) Feb 13 19:03:54.055859 systemd-resolved[1849]: System hostname changed to 'ip-172-31-22-68'. Feb 13 19:03:54.083921 systemd-networkd[1848]: eth0: Gained IPv6LL Feb 13 19:03:54.102408 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:03:54.109990 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:03:54.153975 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:03:54.167067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:03:54.177237 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Feb 13 19:03:54.396538 containerd[1943]: time="2025-02-13T19:03:54.388462126Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:03:54.423454 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:03:54.494387 containerd[1943]: time="2025-02-13T19:03:54.493703915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:54.499135 coreos-metadata[2039]: Feb 13 19:03:54.494 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:03:54.522140 amazon-ssm-agent[2063]: Initializing new seelog logger Feb 13 19:03:54.522140 amazon-ssm-agent[2063]: New Seelog Logger Creation Complete Feb 13 19:03:54.522140 amazon-ssm-agent[2063]: 2025/02/13 19:03:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:54.522140 amazon-ssm-agent[2063]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:54.522140 amazon-ssm-agent[2063]: 2025/02/13 19:03:54 processing appconfig overrides Feb 13 19:03:54.524225 amazon-ssm-agent[2063]: 2025/02/13 19:03:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:54.524225 amazon-ssm-agent[2063]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:54.524316 coreos-metadata[2039]: Feb 13 19:03:54.522 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:03:54.526810 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO Proxy environment variables: Feb 13 19:03:54.526810 amazon-ssm-agent[2063]: 2025/02/13 19:03:54 processing appconfig overrides Feb 13 19:03:54.526810 amazon-ssm-agent[2063]: 2025/02/13 19:03:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:54.526810 amazon-ssm-agent[2063]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 19:03:54.527880 coreos-metadata[2039]: Feb 13 19:03:54.527 INFO Fetch successful Feb 13 19:03:54.531157 coreos-metadata[2039]: Feb 13 19:03:54.528 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:03:54.531157 coreos-metadata[2039]: Feb 13 19:03:54.531 INFO Fetch successful Feb 13 19:03:54.531787 containerd[1943]: time="2025-02-13T19:03:54.531721643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:54.534103 amazon-ssm-agent[2063]: 2025/02/13 19:03:54 processing appconfig overrides Feb 13 19:03:54.534318 containerd[1943]: time="2025-02-13T19:03:54.533613719Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:03:54.534318 containerd[1943]: time="2025-02-13T19:03:54.533684531Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:03:54.540396 containerd[1943]: time="2025-02-13T19:03:54.537351887Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:03:54.540396 containerd[1943]: time="2025-02-13T19:03:54.537426911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:54.540396 containerd[1943]: time="2025-02-13T19:03:54.537655859Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:54.540396 containerd[1943]: time="2025-02-13T19:03:54.537695963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:03:54.540396 containerd[1943]: time="2025-02-13T19:03:54.538026107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:54.540396 containerd[1943]: time="2025-02-13T19:03:54.538057139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:54.540396 containerd[1943]: time="2025-02-13T19:03:54.538086539Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:54.540396 containerd[1943]: time="2025-02-13T19:03:54.538109807Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:54.540396 containerd[1943]: time="2025-02-13T19:03:54.538267463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:54.540396 containerd[1943]: time="2025-02-13T19:03:54.538938167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:03:54.540396 containerd[1943]: time="2025-02-13T19:03:54.539215487Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:03:54.539758 unknown[2039]: wrote ssh authorized keys file for user: core Feb 13 19:03:54.541696 containerd[1943]: time="2025-02-13T19:03:54.539258123Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 19:03:54.550979 amazon-ssm-agent[2063]: 2025/02/13 19:03:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:54.550979 amazon-ssm-agent[2063]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:03:54.560087 containerd[1943]: time="2025-02-13T19:03:54.551738735Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:03:54.560087 containerd[1943]: time="2025-02-13T19:03:54.551869727Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:03:54.560360 amazon-ssm-agent[2063]: 2025/02/13 19:03:54 processing appconfig overrides Feb 13 19:03:54.580536 containerd[1943]: time="2025-02-13T19:03:54.577095875Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:03:54.580536 containerd[1943]: time="2025-02-13T19:03:54.577230983Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:03:54.580536 containerd[1943]: time="2025-02-13T19:03:54.577267451Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:03:54.580536 containerd[1943]: time="2025-02-13T19:03:54.577328699Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:03:54.580536 containerd[1943]: time="2025-02-13T19:03:54.577375391Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:03:54.580536 containerd[1943]: time="2025-02-13T19:03:54.577707527Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:03:54.580536 containerd[1943]: time="2025-02-13T19:03:54.578308403Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 13 19:03:54.580536 containerd[1943]: time="2025-02-13T19:03:54.578726807Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:03:54.580536 containerd[1943]: time="2025-02-13T19:03:54.578773631Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:03:54.580536 containerd[1943]: time="2025-02-13T19:03:54.578814551Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:03:54.580536 containerd[1943]: time="2025-02-13T19:03:54.578851115Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:03:54.580536 containerd[1943]: time="2025-02-13T19:03:54.578890247Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:03:54.580536 containerd[1943]: time="2025-02-13T19:03:54.578924891Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:03:54.580536 containerd[1943]: time="2025-02-13T19:03:54.578959907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:03:54.581306 containerd[1943]: time="2025-02-13T19:03:54.578997515Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:03:54.581306 containerd[1943]: time="2025-02-13T19:03:54.579029207Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:03:54.581306 containerd[1943]: time="2025-02-13T19:03:54.579059063Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1
Feb 13 19:03:54.581306 containerd[1943]: time="2025-02-13T19:03:54.579088247Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:03:54.581306 containerd[1943]: time="2025-02-13T19:03:54.579131447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.581306 containerd[1943]: time="2025-02-13T19:03:54.579163979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.581306 containerd[1943]: time="2025-02-13T19:03:54.579197291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.581306 containerd[1943]: time="2025-02-13T19:03:54.579247283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.581306 containerd[1943]: time="2025-02-13T19:03:54.579278375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.581306 containerd[1943]: time="2025-02-13T19:03:54.579307787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.581306 containerd[1943]: time="2025-02-13T19:03:54.579335303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.581306 containerd[1943]: time="2025-02-13T19:03:54.579364883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.581306 containerd[1943]: time="2025-02-13T19:03:54.579394631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.581306 containerd[1943]: time="2025-02-13T19:03:54.579429779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.582022 containerd[1943]: time="2025-02-13T19:03:54.579459935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.594312 containerd[1943]: time="2025-02-13T19:03:54.579488291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.594312 containerd[1943]: time="2025-02-13T19:03:54.591555467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.594312 containerd[1943]: time="2025-02-13T19:03:54.591667679Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:03:54.594312 containerd[1943]: time="2025-02-13T19:03:54.591743555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.594312 containerd[1943]: time="2025-02-13T19:03:54.591784775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.594312 containerd[1943]: time="2025-02-13T19:03:54.591820871Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:03:54.594312 containerd[1943]: time="2025-02-13T19:03:54.591983363Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:03:54.594312 containerd[1943]: time="2025-02-13T19:03:54.592026251Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:03:54.594312 containerd[1943]: time="2025-02-13T19:03:54.592053671Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:03:54.594312 containerd[1943]: time="2025-02-13T19:03:54.592084007Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:03:54.594312 containerd[1943]: time="2025-02-13T19:03:54.592107635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.594312 containerd[1943]: time="2025-02-13T19:03:54.592145987Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:03:54.594312 containerd[1943]: time="2025-02-13T19:03:54.592171655Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:03:54.594312 containerd[1943]: time="2025-02-13T19:03:54.592198187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:03:54.598686 containerd[1943]: time="2025-02-13T19:03:54.597651731Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 19:03:54.598686 containerd[1943]: time="2025-02-13T19:03:54.597843623Z" level=info msg="Connect containerd service"
Feb 13 19:03:54.598686 containerd[1943]: time="2025-02-13T19:03:54.597968759Z" level=info msg="using legacy CRI server"
Feb 13 19:03:54.598686 containerd[1943]: time="2025-02-13T19:03:54.597990443Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 19:03:54.621546 containerd[1943]: time="2025-02-13T19:03:54.618812220Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 19:03:54.629213 containerd[1943]: time="2025-02-13T19:03:54.627947304Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:03:54.629213 containerd[1943]: time="2025-02-13T19:03:54.628965624Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 19:03:54.629213 containerd[1943]: time="2025-02-13T19:03:54.629160024Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 19:03:54.629556 containerd[1943]: time="2025-02-13T19:03:54.629320140Z" level=info msg="Start subscribing containerd event"
Feb 13 19:03:54.629556 containerd[1943]: time="2025-02-13T19:03:54.629428020Z" level=info msg="Start recovering state"
Feb 13 19:03:54.632537 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO no_proxy:
Feb 13 19:03:54.640442 containerd[1943]: time="2025-02-13T19:03:54.640309104Z" level=info msg="Start event monitor"
Feb 13 19:03:54.640602 containerd[1943]: time="2025-02-13T19:03:54.640404048Z" level=info msg="Start snapshots syncer"
Feb 13 19:03:54.640602 containerd[1943]: time="2025-02-13T19:03:54.640545744Z" level=info msg="Start cni network conf syncer for default"
Feb 13 19:03:54.640602 containerd[1943]: time="2025-02-13T19:03:54.640568016Z" level=info msg="Start streaming server"
Feb 13 19:03:54.654187 systemd[1]: Started containerd.service - containerd container runtime.
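The "no network config found in /etc/cni/net.d" error logged above persists until a CNI configuration list is placed in that directory (the `NetworkPluginConfDir` shown in the CRI plugin config). A minimal illustrative sketch of such a conflist follows; the network name, bridge name, and subnet are placeholders for this example, not values taken from this host:

```python
import json

# Hypothetical minimal CNI conflist; on a real host this would be written to
# e.g. /etc/cni/net.d/10-example.conflist (path and values are placeholders).
conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",           # placeholder network name
    "plugins": [
        {
            "type": "bridge",        # requires the bridge binary under /opt/cni/bin
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",  # placeholder pod subnet
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

# Serialize as the JSON document the CRI plugin's conf syncer would pick up.
text = json.dumps(conflist, indent=2)
print(text)
```

Once a valid conflist is present, the "Start cni network conf syncer for default" loop seen above picks it up without a containerd restart.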
Feb 13 19:03:54.655084 containerd[1943]: time="2025-02-13T19:03:54.655000368Z" level=info msg="containerd successfully booted in 0.271906s"
Feb 13 19:03:54.662664 update-ssh-keys[2109]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:03:54.668639 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 19:03:54.681580 systemd[1]: Finished sshkeys.service.
Feb 13 19:03:54.732633 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO https_proxy:
Feb 13 19:03:54.836591 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO http_proxy:
Feb 13 19:03:54.938652 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 19:03:55.040603 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO Checking if agent identity type EC2 can be assumed
Feb 13 19:03:55.141778 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO Agent will take identity from EC2
Feb 13 19:03:55.241484 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:03:55.339592 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:03:55.355876 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 19:03:55.358858 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 19:03:55.358858 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Feb 13 19:03:55.358858 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 19:03:55.358858 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 19:03:55.358858 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO [Registrar] Starting registrar module
Feb 13 19:03:55.358858 amazon-ssm-agent[2063]: 2025-02-13 19:03:54 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 19:03:55.358858 amazon-ssm-agent[2063]: 2025-02-13 19:03:55 INFO [EC2Identity] EC2 registration was successful.
Feb 13 19:03:55.358858 amazon-ssm-agent[2063]: 2025-02-13 19:03:55 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 19:03:55.358858 amazon-ssm-agent[2063]: 2025-02-13 19:03:55 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 19:03:55.358858 amazon-ssm-agent[2063]: 2025-02-13 19:03:55 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 19:03:55.438846 amazon-ssm-agent[2063]: 2025-02-13 19:03:55 INFO [CredentialRefresher] Next credential rotation will be in 32.11661199776667 minutes
Feb 13 19:03:55.560787 tar[1927]: linux-arm64/LICENSE
Feb 13 19:03:55.560787 tar[1927]: linux-arm64/README.md
Feb 13 19:03:55.607348 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 19:03:56.160956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:03:56.172714 (kubelet)[2137]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:03:56.366223 ntpd[1908]: Listen normally on 6 eth0 [fe80::496:cfff:fec3:d351%2]:123
Feb 13 19:03:56.368255 ntpd[1908]: 13 Feb 19:03:56 ntpd[1908]: Listen normally on 6 eth0 [fe80::496:cfff:fec3:d351%2]:123
Feb 13 19:03:56.404240 amazon-ssm-agent[2063]: 2025-02-13 19:03:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 19:03:56.505988 amazon-ssm-agent[2063]: 2025-02-13 19:03:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2143) started
Feb 13 19:03:56.613939 amazon-ssm-agent[2063]: 2025-02-13 19:03:56 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 19:03:56.998662 sshd_keygen[1934]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:03:57.047712 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:03:57.060694 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:03:57.081000 systemd[1]: Started sshd@0-172.31.22.68:22-147.75.109.163:47556.service - OpenSSH per-connection server daemon (147.75.109.163:47556).
Feb 13 19:03:57.108271 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:03:57.108731 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:03:57.119103 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:03:57.164719 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:03:57.178269 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:03:57.196044 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 19:03:57.199026 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 19:03:57.201881 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 19:03:57.206801 systemd[1]: Startup finished in 1.219s (kernel) + 8.572s (initrd) + 9.876s (userspace) = 19.669s.
Feb 13 19:03:57.302081 kubelet[2137]: E0213 19:03:57.301928 2137 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:03:57.307061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:03:57.307746 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:03:57.308583 systemd[1]: kubelet.service: Consumed 1.310s CPU time.
Feb 13 19:03:57.349986 sshd[2163]: Accepted publickey for core from 147.75.109.163 port 47556 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:03:57.355060 sshd-session[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:57.373371 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 19:03:57.382167 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 19:03:57.386953 systemd-logind[1915]: New session 1 of user core.
Feb 13 19:03:57.416182 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 19:03:57.426252 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 19:03:57.447557 (systemd)[2179]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 19:03:57.673425 systemd[2179]: Queued start job for default target default.target.
Feb 13 19:03:57.682157 systemd[2179]: Created slice app.slice - User Application Slice.
Feb 13 19:03:57.682237 systemd[2179]: Reached target paths.target - Paths.
Feb 13 19:03:57.682272 systemd[2179]: Reached target timers.target - Timers.
Feb 13 19:03:57.685046 systemd[2179]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 19:03:57.719930 systemd[2179]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 19:03:57.720199 systemd[2179]: Reached target sockets.target - Sockets.
Feb 13 19:03:57.720233 systemd[2179]: Reached target basic.target - Basic System.
Feb 13 19:03:57.720338 systemd[2179]: Reached target default.target - Main User Target.
Feb 13 19:03:57.720405 systemd[2179]: Startup finished in 259ms.
Feb 13 19:03:57.720981 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 19:03:57.734785 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 19:03:57.895266 systemd[1]: Started sshd@1-172.31.22.68:22-147.75.109.163:47568.service - OpenSSH per-connection server daemon (147.75.109.163:47568).
Feb 13 19:03:58.096567 sshd[2190]: Accepted publickey for core from 147.75.109.163 port 47568 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:03:58.099428 sshd-session[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:58.108847 systemd-logind[1915]: New session 2 of user core.
Feb 13 19:03:58.115774 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 19:03:58.240624 sshd[2192]: Connection closed by 147.75.109.163 port 47568
Feb 13 19:03:58.241672 sshd-session[2190]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:58.254032 systemd[1]: sshd@1-172.31.22.68:22-147.75.109.163:47568.service: Deactivated successfully.
Feb 13 19:03:58.257626 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 19:03:58.261089 systemd-logind[1915]: Session 2 logged out. Waiting for processes to exit.
Feb 13 19:03:58.280183 systemd[1]: Started sshd@2-172.31.22.68:22-147.75.109.163:47572.service - OpenSSH per-connection server daemon (147.75.109.163:47572).
Feb 13 19:03:58.282815 systemd-logind[1915]: Removed session 2.
Feb 13 19:03:58.471565 sshd[2197]: Accepted publickey for core from 147.75.109.163 port 47572 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:03:58.474843 sshd-session[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:58.484706 systemd-logind[1915]: New session 3 of user core.
Feb 13 19:03:58.494780 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 19:03:58.614607 sshd[2199]: Connection closed by 147.75.109.163 port 47572
Feb 13 19:03:58.615817 sshd-session[2197]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:58.623353 systemd-logind[1915]: Session 3 logged out. Waiting for processes to exit.
Feb 13 19:03:58.625496 systemd[1]: sshd@2-172.31.22.68:22-147.75.109.163:47572.service: Deactivated successfully.
Feb 13 19:03:58.629610 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 19:03:58.631484 systemd-logind[1915]: Removed session 3.
Feb 13 19:03:58.654061 systemd[1]: Started sshd@3-172.31.22.68:22-147.75.109.163:47586.service - OpenSSH per-connection server daemon (147.75.109.163:47586).
Feb 13 19:03:58.844548 sshd[2204]: Accepted publickey for core from 147.75.109.163 port 47586 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:03:58.848110 sshd-session[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:58.857755 systemd-logind[1915]: New session 4 of user core.
Feb 13 19:03:58.868876 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 19:03:59.001559 sshd[2206]: Connection closed by 147.75.109.163 port 47586
Feb 13 19:03:59.000818 sshd-session[2204]: pam_unix(sshd:session): session closed for user core
Feb 13 19:03:59.008398 systemd[1]: sshd@3-172.31.22.68:22-147.75.109.163:47586.service: Deactivated successfully.
Feb 13 19:03:59.009071 systemd-logind[1915]: Session 4 logged out. Waiting for processes to exit.
Feb 13 19:03:59.012801 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 19:03:59.017618 systemd-logind[1915]: Removed session 4.
Feb 13 19:03:59.044042 systemd[1]: Started sshd@4-172.31.22.68:22-147.75.109.163:47602.service - OpenSSH per-connection server daemon (147.75.109.163:47602).
Feb 13 19:03:59.221464 sshd[2211]: Accepted publickey for core from 147.75.109.163 port 47602 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:03:59.223535 sshd-session[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:03:59.231429 systemd-logind[1915]: New session 5 of user core.
Feb 13 19:03:59.244034 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 19:03:59.360484 sudo[2214]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 19:03:59.361175 sudo[2214]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:04:00.082010 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 19:04:00.093098 (dockerd)[2232]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 19:04:00.592203 systemd-resolved[1849]: Clock change detected. Flushing caches.
Feb 13 19:04:00.821136 dockerd[2232]: time="2025-02-13T19:04:00.820196421Z" level=info msg="Starting up"
Feb 13 19:04:01.029212 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1420026308-merged.mount: Deactivated successfully.
Feb 13 19:04:01.145324 systemd[1]: var-lib-docker-metacopy\x2dcheck177786435-merged.mount: Deactivated successfully.
Feb 13 19:04:01.156843 dockerd[2232]: time="2025-02-13T19:04:01.156621786Z" level=info msg="Loading containers: start."
Feb 13 19:04:01.469140 kernel: Initializing XFRM netlink socket
Feb 13 19:04:01.528878 (udev-worker)[2253]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:04:01.676348 systemd-networkd[1848]: docker0: Link UP
Feb 13 19:04:01.729994 dockerd[2232]: time="2025-02-13T19:04:01.729739881Z" level=info msg="Loading containers: done."
Feb 13 19:04:01.763752 dockerd[2232]: time="2025-02-13T19:04:01.763566669Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 19:04:01.764252 dockerd[2232]: time="2025-02-13T19:04:01.763782033Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Feb 13 19:04:01.764374 dockerd[2232]: time="2025-02-13T19:04:01.764220021Z" level=info msg="Daemon has completed initialization"
Feb 13 19:04:01.857483 dockerd[2232]: time="2025-02-13T19:04:01.856631266Z" level=info msg="API listen on /run/docker.sock"
Feb 13 19:04:01.857307 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 19:04:02.019817 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1251870522-merged.mount: Deactivated successfully.
Feb 13 19:04:03.045715 containerd[1943]: time="2025-02-13T19:04:03.045623792Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\""
Feb 13 19:04:03.662942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1167519605.mount: Deactivated successfully.
Feb 13 19:04:04.989845 containerd[1943]: time="2025-02-13T19:04:04.989728273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:04.992321 containerd[1943]: time="2025-02-13T19:04:04.992209177Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375"
Feb 13 19:04:04.993511 containerd[1943]: time="2025-02-13T19:04:04.993420841Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:05.000831 containerd[1943]: time="2025-02-13T19:04:05.000737457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:05.003591 containerd[1943]: time="2025-02-13T19:04:05.003292473Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 1.957596189s"
Feb 13 19:04:05.003591 containerd[1943]: time="2025-02-13T19:04:05.003351345Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\""
Feb 13 19:04:05.005033 containerd[1943]: time="2025-02-13T19:04:05.004973469Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\""
Feb 13 19:04:06.820679 containerd[1943]: time="2025-02-13T19:04:06.820332254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:06.822144 containerd[1943]: time="2025-02-13T19:04:06.822060662Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773"
Feb 13 19:04:06.823292 containerd[1943]: time="2025-02-13T19:04:06.823210502Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:06.830464 containerd[1943]: time="2025-02-13T19:04:06.830368166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:06.833981 containerd[1943]: time="2025-02-13T19:04:06.832510322Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 1.827294309s"
Feb 13 19:04:06.833981 containerd[1943]: time="2025-02-13T19:04:06.832612286Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\""
Feb 13 19:04:06.833981 containerd[1943]: time="2025-02-13T19:04:06.833640591Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\""
Feb 13 19:04:07.783395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:04:07.796410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:04:08.136636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:04:08.159285 (kubelet)[2487]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:04:08.266455 kubelet[2487]: E0213 19:04:08.266231 2487 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:04:08.275323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:04:08.275711 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:04:08.673357 containerd[1943]: time="2025-02-13T19:04:08.673293400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:08.676534 containerd[1943]: time="2025-02-13T19:04:08.676335784Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:08.676534 containerd[1943]: time="2025-02-13T19:04:08.676439332Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540"
Feb 13 19:04:08.686223 containerd[1943]: time="2025-02-13T19:04:08.686158348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:08.687446 containerd[1943]: time="2025-02-13T19:04:08.686739160Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.853047545s"
Feb 13 19:04:08.687446 containerd[1943]: time="2025-02-13T19:04:08.686829496Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\""
Feb 13 19:04:08.687668 containerd[1943]: time="2025-02-13T19:04:08.687597676Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 19:04:09.963904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269991681.mount: Deactivated successfully.
Feb 13 19:04:10.502918 containerd[1943]: time="2025-02-13T19:04:10.502769945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:10.505266 containerd[1943]: time="2025-02-13T19:04:10.505152065Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256"
Feb 13 19:04:10.506408 containerd[1943]: time="2025-02-13T19:04:10.506311157Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:10.511301 containerd[1943]: time="2025-02-13T19:04:10.511214393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:10.513315 containerd[1943]: time="2025-02-13T19:04:10.512997185Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.825333929s"
Feb 13 19:04:10.513315 containerd[1943]: time="2025-02-13T19:04:10.513109229Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\""
Feb 13 19:04:10.514707 containerd[1943]: time="2025-02-13T19:04:10.514332689Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 19:04:11.045115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount514693287.mount: Deactivated successfully.
Feb 13 19:04:12.094477 containerd[1943]: time="2025-02-13T19:04:12.094414529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:12.097161 containerd[1943]: time="2025-02-13T19:04:12.097097117Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Feb 13 19:04:12.097874 containerd[1943]: time="2025-02-13T19:04:12.097834337Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:12.103429 containerd[1943]: time="2025-02-13T19:04:12.103376177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:12.105742 containerd[1943]: time="2025-02-13T19:04:12.105680213Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.591282664s"
Feb 13 19:04:12.105866 containerd[1943]: time="2025-02-13T19:04:12.105738485Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 19:04:12.106695 containerd[1943]: time="2025-02-13T19:04:12.106533017Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 19:04:12.611285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833066217.mount: Deactivated successfully.
Feb 13 19:04:12.622914 containerd[1943]: time="2025-02-13T19:04:12.622785667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:12.623783 containerd[1943]: time="2025-02-13T19:04:12.623511187Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Feb 13 19:04:12.625943 containerd[1943]: time="2025-02-13T19:04:12.625837867Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:12.633142 containerd[1943]: time="2025-02-13T19:04:12.631487407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:12.633559 containerd[1943]: time="2025-02-13T19:04:12.633498859Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 526.877174ms"
Feb 13 19:04:12.633718 containerd[1943]: time="2025-02-13T19:04:12.633687247Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Feb 13 19:04:12.634540 containerd[1943]: time="2025-02-13T19:04:12.634487623Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Feb 13 19:04:13.239031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1879566178.mount: Deactivated successfully.
Feb 13 19:04:15.547128 containerd[1943]: time="2025-02-13T19:04:15.546371650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:15.548899 containerd[1943]: time="2025-02-13T19:04:15.548813122Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425"
Feb 13 19:04:15.550435 containerd[1943]: time="2025-02-13T19:04:15.550336510Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:15.556798 containerd[1943]: time="2025-02-13T19:04:15.556716214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:15.559628 containerd[1943]: time="2025-02-13T19:04:15.559379914Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.924818107s"
Feb 13 19:04:15.559628 containerd[1943]: time="2025-02-13T19:04:15.559436350Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Feb 13 19:04:18.419834 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 19:04:18.430757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:04:18.749337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:04:18.763681 (kubelet)[2632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:04:18.846094 kubelet[2632]: E0213 19:04:18.844570 2632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:04:18.850235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:04:18.850558 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:04:24.318546 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 19:04:25.285158 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:04:25.294782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:04:25.359032 systemd[1]: Reloading requested from client PID 2649 ('systemctl') (unit session-5.scope)...
Feb 13 19:04:25.359111 systemd[1]: Reloading...
Feb 13 19:04:25.588128 zram_generator::config[2695]: No configuration found.
Feb 13 19:04:25.829948 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:04:25.998509 systemd[1]: Reloading finished in 638 ms.
Feb 13 19:04:26.086033 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:04:26.086235 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:04:26.088120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:26.101859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:04:26.513734 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:04:26.514802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:26.603540 kubelet[2751]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:04:26.603540 kubelet[2751]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:04:26.603540 kubelet[2751]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:04:26.603540 kubelet[2751]: I0213 19:04:26.603373 2751 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:04:27.431477 kubelet[2751]: I0213 19:04:27.431407 2751 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:04:27.431477 kubelet[2751]: I0213 19:04:27.431458 2751 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:04:27.431993 kubelet[2751]: I0213 19:04:27.431938 2751 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:04:27.471926 kubelet[2751]: E0213 19:04:27.471811 2751 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.68:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.68:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:04:27.474527 kubelet[2751]: I0213 19:04:27.474159 2751 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:04:27.491572 kubelet[2751]: E0213 19:04:27.491461 2751 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:04:27.491572 kubelet[2751]: I0213 19:04:27.491556 2751 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:04:27.499266 kubelet[2751]: I0213 19:04:27.499201 2751 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:04:27.499640 kubelet[2751]: I0213 19:04:27.499593 2751 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:04:27.500115 kubelet[2751]: I0213 19:04:27.499981 2751 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:04:27.500550 kubelet[2751]: I0213 19:04:27.500093 2751 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-68","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":2} Feb 13 19:04:27.500782 kubelet[2751]: I0213 19:04:27.500571 2751 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:04:27.500782 kubelet[2751]: I0213 19:04:27.500596 2751 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:04:27.500890 kubelet[2751]: I0213 19:04:27.500858 2751 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:04:27.504012 kubelet[2751]: I0213 19:04:27.503943 2751 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:04:27.504012 kubelet[2751]: I0213 19:04:27.504003 2751 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:04:27.504286 kubelet[2751]: I0213 19:04:27.504058 2751 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:04:27.504286 kubelet[2751]: I0213 19:04:27.504105 2751 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:04:27.514131 kubelet[2751]: W0213 19:04:27.512685 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-68&limit=500&resourceVersion=0": dial tcp 172.31.22.68:6443: connect: connection refused Feb 13 19:04:27.514131 kubelet[2751]: E0213 19:04:27.512828 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-68&limit=500&resourceVersion=0\": dial tcp 172.31.22.68:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:04:27.514972 kubelet[2751]: W0213 19:04:27.514849 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.68:6443: connect: connection refused Feb 13 
19:04:27.515212 kubelet[2751]: E0213 19:04:27.514976 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.68:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:04:27.515276 kubelet[2751]: I0213 19:04:27.515234 2751 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:04:27.518911 kubelet[2751]: I0213 19:04:27.518605 2751 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:04:27.520228 kubelet[2751]: W0213 19:04:27.519994 2751 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:04:27.522439 kubelet[2751]: I0213 19:04:27.522195 2751 server.go:1269] "Started kubelet" Feb 13 19:04:27.522926 kubelet[2751]: I0213 19:04:27.522841 2751 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:04:27.525359 kubelet[2751]: I0213 19:04:27.525298 2751 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:04:27.527586 kubelet[2751]: I0213 19:04:27.527479 2751 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:04:27.528420 kubelet[2751]: I0213 19:04:27.528342 2751 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:04:27.530868 kubelet[2751]: E0213 19:04:27.528716 2751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.68:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.68:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-68.1823d9e70fdfc1c1 default 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-68,UID:ip-172-31-22-68,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-68,},FirstTimestamp:2025-02-13 19:04:27.522138561 +0000 UTC m=+0.998783394,LastTimestamp:2025-02-13 19:04:27.522138561 +0000 UTC m=+0.998783394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-68,}" Feb 13 19:04:27.533165 kubelet[2751]: I0213 19:04:27.532746 2751 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:04:27.533794 kubelet[2751]: I0213 19:04:27.533746 2751 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:04:27.541579 kubelet[2751]: I0213 19:04:27.541512 2751 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:04:27.542869 kubelet[2751]: I0213 19:04:27.541760 2751 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:04:27.542869 kubelet[2751]: I0213 19:04:27.541879 2751 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:04:27.542869 kubelet[2751]: W0213 19:04:27.542564 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.68:6443: connect: connection refused Feb 13 19:04:27.542869 kubelet[2751]: E0213 19:04:27.542647 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.68:6443: connect: connection refused" logger="UnhandledError" Feb 13 
19:04:27.544440 kubelet[2751]: I0213 19:04:27.544366 2751 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:04:27.544601 kubelet[2751]: I0213 19:04:27.544557 2751 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:04:27.547415 kubelet[2751]: I0213 19:04:27.546850 2751 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:04:27.548147 kubelet[2751]: E0213 19:04:27.547984 2751 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-68\" not found" Feb 13 19:04:27.550224 kubelet[2751]: E0213 19:04:27.550148 2751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-68?timeout=10s\": dial tcp 172.31.22.68:6443: connect: connection refused" interval="200ms" Feb 13 19:04:27.552192 kubelet[2751]: E0213 19:04:27.552151 2751 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:04:27.595096 kubelet[2751]: I0213 19:04:27.595025 2751 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:04:27.595784 kubelet[2751]: I0213 19:04:27.595374 2751 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:04:27.595784 kubelet[2751]: I0213 19:04:27.595413 2751 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:04:27.600870 kubelet[2751]: I0213 19:04:27.600754 2751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:04:27.603392 kubelet[2751]: I0213 19:04:27.603348 2751 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:04:27.603593 kubelet[2751]: I0213 19:04:27.603571 2751 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:04:27.604618 kubelet[2751]: I0213 19:04:27.604154 2751 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:04:27.604618 kubelet[2751]: E0213 19:04:27.604237 2751 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:04:27.604906 kubelet[2751]: I0213 19:04:27.604881 2751 policy_none.go:49] "None policy: Start" Feb 13 19:04:27.606440 kubelet[2751]: W0213 19:04:27.605919 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.68:6443: connect: connection refused Feb 13 19:04:27.606440 kubelet[2751]: E0213 19:04:27.606011 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.68:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:04:27.608031 kubelet[2751]: I0213 19:04:27.607990 2751 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:04:27.608720 kubelet[2751]: I0213 19:04:27.608365 2751 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:04:27.624220 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:04:27.646727 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Feb 13 19:04:27.648638 kubelet[2751]: E0213 19:04:27.648602 2751 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-68\" not found" Feb 13 19:04:27.654578 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:04:27.667119 kubelet[2751]: I0213 19:04:27.666313 2751 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:04:27.667119 kubelet[2751]: I0213 19:04:27.666884 2751 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:04:27.667119 kubelet[2751]: I0213 19:04:27.666909 2751 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:04:27.667400 kubelet[2751]: I0213 19:04:27.667367 2751 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:04:27.672839 kubelet[2751]: E0213 19:04:27.672779 2751 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-68\" not found" Feb 13 19:04:27.724803 systemd[1]: Created slice kubepods-burstable-podc5e9af9db0b5e08a4cdc285786466390.slice - libcontainer container kubepods-burstable-podc5e9af9db0b5e08a4cdc285786466390.slice. 
Feb 13 19:04:27.745485 kubelet[2751]: I0213 19:04:27.744084 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/519c2c09f5ed5daa193bc92b5718e2ee-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-68\" (UID: \"519c2c09f5ed5daa193bc92b5718e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-22-68" Feb 13 19:04:27.745485 kubelet[2751]: I0213 19:04:27.744180 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/519c2c09f5ed5daa193bc92b5718e2ee-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-68\" (UID: \"519c2c09f5ed5daa193bc92b5718e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-22-68" Feb 13 19:04:27.745485 kubelet[2751]: I0213 19:04:27.744242 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/519c2c09f5ed5daa193bc92b5718e2ee-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-68\" (UID: \"519c2c09f5ed5daa193bc92b5718e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-22-68" Feb 13 19:04:27.745485 kubelet[2751]: I0213 19:04:27.744286 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b7a035521dc8bbfc41191382036dd7e-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-68\" (UID: \"8b7a035521dc8bbfc41191382036dd7e\") " pod="kube-system/kube-scheduler-ip-172-31-22-68" Feb 13 19:04:27.745485 kubelet[2751]: I0213 19:04:27.744326 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/519c2c09f5ed5daa193bc92b5718e2ee-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-68\" (UID: \"519c2c09f5ed5daa193bc92b5718e2ee\") " 
pod="kube-system/kube-controller-manager-ip-172-31-22-68" Feb 13 19:04:27.745856 kubelet[2751]: I0213 19:04:27.744375 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/519c2c09f5ed5daa193bc92b5718e2ee-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-68\" (UID: \"519c2c09f5ed5daa193bc92b5718e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-22-68" Feb 13 19:04:27.745856 kubelet[2751]: I0213 19:04:27.744422 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5e9af9db0b5e08a4cdc285786466390-ca-certs\") pod \"kube-apiserver-ip-172-31-22-68\" (UID: \"c5e9af9db0b5e08a4cdc285786466390\") " pod="kube-system/kube-apiserver-ip-172-31-22-68" Feb 13 19:04:27.745856 kubelet[2751]: I0213 19:04:27.744458 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5e9af9db0b5e08a4cdc285786466390-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-68\" (UID: \"c5e9af9db0b5e08a4cdc285786466390\") " pod="kube-system/kube-apiserver-ip-172-31-22-68" Feb 13 19:04:27.745856 kubelet[2751]: I0213 19:04:27.744534 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5e9af9db0b5e08a4cdc285786466390-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-68\" (UID: \"c5e9af9db0b5e08a4cdc285786466390\") " pod="kube-system/kube-apiserver-ip-172-31-22-68" Feb 13 19:04:27.751679 kubelet[2751]: E0213 19:04:27.751359 2751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-68?timeout=10s\": dial tcp 172.31.22.68:6443: 
connect: connection refused" interval="400ms" Feb 13 19:04:27.754527 systemd[1]: Created slice kubepods-burstable-pod519c2c09f5ed5daa193bc92b5718e2ee.slice - libcontainer container kubepods-burstable-pod519c2c09f5ed5daa193bc92b5718e2ee.slice. Feb 13 19:04:27.771918 kubelet[2751]: I0213 19:04:27.771475 2751 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-68" Feb 13 19:04:27.772526 kubelet[2751]: E0213 19:04:27.772440 2751 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.68:6443/api/v1/nodes\": dial tcp 172.31.22.68:6443: connect: connection refused" node="ip-172-31-22-68" Feb 13 19:04:27.775446 systemd[1]: Created slice kubepods-burstable-pod8b7a035521dc8bbfc41191382036dd7e.slice - libcontainer container kubepods-burstable-pod8b7a035521dc8bbfc41191382036dd7e.slice. Feb 13 19:04:27.976795 kubelet[2751]: I0213 19:04:27.976733 2751 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-68" Feb 13 19:04:27.977835 kubelet[2751]: E0213 19:04:27.977739 2751 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.68:6443/api/v1/nodes\": dial tcp 172.31.22.68:6443: connect: connection refused" node="ip-172-31-22-68" Feb 13 19:04:28.047979 containerd[1943]: time="2025-02-13T19:04:28.047826596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-68,Uid:c5e9af9db0b5e08a4cdc285786466390,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:28.067727 containerd[1943]: time="2025-02-13T19:04:28.067630112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-68,Uid:519c2c09f5ed5daa193bc92b5718e2ee,Namespace:kube-system,Attempt:0,}" Feb 13 19:04:28.081996 containerd[1943]: time="2025-02-13T19:04:28.081634232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-68,Uid:8b7a035521dc8bbfc41191382036dd7e,Namespace:kube-system,Attempt:0,}" 
Feb 13 19:04:28.152682 kubelet[2751]: E0213 19:04:28.152616 2751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-68?timeout=10s\": dial tcp 172.31.22.68:6443: connect: connection refused" interval="800ms" Feb 13 19:04:28.337396 kubelet[2751]: W0213 19:04:28.337186 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-68&limit=500&resourceVersion=0": dial tcp 172.31.22.68:6443: connect: connection refused Feb 13 19:04:28.337396 kubelet[2751]: E0213 19:04:28.337295 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-68&limit=500&resourceVersion=0\": dial tcp 172.31.22.68:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:04:28.381144 kubelet[2751]: I0213 19:04:28.380471 2751 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-68" Feb 13 19:04:28.381144 kubelet[2751]: E0213 19:04:28.380963 2751 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.68:6443/api/v1/nodes\": dial tcp 172.31.22.68:6443: connect: connection refused" node="ip-172-31-22-68" Feb 13 19:04:28.600721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount663555607.mount: Deactivated successfully. 
Feb 13 19:04:28.617667 containerd[1943]: time="2025-02-13T19:04:28.617598095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:04:28.626146 containerd[1943]: time="2025-02-13T19:04:28.625845911Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 19:04:28.629148 containerd[1943]: time="2025-02-13T19:04:28.628211495Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:04:28.631432 containerd[1943]: time="2025-02-13T19:04:28.631342007Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:04:28.635259 containerd[1943]: time="2025-02-13T19:04:28.635165519Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:04:28.637568 containerd[1943]: time="2025-02-13T19:04:28.637444967Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:04:28.639643 containerd[1943]: time="2025-02-13T19:04:28.639500039Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:04:28.641853 containerd[1943]: time="2025-02-13T19:04:28.641731079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:04:28.644361 
containerd[1943]: time="2025-02-13T19:04:28.643965227Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 596.026455ms" Feb 13 19:04:28.651987 containerd[1943]: time="2025-02-13T19:04:28.651906971Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 570.159399ms" Feb 13 19:04:28.662823 containerd[1943]: time="2025-02-13T19:04:28.662636939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 594.881283ms" Feb 13 19:04:28.669895 kubelet[2751]: W0213 19:04:28.669707 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.68:6443: connect: connection refused Feb 13 19:04:28.670668 kubelet[2751]: E0213 19:04:28.669941 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.68:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:04:28.898253 containerd[1943]: time="2025-02-13T19:04:28.896234748Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:28.898253 containerd[1943]: time="2025-02-13T19:04:28.897293580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:28.898253 containerd[1943]: time="2025-02-13T19:04:28.898170888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:28.898940 containerd[1943]: time="2025-02-13T19:04:28.898346628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:28.900131 containerd[1943]: time="2025-02-13T19:04:28.899483052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:28.900131 containerd[1943]: time="2025-02-13T19:04:28.899625420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:28.900131 containerd[1943]: time="2025-02-13T19:04:28.899655972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:28.900131 containerd[1943]: time="2025-02-13T19:04:28.899834040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:28.911573 containerd[1943]: time="2025-02-13T19:04:28.910984320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:04:28.911573 containerd[1943]: time="2025-02-13T19:04:28.911247372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:04:28.911573 containerd[1943]: time="2025-02-13T19:04:28.911289504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:28.912201 containerd[1943]: time="2025-02-13T19:04:28.911528424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:04:28.920795 kubelet[2751]: W0213 19:04:28.920689 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.68:6443: connect: connection refused Feb 13 19:04:28.920795 kubelet[2751]: E0213 19:04:28.920795 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.68:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:04:28.954455 kubelet[2751]: E0213 19:04:28.954360 2751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-68?timeout=10s\": dial tcp 172.31.22.68:6443: connect: connection refused" interval="1.6s" Feb 13 19:04:28.965531 systemd[1]: Started cri-containerd-32979b379a58cddd0b75e10132743f258f56da7c189de7a50a5eefc5449b7a6c.scope - libcontainer container 32979b379a58cddd0b75e10132743f258f56da7c189de7a50a5eefc5449b7a6c. Feb 13 19:04:28.973502 systemd[1]: Started cri-containerd-7c8c90b644726ec71bd4343d106e3e4420e2a22aedf450dacaf12b85e6db3e7f.scope - libcontainer container 7c8c90b644726ec71bd4343d106e3e4420e2a22aedf450dacaf12b85e6db3e7f. 
Feb 13 19:04:28.989839 systemd[1]: Started cri-containerd-48e1df3c603239631678348c6e8086bfee8eacc8478b09bf767baffae595d78d.scope - libcontainer container 48e1df3c603239631678348c6e8086bfee8eacc8478b09bf767baffae595d78d. Feb 13 19:04:29.083764 containerd[1943]: time="2025-02-13T19:04:29.082030521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-68,Uid:8b7a035521dc8bbfc41191382036dd7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c8c90b644726ec71bd4343d106e3e4420e2a22aedf450dacaf12b85e6db3e7f\"" Feb 13 19:04:29.099040 containerd[1943]: time="2025-02-13T19:04:29.098440941Z" level=info msg="CreateContainer within sandbox \"7c8c90b644726ec71bd4343d106e3e4420e2a22aedf450dacaf12b85e6db3e7f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:04:29.125680 kubelet[2751]: W0213 19:04:29.125362 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.68:6443: connect: connection refused Feb 13 19:04:29.126198 kubelet[2751]: E0213 19:04:29.125835 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.68:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:04:29.141672 containerd[1943]: time="2025-02-13T19:04:29.141508101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-68,Uid:519c2c09f5ed5daa193bc92b5718e2ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"32979b379a58cddd0b75e10132743f258f56da7c189de7a50a5eefc5449b7a6c\"" Feb 13 19:04:29.154406 containerd[1943]: time="2025-02-13T19:04:29.151767585Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-68,Uid:c5e9af9db0b5e08a4cdc285786466390,Namespace:kube-system,Attempt:0,} returns sandbox id \"48e1df3c603239631678348c6e8086bfee8eacc8478b09bf767baffae595d78d\"" Feb 13 19:04:29.155863 containerd[1943]: time="2025-02-13T19:04:29.155479221Z" level=info msg="CreateContainer within sandbox \"32979b379a58cddd0b75e10132743f258f56da7c189de7a50a5eefc5449b7a6c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:04:29.158298 containerd[1943]: time="2025-02-13T19:04:29.158209845Z" level=info msg="CreateContainer within sandbox \"48e1df3c603239631678348c6e8086bfee8eacc8478b09bf767baffae595d78d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:04:29.171017 containerd[1943]: time="2025-02-13T19:04:29.170709933Z" level=info msg="CreateContainer within sandbox \"7c8c90b644726ec71bd4343d106e3e4420e2a22aedf450dacaf12b85e6db3e7f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3a360d3ff8e54a12a68d0e551d92fdcb44dbf071b4a788f249b405736dd24601\"" Feb 13 19:04:29.171810 containerd[1943]: time="2025-02-13T19:04:29.171754329Z" level=info msg="StartContainer for \"3a360d3ff8e54a12a68d0e551d92fdcb44dbf071b4a788f249b405736dd24601\"" Feb 13 19:04:29.194205 kubelet[2751]: I0213 19:04:29.193584 2751 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-68" Feb 13 19:04:29.194798 kubelet[2751]: E0213 19:04:29.194736 2751 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.68:6443/api/v1/nodes\": dial tcp 172.31.22.68:6443: connect: connection refused" node="ip-172-31-22-68" Feb 13 19:04:29.230701 containerd[1943]: time="2025-02-13T19:04:29.229727698Z" level=info msg="CreateContainer within sandbox \"48e1df3c603239631678348c6e8086bfee8eacc8478b09bf767baffae595d78d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"381c1c113e59decb33ee9150f5eacba149b2528ce6b7952a04f8a427bbf5b094\"" Feb 13 19:04:29.233248 containerd[1943]: time="2025-02-13T19:04:29.232413478Z" level=info msg="StartContainer for \"381c1c113e59decb33ee9150f5eacba149b2528ce6b7952a04f8a427bbf5b094\"" Feb 13 19:04:29.233248 containerd[1943]: time="2025-02-13T19:04:29.233033998Z" level=info msg="CreateContainer within sandbox \"32979b379a58cddd0b75e10132743f258f56da7c189de7a50a5eefc5449b7a6c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"809a5a30b892184210ae88a708ea9bf6e11572e4befae4fd3c0daf66457d7e3a\"" Feb 13 19:04:29.234385 containerd[1943]: time="2025-02-13T19:04:29.234305050Z" level=info msg="StartContainer for \"809a5a30b892184210ae88a708ea9bf6e11572e4befae4fd3c0daf66457d7e3a\"" Feb 13 19:04:29.249392 systemd[1]: Started cri-containerd-3a360d3ff8e54a12a68d0e551d92fdcb44dbf071b4a788f249b405736dd24601.scope - libcontainer container 3a360d3ff8e54a12a68d0e551d92fdcb44dbf071b4a788f249b405736dd24601. Feb 13 19:04:29.342965 systemd[1]: Started cri-containerd-809a5a30b892184210ae88a708ea9bf6e11572e4befae4fd3c0daf66457d7e3a.scope - libcontainer container 809a5a30b892184210ae88a708ea9bf6e11572e4befae4fd3c0daf66457d7e3a. Feb 13 19:04:29.380504 systemd[1]: Started cri-containerd-381c1c113e59decb33ee9150f5eacba149b2528ce6b7952a04f8a427bbf5b094.scope - libcontainer container 381c1c113e59decb33ee9150f5eacba149b2528ce6b7952a04f8a427bbf5b094. 
Feb 13 19:04:29.398492 containerd[1943]: time="2025-02-13T19:04:29.397977971Z" level=info msg="StartContainer for \"3a360d3ff8e54a12a68d0e551d92fdcb44dbf071b4a788f249b405736dd24601\" returns successfully" Feb 13 19:04:29.471424 containerd[1943]: time="2025-02-13T19:04:29.471337331Z" level=info msg="StartContainer for \"809a5a30b892184210ae88a708ea9bf6e11572e4befae4fd3c0daf66457d7e3a\" returns successfully" Feb 13 19:04:29.512707 containerd[1943]: time="2025-02-13T19:04:29.512504519Z" level=info msg="StartContainer for \"381c1c113e59decb33ee9150f5eacba149b2528ce6b7952a04f8a427bbf5b094\" returns successfully" Feb 13 19:04:29.662636 kubelet[2751]: E0213 19:04:29.662571 2751 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.68:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.68:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:04:30.800635 kubelet[2751]: I0213 19:04:30.799055 2751 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-68" Feb 13 19:04:34.165278 kubelet[2751]: E0213 19:04:34.165171 2751 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-68\" not found" node="ip-172-31-22-68" Feb 13 19:04:34.253659 kubelet[2751]: I0213 19:04:34.253529 2751 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-22-68" Feb 13 19:04:34.253659 kubelet[2751]: E0213 19:04:34.253603 2751 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-22-68\": node \"ip-172-31-22-68\" not found" Feb 13 19:04:34.518486 kubelet[2751]: I0213 19:04:34.518370 2751 apiserver.go:52] "Watching apiserver" Feb 13 19:04:34.542674 kubelet[2751]: I0213 19:04:34.542606 2751 desired_state_of_world_populator.go:154] "Finished 
populating initial desired state of world" Feb 13 19:04:36.537899 systemd[1]: Reloading requested from client PID 3026 ('systemctl') (unit session-5.scope)... Feb 13 19:04:36.538755 systemd[1]: Reloading... Feb 13 19:04:36.891164 zram_generator::config[3069]: No configuration found. Feb 13 19:04:37.192380 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:04:37.460090 systemd[1]: Reloading finished in 920 ms. Feb 13 19:04:37.562994 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:04:37.581621 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:04:37.582270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:37.582366 systemd[1]: kubelet.service: Consumed 1.823s CPU time, 115.0M memory peak, 0B memory swap peak. Feb 13 19:04:37.594825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:04:37.983435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:04:38.003995 (kubelet)[3125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:04:38.113475 kubelet[3125]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:04:38.113475 kubelet[3125]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:04:38.113475 kubelet[3125]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:04:38.114289 kubelet[3125]: I0213 19:04:38.113594 3125 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:04:38.141529 kubelet[3125]: I0213 19:04:38.141406 3125 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:04:38.141529 kubelet[3125]: I0213 19:04:38.141505 3125 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:04:38.142652 kubelet[3125]: I0213 19:04:38.142331 3125 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:04:38.146238 kubelet[3125]: I0213 19:04:38.146163 3125 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:04:38.154274 kubelet[3125]: I0213 19:04:38.154140 3125 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:04:38.169513 kubelet[3125]: E0213 19:04:38.169422 3125 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:04:38.169513 kubelet[3125]: I0213 19:04:38.169511 3125 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:04:38.180530 kubelet[3125]: I0213 19:04:38.180472 3125 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:04:38.180694 kubelet[3125]: I0213 19:04:38.180679 3125 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:04:38.180948 kubelet[3125]: I0213 19:04:38.180887 3125 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:04:38.182361 kubelet[3125]: I0213 19:04:38.180946 3125 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-68","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":2} Feb 13 19:04:38.182590 kubelet[3125]: I0213 19:04:38.182385 3125 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:04:38.182590 kubelet[3125]: I0213 19:04:38.182413 3125 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:04:38.182590 kubelet[3125]: I0213 19:04:38.182496 3125 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:04:38.182824 kubelet[3125]: I0213 19:04:38.182804 3125 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:04:38.182883 kubelet[3125]: I0213 19:04:38.182840 3125 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:04:38.182883 kubelet[3125]: I0213 19:04:38.182876 3125 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:04:38.182976 kubelet[3125]: I0213 19:04:38.182896 3125 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:04:38.201176 kubelet[3125]: I0213 19:04:38.201101 3125 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:04:38.202419 kubelet[3125]: I0213 19:04:38.202133 3125 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:04:38.203092 kubelet[3125]: I0213 19:04:38.203003 3125 server.go:1269] "Started kubelet" Feb 13 19:04:38.227792 kubelet[3125]: I0213 19:04:38.227146 3125 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:04:38.231351 kubelet[3125]: I0213 19:04:38.231268 3125 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:04:38.234366 kubelet[3125]: I0213 19:04:38.234217 3125 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:04:38.239565 kubelet[3125]: I0213 19:04:38.235434 3125 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 
19:04:38.239565 kubelet[3125]: I0213 19:04:38.238729 3125 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:04:38.247144 kubelet[3125]: E0213 19:04:38.244697 3125 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-68\" not found" Feb 13 19:04:38.251131 kubelet[3125]: I0213 19:04:38.249965 3125 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:04:38.251131 kubelet[3125]: I0213 19:04:38.250456 3125 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:04:38.261275 kubelet[3125]: I0213 19:04:38.261112 3125 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:04:38.261653 kubelet[3125]: I0213 19:04:38.261610 3125 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:04:38.279521 kubelet[3125]: I0213 19:04:38.279433 3125 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:04:38.279769 kubelet[3125]: I0213 19:04:38.279697 3125 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:04:38.319152 kubelet[3125]: I0213 19:04:38.318988 3125 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:04:38.337922 kubelet[3125]: I0213 19:04:38.337859 3125 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:04:38.348190 kubelet[3125]: I0213 19:04:38.347921 3125 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:04:38.348190 kubelet[3125]: I0213 19:04:38.348007 3125 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:04:38.348190 kubelet[3125]: I0213 19:04:38.348042 3125 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:04:38.349430 kubelet[3125]: E0213 19:04:38.348345 3125 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:04:38.356191 kubelet[3125]: E0213 19:04:38.355959 3125 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:04:38.449461 kubelet[3125]: E0213 19:04:38.449358 3125 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:04:38.510719 kubelet[3125]: I0213 19:04:38.510023 3125 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:04:38.510719 kubelet[3125]: I0213 19:04:38.510058 3125 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:04:38.510719 kubelet[3125]: I0213 19:04:38.510131 3125 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:04:38.510719 kubelet[3125]: I0213 19:04:38.510452 3125 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:04:38.510719 kubelet[3125]: I0213 19:04:38.510478 3125 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:04:38.510719 kubelet[3125]: I0213 19:04:38.510523 3125 policy_none.go:49] "None policy: Start" Feb 13 19:04:38.516917 kubelet[3125]: I0213 19:04:38.516841 3125 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:04:38.516917 kubelet[3125]: I0213 19:04:38.516924 3125 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:04:38.518060 kubelet[3125]: I0213 19:04:38.517713 3125 state_mem.go:75] "Updated machine memory state" Feb 13 19:04:38.537350 
kubelet[3125]: I0213 19:04:38.537258 3125 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:04:38.537581 kubelet[3125]: I0213 19:04:38.537549 3125 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:04:38.537652 kubelet[3125]: I0213 19:04:38.537582 3125 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:04:38.539924 kubelet[3125]: I0213 19:04:38.539846 3125 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:04:38.670171 kubelet[3125]: E0213 19:04:38.669670 3125 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-22-68\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-22-68" Feb 13 19:04:38.674521 kubelet[3125]: I0213 19:04:38.673011 3125 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-68" Feb 13 19:04:38.691397 kubelet[3125]: I0213 19:04:38.689857 3125 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-22-68" Feb 13 19:04:38.691397 kubelet[3125]: I0213 19:04:38.689984 3125 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-22-68" Feb 13 19:04:38.759686 kubelet[3125]: I0213 19:04:38.759448 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5e9af9db0b5e08a4cdc285786466390-ca-certs\") pod \"kube-apiserver-ip-172-31-22-68\" (UID: \"c5e9af9db0b5e08a4cdc285786466390\") " pod="kube-system/kube-apiserver-ip-172-31-22-68" Feb 13 19:04:38.759686 kubelet[3125]: I0213 19:04:38.759567 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5e9af9db0b5e08a4cdc285786466390-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-68\" 
(UID: \"c5e9af9db0b5e08a4cdc285786466390\") " pod="kube-system/kube-apiserver-ip-172-31-22-68" Feb 13 19:04:38.759686 kubelet[3125]: I0213 19:04:38.759623 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/519c2c09f5ed5daa193bc92b5718e2ee-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-68\" (UID: \"519c2c09f5ed5daa193bc92b5718e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-22-68" Feb 13 19:04:38.759686 kubelet[3125]: I0213 19:04:38.759664 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/519c2c09f5ed5daa193bc92b5718e2ee-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-68\" (UID: \"519c2c09f5ed5daa193bc92b5718e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-22-68" Feb 13 19:04:38.759686 kubelet[3125]: I0213 19:04:38.759707 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b7a035521dc8bbfc41191382036dd7e-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-68\" (UID: \"8b7a035521dc8bbfc41191382036dd7e\") " pod="kube-system/kube-scheduler-ip-172-31-22-68" Feb 13 19:04:38.760185 kubelet[3125]: I0213 19:04:38.759769 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5e9af9db0b5e08a4cdc285786466390-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-68\" (UID: \"c5e9af9db0b5e08a4cdc285786466390\") " pod="kube-system/kube-apiserver-ip-172-31-22-68" Feb 13 19:04:38.760185 kubelet[3125]: I0213 19:04:38.759808 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/519c2c09f5ed5daa193bc92b5718e2ee-k8s-certs\") pod 
\"kube-controller-manager-ip-172-31-22-68\" (UID: \"519c2c09f5ed5daa193bc92b5718e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-22-68" Feb 13 19:04:38.760185 kubelet[3125]: I0213 19:04:38.759844 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/519c2c09f5ed5daa193bc92b5718e2ee-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-68\" (UID: \"519c2c09f5ed5daa193bc92b5718e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-22-68" Feb 13 19:04:38.760185 kubelet[3125]: I0213 19:04:38.759879 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/519c2c09f5ed5daa193bc92b5718e2ee-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-68\" (UID: \"519c2c09f5ed5daa193bc92b5718e2ee\") " pod="kube-system/kube-controller-manager-ip-172-31-22-68" Feb 13 19:04:38.765413 update_engine[1917]: I20250213 19:04:38.764051 1917 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:04:38.950778 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3181) Feb 13 19:04:39.193024 kubelet[3125]: I0213 19:04:39.188995 3125 apiserver.go:52] "Watching apiserver" Feb 13 19:04:39.250517 kubelet[3125]: I0213 19:04:39.250416 3125 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:04:39.455998 kubelet[3125]: E0213 19:04:39.455452 3125 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-22-68\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-68" Feb 13 19:04:39.575195 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3180) Feb 13 19:04:39.725932 kubelet[3125]: I0213 19:04:39.722686 3125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-68" podStartSLOduration=1.722657818 podStartE2EDuration="1.722657818s" podCreationTimestamp="2025-02-13 19:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:39.618061905 +0000 UTC m=+1.604604633" watchObservedRunningTime="2025-02-13 19:04:39.722657818 +0000 UTC m=+1.709200498" Feb 13 19:04:39.821158 kubelet[3125]: I0213 19:04:39.820114 3125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-68" podStartSLOduration=2.82003405 podStartE2EDuration="2.82003405s" podCreationTimestamp="2025-02-13 19:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:39.725687998 +0000 UTC m=+1.712230666" watchObservedRunningTime="2025-02-13 19:04:39.82003405 +0000 UTC m=+1.806576730" Feb 13 19:04:39.828740 kubelet[3125]: I0213 19:04:39.827512 3125 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-68" podStartSLOduration=1.827359558 podStartE2EDuration="1.827359558s" podCreationTimestamp="2025-02-13 19:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:39.824515702 +0000 UTC m=+1.811058382" watchObservedRunningTime="2025-02-13 19:04:39.827359558 +0000 UTC m=+1.813902274" Feb 13 19:04:40.350149 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3180) Feb 13 19:04:41.061712 sudo[2214]: pam_unix(sudo:session): session closed for user root Feb 13 19:04:41.085161 sshd[2213]: Connection closed by 147.75.109.163 port 47602 Feb 13 19:04:41.086874 sshd-session[2211]: pam_unix(sshd:session): session closed for user core Feb 13 19:04:41.094559 systemd[1]: sshd@4-172.31.22.68:22-147.75.109.163:47602.service: Deactivated successfully. Feb 13 19:04:41.099929 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:04:41.100914 systemd[1]: session-5.scope: Consumed 11.958s CPU time, 153.1M memory peak, 0B memory swap peak. Feb 13 19:04:41.105030 systemd-logind[1915]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:04:41.108227 systemd-logind[1915]: Removed session 5. Feb 13 19:04:41.966171 kubelet[3125]: I0213 19:04:41.966005 3125 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:04:41.967538 kubelet[3125]: I0213 19:04:41.967267 3125 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:04:41.967769 containerd[1943]: time="2025-02-13T19:04:41.966746629Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 19:04:42.663204 systemd[1]: Created slice kubepods-besteffort-pod03dd948f_b736_439e_9e5e_2e1d728d2c9c.slice - libcontainer container kubepods-besteffort-pod03dd948f_b736_439e_9e5e_2e1d728d2c9c.slice. Feb 13 19:04:42.701570 systemd[1]: Created slice kubepods-burstable-pod97d04ddd_ec64_41b9_86b5_66e5b0291266.slice - libcontainer container kubepods-burstable-pod97d04ddd_ec64_41b9_86b5_66e5b0291266.slice. Feb 13 19:04:42.706445 kubelet[3125]: I0213 19:04:42.705429 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03dd948f-b736-439e-9e5e-2e1d728d2c9c-xtables-lock\") pod \"kube-proxy-bvjvn\" (UID: \"03dd948f-b736-439e-9e5e-2e1d728d2c9c\") " pod="kube-system/kube-proxy-bvjvn" Feb 13 19:04:42.706445 kubelet[3125]: I0213 19:04:42.705566 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dr2n\" (UniqueName: \"kubernetes.io/projected/03dd948f-b736-439e-9e5e-2e1d728d2c9c-kube-api-access-9dr2n\") pod \"kube-proxy-bvjvn\" (UID: \"03dd948f-b736-439e-9e5e-2e1d728d2c9c\") " pod="kube-system/kube-proxy-bvjvn" Feb 13 19:04:42.706445 kubelet[3125]: I0213 19:04:42.705694 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/97d04ddd-ec64-41b9-86b5-66e5b0291266-cni-plugin\") pod \"kube-flannel-ds-zflm6\" (UID: \"97d04ddd-ec64-41b9-86b5-66e5b0291266\") " pod="kube-flannel/kube-flannel-ds-zflm6" Feb 13 19:04:42.706445 kubelet[3125]: I0213 19:04:42.705771 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7h6j\" (UniqueName: \"kubernetes.io/projected/97d04ddd-ec64-41b9-86b5-66e5b0291266-kube-api-access-b7h6j\") pod \"kube-flannel-ds-zflm6\" (UID: \"97d04ddd-ec64-41b9-86b5-66e5b0291266\") " pod="kube-flannel/kube-flannel-ds-zflm6" Feb 13 
19:04:42.706445 kubelet[3125]: I0213 19:04:42.705821 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/03dd948f-b736-439e-9e5e-2e1d728d2c9c-kube-proxy\") pod \"kube-proxy-bvjvn\" (UID: \"03dd948f-b736-439e-9e5e-2e1d728d2c9c\") " pod="kube-system/kube-proxy-bvjvn"
Feb 13 19:04:42.706854 kubelet[3125]: I0213 19:04:42.705863 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97d04ddd-ec64-41b9-86b5-66e5b0291266-run\") pod \"kube-flannel-ds-zflm6\" (UID: \"97d04ddd-ec64-41b9-86b5-66e5b0291266\") " pod="kube-flannel/kube-flannel-ds-zflm6"
Feb 13 19:04:42.706854 kubelet[3125]: I0213 19:04:42.705897 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/97d04ddd-ec64-41b9-86b5-66e5b0291266-cni\") pod \"kube-flannel-ds-zflm6\" (UID: \"97d04ddd-ec64-41b9-86b5-66e5b0291266\") " pod="kube-flannel/kube-flannel-ds-zflm6"
Feb 13 19:04:42.706854 kubelet[3125]: I0213 19:04:42.705940 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03dd948f-b736-439e-9e5e-2e1d728d2c9c-lib-modules\") pod \"kube-proxy-bvjvn\" (UID: \"03dd948f-b736-439e-9e5e-2e1d728d2c9c\") " pod="kube-system/kube-proxy-bvjvn"
Feb 13 19:04:42.706854 kubelet[3125]: I0213 19:04:42.705980 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/97d04ddd-ec64-41b9-86b5-66e5b0291266-flannel-cfg\") pod \"kube-flannel-ds-zflm6\" (UID: \"97d04ddd-ec64-41b9-86b5-66e5b0291266\") " pod="kube-flannel/kube-flannel-ds-zflm6"
Feb 13 19:04:42.706854 kubelet[3125]: I0213 19:04:42.706016 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97d04ddd-ec64-41b9-86b5-66e5b0291266-xtables-lock\") pod \"kube-flannel-ds-zflm6\" (UID: \"97d04ddd-ec64-41b9-86b5-66e5b0291266\") " pod="kube-flannel/kube-flannel-ds-zflm6"
Feb 13 19:04:42.835952 kubelet[3125]: E0213 19:04:42.835482 3125 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb 13 19:04:42.835952 kubelet[3125]: E0213 19:04:42.835576 3125 projected.go:194] Error preparing data for projected volume kube-api-access-9dr2n for pod kube-system/kube-proxy-bvjvn: configmap "kube-root-ca.crt" not found
Feb 13 19:04:42.835952 kubelet[3125]: E0213 19:04:42.835738 3125 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/03dd948f-b736-439e-9e5e-2e1d728d2c9c-kube-api-access-9dr2n podName:03dd948f-b736-439e-9e5e-2e1d728d2c9c nodeName:}" failed. No retries permitted until 2025-02-13 19:04:43.335689121 +0000 UTC m=+5.322231813 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9dr2n" (UniqueName: "kubernetes.io/projected/03dd948f-b736-439e-9e5e-2e1d728d2c9c-kube-api-access-9dr2n") pod "kube-proxy-bvjvn" (UID: "03dd948f-b736-439e-9e5e-2e1d728d2c9c") : configmap "kube-root-ca.crt" not found
Feb 13 19:04:43.015897 containerd[1943]: time="2025-02-13T19:04:43.015825046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zflm6,Uid:97d04ddd-ec64-41b9-86b5-66e5b0291266,Namespace:kube-flannel,Attempt:0,}"
Feb 13 19:04:43.072049 containerd[1943]: time="2025-02-13T19:04:43.071591938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:04:43.072049 containerd[1943]: time="2025-02-13T19:04:43.071715515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:04:43.072049 containerd[1943]: time="2025-02-13T19:04:43.071800907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:43.072049 containerd[1943]: time="2025-02-13T19:04:43.072263051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:43.130507 systemd[1]: Started cri-containerd-f83d1f05b59a85819f9a4e89d5a0c82c6181a62a0ae02c93cd9f8a776348d40b.scope - libcontainer container f83d1f05b59a85819f9a4e89d5a0c82c6181a62a0ae02c93cd9f8a776348d40b.
Feb 13 19:04:43.228720 containerd[1943]: time="2025-02-13T19:04:43.228623999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zflm6,Uid:97d04ddd-ec64-41b9-86b5-66e5b0291266,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"f83d1f05b59a85819f9a4e89d5a0c82c6181a62a0ae02c93cd9f8a776348d40b\""
Feb 13 19:04:43.233754 containerd[1943]: time="2025-02-13T19:04:43.233674403Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 19:04:43.581523 containerd[1943]: time="2025-02-13T19:04:43.581352637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bvjvn,Uid:03dd948f-b736-439e-9e5e-2e1d728d2c9c,Namespace:kube-system,Attempt:0,}"
Feb 13 19:04:43.634256 containerd[1943]: time="2025-02-13T19:04:43.633897913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:04:43.635548 containerd[1943]: time="2025-02-13T19:04:43.635364481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:04:43.635548 containerd[1943]: time="2025-02-13T19:04:43.635429773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:43.636221 containerd[1943]: time="2025-02-13T19:04:43.635732029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:04:43.672437 systemd[1]: Started cri-containerd-787585670a7f862cebd54d00bef4b7181767ac5885665db7f4522fbc05966181.scope - libcontainer container 787585670a7f862cebd54d00bef4b7181767ac5885665db7f4522fbc05966181.
Feb 13 19:04:43.729391 containerd[1943]: time="2025-02-13T19:04:43.729226202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bvjvn,Uid:03dd948f-b736-439e-9e5e-2e1d728d2c9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"787585670a7f862cebd54d00bef4b7181767ac5885665db7f4522fbc05966181\""
Feb 13 19:04:43.739129 containerd[1943]: time="2025-02-13T19:04:43.739004102Z" level=info msg="CreateContainer within sandbox \"787585670a7f862cebd54d00bef4b7181767ac5885665db7f4522fbc05966181\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:04:43.794106 containerd[1943]: time="2025-02-13T19:04:43.793992758Z" level=info msg="CreateContainer within sandbox \"787585670a7f862cebd54d00bef4b7181767ac5885665db7f4522fbc05966181\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"56c98fe51b33bed51e3693d86cf2584de705779fddc70fda5bd8b37be9bf5e86\""
Feb 13 19:04:43.796755 containerd[1943]: time="2025-02-13T19:04:43.796395146Z" level=info msg="StartContainer for \"56c98fe51b33bed51e3693d86cf2584de705779fddc70fda5bd8b37be9bf5e86\""
Feb 13 19:04:43.862643 systemd[1]: Started cri-containerd-56c98fe51b33bed51e3693d86cf2584de705779fddc70fda5bd8b37be9bf5e86.scope - libcontainer container 56c98fe51b33bed51e3693d86cf2584de705779fddc70fda5bd8b37be9bf5e86.
Feb 13 19:04:43.965871 containerd[1943]: time="2025-02-13T19:04:43.965327367Z" level=info msg="StartContainer for \"56c98fe51b33bed51e3693d86cf2584de705779fddc70fda5bd8b37be9bf5e86\" returns successfully"
Feb 13 19:04:44.938843 kubelet[3125]: I0213 19:04:44.938678 3125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bvjvn" podStartSLOduration=2.938646952 podStartE2EDuration="2.938646952s" podCreationTimestamp="2025-02-13 19:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:04:44.48496403 +0000 UTC m=+6.471506710" watchObservedRunningTime="2025-02-13 19:04:44.938646952 +0000 UTC m=+6.925189632"
Feb 13 19:04:45.446486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393724262.mount: Deactivated successfully.
Feb 13 19:04:45.538044 containerd[1943]: time="2025-02-13T19:04:45.537946083Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:45.540164 containerd[1943]: time="2025-02-13T19:04:45.540015711Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531"
Feb 13 19:04:45.542901 containerd[1943]: time="2025-02-13T19:04:45.542795511Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:45.549326 containerd[1943]: time="2025-02-13T19:04:45.549215547Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:45.551895 containerd[1943]: time="2025-02-13T19:04:45.551482131Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.317737552s"
Feb 13 19:04:45.551895 containerd[1943]: time="2025-02-13T19:04:45.551559927Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\""
Feb 13 19:04:45.557785 containerd[1943]: time="2025-02-13T19:04:45.557681523Z" level=info msg="CreateContainer within sandbox \"f83d1f05b59a85819f9a4e89d5a0c82c6181a62a0ae02c93cd9f8a776348d40b\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Feb 13 19:04:45.593944 containerd[1943]: time="2025-02-13T19:04:45.593693871Z" level=info msg="CreateContainer within sandbox \"f83d1f05b59a85819f9a4e89d5a0c82c6181a62a0ae02c93cd9f8a776348d40b\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"7f6de4990974485acc18238fe4754d47d5e0d32824eb0f7ca105b4faaa83c8af\""
Feb 13 19:04:45.595127 containerd[1943]: time="2025-02-13T19:04:45.594896463Z" level=info msg="StartContainer for \"7f6de4990974485acc18238fe4754d47d5e0d32824eb0f7ca105b4faaa83c8af\""
Feb 13 19:04:45.664581 systemd[1]: Started cri-containerd-7f6de4990974485acc18238fe4754d47d5e0d32824eb0f7ca105b4faaa83c8af.scope - libcontainer container 7f6de4990974485acc18238fe4754d47d5e0d32824eb0f7ca105b4faaa83c8af.
Feb 13 19:04:45.743212 containerd[1943]: time="2025-02-13T19:04:45.742921324Z" level=info msg="StartContainer for \"7f6de4990974485acc18238fe4754d47d5e0d32824eb0f7ca105b4faaa83c8af\" returns successfully"
Feb 13 19:04:45.744454 systemd[1]: cri-containerd-7f6de4990974485acc18238fe4754d47d5e0d32824eb0f7ca105b4faaa83c8af.scope: Deactivated successfully.
Feb 13 19:04:45.835725 containerd[1943]: time="2025-02-13T19:04:45.835609720Z" level=info msg="shim disconnected" id=7f6de4990974485acc18238fe4754d47d5e0d32824eb0f7ca105b4faaa83c8af namespace=k8s.io
Feb 13 19:04:45.836573 containerd[1943]: time="2025-02-13T19:04:45.836246536Z" level=warning msg="cleaning up after shim disconnected" id=7f6de4990974485acc18238fe4754d47d5e0d32824eb0f7ca105b4faaa83c8af namespace=k8s.io
Feb 13 19:04:45.836573 containerd[1943]: time="2025-02-13T19:04:45.836352016Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:46.260436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f6de4990974485acc18238fe4754d47d5e0d32824eb0f7ca105b4faaa83c8af-rootfs.mount: Deactivated successfully.
Feb 13 19:04:46.479108 containerd[1943]: time="2025-02-13T19:04:46.476868891Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 19:04:48.794343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1680319651.mount: Deactivated successfully.
Feb 13 19:04:50.216161 containerd[1943]: time="2025-02-13T19:04:50.215241450Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:50.218227 containerd[1943]: time="2025-02-13T19:04:50.218107050Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874260"
Feb 13 19:04:50.221279 containerd[1943]: time="2025-02-13T19:04:50.221188494Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:50.230661 containerd[1943]: time="2025-02-13T19:04:50.230482962Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:04:50.233577 containerd[1943]: time="2025-02-13T19:04:50.233508390Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.756548275s"
Feb 13 19:04:50.234147 containerd[1943]: time="2025-02-13T19:04:50.233838018Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
Feb 13 19:04:50.243275 containerd[1943]: time="2025-02-13T19:04:50.243123582Z" level=info msg="CreateContainer within sandbox \"f83d1f05b59a85819f9a4e89d5a0c82c6181a62a0ae02c93cd9f8a776348d40b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 19:04:50.276472 containerd[1943]: time="2025-02-13T19:04:50.276242634Z" level=info msg="CreateContainer within sandbox \"f83d1f05b59a85819f9a4e89d5a0c82c6181a62a0ae02c93cd9f8a776348d40b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4419075c2149bb205bcd237664d21107870c762af8beb4819c411d80df54d97e\""
Feb 13 19:04:50.278263 containerd[1943]: time="2025-02-13T19:04:50.277956990Z" level=info msg="StartContainer for \"4419075c2149bb205bcd237664d21107870c762af8beb4819c411d80df54d97e\""
Feb 13 19:04:50.354047 systemd[1]: Started cri-containerd-4419075c2149bb205bcd237664d21107870c762af8beb4819c411d80df54d97e.scope - libcontainer container 4419075c2149bb205bcd237664d21107870c762af8beb4819c411d80df54d97e.
Feb 13 19:04:50.413555 systemd[1]: cri-containerd-4419075c2149bb205bcd237664d21107870c762af8beb4819c411d80df54d97e.scope: Deactivated successfully.
Feb 13 19:04:50.421694 containerd[1943]: time="2025-02-13T19:04:50.421642387Z" level=info msg="StartContainer for \"4419075c2149bb205bcd237664d21107870c762af8beb4819c411d80df54d97e\" returns successfully"
Feb 13 19:04:50.424242 kubelet[3125]: I0213 19:04:50.423598 3125 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 19:04:50.558389 systemd[1]: Created slice kubepods-burstable-pod8647dcd6_2ff4_4f5c_a0ec_8f1e7fd5b97f.slice - libcontainer container kubepods-burstable-pod8647dcd6_2ff4_4f5c_a0ec_8f1e7fd5b97f.slice.
Feb 13 19:04:50.569184 kubelet[3125]: I0213 19:04:50.568374 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8647dcd6-2ff4-4f5c-a0ec-8f1e7fd5b97f-config-volume\") pod \"coredns-6f6b679f8f-cwgsm\" (UID: \"8647dcd6-2ff4-4f5c-a0ec-8f1e7fd5b97f\") " pod="kube-system/coredns-6f6b679f8f-cwgsm"
Feb 13 19:04:50.569184 kubelet[3125]: I0213 19:04:50.568461 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7xcs\" (UniqueName: \"kubernetes.io/projected/a18ae8c7-e925-47e9-9375-4bb02d5d42e0-kube-api-access-l7xcs\") pod \"coredns-6f6b679f8f-vnt2x\" (UID: \"a18ae8c7-e925-47e9-9375-4bb02d5d42e0\") " pod="kube-system/coredns-6f6b679f8f-vnt2x"
Feb 13 19:04:50.569184 kubelet[3125]: I0213 19:04:50.568514 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a18ae8c7-e925-47e9-9375-4bb02d5d42e0-config-volume\") pod \"coredns-6f6b679f8f-vnt2x\" (UID: \"a18ae8c7-e925-47e9-9375-4bb02d5d42e0\") " pod="kube-system/coredns-6f6b679f8f-vnt2x"
Feb 13 19:04:50.569184 kubelet[3125]: I0213 19:04:50.568613 3125 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj5cq\" (UniqueName: \"kubernetes.io/projected/8647dcd6-2ff4-4f5c-a0ec-8f1e7fd5b97f-kube-api-access-vj5cq\") pod \"coredns-6f6b679f8f-cwgsm\" (UID: \"8647dcd6-2ff4-4f5c-a0ec-8f1e7fd5b97f\") " pod="kube-system/coredns-6f6b679f8f-cwgsm"
Feb 13 19:04:50.597042 systemd[1]: Created slice kubepods-burstable-poda18ae8c7_e925_47e9_9375_4bb02d5d42e0.slice - libcontainer container kubepods-burstable-poda18ae8c7_e925_47e9_9375_4bb02d5d42e0.slice.
Feb 13 19:04:50.649141 containerd[1943]: time="2025-02-13T19:04:50.648538844Z" level=info msg="shim disconnected" id=4419075c2149bb205bcd237664d21107870c762af8beb4819c411d80df54d97e namespace=k8s.io
Feb 13 19:04:50.649681 containerd[1943]: time="2025-02-13T19:04:50.649552556Z" level=warning msg="cleaning up after shim disconnected" id=4419075c2149bb205bcd237664d21107870c762af8beb4819c411d80df54d97e namespace=k8s.io
Feb 13 19:04:50.650244 containerd[1943]: time="2025-02-13T19:04:50.649890308Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:04:50.884254 containerd[1943]: time="2025-02-13T19:04:50.884104869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cwgsm,Uid:8647dcd6-2ff4-4f5c-a0ec-8f1e7fd5b97f,Namespace:kube-system,Attempt:0,}"
Feb 13 19:04:50.910049 containerd[1943]: time="2025-02-13T19:04:50.909492549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vnt2x,Uid:a18ae8c7-e925-47e9-9375-4bb02d5d42e0,Namespace:kube-system,Attempt:0,}"
Feb 13 19:04:50.940299 containerd[1943]: time="2025-02-13T19:04:50.939922954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cwgsm,Uid:8647dcd6-2ff4-4f5c-a0ec-8f1e7fd5b97f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"abcdb93dcca460d030a0a2499c5fce371cd5e3ce92c0c8bb01059ffde9b7eeef\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 19:04:50.941392 kubelet[3125]: E0213 19:04:50.940977 3125 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abcdb93dcca460d030a0a2499c5fce371cd5e3ce92c0c8bb01059ffde9b7eeef\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 19:04:50.941392 kubelet[3125]: E0213 19:04:50.941138 3125 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abcdb93dcca460d030a0a2499c5fce371cd5e3ce92c0c8bb01059ffde9b7eeef\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-cwgsm"
Feb 13 19:04:50.941392 kubelet[3125]: E0213 19:04:50.941179 3125 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abcdb93dcca460d030a0a2499c5fce371cd5e3ce92c0c8bb01059ffde9b7eeef\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-cwgsm"
Feb 13 19:04:50.941392 kubelet[3125]: E0213 19:04:50.941264 3125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-cwgsm_kube-system(8647dcd6-2ff4-4f5c-a0ec-8f1e7fd5b97f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-cwgsm_kube-system(8647dcd6-2ff4-4f5c-a0ec-8f1e7fd5b97f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abcdb93dcca460d030a0a2499c5fce371cd5e3ce92c0c8bb01059ffde9b7eeef\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-cwgsm" podUID="8647dcd6-2ff4-4f5c-a0ec-8f1e7fd5b97f"
Feb 13 19:04:50.963826 containerd[1943]: time="2025-02-13T19:04:50.963715882Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vnt2x,Uid:a18ae8c7-e925-47e9-9375-4bb02d5d42e0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"238cc41b18322b46aac500513c8db37b23a63f62d4af41a5433d72896d69be4a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 19:04:50.964514 kubelet[3125]: E0213 19:04:50.964403 3125 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"238cc41b18322b46aac500513c8db37b23a63f62d4af41a5433d72896d69be4a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 19:04:50.964672 kubelet[3125]: E0213 19:04:50.964571 3125 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"238cc41b18322b46aac500513c8db37b23a63f62d4af41a5433d72896d69be4a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-vnt2x"
Feb 13 19:04:50.964672 kubelet[3125]: E0213 19:04:50.964654 3125 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"238cc41b18322b46aac500513c8db37b23a63f62d4af41a5433d72896d69be4a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-vnt2x"
Feb 13 19:04:50.964887 kubelet[3125]: E0213 19:04:50.964800 3125 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-vnt2x_kube-system(a18ae8c7-e925-47e9-9375-4bb02d5d42e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-vnt2x_kube-system(a18ae8c7-e925-47e9-9375-4bb02d5d42e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"238cc41b18322b46aac500513c8db37b23a63f62d4af41a5433d72896d69be4a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-vnt2x" podUID="a18ae8c7-e925-47e9-9375-4bb02d5d42e0"
Feb 13 19:04:51.265293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4419075c2149bb205bcd237664d21107870c762af8beb4819c411d80df54d97e-rootfs.mount: Deactivated successfully.
Feb 13 19:04:51.544228 containerd[1943]: time="2025-02-13T19:04:51.543981597Z" level=info msg="CreateContainer within sandbox \"f83d1f05b59a85819f9a4e89d5a0c82c6181a62a0ae02c93cd9f8a776348d40b\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Feb 13 19:04:51.583496 containerd[1943]: time="2025-02-13T19:04:51.583403565Z" level=info msg="CreateContainer within sandbox \"f83d1f05b59a85819f9a4e89d5a0c82c6181a62a0ae02c93cd9f8a776348d40b\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"0f565c37daf6dd5fa9242677cdba4fab25f5cdd9f5864f191ea4563b94324fd3\""
Feb 13 19:04:51.584762 containerd[1943]: time="2025-02-13T19:04:51.584691153Z" level=info msg="StartContainer for \"0f565c37daf6dd5fa9242677cdba4fab25f5cdd9f5864f191ea4563b94324fd3\""
Feb 13 19:04:51.646448 systemd[1]: Started cri-containerd-0f565c37daf6dd5fa9242677cdba4fab25f5cdd9f5864f191ea4563b94324fd3.scope - libcontainer container 0f565c37daf6dd5fa9242677cdba4fab25f5cdd9f5864f191ea4563b94324fd3.
Feb 13 19:04:51.704804 containerd[1943]: time="2025-02-13T19:04:51.704716953Z" level=info msg="StartContainer for \"0f565c37daf6dd5fa9242677cdba4fab25f5cdd9f5864f191ea4563b94324fd3\" returns successfully"
Feb 13 19:04:52.795623 (udev-worker)[3934]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:04:52.820922 systemd-networkd[1848]: flannel.1: Link UP
Feb 13 19:04:52.820945 systemd-networkd[1848]: flannel.1: Gained carrier
Feb 13 19:04:53.891868 systemd-networkd[1848]: flannel.1: Gained IPv6LL
Feb 13 19:04:56.591658 ntpd[1908]: Listen normally on 7 flannel.1 192.168.0.0:123
Feb 13 19:04:56.591800 ntpd[1908]: Listen normally on 8 flannel.1 [fe80::888e:83ff:fe3c:5f75%4]:123
Feb 13 19:04:56.592429 ntpd[1908]: 13 Feb 19:04:56 ntpd[1908]: Listen normally on 7 flannel.1 192.168.0.0:123
Feb 13 19:04:56.592429 ntpd[1908]: 13 Feb 19:04:56 ntpd[1908]: Listen normally on 8 flannel.1 [fe80::888e:83ff:fe3c:5f75%4]:123
Feb 13 19:05:02.352447 containerd[1943]: time="2025-02-13T19:05:02.351258438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vnt2x,Uid:a18ae8c7-e925-47e9-9375-4bb02d5d42e0,Namespace:kube-system,Attempt:0,}"
Feb 13 19:05:02.403338 systemd-networkd[1848]: cni0: Link UP
Feb 13 19:05:02.403363 systemd-networkd[1848]: cni0: Gained carrier
Feb 13 19:05:02.417025 (udev-worker)[4055]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:05:02.423182 kernel: cni0: port 1(veth6e7e519f) entered blocking state
Feb 13 19:05:02.423257 kernel: cni0: port 1(veth6e7e519f) entered disabled state
Feb 13 19:05:02.418194 systemd-networkd[1848]: veth6e7e519f: Link UP
Feb 13 19:05:02.419437 systemd-networkd[1848]: cni0: Lost carrier
Feb 13 19:05:02.424557 kernel: veth6e7e519f: entered allmulticast mode
Feb 13 19:05:02.426590 kernel: veth6e7e519f: entered promiscuous mode
Feb 13 19:05:02.431505 (udev-worker)[4057]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:05:02.441194 kernel: cni0: port 1(veth6e7e519f) entered blocking state
Feb 13 19:05:02.441320 kernel: cni0: port 1(veth6e7e519f) entered forwarding state
Feb 13 19:05:02.440882 systemd-networkd[1848]: veth6e7e519f: Gained carrier
Feb 13 19:05:02.447589 systemd-networkd[1848]: cni0: Gained carrier
Feb 13 19:05:02.451267 containerd[1943]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000948e8), "name":"cbr0", "type":"bridge"}
Feb 13 19:05:02.451267 containerd[1943]: delegateAdd: netconf sent to delegate plugin:
Feb 13 19:05:02.503204 containerd[1943]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T19:05:02.502974727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:05:02.504511 containerd[1943]: time="2025-02-13T19:05:02.504210823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:05:02.504511 containerd[1943]: time="2025-02-13T19:05:02.504305551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:05:02.505015 containerd[1943]: time="2025-02-13T19:05:02.504565147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:05:02.551473 systemd[1]: Started cri-containerd-644ecabfc9d0ec8dcd89823f6ba5d4abb5324f5e5c81b4fc792a92069acc4a8f.scope - libcontainer container 644ecabfc9d0ec8dcd89823f6ba5d4abb5324f5e5c81b4fc792a92069acc4a8f.
Feb 13 19:05:02.634194 containerd[1943]: time="2025-02-13T19:05:02.632833400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vnt2x,Uid:a18ae8c7-e925-47e9-9375-4bb02d5d42e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"644ecabfc9d0ec8dcd89823f6ba5d4abb5324f5e5c81b4fc792a92069acc4a8f\""
Feb 13 19:05:02.642668 containerd[1943]: time="2025-02-13T19:05:02.642335876Z" level=info msg="CreateContainer within sandbox \"644ecabfc9d0ec8dcd89823f6ba5d4abb5324f5e5c81b4fc792a92069acc4a8f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:05:02.679802 containerd[1943]: time="2025-02-13T19:05:02.679495772Z" level=info msg="CreateContainer within sandbox \"644ecabfc9d0ec8dcd89823f6ba5d4abb5324f5e5c81b4fc792a92069acc4a8f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a44711d1cd20dfc7fca077454fee0a5cc0811ce81f20cc79e4ad9372d38d0bba\""
Feb 13 19:05:02.683518 containerd[1943]: time="2025-02-13T19:05:02.683252960Z" level=info msg="StartContainer for \"a44711d1cd20dfc7fca077454fee0a5cc0811ce81f20cc79e4ad9372d38d0bba\""
Feb 13 19:05:02.754665 systemd[1]: Started cri-containerd-a44711d1cd20dfc7fca077454fee0a5cc0811ce81f20cc79e4ad9372d38d0bba.scope - libcontainer container a44711d1cd20dfc7fca077454fee0a5cc0811ce81f20cc79e4ad9372d38d0bba.
Feb 13 19:05:02.838051 containerd[1943]: time="2025-02-13T19:05:02.837957549Z" level=info msg="StartContainer for \"a44711d1cd20dfc7fca077454fee0a5cc0811ce81f20cc79e4ad9372d38d0bba\" returns successfully"
Feb 13 19:05:03.349989 containerd[1943]: time="2025-02-13T19:05:03.349832407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cwgsm,Uid:8647dcd6-2ff4-4f5c-a0ec-8f1e7fd5b97f,Namespace:kube-system,Attempt:0,}"
Feb 13 19:05:03.406621 (udev-worker)[4056]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:05:03.406662 systemd-networkd[1848]: veth9ae71690: Link UP
Feb 13 19:05:03.413668 kernel: cni0: port 2(veth9ae71690) entered blocking state
Feb 13 19:05:03.413800 kernel: cni0: port 2(veth9ae71690) entered disabled state
Feb 13 19:05:03.418326 kernel: veth9ae71690: entered allmulticast mode
Feb 13 19:05:03.420416 kernel: veth9ae71690: entered promiscuous mode
Feb 13 19:05:03.427709 kernel: cni0: port 2(veth9ae71690) entered blocking state
Feb 13 19:05:03.430391 kernel: cni0: port 2(veth9ae71690) entered forwarding state
Feb 13 19:05:03.430579 kernel: cni0: port 2(veth9ae71690) entered disabled state
Feb 13 19:05:03.446680 kernel: cni0: port 2(veth9ae71690) entered blocking state
Feb 13 19:05:03.446852 kernel: cni0: port 2(veth9ae71690) entered forwarding state
Feb 13 19:05:03.447576 systemd-networkd[1848]: veth9ae71690: Gained carrier
Feb 13 19:05:03.459047 containerd[1943]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000016938), "name":"cbr0", "type":"bridge"}
Feb 13 19:05:03.459047 containerd[1943]: delegateAdd: netconf sent to delegate plugin:
Feb 13 19:05:03.496190 containerd[1943]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T19:05:03.495678320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:05:03.496190 containerd[1943]: time="2025-02-13T19:05:03.495832772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:05:03.496190 containerd[1943]: time="2025-02-13T19:05:03.495871808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:05:03.497198 containerd[1943]: time="2025-02-13T19:05:03.497009324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:05:03.539799 systemd[1]: run-containerd-runc-k8s.io-781ffb9ca1634c1de339077de198e7c1fcc0c2ac790358b43573262ea233ebf0-runc.45o3rj.mount: Deactivated successfully.
Feb 13 19:05:03.552491 systemd[1]: Started cri-containerd-781ffb9ca1634c1de339077de198e7c1fcc0c2ac790358b43573262ea233ebf0.scope - libcontainer container 781ffb9ca1634c1de339077de198e7c1fcc0c2ac790358b43573262ea233ebf0.
Feb 13 19:05:03.621927 kubelet[3125]: I0213 19:05:03.619449 3125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-zflm6" podStartSLOduration=14.615184358 podStartE2EDuration="21.619424277s" podCreationTimestamp="2025-02-13 19:04:42 +0000 UTC" firstStartedPulling="2025-02-13 19:04:43.232377791 +0000 UTC m=+5.218920471" lastFinishedPulling="2025-02-13 19:04:50.236617722 +0000 UTC m=+12.223160390" observedRunningTime="2025-02-13 19:04:52.585017326 +0000 UTC m=+14.571560030" watchObservedRunningTime="2025-02-13 19:05:03.619424277 +0000 UTC m=+25.605966957"
Feb 13 19:05:03.659903 kubelet[3125]: I0213 19:05:03.659123 3125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vnt2x" podStartSLOduration=21.659098701 podStartE2EDuration="21.659098701s" podCreationTimestamp="2025-02-13 19:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:05:03.625320969 +0000 UTC m=+25.611863661" watchObservedRunningTime="2025-02-13 19:05:03.659098701 +0000 UTC m=+25.645641393"
Feb 13 19:05:03.690524 containerd[1943]: time="2025-02-13T19:05:03.690412425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cwgsm,Uid:8647dcd6-2ff4-4f5c-a0ec-8f1e7fd5b97f,Namespace:kube-system,Attempt:0,} returns sandbox id \"781ffb9ca1634c1de339077de198e7c1fcc0c2ac790358b43573262ea233ebf0\""
Feb 13 19:05:03.701831 containerd[1943]: time="2025-02-13T19:05:03.701744973Z" level=info msg="CreateContainer within sandbox \"781ffb9ca1634c1de339077de198e7c1fcc0c2ac790358b43573262ea233ebf0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:05:03.750594 containerd[1943]: time="2025-02-13T19:05:03.750503277Z" level=info msg="CreateContainer within sandbox \"781ffb9ca1634c1de339077de198e7c1fcc0c2ac790358b43573262ea233ebf0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d67712361b91d7f17dec14c37c87140b143b0748818a594bcfaea3d3f79971b\""
Feb 13 19:05:03.753495 containerd[1943]: time="2025-02-13T19:05:03.753421929Z" level=info msg="StartContainer for \"0d67712361b91d7f17dec14c37c87140b143b0748818a594bcfaea3d3f79971b\""
Feb 13 19:05:03.813412 systemd[1]: Started cri-containerd-0d67712361b91d7f17dec14c37c87140b143b0748818a594bcfaea3d3f79971b.scope - libcontainer container 0d67712361b91d7f17dec14c37c87140b143b0748818a594bcfaea3d3f79971b.
Feb 13 19:05:03.875941 systemd-networkd[1848]: cni0: Gained IPv6LL
Feb 13 19:05:03.880540 systemd-networkd[1848]: veth6e7e519f: Gained IPv6LL
Feb 13 19:05:03.888995 containerd[1943]: time="2025-02-13T19:05:03.888857434Z" level=info msg="StartContainer for \"0d67712361b91d7f17dec14c37c87140b143b0748818a594bcfaea3d3f79971b\" returns successfully"
Feb 13 19:05:04.373915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1975795693.mount: Deactivated successfully.
Feb 13 19:05:04.613647 kubelet[3125]: I0213 19:05:04.613512 3125 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-cwgsm" podStartSLOduration=22.613491057 podStartE2EDuration="22.613491057s" podCreationTimestamp="2025-02-13 19:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:05:04.612178125 +0000 UTC m=+26.598720829" watchObservedRunningTime="2025-02-13 19:05:04.613491057 +0000 UTC m=+26.600033749"
Feb 13 19:05:05.219411 systemd-networkd[1848]: veth9ae71690: Gained IPv6LL
Feb 13 19:05:07.591901 ntpd[1908]: Listen normally on 9 cni0 192.168.0.1:123
Feb 13 19:05:07.592058 ntpd[1908]: Listen normally on 10 cni0 [fe80::88c6:ff:fe62:f241%5]:123
Feb 13 19:05:07.592627 ntpd[1908]: 13 Feb 19:05:07 ntpd[1908]: Listen normally on 9 cni0 192.168.0.1:123
Feb 13 19:05:07.592627 ntpd[1908]: 13 Feb 19:05:07 ntpd[1908]: Listen normally on 10 cni0 [fe80::88c6:ff:fe62:f241%5]:123
Feb 13 19:05:07.592627 ntpd[1908]: 13 Feb 19:05:07 ntpd[1908]: Listen normally on 11 veth6e7e519f [fe80::4000:47ff:fe94:ec3d%6]:123
Feb 13 19:05:07.592627 ntpd[1908]: 13 Feb 19:05:07 ntpd[1908]: Listen normally on 12 veth9ae71690 [fe80::ac69:86ff:fe4c:bde%7]:123
Feb 13 19:05:07.592183 ntpd[1908]: Listen normally on 11 veth6e7e519f [fe80::4000:47ff:fe94:ec3d%6]:123
Feb 13 19:05:07.592257 ntpd[1908]: Listen normally on 12 veth9ae71690 [fe80::ac69:86ff:fe4c:bde%7]:123
Feb 13 19:05:21.829041 systemd[1]: Started sshd@5-172.31.22.68:22-147.75.109.163:36568.service - OpenSSH per-connection server daemon (147.75.109.163:36568).
Feb 13 19:05:22.025458 sshd[4354]: Accepted publickey for core from 147.75.109.163 port 36568 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:22.028570 sshd-session[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:22.037670 systemd-logind[1915]: New session 6 of user core.
Feb 13 19:05:22.046575 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 19:05:22.315595 sshd[4356]: Connection closed by 147.75.109.163 port 36568
Feb 13 19:05:22.316642 sshd-session[4354]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:22.322479 systemd[1]: sshd@5-172.31.22.68:22-147.75.109.163:36568.service: Deactivated successfully.
Feb 13 19:05:22.323185 systemd-logind[1915]: Session 6 logged out. Waiting for processes to exit.
Feb 13 19:05:22.328968 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 19:05:22.333747 systemd-logind[1915]: Removed session 6.
Feb 13 19:05:27.357769 systemd[1]: Started sshd@6-172.31.22.68:22-147.75.109.163:36584.service - OpenSSH per-connection server daemon (147.75.109.163:36584).
Feb 13 19:05:27.551702 sshd[4390]: Accepted publickey for core from 147.75.109.163 port 36584 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:27.554340 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:27.562490 systemd-logind[1915]: New session 7 of user core.
Feb 13 19:05:27.572350 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 19:05:27.837175 sshd[4392]: Connection closed by 147.75.109.163 port 36584
Feb 13 19:05:27.838174 sshd-session[4390]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:27.845216 systemd[1]: sshd@6-172.31.22.68:22-147.75.109.163:36584.service: Deactivated successfully.
Feb 13 19:05:27.850112 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:05:27.851858 systemd-logind[1915]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:05:27.854044 systemd-logind[1915]: Removed session 7.
Feb 13 19:05:32.877714 systemd[1]: Started sshd@7-172.31.22.68:22-147.75.109.163:45732.service - OpenSSH per-connection server daemon (147.75.109.163:45732).
Feb 13 19:05:33.080601 sshd[4426]: Accepted publickey for core from 147.75.109.163 port 45732 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:33.083753 sshd-session[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:33.092723 systemd-logind[1915]: New session 8 of user core.
Feb 13 19:05:33.102394 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 19:05:33.385619 sshd[4434]: Connection closed by 147.75.109.163 port 45732
Feb 13 19:05:33.386724 sshd-session[4426]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:33.392284 systemd-logind[1915]: Session 8 logged out. Waiting for processes to exit.
Feb 13 19:05:33.393025 systemd[1]: sshd@7-172.31.22.68:22-147.75.109.163:45732.service: Deactivated successfully.
Feb 13 19:05:33.397602 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 19:05:33.402270 systemd-logind[1915]: Removed session 8.
Feb 13 19:05:38.428965 systemd[1]: Started sshd@8-172.31.22.68:22-147.75.109.163:45736.service - OpenSSH per-connection server daemon (147.75.109.163:45736).
Feb 13 19:05:38.639661 sshd[4483]: Accepted publickey for core from 147.75.109.163 port 45736 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:38.643392 sshd-session[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:38.657675 systemd-logind[1915]: New session 9 of user core.
Feb 13 19:05:38.661943 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 19:05:38.971091 sshd[4485]: Connection closed by 147.75.109.163 port 45736
Feb 13 19:05:38.972763 sshd-session[4483]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:38.983360 systemd[1]: sshd@8-172.31.22.68:22-147.75.109.163:45736.service: Deactivated successfully.
Feb 13 19:05:38.989177 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 19:05:38.994583 systemd-logind[1915]: Session 9 logged out. Waiting for processes to exit.
Feb 13 19:05:39.019659 systemd[1]: Started sshd@9-172.31.22.68:22-147.75.109.163:45746.service - OpenSSH per-connection server daemon (147.75.109.163:45746).
Feb 13 19:05:39.022645 systemd-logind[1915]: Removed session 9.
Feb 13 19:05:39.209825 sshd[4497]: Accepted publickey for core from 147.75.109.163 port 45746 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:39.212938 sshd-session[4497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:39.222495 systemd-logind[1915]: New session 10 of user core.
Feb 13 19:05:39.228386 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 19:05:39.599184 sshd[4499]: Connection closed by 147.75.109.163 port 45746
Feb 13 19:05:39.599015 sshd-session[4497]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:39.610940 systemd[1]: sshd@9-172.31.22.68:22-147.75.109.163:45746.service: Deactivated successfully.
Feb 13 19:05:39.622534 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 19:05:39.627248 systemd-logind[1915]: Session 10 logged out. Waiting for processes to exit.
Feb 13 19:05:39.648263 systemd[1]: Started sshd@10-172.31.22.68:22-147.75.109.163:59470.service - OpenSSH per-connection server daemon (147.75.109.163:59470).
Feb 13 19:05:39.651819 systemd-logind[1915]: Removed session 10.
Feb 13 19:05:39.851319 sshd[4507]: Accepted publickey for core from 147.75.109.163 port 59470 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:39.854813 sshd-session[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:39.865293 systemd-logind[1915]: New session 11 of user core.
Feb 13 19:05:39.875553 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:05:40.135322 sshd[4509]: Connection closed by 147.75.109.163 port 59470
Feb 13 19:05:40.136596 sshd-session[4507]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:40.144410 systemd[1]: sshd@10-172.31.22.68:22-147.75.109.163:59470.service: Deactivated successfully.
Feb 13 19:05:40.151355 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 19:05:40.156924 systemd-logind[1915]: Session 11 logged out. Waiting for processes to exit.
Feb 13 19:05:40.159769 systemd-logind[1915]: Removed session 11.
Feb 13 19:05:45.180554 systemd[1]: Started sshd@11-172.31.22.68:22-147.75.109.163:59474.service - OpenSSH per-connection server daemon (147.75.109.163:59474).
Feb 13 19:05:45.362934 sshd[4543]: Accepted publickey for core from 147.75.109.163 port 59474 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:45.365939 sshd-session[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:45.375887 systemd-logind[1915]: New session 12 of user core.
Feb 13 19:05:45.383320 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 19:05:45.650377 sshd[4545]: Connection closed by 147.75.109.163 port 59474
Feb 13 19:05:45.652193 sshd-session[4543]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:45.660626 systemd-logind[1915]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:05:45.661019 systemd[1]: sshd@11-172.31.22.68:22-147.75.109.163:59474.service: Deactivated successfully.
Feb 13 19:05:45.668884 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:05:45.676756 systemd-logind[1915]: Removed session 12.
Feb 13 19:05:50.697363 systemd[1]: Started sshd@12-172.31.22.68:22-147.75.109.163:36310.service - OpenSSH per-connection server daemon (147.75.109.163:36310).
Feb 13 19:05:50.903163 sshd[4577]: Accepted publickey for core from 147.75.109.163 port 36310 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:50.907028 sshd-session[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:50.920455 systemd-logind[1915]: New session 13 of user core.
Feb 13 19:05:50.929530 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:05:51.187243 sshd[4579]: Connection closed by 147.75.109.163 port 36310
Feb 13 19:05:51.187027 sshd-session[4577]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:51.193779 systemd[1]: sshd@12-172.31.22.68:22-147.75.109.163:36310.service: Deactivated successfully.
Feb 13 19:05:51.198360 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 19:05:51.203296 systemd-logind[1915]: Session 13 logged out. Waiting for processes to exit.
Feb 13 19:05:51.206026 systemd-logind[1915]: Removed session 13.
Feb 13 19:05:51.228688 systemd[1]: Started sshd@13-172.31.22.68:22-147.75.109.163:36318.service - OpenSSH per-connection server daemon (147.75.109.163:36318).
Feb 13 19:05:51.427190 sshd[4589]: Accepted publickey for core from 147.75.109.163 port 36318 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:51.429755 sshd-session[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:51.439816 systemd-logind[1915]: New session 14 of user core.
Feb 13 19:05:51.446329 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:05:51.755112 sshd[4591]: Connection closed by 147.75.109.163 port 36318
Feb 13 19:05:51.756799 sshd-session[4589]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:51.763595 systemd-logind[1915]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:05:51.768005 systemd[1]: sshd@13-172.31.22.68:22-147.75.109.163:36318.service: Deactivated successfully.
Feb 13 19:05:51.773358 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:05:51.775826 systemd-logind[1915]: Removed session 14.
Feb 13 19:05:51.796834 systemd[1]: Started sshd@14-172.31.22.68:22-147.75.109.163:36334.service - OpenSSH per-connection server daemon (147.75.109.163:36334).
Feb 13 19:05:51.995533 sshd[4600]: Accepted publickey for core from 147.75.109.163 port 36334 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:51.998015 sshd-session[4600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:52.006537 systemd-logind[1915]: New session 15 of user core.
Feb 13 19:05:52.012422 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:05:54.609330 sshd[4602]: Connection closed by 147.75.109.163 port 36334
Feb 13 19:05:54.609202 sshd-session[4600]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:54.622544 systemd[1]: sshd@14-172.31.22.68:22-147.75.109.163:36334.service: Deactivated successfully.
Feb 13 19:05:54.633689 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:05:54.636927 systemd-logind[1915]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:05:54.668021 systemd[1]: Started sshd@15-172.31.22.68:22-147.75.109.163:36344.service - OpenSSH per-connection server daemon (147.75.109.163:36344).
Feb 13 19:05:54.670487 systemd-logind[1915]: Removed session 15.
Feb 13 19:05:54.866357 sshd[4639]: Accepted publickey for core from 147.75.109.163 port 36344 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:54.869254 sshd-session[4639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:54.878583 systemd-logind[1915]: New session 16 of user core.
Feb 13 19:05:54.888358 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:05:55.365458 sshd[4641]: Connection closed by 147.75.109.163 port 36344
Feb 13 19:05:55.366291 sshd-session[4639]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:55.376637 systemd-logind[1915]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:05:55.378999 systemd[1]: sshd@15-172.31.22.68:22-147.75.109.163:36344.service: Deactivated successfully.
Feb 13 19:05:55.385129 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:05:55.387915 systemd-logind[1915]: Removed session 16.
Feb 13 19:05:55.412743 systemd[1]: Started sshd@16-172.31.22.68:22-147.75.109.163:36358.service - OpenSSH per-connection server daemon (147.75.109.163:36358).
Feb 13 19:05:55.616234 sshd[4650]: Accepted publickey for core from 147.75.109.163 port 36358 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:05:55.618822 sshd-session[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:05:55.627273 systemd-logind[1915]: New session 17 of user core.
Feb 13 19:05:55.634373 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:05:55.891808 sshd[4652]: Connection closed by 147.75.109.163 port 36358
Feb 13 19:05:55.892972 sshd-session[4650]: pam_unix(sshd:session): session closed for user core
Feb 13 19:05:55.901395 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:05:55.903567 systemd[1]: sshd@16-172.31.22.68:22-147.75.109.163:36358.service: Deactivated successfully.
Feb 13 19:05:55.908180 systemd-logind[1915]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:05:55.910951 systemd-logind[1915]: Removed session 17.
Feb 13 19:06:00.935670 systemd[1]: Started sshd@17-172.31.22.68:22-147.75.109.163:46574.service - OpenSSH per-connection server daemon (147.75.109.163:46574).
Feb 13 19:06:01.139358 sshd[4685]: Accepted publickey for core from 147.75.109.163 port 46574 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:01.142786 sshd-session[4685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:01.154381 systemd-logind[1915]: New session 18 of user core.
Feb 13 19:06:01.163382 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:06:01.424607 sshd[4687]: Connection closed by 147.75.109.163 port 46574
Feb 13 19:06:01.425859 sshd-session[4685]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:01.433788 systemd[1]: sshd@17-172.31.22.68:22-147.75.109.163:46574.service: Deactivated successfully.
Feb 13 19:06:01.439760 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:06:01.443359 systemd-logind[1915]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:06:01.447432 systemd-logind[1915]: Removed session 18.
Feb 13 19:06:06.466727 systemd[1]: Started sshd@18-172.31.22.68:22-147.75.109.163:46576.service - OpenSSH per-connection server daemon (147.75.109.163:46576).
Feb 13 19:06:06.659167 sshd[4721]: Accepted publickey for core from 147.75.109.163 port 46576 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:06.661952 sshd-session[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:06.670406 systemd-logind[1915]: New session 19 of user core.
Feb 13 19:06:06.686482 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:06:06.939090 sshd[4723]: Connection closed by 147.75.109.163 port 46576
Feb 13 19:06:06.940526 sshd-session[4721]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:06.947393 systemd[1]: sshd@18-172.31.22.68:22-147.75.109.163:46576.service: Deactivated successfully.
Feb 13 19:06:06.952994 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:06:06.958026 systemd-logind[1915]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:06:06.960647 systemd-logind[1915]: Removed session 19.
Feb 13 19:06:11.985693 systemd[1]: Started sshd@19-172.31.22.68:22-147.75.109.163:60904.service - OpenSSH per-connection server daemon (147.75.109.163:60904).
Feb 13 19:06:12.171219 sshd[4755]: Accepted publickey for core from 147.75.109.163 port 60904 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:12.173990 sshd-session[4755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:12.183360 systemd-logind[1915]: New session 20 of user core.
Feb 13 19:06:12.191358 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:06:12.444470 sshd[4757]: Connection closed by 147.75.109.163 port 60904
Feb 13 19:06:12.446206 sshd-session[4755]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:12.452668 systemd[1]: sshd@19-172.31.22.68:22-147.75.109.163:60904.service: Deactivated successfully.
Feb 13 19:06:12.456195 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:06:12.460146 systemd-logind[1915]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:06:12.462473 systemd-logind[1915]: Removed session 20.
Feb 13 19:06:17.484607 systemd[1]: Started sshd@20-172.31.22.68:22-147.75.109.163:60920.service - OpenSSH per-connection server daemon (147.75.109.163:60920).
Feb 13 19:06:17.676952 sshd[4790]: Accepted publickey for core from 147.75.109.163 port 60920 ssh2: RSA SHA256:Iozg8PmY6DgBPfCrNQT/67nZTE1uR/Q+lH4JycYwSyU
Feb 13 19:06:17.680736 sshd-session[4790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:06:17.691208 systemd-logind[1915]: New session 21 of user core.
Feb 13 19:06:17.700412 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:06:17.958981 sshd[4793]: Connection closed by 147.75.109.163 port 60920
Feb 13 19:06:17.959844 sshd-session[4790]: pam_unix(sshd:session): session closed for user core
Feb 13 19:06:17.966053 systemd[1]: sshd@20-172.31.22.68:22-147.75.109.163:60920.service: Deactivated successfully.
Feb 13 19:06:17.969845 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:06:17.971117 systemd-logind[1915]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:06:17.972815 systemd-logind[1915]: Removed session 21.
Feb 13 19:06:31.523346 systemd[1]: cri-containerd-809a5a30b892184210ae88a708ea9bf6e11572e4befae4fd3c0daf66457d7e3a.scope: Deactivated successfully.
Feb 13 19:06:31.525211 systemd[1]: cri-containerd-809a5a30b892184210ae88a708ea9bf6e11572e4befae4fd3c0daf66457d7e3a.scope: Consumed 4.487s CPU time, 18.1M memory peak, 0B memory swap peak.
Feb 13 19:06:31.578017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-809a5a30b892184210ae88a708ea9bf6e11572e4befae4fd3c0daf66457d7e3a-rootfs.mount: Deactivated successfully.
Feb 13 19:06:31.593303 containerd[1943]: time="2025-02-13T19:06:31.592873762Z" level=info msg="shim disconnected" id=809a5a30b892184210ae88a708ea9bf6e11572e4befae4fd3c0daf66457d7e3a namespace=k8s.io
Feb 13 19:06:31.593303 containerd[1943]: time="2025-02-13T19:06:31.592954894Z" level=warning msg="cleaning up after shim disconnected" id=809a5a30b892184210ae88a708ea9bf6e11572e4befae4fd3c0daf66457d7e3a namespace=k8s.io
Feb 13 19:06:31.593303 containerd[1943]: time="2025-02-13T19:06:31.592975750Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:31.849902 kubelet[3125]: I0213 19:06:31.849463 3125 scope.go:117] "RemoveContainer" containerID="809a5a30b892184210ae88a708ea9bf6e11572e4befae4fd3c0daf66457d7e3a"
Feb 13 19:06:31.856043 containerd[1943]: time="2025-02-13T19:06:31.855897491Z" level=info msg="CreateContainer within sandbox \"32979b379a58cddd0b75e10132743f258f56da7c189de7a50a5eefc5449b7a6c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 19:06:31.890296 containerd[1943]: time="2025-02-13T19:06:31.890208899Z" level=info msg="CreateContainer within sandbox \"32979b379a58cddd0b75e10132743f258f56da7c189de7a50a5eefc5449b7a6c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"71e2855294b388bc8cbed3c82340273bdc290330fdf000e541fa983de76b6406\""
Feb 13 19:06:31.891079 containerd[1943]: time="2025-02-13T19:06:31.891013271Z" level=info msg="StartContainer for \"71e2855294b388bc8cbed3c82340273bdc290330fdf000e541fa983de76b6406\""
Feb 13 19:06:31.956639 systemd[1]: Started cri-containerd-71e2855294b388bc8cbed3c82340273bdc290330fdf000e541fa983de76b6406.scope - libcontainer container 71e2855294b388bc8cbed3c82340273bdc290330fdf000e541fa983de76b6406.
Feb 13 19:06:32.031765 containerd[1943]: time="2025-02-13T19:06:32.031689056Z" level=info msg="StartContainer for \"71e2855294b388bc8cbed3c82340273bdc290330fdf000e541fa983de76b6406\" returns successfully"
Feb 13 19:06:32.577551 systemd[1]: run-containerd-runc-k8s.io-71e2855294b388bc8cbed3c82340273bdc290330fdf000e541fa983de76b6406-runc.cEdBFr.mount: Deactivated successfully.
Feb 13 19:06:37.475048 systemd[1]: cri-containerd-3a360d3ff8e54a12a68d0e551d92fdcb44dbf071b4a788f249b405736dd24601.scope: Deactivated successfully.
Feb 13 19:06:37.475976 systemd[1]: cri-containerd-3a360d3ff8e54a12a68d0e551d92fdcb44dbf071b4a788f249b405736dd24601.scope: Consumed 4.278s CPU time, 16.2M memory peak, 0B memory swap peak.
Feb 13 19:06:37.521353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a360d3ff8e54a12a68d0e551d92fdcb44dbf071b4a788f249b405736dd24601-rootfs.mount: Deactivated successfully.
Feb 13 19:06:37.536660 containerd[1943]: time="2025-02-13T19:06:37.536513847Z" level=info msg="shim disconnected" id=3a360d3ff8e54a12a68d0e551d92fdcb44dbf071b4a788f249b405736dd24601 namespace=k8s.io
Feb 13 19:06:37.536660 containerd[1943]: time="2025-02-13T19:06:37.536596731Z" level=warning msg="cleaning up after shim disconnected" id=3a360d3ff8e54a12a68d0e551d92fdcb44dbf071b4a788f249b405736dd24601 namespace=k8s.io
Feb 13 19:06:37.536660 containerd[1943]: time="2025-02-13T19:06:37.536616411Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:06:37.889469 kubelet[3125]: I0213 19:06:37.889310 3125 scope.go:117] "RemoveContainer" containerID="3a360d3ff8e54a12a68d0e551d92fdcb44dbf071b4a788f249b405736dd24601"
Feb 13 19:06:37.894239 containerd[1943]: time="2025-02-13T19:06:37.893945165Z" level=info msg="CreateContainer within sandbox \"7c8c90b644726ec71bd4343d106e3e4420e2a22aedf450dacaf12b85e6db3e7f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 19:06:37.940045 containerd[1943]: time="2025-02-13T19:06:37.939894089Z" level=info msg="CreateContainer within sandbox \"7c8c90b644726ec71bd4343d106e3e4420e2a22aedf450dacaf12b85e6db3e7f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c31630b36d0e8fc9b86724b9dab9c7bff1fe1e1ec961b3eb19b937b624aaf0c5\""
Feb 13 19:06:37.941361 containerd[1943]: time="2025-02-13T19:06:37.940665101Z" level=info msg="StartContainer for \"c31630b36d0e8fc9b86724b9dab9c7bff1fe1e1ec961b3eb19b937b624aaf0c5\""
Feb 13 19:06:38.007405 systemd[1]: Started cri-containerd-c31630b36d0e8fc9b86724b9dab9c7bff1fe1e1ec961b3eb19b937b624aaf0c5.scope - libcontainer container c31630b36d0e8fc9b86724b9dab9c7bff1fe1e1ec961b3eb19b937b624aaf0c5.
Feb 13 19:06:38.083599 containerd[1943]: time="2025-02-13T19:06:38.083409158Z" level=info msg="StartContainer for \"c31630b36d0e8fc9b86724b9dab9c7bff1fe1e1ec961b3eb19b937b624aaf0c5\" returns successfully"
Feb 13 19:06:40.158630 kubelet[3125]: E0213 19:06:40.156692 3125 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-68?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:06:50.158949 kubelet[3125]: E0213 19:06:50.157704 3125 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-68?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"