Feb 13 15:08:09.190457 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 15:08:09.190507 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 13:51:50 -00 2025
Feb 13 15:08:09.190532 kernel: KASLR disabled due to lack of seed
Feb 13 15:08:09.190549 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:08:09.190565 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598
Feb 13 15:08:09.190581 kernel: secureboot: Secure boot disabled
Feb 13 15:08:09.190598 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:08:09.190614 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 15:08:09.190630 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 13 15:08:09.190646 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:08:09.190668 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 15:08:09.190684 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:08:09.190700 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 15:08:09.190717 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 15:08:09.190735 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 15:08:09.190757 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:08:09.190774 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 15:08:09.190792 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 15:08:09.190809 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 15:08:09.190825 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 15:08:09.190844 kernel: printk: bootconsole [uart0] enabled
Feb 13 15:08:09.190860 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:08:09.190877 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:08:09.190894 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 15:08:09.190910 kernel: Zone ranges:
Feb 13 15:08:09.190926 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 15:08:09.190947 kernel: DMA32 empty
Feb 13 15:08:09.190964 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 15:08:09.190980 kernel: Movable zone start for each node
Feb 13 15:08:09.190996 kernel: Early memory node ranges
Feb 13 15:08:09.191012 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 15:08:09.191028 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 15:08:09.191044 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 15:08:09.191061 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 15:08:09.191077 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 15:08:09.191094 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 15:08:09.191110 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 15:08:09.193179 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 15:08:09.193247 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:08:09.193266 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 15:08:09.193290 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:08:09.193309 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 15:08:09.193326 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:08:09.193348 kernel: psci: Trusted OS migration not required
Feb 13 15:08:09.193366 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:08:09.193384 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:08:09.193401 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:08:09.193419 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 15:08:09.193436 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:08:09.193454 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:08:09.193471 kernel: CPU features: detected: Spectre-v2
Feb 13 15:08:09.193489 kernel: CPU features: detected: Spectre-v3a
Feb 13 15:08:09.193506 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:08:09.193524 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 15:08:09.193542 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 15:08:09.193566 kernel: alternatives: applying boot alternatives
Feb 13 15:08:09.193587 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:08:09.193606 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:08:09.193624 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:08:09.193643 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:08:09.193660 kernel: Fallback order for Node 0: 0
Feb 13 15:08:09.193678 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 13 15:08:09.193695 kernel: Policy zone: Normal
Feb 13 15:08:09.193712 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:08:09.193729 kernel: software IO TLB: area num 2.
Feb 13 15:08:09.193750 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 15:08:09.193768 kernel: Memory: 3821240K/4030464K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 209224K reserved, 0K cma-reserved)
Feb 13 15:08:09.193785 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:08:09.193803 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:08:09.193821 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:08:09.193838 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:08:09.193856 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:08:09.193873 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:08:09.193891 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:08:09.193908 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:08:09.193925 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:08:09.193947 kernel: GICv3: 96 SPIs implemented
Feb 13 15:08:09.193964 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:08:09.193981 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:08:09.193998 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 15:08:09.194015 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 15:08:09.194032 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 15:08:09.194049 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:08:09.194066 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:08:09.194083 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 15:08:09.194100 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 15:08:09.194117 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 15:08:09.197317 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:08:09.197396 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 15:08:09.197415 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 15:08:09.197434 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 15:08:09.197453 kernel: Console: colour dummy device 80x25
Feb 13 15:08:09.197471 kernel: printk: console [tty1] enabled
Feb 13 15:08:09.197491 kernel: ACPI: Core revision 20230628
Feb 13 15:08:09.197511 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 15:08:09.197530 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:08:09.197548 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:08:09.197566 kernel: landlock: Up and running.
Feb 13 15:08:09.197592 kernel: SELinux: Initializing.
Feb 13 15:08:09.197612 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:08:09.197631 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:08:09.197651 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:08:09.197671 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:08:09.197689 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:08:09.197709 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:08:09.197729 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 15:08:09.197753 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 15:08:09.197772 kernel: Remapping and enabling EFI services.
Feb 13 15:08:09.197790 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:08:09.197810 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:08:09.197828 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 15:08:09.197847 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 15:08:09.197867 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 15:08:09.197887 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:08:09.197904 kernel: SMP: Total of 2 processors activated.
Feb 13 15:08:09.197922 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:08:09.197948 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 15:08:09.197967 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:08:09.198001 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:08:09.198027 kernel: alternatives: applying system-wide alternatives
Feb 13 15:08:09.198046 kernel: devtmpfs: initialized
Feb 13 15:08:09.198066 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:08:09.198086 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:08:09.198104 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:08:09.198124 kernel: SMBIOS 3.0.0 present.
Feb 13 15:08:09.199195 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 15:08:09.199216 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:08:09.199235 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:08:09.199254 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:08:09.199272 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:08:09.199291 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:08:09.199309 kernel: audit: type=2000 audit(0.221:1): state=initialized audit_enabled=0 res=1
Feb 13 15:08:09.199332 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:08:09.199351 kernel: cpuidle: using governor menu
Feb 13 15:08:09.199369 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:08:09.199387 kernel: ASID allocator initialised with 65536 entries
Feb 13 15:08:09.199405 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:08:09.199424 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:08:09.199442 kernel: Modules: 17760 pages in range for non-PLT usage
Feb 13 15:08:09.199460 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 15:08:09.199478 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:08:09.199501 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:08:09.199520 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:08:09.199539 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:08:09.199558 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:08:09.199576 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:08:09.199595 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:08:09.199613 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:08:09.199632 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:08:09.199650 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:08:09.199676 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:08:09.199696 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:08:09.199714 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:08:09.199734 kernel: ACPI: Interpreter enabled
Feb 13 15:08:09.199753 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:08:09.199772 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:08:09.199792 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 15:08:09.202297 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:08:09.202613 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:08:09.202821 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:08:09.203050 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 15:08:09.204408 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 15:08:09.204466 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 13 15:08:09.204486 kernel: acpiphp: Slot [1] registered
Feb 13 15:08:09.204505 kernel: acpiphp: Slot [2] registered
Feb 13 15:08:09.204523 kernel: acpiphp: Slot [3] registered
Feb 13 15:08:09.204559 kernel: acpiphp: Slot [4] registered
Feb 13 15:08:09.204578 kernel: acpiphp: Slot [5] registered
Feb 13 15:08:09.204597 kernel: acpiphp: Slot [6] registered
Feb 13 15:08:09.204617 kernel: acpiphp: Slot [7] registered
Feb 13 15:08:09.204636 kernel: acpiphp: Slot [8] registered
Feb 13 15:08:09.204654 kernel: acpiphp: Slot [9] registered
Feb 13 15:08:09.204673 kernel: acpiphp: Slot [10] registered
Feb 13 15:08:09.204693 kernel: acpiphp: Slot [11] registered
Feb 13 15:08:09.204711 kernel: acpiphp: Slot [12] registered
Feb 13 15:08:09.204729 kernel: acpiphp: Slot [13] registered
Feb 13 15:08:09.204753 kernel: acpiphp: Slot [14] registered
Feb 13 15:08:09.204792 kernel: acpiphp: Slot [15] registered
Feb 13 15:08:09.204812 kernel: acpiphp: Slot [16] registered
Feb 13 15:08:09.204830 kernel: acpiphp: Slot [17] registered
Feb 13 15:08:09.204849 kernel: acpiphp: Slot [18] registered
Feb 13 15:08:09.204867 kernel: acpiphp: Slot [19] registered
Feb 13 15:08:09.204886 kernel: acpiphp: Slot [20] registered
Feb 13 15:08:09.204904 kernel: acpiphp: Slot [21] registered
Feb 13 15:08:09.204922 kernel: acpiphp: Slot [22] registered
Feb 13 15:08:09.204947 kernel: acpiphp: Slot [23] registered
Feb 13 15:08:09.204966 kernel: acpiphp: Slot [24] registered
Feb 13 15:08:09.204985 kernel: acpiphp: Slot [25] registered
Feb 13 15:08:09.205003 kernel: acpiphp: Slot [26] registered
Feb 13 15:08:09.205021 kernel: acpiphp: Slot [27] registered
Feb 13 15:08:09.205041 kernel: acpiphp: Slot [28] registered
Feb 13 15:08:09.205060 kernel: acpiphp: Slot [29] registered
Feb 13 15:08:09.205078 kernel: acpiphp: Slot [30] registered
Feb 13 15:08:09.205097 kernel: acpiphp: Slot [31] registered
Feb 13 15:08:09.205115 kernel: PCI host bridge to bus 0000:00
Feb 13 15:08:09.205401 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 15:08:09.205610 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 15:08:09.205805 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:08:09.205996 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 15:08:09.212459 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 15:08:09.212773 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 15:08:09.213004 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 15:08:09.213275 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:08:09.213486 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 15:08:09.213698 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:08:09.213927 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:08:09.214176 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 15:08:09.214398 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 15:08:09.214611 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 15:08:09.214818 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:08:09.215024 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 15:08:09.215332 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 15:08:09.215540 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 15:08:09.215741 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 15:08:09.215946 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 15:08:09.216180 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 15:08:09.216374 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 15:08:09.216568 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:08:09.216595 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:08:09.216614 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:08:09.216633 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:08:09.216652 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:08:09.216671 kernel: iommu: Default domain type: Translated
Feb 13 15:08:09.216696 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:08:09.216715 kernel: efivars: Registered efivars operations
Feb 13 15:08:09.216733 kernel: vgaarb: loaded
Feb 13 15:08:09.216752 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:08:09.216790 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:08:09.216810 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:08:09.216829 kernel: pnp: PnP ACPI init
Feb 13 15:08:09.217054 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 15:08:09.217087 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:08:09.217106 kernel: NET: Registered PF_INET protocol family
Feb 13 15:08:09.217166 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:08:09.217244 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:08:09.217270 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:08:09.217289 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:08:09.217308 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:08:09.217327 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:08:09.217345 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:08:09.217371 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:08:09.217389 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:08:09.217408 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:08:09.217426 kernel: kvm [1]: HYP mode not available
Feb 13 15:08:09.217444 kernel: Initialise system trusted keyrings
Feb 13 15:08:09.217462 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:08:09.217481 kernel: Key type asymmetric registered
Feb 13 15:08:09.217500 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:08:09.217518 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:08:09.217541 kernel: io scheduler mq-deadline registered
Feb 13 15:08:09.217560 kernel: io scheduler kyber registered
Feb 13 15:08:09.217578 kernel: io scheduler bfq registered
Feb 13 15:08:09.219780 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 15:08:09.219817 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:08:09.219837 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:08:09.219856 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 15:08:09.219875 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 15:08:09.219902 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:08:09.219922 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 15:08:09.220171 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 15:08:09.220199 kernel: printk: console [ttyS0] disabled
Feb 13 15:08:09.220219 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 15:08:09.220237 kernel: printk: console [ttyS0] enabled
Feb 13 15:08:09.220256 kernel: printk: bootconsole [uart0] disabled
Feb 13 15:08:09.220274 kernel: thunder_xcv, ver 1.0
Feb 13 15:08:09.220292 kernel: thunder_bgx, ver 1.0
Feb 13 15:08:09.220310 kernel: nicpf, ver 1.0
Feb 13 15:08:09.220335 kernel: nicvf, ver 1.0
Feb 13 15:08:09.220549 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:08:09.220743 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:08:08 UTC (1739459288)
Feb 13 15:08:09.220785 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:08:09.220806 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 15:08:09.220825 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:08:09.220843 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:08:09.220867 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:08:09.220886 kernel: Segment Routing with IPv6
Feb 13 15:08:09.220904 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:08:09.220922 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:08:09.220940 kernel: Key type dns_resolver registered
Feb 13 15:08:09.220958 kernel: registered taskstats version 1
Feb 13 15:08:09.220976 kernel: Loading compiled-in X.509 certificates
Feb 13 15:08:09.220995 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 03c2ececc548f4ae45f50171451f5c036e2757d4'
Feb 13 15:08:09.221013 kernel: Key type .fscrypt registered
Feb 13 15:08:09.221030 kernel: Key type fscrypt-provisioning registered
Feb 13 15:08:09.221053 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:08:09.221071 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:08:09.221089 kernel: ima: No architecture policies found
Feb 13 15:08:09.221108 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:08:09.221146 kernel: clk: Disabling unused clocks
Feb 13 15:08:09.221170 kernel: Freeing unused kernel memory: 38336K
Feb 13 15:08:09.221189 kernel: Run /init as init process
Feb 13 15:08:09.221207 kernel: with arguments:
Feb 13 15:08:09.221225 kernel: /init
Feb 13 15:08:09.221249 kernel: with environment:
Feb 13 15:08:09.221267 kernel: HOME=/
Feb 13 15:08:09.221285 kernel: TERM=linux
Feb 13 15:08:09.221303 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:08:09.221323 systemd[1]: Successfully made /usr/ read-only.
Feb 13 15:08:09.221347 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:08:09.221369 systemd[1]: Detected virtualization amazon.
Feb 13 15:08:09.221393 systemd[1]: Detected architecture arm64.
Feb 13 15:08:09.221413 systemd[1]: Running in initrd.
Feb 13 15:08:09.221432 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:08:09.221452 systemd[1]: Hostname set to .
Feb 13 15:08:09.221472 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:08:09.221492 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:08:09.221512 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:08:09.221531 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:08:09.221552 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:08:09.221577 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:08:09.221597 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:08:09.221619 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:08:09.221640 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:08:09.221661 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:08:09.221681 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:08:09.221705 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:08:09.221725 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:08:09.221745 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:08:09.221765 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:08:09.221785 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:08:09.221804 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:08:09.221824 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:08:09.221844 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:08:09.221864 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 15:08:09.221888 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:08:09.221908 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:08:09.221928 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:08:09.221948 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:08:09.221968 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:08:09.221987 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:08:09.222007 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:08:09.222027 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:08:09.222051 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:08:09.222071 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:08:09.222091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:08:09.222111 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:08:09.222182 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:08:09.222248 systemd-journald[252]: Collecting audit messages is disabled.
Feb 13 15:08:09.222298 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:08:09.222321 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:08:09.222341 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:08:09.222365 kernel: Bridge firewalling registered
Feb 13 15:08:09.222425 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:08:09.222447 systemd-journald[252]: Journal started
Feb 13 15:08:09.222484 systemd-journald[252]: Runtime Journal (/run/log/journal/ec201a046518804effb869d99e0f9b9e) is 8M, max 75.3M, 67.3M free.
Feb 13 15:08:09.183250 systemd-modules-load[253]: Inserted module 'overlay'
Feb 13 15:08:09.231587 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:08:09.210951 systemd-modules-load[253]: Inserted module 'br_netfilter'
Feb 13 15:08:09.236514 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:08:09.250433 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:08:09.259540 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:08:09.264504 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:08:09.268539 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:08:09.293391 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:08:09.310779 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:08:09.315684 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:08:09.333424 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:08:09.339212 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:08:09.349624 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:08:09.367797 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:08:09.393639 dracut-cmdline[291]: dracut-dracut-053
Feb 13 15:08:09.401320 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:08:09.448715 systemd-resolved[290]: Positive Trust Anchors:
Feb 13 15:08:09.448752 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:08:09.448830 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:08:09.541165 kernel: SCSI subsystem initialized
Feb 13 15:08:09.547168 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:08:09.560180 kernel: iscsi: registered transport (tcp)
Feb 13 15:08:09.582306 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:08:09.582407 kernel: QLogic iSCSI HBA Driver
Feb 13 15:08:09.678191 kernel: random: crng init done
Feb 13 15:08:09.678731 systemd-resolved[290]: Defaulting to hostname 'linux'.
Feb 13 15:08:09.682550 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:08:09.701410 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:08:09.708227 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:08:09.717452 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:08:09.758396 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:08:09.758510 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:08:09.758538 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:08:09.829182 kernel: raid6: neonx8 gen() 6577 MB/s
Feb 13 15:08:09.846162 kernel: raid6: neonx4 gen() 6528 MB/s
Feb 13 15:08:09.863162 kernel: raid6: neonx2 gen() 5441 MB/s
Feb 13 15:08:09.880161 kernel: raid6: neonx1 gen() 3947 MB/s
Feb 13 15:08:09.897161 kernel: raid6: int64x8 gen() 3615 MB/s
Feb 13 15:08:09.914159 kernel: raid6: int64x4 gen() 3711 MB/s
Feb 13 15:08:09.931160 kernel: raid6: int64x2 gen() 3612 MB/s
Feb 13 15:08:09.948914 kernel: raid6: int64x1 gen() 2771 MB/s
Feb 13 15:08:09.948946 kernel: raid6: using algorithm neonx8 gen() 6577 MB/s
Feb 13 15:08:09.966890 kernel: raid6: .... xor() 4764 MB/s, rmw enabled
Feb 13 15:08:09.966941 kernel: raid6: using neon recovery algorithm
Feb 13 15:08:09.974919 kernel: xor: measuring software checksum speed
Feb 13 15:08:09.974975 kernel: 8regs : 12938 MB/sec
Feb 13 15:08:09.976160 kernel: 32regs : 12088 MB/sec
Feb 13 15:08:09.978162 kernel: arm64_neon : 8900 MB/sec
Feb 13 15:08:09.978195 kernel: xor: using function: 8regs (12938 MB/sec)
Feb 13 15:08:10.061211 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:08:10.080905 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:08:10.090431 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:08:10.136998 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Feb 13 15:08:10.147119 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:08:10.171433 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:08:10.201856 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
Feb 13 15:08:10.269208 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:08:10.281591 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:08:10.404008 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:08:10.420088 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:08:10.466641 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:08:10.471594 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:08:10.489346 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:08:10.495335 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:08:10.518554 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:08:10.558315 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:08:10.626385 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:08:10.628809 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:08:10.633763 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:08:10.648211 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:08:10.648264 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 15:08:10.662010 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:08:10.662323 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:08:10.662564 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ac:a7:61:59:8d
Feb 13 15:08:10.635911 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:08:10.636171 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:08:10.639445 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:08:10.655551 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:08:10.659153 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:08:10.684726 (udev-worker)[518]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:08:10.690772 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 15:08:10.690811 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:08:10.701162 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:08:10.705417 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:08:10.706662 kernel: GPT:9289727 != 16777215
Feb 13 15:08:10.706689 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:08:10.706713 kernel: GPT:9289727 != 16777215
Feb 13 15:08:10.707226 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:08:10.708164 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:08:10.713209 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:08:10.724813 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:08:10.760649 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:08:10.796553 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (527)
Feb 13 15:08:10.850170 kernel: BTRFS: device fsid b3d3c5e7-c505-4391-bb7a-de2a572c0855 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (529)
Feb 13 15:08:10.932894 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:08:10.959186 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:08:10.986075 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:08:11.005995 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:08:11.006808 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:08:11.038531 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:08:11.052541 disk-uuid[664]: Primary Header is updated.
Feb 13 15:08:11.052541 disk-uuid[664]: Secondary Entries is updated.
Feb 13 15:08:11.052541 disk-uuid[664]: Secondary Header is updated.
Feb 13 15:08:11.065163 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:08:12.081167 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:08:12.083677 disk-uuid[665]: The operation has completed successfully.
Feb 13 15:08:12.289934 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:08:12.290183 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:08:12.364448 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:08:12.382907 sh[925]: Success
Feb 13 15:08:12.408174 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:08:12.522702 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:08:12.539385 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:08:12.544582 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:08:12.575097 kernel: BTRFS info (device dm-0): first mount of filesystem b3d3c5e7-c505-4391-bb7a-de2a572c0855
Feb 13 15:08:12.575198 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:08:12.575226 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:08:12.578179 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:08:12.578227 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:08:12.684165 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:08:12.707941 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:08:12.711789 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:08:12.725439 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:08:12.733467 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:08:12.765192 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:08:12.765320 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:08:12.766692 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:08:12.775181 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:08:12.801555 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:08:12.804513 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:08:12.818092 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:08:12.828542 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:08:12.947970 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:08:12.977437 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:08:13.029587 systemd-networkd[1118]: lo: Link UP
Feb 13 15:08:13.029612 systemd-networkd[1118]: lo: Gained carrier
Feb 13 15:08:13.034386 systemd-networkd[1118]: Enumeration completed
Feb 13 15:08:13.034547 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:08:13.036835 systemd[1]: Reached target network.target - Network.
Feb 13 15:08:13.041704 systemd-networkd[1118]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:08:13.041711 systemd-networkd[1118]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:08:13.050504 systemd-networkd[1118]: eth0: Link UP
Feb 13 15:08:13.050511 systemd-networkd[1118]: eth0: Gained carrier
Feb 13 15:08:13.050527 systemd-networkd[1118]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:08:13.075211 systemd-networkd[1118]: eth0: DHCPv4 address 172.31.21.44/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:08:13.224239 ignition[1027]: Ignition 2.20.0
Feb 13 15:08:13.224269 ignition[1027]: Stage: fetch-offline
Feb 13 15:08:13.224757 ignition[1027]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:13.224791 ignition[1027]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:13.229298 ignition[1027]: Ignition finished successfully
Feb 13 15:08:13.235365 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:08:13.246488 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:08:13.276360 ignition[1129]: Ignition 2.20.0
Feb 13 15:08:13.276397 ignition[1129]: Stage: fetch
Feb 13 15:08:13.277826 ignition[1129]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:13.277854 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:13.278052 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:13.297716 ignition[1129]: PUT result: OK
Feb 13 15:08:13.302427 ignition[1129]: parsed url from cmdline: ""
Feb 13 15:08:13.302589 ignition[1129]: no config URL provided
Feb 13 15:08:13.302608 ignition[1129]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:08:13.302661 ignition[1129]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:08:13.302696 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:13.306647 ignition[1129]: PUT result: OK
Feb 13 15:08:13.306726 ignition[1129]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:08:13.311291 ignition[1129]: GET result: OK
Feb 13 15:08:13.311452 ignition[1129]: parsing config with SHA512: fb0ee748c79d63639865fe2d1712917158b3e1c74ed22a4c738147d4ed26e9a40a16e651792c41f3ae9295296d8fd91a8f1c4d38c089f70654bd7fc6e106b123
Feb 13 15:08:13.324835 unknown[1129]: fetched base config from "system"
Feb 13 15:08:13.325737 ignition[1129]: fetch: fetch complete
Feb 13 15:08:13.324857 unknown[1129]: fetched base config from "system"
Feb 13 15:08:13.325754 ignition[1129]: fetch: fetch passed
Feb 13 15:08:13.324870 unknown[1129]: fetched user config from "aws"
Feb 13 15:08:13.325877 ignition[1129]: Ignition finished successfully
Feb 13 15:08:13.331907 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:08:13.347829 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:08:13.384945 ignition[1136]: Ignition 2.20.0
Feb 13 15:08:13.384974 ignition[1136]: Stage: kargs
Feb 13 15:08:13.385738 ignition[1136]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:13.385764 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:13.385927 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:13.387335 ignition[1136]: PUT result: OK
Feb 13 15:08:13.397403 ignition[1136]: kargs: kargs passed
Feb 13 15:08:13.397496 ignition[1136]: Ignition finished successfully
Feb 13 15:08:13.403201 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:08:13.415651 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:08:13.454586 ignition[1142]: Ignition 2.20.0
Feb 13 15:08:13.455184 ignition[1142]: Stage: disks
Feb 13 15:08:13.455850 ignition[1142]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:13.455901 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:13.456087 ignition[1142]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:13.458505 ignition[1142]: PUT result: OK
Feb 13 15:08:13.468941 ignition[1142]: disks: disks passed
Feb 13 15:08:13.469324 ignition[1142]: Ignition finished successfully
Feb 13 15:08:13.475184 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:08:13.479322 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:08:13.481927 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:08:13.484265 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:08:13.486164 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:08:13.488305 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:08:13.504088 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:08:13.547967 systemd-fsck[1151]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:08:13.556997 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:08:13.579806 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:08:13.669392 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f78dcc36-7881-4d16-ad8b-28e23dfbdad0 r/w with ordered data mode. Quota mode: none.
Feb 13 15:08:13.670609 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:08:13.674472 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:08:13.684346 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:08:13.700292 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:08:13.704056 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:08:13.704252 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:08:13.704325 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:08:13.722386 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:08:13.737497 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:08:13.749764 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1170)
Feb 13 15:08:13.749808 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:08:13.749834 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:08:13.749859 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:08:13.756980 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:08:13.759038 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:08:14.161690 initrd-setup-root[1194]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:08:14.184208 initrd-setup-root[1201]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:08:14.194454 initrd-setup-root[1208]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:08:14.203484 initrd-setup-root[1215]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:08:14.473409 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:08:14.483511 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:08:14.488710 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:08:14.524187 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:08:14.550972 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:08:14.570172 ignition[1284]: INFO : Ignition 2.20.0
Feb 13 15:08:14.570172 ignition[1284]: INFO : Stage: mount
Feb 13 15:08:14.575751 ignition[1284]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:14.575751 ignition[1284]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:14.575751 ignition[1284]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:14.582247 ignition[1284]: INFO : PUT result: OK
Feb 13 15:08:14.584898 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:08:14.589586 ignition[1284]: INFO : mount: mount passed
Feb 13 15:08:14.591796 ignition[1284]: INFO : Ignition finished successfully
Feb 13 15:08:14.594219 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:08:14.615491 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:08:14.638640 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:08:14.665167 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1294)
Feb 13 15:08:14.668818 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:08:14.668858 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:08:14.668883 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:08:14.677166 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:08:14.678510 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:08:14.691361 systemd-networkd[1118]: eth0: Gained IPv6LL
Feb 13 15:08:14.720216 ignition[1312]: INFO : Ignition 2.20.0
Feb 13 15:08:14.720216 ignition[1312]: INFO : Stage: files
Feb 13 15:08:14.723740 ignition[1312]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:14.723740 ignition[1312]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:14.723740 ignition[1312]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:14.730519 ignition[1312]: INFO : PUT result: OK
Feb 13 15:08:14.745754 ignition[1312]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:08:14.749304 ignition[1312]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:08:14.749304 ignition[1312]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:08:14.780289 ignition[1312]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:08:14.782989 ignition[1312]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:08:14.786112 unknown[1312]: wrote ssh authorized keys file for user: core
Feb 13 15:08:14.790561 ignition[1312]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:08:14.790561 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:08:14.798468 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:08:14.910985 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:08:15.115031 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:08:15.120246 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:08:15.120246 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:08:15.120246 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:08:15.120246 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:08:15.120246 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:08:15.120246 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:08:15.120246 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:08:15.120246 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:08:15.120246 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:08:15.120246 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:08:15.120246 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:08:15.120246 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:08:15.120246 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:08:15.120246 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 15:08:15.769893 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 15:08:17.414725 ignition[1312]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:08:17.414725 ignition[1312]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 15:08:17.423364 ignition[1312]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:08:17.423364 ignition[1312]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:08:17.423364 ignition[1312]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 15:08:17.423364 ignition[1312]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:08:17.423364 ignition[1312]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:08:17.423364 ignition[1312]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:08:17.423364 ignition[1312]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:08:17.445182 ignition[1312]: INFO : files: files passed
Feb 13 15:08:17.445182 ignition[1312]: INFO : Ignition finished successfully
Feb 13 15:08:17.452189 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:08:17.463643 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:08:17.478529 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:08:17.492469 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:08:17.493775 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:08:17.513242 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:08:17.513242 initrd-setup-root-after-ignition[1339]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:08:17.521901 initrd-setup-root-after-ignition[1343]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:08:17.529301 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:08:17.535658 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:08:17.547484 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:08:17.607102 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:08:17.607702 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:08:17.616536 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:08:17.619711 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:08:17.622361 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:08:17.631206 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:08:17.685233 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:08:17.701468 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:08:17.730436 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:08:17.734344 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:08:17.740571 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:08:17.743713 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:08:17.744275 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:08:17.754420 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:08:17.756907 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:08:17.758975 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:08:17.762294 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:08:17.772737 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:08:17.775952 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:08:17.785018 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:08:17.787723 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:08:17.790467 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:08:17.793334 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:08:17.803419 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:08:17.803985 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:08:17.810978 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:08:17.813854 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:08:17.821912 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:08:17.826381 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:08:17.832309 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:08:17.832608 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:08:17.837290 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:08:17.837659 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:08:17.840552 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:08:17.840915 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:08:17.855025 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:08:17.869604 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:08:17.870510 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:08:17.884572 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:08:17.886482 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:08:17.886790 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:08:17.893174 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:08:17.895258 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:08:17.923302 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:08:17.923539 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:08:17.933739 ignition[1363]: INFO : Ignition 2.20.0
Feb 13 15:08:17.933739 ignition[1363]: INFO : Stage: umount
Feb 13 15:08:17.939533 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:08:17.939533 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:08:17.939533 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:08:17.939533 ignition[1363]: INFO : PUT result: OK
Feb 13 15:08:17.962630 ignition[1363]: INFO : umount: umount passed
Feb 13 15:08:17.962630 ignition[1363]: INFO : Ignition finished successfully
Feb 13 15:08:17.954024 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:08:17.954718 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:08:17.962219 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:08:17.963928 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:08:17.964215 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:08:17.975051 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:08:17.975259 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:08:17.975807 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:08:17.975906 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:08:17.977331 systemd[1]: Stopped target network.target - Network.
Feb 13 15:08:17.977968 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:08:17.978093 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:08:17.979574 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:08:17.980222 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:08:18.014765 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:08:18.020373 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:08:18.022866 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:08:18.033761 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:08:18.033897 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:08:18.035962 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:08:18.036038 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:08:18.038412 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:08:18.038521 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:08:18.040471 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:08:18.040574 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:08:18.043010 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:08:18.045897 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:08:18.057461 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:08:18.057823 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:08:18.064545 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:08:18.064784 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:08:18.073517 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:08:18.074283 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:08:18.089579 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 15:08:18.090613 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:08:18.090861 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:08:18.098573 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 15:08:18.100964 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:08:18.101277 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:08:18.126019 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:08:18.134377 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:08:18.134658 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:08:18.143792 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:08:18.143910 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:08:18.153007 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:08:18.153123 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:08:18.155268 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:08:18.155359 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:08:18.166077 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:08:18.173794 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 15:08:18.173936 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:08:18.186232 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:08:18.188642 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:08:18.194632 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:08:18.194807 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:08:18.195995 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:08:18.196101 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:08:18.197900 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:08:18.198427 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:08:18.208825 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:08:18.208973 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:08:18.219535 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:08:18.219716 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:08:18.239686 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:08:18.250895 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:08:18.251045 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:08:18.257441 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:08:18.257577 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:08:18.270219 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 15:08:18.270348 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:08:18.271164 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:08:18.273202 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:08:18.276567 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:08:18.276766 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:08:18.282339 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:08:18.299869 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:08:18.314687 systemd[1]: Switching root.
Feb 13 15:08:18.379219 systemd-journald[252]: Journal stopped
Feb 13 15:08:21.356189 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:08:21.356682 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:08:21.356778 kernel: SELinux: policy capability open_perms=1
Feb 13 15:08:21.356823 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:08:21.356857 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:08:21.356895 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:08:21.356926 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:08:21.356956 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:08:21.356987 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:08:21.357016 kernel: audit: type=1403 audit(1739459298.790:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:08:21.357059 systemd[1]: Successfully loaded SELinux policy in 64.040ms.
Feb 13 15:08:21.357108 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 34.180ms.
Feb 13 15:08:21.357245 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:08:21.357285 systemd[1]: Detected virtualization amazon.
Feb 13 15:08:21.357325 systemd[1]: Detected architecture arm64.
Feb 13 15:08:21.357360 systemd[1]: Detected first boot.
Feb 13 15:08:21.357397 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:08:21.357428 zram_generator::config[1408]: No configuration found.
Feb 13 15:08:21.357487 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 15:08:21.357521 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:08:21.357559 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 15:08:21.357606 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:08:21.357652 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:08:21.357690 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:08:21.357725 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:08:21.357756 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:08:21.357797 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:08:21.357830 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:08:21.357865 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:08:21.357899 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:08:21.357933 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:08:21.357973 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:08:21.358006 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:08:21.358038 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:08:21.358071 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:08:21.358104 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:08:21.367229 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:08:21.367303 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:08:21.367341 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:08:21.367373 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:08:21.367416 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:08:21.367449 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:08:21.367479 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:08:21.367508 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:08:21.367539 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:08:21.367572 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:08:21.367604 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:08:21.367635 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:08:21.367670 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:08:21.367702 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:08:21.367733 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 15:08:21.367766 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:08:21.367798 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:08:21.367829 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:08:21.367858 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:08:21.367889 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:08:21.367918 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:08:21.367959 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:08:21.367994 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:08:21.368024 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:08:21.368054 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:08:21.368088 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:08:21.368124 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:08:21.368601 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:08:21.368635 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:08:21.368675 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:08:21.368722 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:08:21.368758 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:08:21.368788 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:08:21.368817 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:08:21.368846 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:08:21.368879 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:08:21.368910 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:08:21.368949 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:08:21.368979 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:08:21.369014 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:08:21.369043 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:08:21.369076 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:08:21.369108 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:08:21.369244 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:08:21.369281 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:08:21.369311 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:08:21.369352 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 15:08:21.369385 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:08:21.369418 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:08:21.369451 systemd[1]: Stopped verity-setup.service.
Feb 13 15:08:21.369490 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:08:21.369532 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:08:21.369561 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:08:21.369590 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:08:21.369619 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:08:21.369647 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:08:21.369680 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:08:21.369716 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:08:21.369746 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:08:21.369775 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:08:21.369804 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:08:21.369834 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:08:21.369863 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:08:21.369893 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:08:21.369921 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:08:21.369956 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:08:21.369990 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:08:21.370020 kernel: fuse: init (API version 7.39)
Feb 13 15:08:21.370049 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:08:21.370079 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:08:21.370109 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:08:21.370259 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 15:08:21.370300 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:08:21.370334 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:08:21.370370 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:08:21.370404 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:08:21.370434 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:08:21.370468 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:08:21.370502 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:08:21.370532 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:08:21.370564 kernel: loop: module loaded
Feb 13 15:08:21.370593 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:08:21.370631 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:08:21.370743 systemd-journald[1494]: Collecting audit messages is disabled.
Feb 13 15:08:21.370797 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:08:21.370832 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 15:08:21.370864 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:08:21.370893 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:08:21.370923 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:08:21.370952 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:08:21.370980 kernel: ACPI: bus type drm_connector registered
Feb 13 15:08:21.371008 systemd-journald[1494]: Journal started
Feb 13 15:08:21.371065 systemd-journald[1494]: Runtime Journal (/run/log/journal/ec201a046518804effb869d99e0f9b9e) is 8M, max 75.3M, 67.3M free.
Feb 13 15:08:21.380924 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:08:21.383807 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:08:20.458828 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:08:20.474779 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 15:08:20.476013 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:08:21.404480 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:08:21.404614 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:08:21.411991 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:08:21.418228 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:08:21.476444 kernel: loop0: detected capacity change from 0 to 123192
Feb 13 15:08:21.503972 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:08:21.522501 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:08:21.539555 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:08:21.557623 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 15:08:21.560100 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:08:21.564932 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:08:21.583948 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:08:21.597696 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:08:21.615726 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:08:21.633638 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:08:21.646918 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:08:21.658198 systemd-journald[1494]: Time spent on flushing to /var/log/journal/ec201a046518804effb869d99e0f9b9e is 34.315ms for 928 entries.
Feb 13 15:08:21.658198 systemd-journald[1494]: System Journal (/var/log/journal/ec201a046518804effb869d99e0f9b9e) is 8M, max 195.6M, 187.6M free.
Feb 13 15:08:21.706909 systemd-journald[1494]: Received client request to flush runtime journal.
Feb 13 15:08:21.706990 kernel: loop1: detected capacity change from 0 to 113512
Feb 13 15:08:21.668516 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:08:21.677909 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 15:08:21.686617 udevadm[1555]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:08:21.712393 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:08:21.741964 systemd-tmpfiles[1559]: ACLs are not supported, ignoring.
Feb 13 15:08:21.742007 systemd-tmpfiles[1559]: ACLs are not supported, ignoring.
Feb 13 15:08:21.764961 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:08:21.834213 kernel: loop2: detected capacity change from 0 to 53784
Feb 13 15:08:21.895190 kernel: loop3: detected capacity change from 0 to 189592
Feb 13 15:08:21.960186 kernel: loop4: detected capacity change from 0 to 123192
Feb 13 15:08:21.985188 kernel: loop5: detected capacity change from 0 to 113512
Feb 13 15:08:22.016196 kernel: loop6: detected capacity change from 0 to 53784
Feb 13 15:08:22.052221 kernel: loop7: detected capacity change from 0 to 189592
Feb 13 15:08:22.087434 (sd-merge)[1568]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 15:08:22.090671 (sd-merge)[1568]: Merged extensions into '/usr'.
Feb 13 15:08:22.101876 systemd[1]: Reload requested from client PID 1524 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:08:22.101919 systemd[1]: Reloading...
Feb 13 15:08:22.370192 zram_generator::config[1599]: No configuration found.
Feb 13 15:08:22.754754 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:08:22.932354 systemd[1]: Reloading finished in 829 ms.
Feb 13 15:08:22.961030 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:08:22.964526 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:08:22.986934 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:08:22.995640 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:08:23.003482 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:08:23.037313 systemd[1]: Reload requested from client PID 1648 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:08:23.037380 systemd[1]: Reloading...
Feb 13 15:08:23.125103 systemd-tmpfiles[1649]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:08:23.125779 systemd-tmpfiles[1649]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:08:23.130779 systemd-tmpfiles[1649]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:08:23.132580 systemd-tmpfiles[1649]: ACLs are not supported, ignoring.
Feb 13 15:08:23.132755 systemd-tmpfiles[1649]: ACLs are not supported, ignoring.
Feb 13 15:08:23.150784 systemd-tmpfiles[1649]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:08:23.152964 systemd-tmpfiles[1649]: Skipping /boot
Feb 13 15:08:23.208523 systemd-tmpfiles[1649]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:08:23.208552 systemd-tmpfiles[1649]: Skipping /boot
Feb 13 15:08:23.235375 ldconfig[1518]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:08:23.258722 systemd-udevd[1650]: Using default interface naming scheme 'v255'.
Feb 13 15:08:23.296172 zram_generator::config[1684]: No configuration found.
Feb 13 15:08:23.569974 (udev-worker)[1734]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:08:23.870031 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:08:23.993008 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1716)
Feb 13 15:08:24.109863 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:08:24.110868 systemd[1]: Reloading finished in 1072 ms.
Feb 13 15:08:24.131891 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:08:24.135826 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:08:24.173057 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:08:24.284253 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:08:24.298518 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:08:24.301799 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:08:24.307015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:08:24.314098 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:08:24.321500 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:08:24.332500 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:08:24.335653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:08:24.335806 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:08:24.361900 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:08:24.373467 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:08:24.387870 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:08:24.390226 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:08:24.407833 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:08:24.414219 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:08:24.416677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:08:24.417460 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:08:24.558957 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:08:24.564165 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:08:24.568577 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:08:24.569323 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:08:24.573029 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:08:24.573617 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:08:24.577175 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:08:24.578280 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:08:24.603631 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:08:24.627716 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:08:24.632677 augenrules[1881]: No rules
Feb 13 15:08:24.634534 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:08:24.635123 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:08:24.642227 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:08:24.656562 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:08:24.662468 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:08:24.664798 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:08:24.664975 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:08:24.669532 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:08:24.688452 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:08:24.695261 lvm[1889]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:08:24.697716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:08:24.700410 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:08:24.758248 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:08:24.764181 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:08:24.778304 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:08:24.790744 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:08:24.794405 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:08:24.835570 lvm[1898]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:08:24.889686 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:08:24.906036 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:08:24.924882 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:08:25.063113 systemd-networkd[1856]: lo: Link UP
Feb 13 15:08:25.063879 systemd-networkd[1856]: lo: Gained carrier
Feb 13 15:08:25.066409 systemd-resolved[1858]: Positive Trust Anchors:
Feb 13 15:08:25.066458 systemd-resolved[1858]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:08:25.066524 systemd-resolved[1858]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:08:25.067888 systemd-networkd[1856]: Enumeration completed
Feb 13 15:08:25.068087 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:08:25.072062 systemd-networkd[1856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:08:25.072093 systemd-networkd[1856]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:08:25.077734 systemd-networkd[1856]: eth0: Link UP Feb 13 15:08:25.078771 systemd-networkd[1856]: eth0: Gained carrier Feb 13 15:08:25.078826 systemd-networkd[1856]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:08:25.079539 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 15:08:25.093847 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:08:25.096573 systemd-resolved[1858]: Defaulting to hostname 'linux'. Feb 13 15:08:25.099860 systemd-networkd[1856]: eth0: DHCPv4 address 172.31.21.44/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 15:08:25.111556 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:08:25.115521 systemd[1]: Reached target network.target - Network. Feb 13 15:08:25.117577 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:08:25.120292 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:08:25.122669 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:08:25.125257 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:08:25.129673 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:08:25.132503 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Feb 13 15:08:25.135180 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:08:25.137838 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:08:25.137906 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:08:25.139914 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:08:25.144384 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:08:25.149942 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:08:25.159962 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 15:08:25.163847 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:08:25.166704 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:08:25.183086 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:08:25.186826 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 15:08:25.193350 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:08:25.197095 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:08:25.201199 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:08:25.203540 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:08:25.205786 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:08:25.205851 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:08:25.214805 systemd[1]: Starting containerd.service - containerd container runtime... 
Feb 13 15:08:25.227526 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:08:25.233581 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:08:25.252424 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:08:25.258628 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:08:25.261370 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:08:25.264653 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:08:25.271519 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 15:08:25.277439 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:08:25.284236 jq[1921]: false Feb 13 15:08:25.290502 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 15:08:25.308601 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:08:25.319511 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:08:25.346976 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:08:25.352427 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:08:25.353775 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:08:25.358349 systemd[1]: Starting update-engine.service - Update Engine... 
Feb 13 15:08:25.368660 dbus-daemon[1920]: [system] SELinux support is enabled Feb 13 15:08:25.382665 dbus-daemon[1920]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1856 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 15:08:25.391682 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:08:25.395948 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:08:25.425981 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:08:25.427792 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:08:25.434851 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:08:25.436463 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 15:08:25.450182 jq[1932]: true Feb 13 15:08:25.477046 extend-filesystems[1922]: Found loop4 Feb 13 15:08:25.477046 extend-filesystems[1922]: Found loop5 Feb 13 15:08:25.477046 extend-filesystems[1922]: Found loop6 Feb 13 15:08:25.477046 extend-filesystems[1922]: Found loop7 Feb 13 15:08:25.477046 extend-filesystems[1922]: Found nvme0n1 Feb 13 15:08:25.477046 extend-filesystems[1922]: Found nvme0n1p1 Feb 13 15:08:25.477046 extend-filesystems[1922]: Found nvme0n1p2 Feb 13 15:08:25.477046 extend-filesystems[1922]: Found nvme0n1p3 Feb 13 15:08:25.477046 extend-filesystems[1922]: Found usr Feb 13 15:08:25.477046 extend-filesystems[1922]: Found nvme0n1p4 Feb 13 15:08:25.477046 extend-filesystems[1922]: Found nvme0n1p6 Feb 13 15:08:25.477046 extend-filesystems[1922]: Found nvme0n1p7 Feb 13 15:08:25.477046 extend-filesystems[1922]: Found nvme0n1p9 Feb 13 15:08:25.534931 extend-filesystems[1922]: Checking size of /dev/nvme0n1p9 Feb 13 15:08:25.532237 (ntainerd)[1943]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:08:25.539817 dbus-daemon[1920]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 15:08:25.537925 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:08:25.538057 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:08:25.543838 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:08:25.543880 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:08:25.557741 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Feb 13 15:08:25.603174 jq[1942]: true Feb 13 15:08:25.619197 update_engine[1931]: I20250213 15:08:25.606096 1931 main.cc:92] Flatcar Update Engine starting Feb 13 15:08:25.623218 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:08:25.624442 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:12 UTC 2025 (1): Starting Feb 13 15:08:25.624442 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:08:25.624442 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: ---------------------------------------------------- Feb 13 15:08:25.624442 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:08:25.624442 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:08:25.624442 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: corporation. Support and training for ntp-4 are Feb 13 15:08:25.624442 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: available at https://www.nwtime.org/support Feb 13 15:08:25.624442 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: ---------------------------------------------------- Feb 13 15:08:25.622438 ntpd[1924]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:12 UTC 2025 (1): Starting Feb 13 15:08:25.625413 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:08:25.636897 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: proto: precision = 0.096 usec (-23) Feb 13 15:08:25.636897 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: basedate set to 2025-02-01 Feb 13 15:08:25.636897 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: gps base set to 2025-02-02 (week 2352) Feb 13 15:08:25.622513 ntpd[1924]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:08:25.622533 ntpd[1924]: ---------------------------------------------------- Feb 13 15:08:25.622553 ntpd[1924]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:08:25.622571 ntpd[1924]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:08:25.622593 ntpd[1924]: corporation. Support and training for ntp-4 are Feb 13 15:08:25.622612 ntpd[1924]: available at https://www.nwtime.org/support Feb 13 15:08:25.622632 ntpd[1924]: ---------------------------------------------------- Feb 13 15:08:25.628631 ntpd[1924]: proto: precision = 0.096 usec (-23) Feb 13 15:08:25.630485 ntpd[1924]: basedate set to 2025-02-01 Feb 13 15:08:25.630743 ntpd[1924]: gps base set to 2025-02-02 (week 2352) Feb 13 15:08:25.639437 ntpd[1924]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:08:25.643291 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:08:25.643291 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:08:25.643291 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:08:25.643291 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: Listen normally on 3 eth0 172.31.21.44:123 Feb 13 15:08:25.643291 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: Listen normally on 4 lo [::1]:123 Feb 13 15:08:25.640201 ntpd[1924]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:08:25.640526 ntpd[1924]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:08:25.640593 ntpd[1924]: Listen normally on 3 eth0 172.31.21.44:123 Feb 13 15:08:25.640660 ntpd[1924]: Listen normally on 4 lo [::1]:123 Feb 13 15:08:25.640764 ntpd[1924]: bind(21) AF_INET6 fe80::4ac:a7ff:fe61:598d%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:08:25.645338 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: bind(21) AF_INET6 fe80::4ac:a7ff:fe61:598d%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:08:25.645338 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: unable to create socket on eth0 (5) for fe80::4ac:a7ff:fe61:598d%2#123 Feb 13 15:08:25.645338 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: failed to init interface for address fe80::4ac:a7ff:fe61:598d%2 Feb 13 15:08:25.645338 ntpd[1924]: 13 Feb 
15:08:25 ntpd[1924]: Listening on routing socket on fd #21 for interface updates Feb 13 15:08:25.648407 tar[1947]: linux-arm64/helm Feb 13 15:08:25.644029 ntpd[1924]: unable to create socket on eth0 (5) for fe80::4ac:a7ff:fe61:598d%2#123 Feb 13 15:08:25.655552 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:08:25.672777 update_engine[1931]: I20250213 15:08:25.667419 1931 update_check_scheduler.cc:74] Next update check in 8m32s Feb 13 15:08:25.644070 ntpd[1924]: failed to init interface for address fe80::4ac:a7ff:fe61:598d%2 Feb 13 15:08:25.644203 ntpd[1924]: Listening on routing socket on fd #21 for interface updates Feb 13 15:08:25.683587 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:08:25.687028 extend-filesystems[1922]: Resized partition /dev/nvme0n1p9 Feb 13 15:08:25.689638 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:08:25.691292 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:08:25.691292 ntpd[1924]: 13 Feb 15:08:25 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:08:25.689698 ntpd[1924]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:08:25.705729 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 15:08:25.711053 extend-filesystems[1973]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:08:25.758249 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 15:08:25.851971 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Feb 13 15:08:25.903229 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 15:08:25.914762 coreos-metadata[1919]: Feb 13 15:08:25.914 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:08:25.914762 coreos-metadata[1919]: Feb 13 15:08:25.914 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 15:08:25.917799 coreos-metadata[1919]: Feb 13 15:08:25.917 INFO Fetch successful Feb 13 15:08:25.917799 coreos-metadata[1919]: Feb 13 15:08:25.917 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 15:08:25.919200 coreos-metadata[1919]: Feb 13 15:08:25.918 INFO Fetch successful Feb 13 15:08:25.919200 coreos-metadata[1919]: Feb 13 15:08:25.918 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 15:08:25.919390 extend-filesystems[1973]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 15:08:25.919390 extend-filesystems[1973]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:08:25.919390 extend-filesystems[1973]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 15:08:25.934956 extend-filesystems[1922]: Resized filesystem in /dev/nvme0n1p9 Feb 13 15:08:25.942753 bash[1991]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:08:25.934345 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Feb 13 15:08:25.946532 coreos-metadata[1919]: Feb 13 15:08:25.938 INFO Fetch successful Feb 13 15:08:25.946532 coreos-metadata[1919]: Feb 13 15:08:25.938 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 15:08:25.946532 coreos-metadata[1919]: Feb 13 15:08:25.938 INFO Fetch successful Feb 13 15:08:25.946532 coreos-metadata[1919]: Feb 13 15:08:25.938 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 15:08:25.946532 coreos-metadata[1919]: Feb 13 15:08:25.938 INFO Fetch failed with 404: resource not found Feb 13 15:08:25.946532 coreos-metadata[1919]: Feb 13 15:08:25.938 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 15:08:25.946532 coreos-metadata[1919]: Feb 13 15:08:25.942 INFO Fetch successful Feb 13 15:08:25.946532 coreos-metadata[1919]: Feb 13 15:08:25.942 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 15:08:25.935080 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:08:25.944610 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Feb 13 15:08:25.960767 coreos-metadata[1919]: Feb 13 15:08:25.956 INFO Fetch successful Feb 13 15:08:25.960767 coreos-metadata[1919]: Feb 13 15:08:25.956 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 15:08:25.966236 coreos-metadata[1919]: Feb 13 15:08:25.964 INFO Fetch successful Feb 13 15:08:25.966236 coreos-metadata[1919]: Feb 13 15:08:25.964 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 15:08:25.966236 coreos-metadata[1919]: Feb 13 15:08:25.964 INFO Fetch successful Feb 13 15:08:25.966236 coreos-metadata[1919]: Feb 13 15:08:25.964 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 15:08:25.972864 coreos-metadata[1919]: Feb 13 15:08:25.969 INFO Fetch successful Feb 13 15:08:25.969665 systemd[1]: Starting sshkeys.service... Feb 13 15:08:26.061586 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:08:26.089346 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:08:26.145185 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1737) Feb 13 15:08:26.165758 systemd-logind[1929]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:08:26.165849 systemd-logind[1929]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 15:08:26.167174 systemd-logind[1929]: New seat seat0. Feb 13 15:08:26.174317 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:08:26.183091 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:08:26.194841 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Feb 13 15:08:26.304567 coreos-metadata[2007]: Feb 13 15:08:26.304 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:08:26.307691 coreos-metadata[2007]: Feb 13 15:08:26.306 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 15:08:26.307691 coreos-metadata[2007]: Feb 13 15:08:26.307 INFO Fetch successful Feb 13 15:08:26.307691 coreos-metadata[2007]: Feb 13 15:08:26.307 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 15:08:26.315102 coreos-metadata[2007]: Feb 13 15:08:26.312 INFO Fetch successful Feb 13 15:08:26.320809 unknown[2007]: wrote ssh authorized keys file for user: core Feb 13 15:08:26.403449 update-ssh-keys[2040]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:08:26.404045 systemd-networkd[1856]: eth0: Gained IPv6LL Feb 13 15:08:26.409031 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:08:26.418253 systemd[1]: Finished sshkeys.service. Feb 13 15:08:26.437025 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:08:26.442229 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:08:26.491195 dbus-daemon[1920]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 15:08:26.502377 dbus-daemon[1920]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1959 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 15:08:26.494065 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 15:08:26.506556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:08:26.512721 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:08:26.516240 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Feb 13 15:08:26.572727 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 15:08:26.635516 polkitd[2063]: Started polkitd version 121 Feb 13 15:08:26.713261 containerd[1943]: time="2025-02-13T15:08:26.710988014Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:08:26.718112 polkitd[2063]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 15:08:26.718599 polkitd[2063]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 15:08:26.729959 polkitd[2063]: Finished loading, compiling and executing 2 rules Feb 13 15:08:26.734231 dbus-daemon[1920]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 15:08:26.734625 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 15:08:26.743983 polkitd[2063]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 15:08:26.792318 locksmithd[1972]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:08:26.822465 systemd-hostnamed[1959]: Hostname set to (transient) Feb 13 15:08:26.825267 systemd-resolved[1858]: System hostname changed to 'ip-172-31-21-44'. Feb 13 15:08:26.841268 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:08:26.871697 amazon-ssm-agent[2052]: Initializing new seelog logger Feb 13 15:08:26.872610 amazon-ssm-agent[2052]: New Seelog Logger Creation Complete Feb 13 15:08:26.872610 amazon-ssm-agent[2052]: 2025/02/13 15:08:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:08:26.872610 amazon-ssm-agent[2052]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:08:26.872864 amazon-ssm-agent[2052]: 2025/02/13 15:08:26 processing appconfig overrides Feb 13 15:08:26.882226 amazon-ssm-agent[2052]: 2025/02/13 15:08:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 15:08:26.882226 amazon-ssm-agent[2052]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:08:26.882226 amazon-ssm-agent[2052]: 2025/02/13 15:08:26 processing appconfig overrides Feb 13 15:08:26.882226 amazon-ssm-agent[2052]: 2025/02/13 15:08:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:08:26.882226 amazon-ssm-agent[2052]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:08:26.882226 amazon-ssm-agent[2052]: 2025/02/13 15:08:26 processing appconfig overrides Feb 13 15:08:26.883250 amazon-ssm-agent[2052]: 2025-02-13 15:08:26 INFO Proxy environment variables: Feb 13 15:08:26.889086 amazon-ssm-agent[2052]: 2025/02/13 15:08:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:08:26.889086 amazon-ssm-agent[2052]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:08:26.889086 amazon-ssm-agent[2052]: 2025/02/13 15:08:26 processing appconfig overrides Feb 13 15:08:26.951873 containerd[1943]: time="2025-02-13T15:08:26.948927375Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:26.958262 containerd[1943]: time="2025-02-13T15:08:26.956926287Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:08:26.958262 containerd[1943]: time="2025-02-13T15:08:26.957003315Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:08:26.958262 containerd[1943]: time="2025-02-13T15:08:26.957041343Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 15:08:26.958262 containerd[1943]: time="2025-02-13T15:08:26.957419547Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:08:26.958262 containerd[1943]: time="2025-02-13T15:08:26.957494535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:26.958262 containerd[1943]: time="2025-02-13T15:08:26.957712839Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:08:26.958262 containerd[1943]: time="2025-02-13T15:08:26.957750063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:26.965063 containerd[1943]: time="2025-02-13T15:08:26.961318323Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:08:26.965063 containerd[1943]: time="2025-02-13T15:08:26.961381455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:26.965063 containerd[1943]: time="2025-02-13T15:08:26.961437891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:08:26.965063 containerd[1943]: time="2025-02-13T15:08:26.961466979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:26.965063 containerd[1943]: time="2025-02-13T15:08:26.961786623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:08:26.965063 containerd[1943]: time="2025-02-13T15:08:26.964052751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:08:26.965063 containerd[1943]: time="2025-02-13T15:08:26.964588263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:08:26.965063 containerd[1943]: time="2025-02-13T15:08:26.964657755Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:08:26.965063 containerd[1943]: time="2025-02-13T15:08:26.964968051Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:08:26.965576 containerd[1943]: time="2025-02-13T15:08:26.965094591Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:08:26.978062 containerd[1943]: time="2025-02-13T15:08:26.977941539Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:08:26.978258 containerd[1943]: time="2025-02-13T15:08:26.978099555Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:08:26.978258 containerd[1943]: time="2025-02-13T15:08:26.978160023Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:08:26.978258 containerd[1943]: time="2025-02-13T15:08:26.978200835Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:08:26.978258 containerd[1943]: time="2025-02-13T15:08:26.978240195Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1
Feb 13 15:08:26.978765 containerd[1943]: time="2025-02-13T15:08:26.978538107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:08:26.986904 amazon-ssm-agent[2052]: 2025-02-13 15:08:26 INFO no_proxy:
Feb 13 15:08:26.992191 containerd[1943]: time="2025-02-13T15:08:26.991764255Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:08:26.992191 containerd[1943]: time="2025-02-13T15:08:26.992103123Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:08:26.998973 containerd[1943]: time="2025-02-13T15:08:26.994403475Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:08:26.998973 containerd[1943]: time="2025-02-13T15:08:26.995187507Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:08:26.998973 containerd[1943]: time="2025-02-13T15:08:26.995234175Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:08:26.998973 containerd[1943]: time="2025-02-13T15:08:26.995267451Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:08:26.998973 containerd[1943]: time="2025-02-13T15:08:26.995299047Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:08:26.998973 containerd[1943]: time="2025-02-13T15:08:26.995333931Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:08:26.998973 containerd[1943]: time="2025-02-13T15:08:26.995367603Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:08:26.998973 containerd[1943]: time="2025-02-13T15:08:26.995400771Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:08:26.998973 containerd[1943]: time="2025-02-13T15:08:26.995431083Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:08:26.998973 containerd[1943]: time="2025-02-13T15:08:26.995457903Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:08:26.998973 containerd[1943]: time="2025-02-13T15:08:26.995498583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:08:26.998973 containerd[1943]: time="2025-02-13T15:08:26.995531079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:08:26.998973 containerd[1943]: time="2025-02-13T15:08:26.995561451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:08:26.998973 containerd[1943]: time="2025-02-13T15:08:26.995594487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:08:26.999697 containerd[1943]: time="2025-02-13T15:08:26.995627703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:08:26.999697 containerd[1943]: time="2025-02-13T15:08:26.995660355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:08:26.999697 containerd[1943]: time="2025-02-13T15:08:26.995708343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:08:26.999697 containerd[1943]: time="2025-02-13T15:08:26.995745063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:08:26.999697 containerd[1943]: time="2025-02-13T15:08:26.995782551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:08:26.999697 containerd[1943]: time="2025-02-13T15:08:26.995822451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:08:26.999697 containerd[1943]: time="2025-02-13T15:08:26.995854635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:08:26.999697 containerd[1943]: time="2025-02-13T15:08:26.995886123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:08:26.999697 containerd[1943]: time="2025-02-13T15:08:26.995927583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:08:26.999697 containerd[1943]: time="2025-02-13T15:08:26.995968443Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:08:26.999697 containerd[1943]: time="2025-02-13T15:08:26.996025923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:08:26.999697 containerd[1943]: time="2025-02-13T15:08:26.996060735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:08:26.999697 containerd[1943]: time="2025-02-13T15:08:26.996088155Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:08:26.999697 containerd[1943]: time="2025-02-13T15:08:26.998611167Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:08:27.010397 containerd[1943]: time="2025-02-13T15:08:26.998794539Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:08:27.010397 containerd[1943]: time="2025-02-13T15:08:26.998822139Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:08:27.010397 containerd[1943]: time="2025-02-13T15:08:26.998851023Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:08:27.010397 containerd[1943]: time="2025-02-13T15:08:26.998877879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:08:27.010397 containerd[1943]: time="2025-02-13T15:08:26.998925783Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:08:27.010397 containerd[1943]: time="2025-02-13T15:08:26.998957091Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:08:27.010397 containerd[1943]: time="2025-02-13T15:08:26.998984271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:08:27.010741 containerd[1943]: time="2025-02-13T15:08:26.999515847Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:08:27.010741 containerd[1943]: time="2025-02-13T15:08:26.999616371Z" level=info msg="Connect containerd service"
Feb 13 15:08:27.010741 containerd[1943]: time="2025-02-13T15:08:26.999687819Z" level=info msg="using legacy CRI server"
Feb 13 15:08:27.010741 containerd[1943]: time="2025-02-13T15:08:26.999705963Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:08:27.010741 containerd[1943]: time="2025-02-13T15:08:26.999958479Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:08:27.010741 containerd[1943]: time="2025-02-13T15:08:27.004864091Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:08:27.010741 containerd[1943]: time="2025-02-13T15:08:27.007282235Z" level=info msg="Start subscribing containerd event"
Feb 13 15:08:27.010741 containerd[1943]: time="2025-02-13T15:08:27.007447415Z" level=info msg="Start recovering state"
Feb 13 15:08:27.010741 containerd[1943]: time="2025-02-13T15:08:27.007645979Z" level=info msg="Start event monitor"
Feb 13 15:08:27.010741 containerd[1943]: time="2025-02-13T15:08:27.007674035Z" level=info msg="Start snapshots syncer"
Feb 13 15:08:27.010741 containerd[1943]: time="2025-02-13T15:08:27.007700027Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:08:27.010741 containerd[1943]: time="2025-02-13T15:08:27.007719251Z" level=info msg="Start streaming server"
Feb 13 15:08:27.029433 containerd[1943]: time="2025-02-13T15:08:27.013396007Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:08:27.029433 containerd[1943]: time="2025-02-13T15:08:27.013534847Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:08:27.029433 containerd[1943]: time="2025-02-13T15:08:27.013674623Z" level=info msg="containerd successfully booted in 0.311697s"
Feb 13 15:08:27.013858 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:08:27.087860 amazon-ssm-agent[2052]: 2025-02-13 15:08:26 INFO https_proxy:
Feb 13 15:08:27.193253 amazon-ssm-agent[2052]: 2025-02-13 15:08:26 INFO http_proxy:
Feb 13 15:08:27.295657 amazon-ssm-agent[2052]: 2025-02-13 15:08:26 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 15:08:27.394593 amazon-ssm-agent[2052]: 2025-02-13 15:08:26 INFO Checking if agent identity type EC2 can be assumed
Feb 13 15:08:27.493393 amazon-ssm-agent[2052]: 2025-02-13 15:08:27 INFO Agent will take identity from EC2
Feb 13 15:08:27.592805 amazon-ssm-agent[2052]: 2025-02-13 15:08:27 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:08:27.692091 amazon-ssm-agent[2052]: 2025-02-13 15:08:27 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:08:27.791457 amazon-ssm-agent[2052]: 2025-02-13 15:08:27 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:08:27.891188 amazon-ssm-agent[2052]: 2025-02-13 15:08:27 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 15:08:27.991318 amazon-ssm-agent[2052]: 2025-02-13 15:08:27 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Feb 13 15:08:28.092460 amazon-ssm-agent[2052]: 2025-02-13 15:08:27 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 15:08:28.096521 tar[1947]: linux-arm64/LICENSE
Feb 13 15:08:28.097663 tar[1947]: linux-arm64/README.md
Feb 13 15:08:28.148311 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 15:08:28.192271 amazon-ssm-agent[2052]: 2025-02-13 15:08:27 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 15:08:28.199185 sshd_keygen[1966]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:08:28.272255 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:08:28.286180 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:08:28.299204 amazon-ssm-agent[2052]: 2025-02-13 15:08:27 INFO [Registrar] Starting registrar module
Feb 13 15:08:28.302075 systemd[1]: Started sshd@0-172.31.21.44:22-139.178.68.195:34068.service - OpenSSH per-connection server daemon (139.178.68.195:34068).
Feb 13 15:08:28.343933 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:08:28.346295 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:08:28.372865 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:08:28.393004 amazon-ssm-agent[2052]: 2025-02-13 15:08:27 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 15:08:28.393004 amazon-ssm-agent[2052]: 2025-02-13 15:08:28 INFO [EC2Identity] EC2 registration was successful.
Feb 13 15:08:28.393004 amazon-ssm-agent[2052]: 2025-02-13 15:08:28 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 15:08:28.393004 amazon-ssm-agent[2052]: 2025-02-13 15:08:28 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 15:08:28.393004 amazon-ssm-agent[2052]: 2025-02-13 15:08:28 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 15:08:28.399300 amazon-ssm-agent[2052]: 2025-02-13 15:08:28 INFO [CredentialRefresher] Next credential rotation will be in 32.26664932126667 minutes
Feb 13 15:08:28.418248 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:08:28.437827 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:08:28.454431 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:08:28.457276 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:08:28.572439 sshd[2159]: Accepted publickey for core from 139.178.68.195 port 34068 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:08:28.581008 sshd-session[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:08:28.601524 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:08:28.613851 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:08:28.623667 ntpd[1924]: Listen normally on 6 eth0 [fe80::4ac:a7ff:fe61:598d%2]:123
Feb 13 15:08:28.631784 ntpd[1924]: 13 Feb 15:08:28 ntpd[1924]: Listen normally on 6 eth0 [fe80::4ac:a7ff:fe61:598d%2]:123
Feb 13 15:08:28.651228 systemd-logind[1929]: New session 1 of user core.
Feb 13 15:08:28.671708 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:08:28.690995 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:08:28.709437 (systemd)[2170]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:08:28.718789 systemd-logind[1929]: New session c1 of user core.
Feb 13 15:08:28.796657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:08:28.803754 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:08:28.815842 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:08:29.078942 systemd[2170]: Queued start job for default target default.target.
Feb 13 15:08:29.091871 systemd[2170]: Created slice app.slice - User Application Slice.
Feb 13 15:08:29.092015 systemd[2170]: Reached target paths.target - Paths.
Feb 13 15:08:29.092195 systemd[2170]: Reached target timers.target - Timers.
Feb 13 15:08:29.103713 systemd[2170]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:08:29.130051 systemd[2170]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:08:29.130279 systemd[2170]: Reached target sockets.target - Sockets.
Feb 13 15:08:29.130410 systemd[2170]: Reached target basic.target - Basic System.
Feb 13 15:08:29.130510 systemd[2170]: Reached target default.target - Main User Target.
Feb 13 15:08:29.130582 systemd[2170]: Startup finished in 388ms.
Feb 13 15:08:29.130888 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:08:29.144535 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:08:29.151557 systemd[1]: Startup finished in 1.120s (kernel) + 9.987s (initrd) + 10.423s (userspace) = 21.531s.
Feb 13 15:08:29.325902 systemd[1]: Started sshd@1-172.31.21.44:22-139.178.68.195:41808.service - OpenSSH per-connection server daemon (139.178.68.195:41808).
Feb 13 15:08:29.426510 amazon-ssm-agent[2052]: 2025-02-13 15:08:29 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 15:08:29.527174 amazon-ssm-agent[2052]: 2025-02-13 15:08:29 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2198) started
Feb 13 15:08:29.555763 sshd[2195]: Accepted publickey for core from 139.178.68.195 port 41808 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:08:29.562652 sshd-session[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:08:29.584003 systemd-logind[1929]: New session 2 of user core.
Feb 13 15:08:29.592442 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:08:29.628294 amazon-ssm-agent[2052]: 2025-02-13 15:08:29 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 15:08:29.742927 sshd[2204]: Connection closed by 139.178.68.195 port 41808
Feb 13 15:08:29.742775 sshd-session[2195]: pam_unix(sshd:session): session closed for user core
Feb 13 15:08:29.750501 systemd-logind[1929]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:08:29.752772 systemd[1]: sshd@1-172.31.21.44:22-139.178.68.195:41808.service: Deactivated successfully.
Feb 13 15:08:29.760994 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:08:29.801521 systemd[1]: Started sshd@2-172.31.21.44:22-139.178.68.195:41810.service - OpenSSH per-connection server daemon (139.178.68.195:41810).
Feb 13 15:08:29.804638 systemd-logind[1929]: Removed session 2.
Feb 13 15:08:29.908071 kubelet[2180]: E0213 15:08:29.907978 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:08:29.912551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:08:29.912937 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:08:29.914695 systemd[1]: kubelet.service: Consumed 1.415s CPU time, 235.6M memory peak.
Feb 13 15:08:30.017405 sshd[2213]: Accepted publickey for core from 139.178.68.195 port 41810 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:08:30.019990 sshd-session[2213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:08:30.030353 systemd-logind[1929]: New session 3 of user core.
Feb 13 15:08:30.039474 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:08:30.165032 sshd[2218]: Connection closed by 139.178.68.195 port 41810
Feb 13 15:08:30.164883 sshd-session[2213]: pam_unix(sshd:session): session closed for user core
Feb 13 15:08:30.171235 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:08:30.174820 systemd[1]: sshd@2-172.31.21.44:22-139.178.68.195:41810.service: Deactivated successfully.
Feb 13 15:08:30.180644 systemd-logind[1929]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:08:30.183469 systemd-logind[1929]: Removed session 3.
Feb 13 15:08:30.222876 systemd[1]: Started sshd@3-172.31.21.44:22-139.178.68.195:41824.service - OpenSSH per-connection server daemon (139.178.68.195:41824).
Feb 13 15:08:30.411767 sshd[2224]: Accepted publickey for core from 139.178.68.195 port 41824 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:08:30.415415 sshd-session[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:08:30.425077 systemd-logind[1929]: New session 4 of user core.
Feb 13 15:08:30.437681 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:08:30.567896 sshd[2226]: Connection closed by 139.178.68.195 port 41824
Feb 13 15:08:30.568995 sshd-session[2224]: pam_unix(sshd:session): session closed for user core
Feb 13 15:08:30.577963 systemd[1]: sshd@3-172.31.21.44:22-139.178.68.195:41824.service: Deactivated successfully.
Feb 13 15:08:30.582808 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:08:30.584960 systemd-logind[1929]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:08:30.589184 systemd-logind[1929]: Removed session 4.
Feb 13 15:08:30.618359 systemd[1]: Started sshd@4-172.31.21.44:22-139.178.68.195:41832.service - OpenSSH per-connection server daemon (139.178.68.195:41832).
Feb 13 15:08:30.821321 sshd[2232]: Accepted publickey for core from 139.178.68.195 port 41832 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs
Feb 13 15:08:30.825360 sshd-session[2232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:08:30.836362 systemd-logind[1929]: New session 5 of user core.
Feb 13 15:08:30.847547 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:08:30.975240 sudo[2235]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:08:30.976821 sudo[2235]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:08:31.720773 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 15:08:31.733237 (dockerd)[2254]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 15:08:32.140182 dockerd[2254]: time="2025-02-13T15:08:32.140033081Z" level=info msg="Starting up"
Feb 13 15:08:32.376507 dockerd[2254]: time="2025-02-13T15:08:32.375901830Z" level=info msg="Loading containers: start."
Feb 13 15:08:32.198230 systemd-resolved[1858]: Clock change detected. Flushing caches.
Feb 13 15:08:32.213801 systemd-journald[1494]: Time jumped backwards, rotating.
Feb 13 15:08:32.275770 kernel: Initializing XFRM netlink socket
Feb 13 15:08:32.320282 (udev-worker)[2277]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:08:32.447655 systemd-networkd[1856]: docker0: Link UP
Feb 13 15:08:32.498260 dockerd[2254]: time="2025-02-13T15:08:32.497596285Z" level=info msg="Loading containers: done."
Feb 13 15:08:32.526766 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3763875092-merged.mount: Deactivated successfully.
Feb 13 15:08:32.529693 dockerd[2254]: time="2025-02-13T15:08:32.529121677Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:08:32.529693 dockerd[2254]: time="2025-02-13T15:08:32.529295701Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Feb 13 15:08:32.529693 dockerd[2254]: time="2025-02-13T15:08:32.529593529Z" level=info msg="Daemon has completed initialization"
Feb 13 15:08:32.610046 dockerd[2254]: time="2025-02-13T15:08:32.609934154Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:08:32.611265 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:08:33.777209 containerd[1943]: time="2025-02-13T15:08:33.777138567Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\""
Feb 13 15:08:34.565779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2815758476.mount: Deactivated successfully.
Feb 13 15:08:37.744003 containerd[1943]: time="2025-02-13T15:08:37.743918719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:37.746266 containerd[1943]: time="2025-02-13T15:08:37.746177635Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375"
Feb 13 15:08:37.747322 containerd[1943]: time="2025-02-13T15:08:37.746773735Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:37.754420 containerd[1943]: time="2025-02-13T15:08:37.754289659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:37.756969 containerd[1943]: time="2025-02-13T15:08:37.756664987Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 3.979436264s"
Feb 13 15:08:37.756969 containerd[1943]: time="2025-02-13T15:08:37.756755947Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\""
Feb 13 15:08:37.758578 containerd[1943]: time="2025-02-13T15:08:37.757897351Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\""
Feb 13 15:08:39.565491 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:08:39.575996 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:08:39.949111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:08:39.965635 (kubelet)[2510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:08:40.071857 kubelet[2510]: E0213 15:08:40.071499 2510 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:08:40.081940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:08:40.082284 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:08:40.084854 systemd[1]: kubelet.service: Consumed 349ms CPU time, 96.5M memory peak.
Feb 13 15:08:40.494817 containerd[1943]: time="2025-02-13T15:08:40.494664609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:40.497010 containerd[1943]: time="2025-02-13T15:08:40.496866981Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773"
Feb 13 15:08:40.497764 containerd[1943]: time="2025-02-13T15:08:40.497457057Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:40.504282 containerd[1943]: time="2025-02-13T15:08:40.504196653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:40.506757 containerd[1943]: time="2025-02-13T15:08:40.506552493Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 2.74856681s"
Feb 13 15:08:40.506757 containerd[1943]: time="2025-02-13T15:08:40.506618145Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\""
Feb 13 15:08:40.507852 containerd[1943]: time="2025-02-13T15:08:40.507707445Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\""
Feb 13 15:08:42.579330 containerd[1943]: time="2025-02-13T15:08:42.579063167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:42.581176 containerd[1943]: time="2025-02-13T15:08:42.581105399Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540"
Feb 13 15:08:42.582190 containerd[1943]: time="2025-02-13T15:08:42.581787587Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:42.587293 containerd[1943]: time="2025-02-13T15:08:42.587236487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:42.589788 containerd[1943]: time="2025-02-13T15:08:42.589528355Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 2.081529334s"
Feb 13 15:08:42.589788 containerd[1943]: time="2025-02-13T15:08:42.589595483Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\""
Feb 13 15:08:42.590865 containerd[1943]: time="2025-02-13T15:08:42.590483387Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 15:08:44.587409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2301394565.mount: Deactivated successfully.
Feb 13 15:08:45.179379 containerd[1943]: time="2025-02-13T15:08:45.179273556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:45.181301 containerd[1943]: time="2025-02-13T15:08:45.181197672Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256"
Feb 13 15:08:45.183830 containerd[1943]: time="2025-02-13T15:08:45.183745512Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:45.188783 containerd[1943]: time="2025-02-13T15:08:45.188656428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:45.190899 containerd[1943]: time="2025-02-13T15:08:45.190097712Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 2.599563157s"
Feb 13 15:08:45.190899 containerd[1943]: time="2025-02-13T15:08:45.190153608Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\""
Feb 13 15:08:45.191135 containerd[1943]: time="2025-02-13T15:08:45.191076036Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 15:08:45.903416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1795008204.mount: Deactivated successfully.
Feb 13 15:08:47.635564 containerd[1943]: time="2025-02-13T15:08:47.635454112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:47.637991 containerd[1943]: time="2025-02-13T15:08:47.637880020Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Feb 13 15:08:47.640259 containerd[1943]: time="2025-02-13T15:08:47.640178260Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:47.646741 containerd[1943]: time="2025-02-13T15:08:47.646600576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:47.649319 containerd[1943]: time="2025-02-13T15:08:47.648907720Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.457765108s"
Feb 13 15:08:47.649319 containerd[1943]: time="2025-02-13T15:08:47.648972496Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 15:08:47.649998 containerd[1943]: time="2025-02-13T15:08:47.649933036Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 15:08:48.187483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount917841789.mount: Deactivated successfully.
Feb 13 15:08:48.201931 containerd[1943]: time="2025-02-13T15:08:48.201835263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:48.204050 containerd[1943]: time="2025-02-13T15:08:48.203951991Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Feb 13 15:08:48.206628 containerd[1943]: time="2025-02-13T15:08:48.206530935Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:48.212098 containerd[1943]: time="2025-02-13T15:08:48.211988595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:08:48.214017 containerd[1943]: time="2025-02-13T15:08:48.213787155Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 563.682555ms"
Feb 13 15:08:48.214017 containerd[1943]: time="2025-02-13T15:08:48.213855567Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Feb 13 15:08:48.215178 containerd[1943]: time="2025-02-13T15:08:48.214822023Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Feb 13 15:08:48.808025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount177210480.mount: Deactivated successfully.
Feb 13 15:08:50.315272 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 15:08:50.324232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:08:50.624248 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:08:50.644322 (kubelet)[2633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:08:50.715787 kubelet[2633]: E0213 15:08:50.714564 2633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:08:50.718743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:08:50.719103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:08:50.720099 systemd[1]: kubelet.service: Consumed 302ms CPU time, 94.3M memory peak.
Feb 13 15:08:52.734775 containerd[1943]: time="2025-02-13T15:08:52.734675998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:52.737494 containerd[1943]: time="2025-02-13T15:08:52.737408410Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Feb 13 15:08:52.739704 containerd[1943]: time="2025-02-13T15:08:52.739627786Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:52.746018 containerd[1943]: time="2025-02-13T15:08:52.745923310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:08:52.748952 containerd[1943]: time="2025-02-13T15:08:52.748706062Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.533819215s" Feb 13 15:08:52.748952 containerd[1943]: time="2025-02-13T15:08:52.748815310Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 15:08:56.434440 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 15:08:58.312487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:08:58.312866 systemd[1]: kubelet.service: Consumed 302ms CPU time, 94.3M memory peak. Feb 13 15:08:58.325301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:08:58.402798 systemd[1]: Reload requested from client PID 2675 ('systemctl') (unit session-5.scope)... Feb 13 15:08:58.402830 systemd[1]: Reloading... Feb 13 15:08:58.685841 zram_generator::config[2720]: No configuration found. Feb 13 15:08:58.920739 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:08:59.145749 systemd[1]: Reloading finished in 742 ms. Feb 13 15:08:59.256244 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:08:59.258976 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:08:59.260793 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:08:59.260983 systemd[1]: kubelet.service: Consumed 222ms CPU time, 81.7M memory peak. Feb 13 15:08:59.272407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:08:59.580115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:08:59.591279 (kubelet)[2785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:08:59.668036 kubelet[2785]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:08:59.670782 kubelet[2785]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:08:59.670782 kubelet[2785]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:08:59.670782 kubelet[2785]: I0213 15:08:59.669020 2785 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:09:00.569124 kubelet[2785]: I0213 15:09:00.569052 2785 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:09:00.569124 kubelet[2785]: I0213 15:09:00.569104 2785 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:09:00.569566 kubelet[2785]: I0213 15:09:00.569523 2785 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:09:00.776573 kubelet[2785]: E0213 15:09:00.776466 2785 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.21.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.44:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:00.779201 kubelet[2785]: I0213 15:09:00.778886 2785 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:09:00.795043 kubelet[2785]: E0213 15:09:00.794980 2785 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:09:00.795256 kubelet[2785]: I0213 15:09:00.795230 2785 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:09:00.803787 kubelet[2785]: I0213 15:09:00.802313 2785 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:09:00.803787 kubelet[2785]: I0213 15:09:00.802606 2785 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:09:00.803787 kubelet[2785]: I0213 15:09:00.802925 2785 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:09:00.803787 kubelet[2785]: I0213 15:09:00.802971 2785 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-44","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":2} Feb 13 15:09:00.804189 kubelet[2785]: I0213 15:09:00.803332 2785 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:09:00.804189 kubelet[2785]: I0213 15:09:00.803352 2785 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:09:00.804189 kubelet[2785]: I0213 15:09:00.803562 2785 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:09:00.809665 kubelet[2785]: I0213 15:09:00.809131 2785 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:09:00.809665 kubelet[2785]: I0213 15:09:00.809187 2785 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:09:00.809665 kubelet[2785]: I0213 15:09:00.809234 2785 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:09:00.809665 kubelet[2785]: I0213 15:09:00.809254 2785 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:09:00.814972 kubelet[2785]: W0213 15:09:00.814873 2785 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-44&limit=500&resourceVersion=0": dial tcp 172.31.21.44:6443: connect: connection refused Feb 13 15:09:00.815212 kubelet[2785]: E0213 15:09:00.814991 2785 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-44&limit=500&resourceVersion=0\": dial tcp 172.31.21.44:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:00.817369 kubelet[2785]: W0213 15:09:00.816990 2785 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.44:6443: connect: connection refused Feb 13 
15:09:00.817369 kubelet[2785]: E0213 15:09:00.817117 2785 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.44:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:00.817369 kubelet[2785]: I0213 15:09:00.817298 2785 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:09:00.822289 kubelet[2785]: I0213 15:09:00.822132 2785 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:09:00.823995 kubelet[2785]: W0213 15:09:00.823942 2785 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:09:00.829095 kubelet[2785]: I0213 15:09:00.829042 2785 server.go:1269] "Started kubelet" Feb 13 15:09:00.832626 kubelet[2785]: I0213 15:09:00.832450 2785 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:09:00.834402 kubelet[2785]: I0213 15:09:00.832685 2785 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:09:00.835626 kubelet[2785]: I0213 15:09:00.835545 2785 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:09:00.836765 kubelet[2785]: I0213 15:09:00.836696 2785 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:09:00.841850 kubelet[2785]: I0213 15:09:00.841540 2785 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:09:00.844293 kubelet[2785]: E0213 15:09:00.842065 2785 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.44:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.44:6443: 
connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-44.1823cd0def58f0e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-44,UID:ip-172-31-21-44,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-44,},FirstTimestamp:2025-02-13 15:09:00.828995814 +0000 UTC m=+1.230903559,LastTimestamp:2025-02-13 15:09:00.828995814 +0000 UTC m=+1.230903559,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-44,}" Feb 13 15:09:00.851644 kubelet[2785]: E0213 15:09:00.850105 2785 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:09:00.851644 kubelet[2785]: I0213 15:09:00.850418 2785 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:09:00.857912 kubelet[2785]: E0213 15:09:00.857823 2785 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-21-44\" not found" Feb 13 15:09:00.858097 kubelet[2785]: I0213 15:09:00.857959 2785 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:09:00.858397 kubelet[2785]: I0213 15:09:00.858336 2785 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:09:00.858478 kubelet[2785]: I0213 15:09:00.858450 2785 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:09:00.859710 kubelet[2785]: W0213 15:09:00.859549 2785 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.44:6443: connect: connection refused Feb 13 
15:09:00.860016 kubelet[2785]: E0213 15:09:00.859711 2785 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.44:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:00.860016 kubelet[2785]: E0213 15:09:00.859948 2785 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-44?timeout=10s\": dial tcp 172.31.21.44:6443: connect: connection refused" interval="200ms" Feb 13 15:09:00.861244 kubelet[2785]: I0213 15:09:00.861187 2785 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:09:00.861397 kubelet[2785]: I0213 15:09:00.861356 2785 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:09:00.865665 kubelet[2785]: I0213 15:09:00.865606 2785 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:09:00.880864 kubelet[2785]: I0213 15:09:00.880800 2785 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:09:00.884928 kubelet[2785]: I0213 15:09:00.884888 2785 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:09:00.885882 kubelet[2785]: I0213 15:09:00.885085 2785 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:09:00.885882 kubelet[2785]: I0213 15:09:00.885123 2785 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:09:00.885882 kubelet[2785]: E0213 15:09:00.885193 2785 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:09:00.903714 kubelet[2785]: W0213 15:09:00.903629 2785 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.44:6443: connect: connection refused Feb 13 15:09:00.903714 kubelet[2785]: E0213 15:09:00.903754 2785 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.44:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:00.915053 kubelet[2785]: I0213 15:09:00.914972 2785 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:09:00.915053 kubelet[2785]: I0213 15:09:00.915004 2785 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:09:00.915053 kubelet[2785]: I0213 15:09:00.915037 2785 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:09:00.921537 kubelet[2785]: I0213 15:09:00.921481 2785 policy_none.go:49] "None policy: Start" Feb 13 15:09:00.923410 kubelet[2785]: I0213 15:09:00.923382 2785 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:09:00.924237 kubelet[2785]: I0213 15:09:00.923695 2785 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:09:00.942522 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Feb 13 15:09:00.958162 kubelet[2785]: E0213 15:09:00.958091 2785 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-21-44\" not found" Feb 13 15:09:00.958893 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:09:00.967038 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:09:00.982966 kubelet[2785]: I0213 15:09:00.982931 2785 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:09:00.983452 kubelet[2785]: I0213 15:09:00.983425 2785 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:09:00.983663 kubelet[2785]: I0213 15:09:00.983590 2785 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:09:00.984968 kubelet[2785]: I0213 15:09:00.984924 2785 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:09:00.993232 kubelet[2785]: E0213 15:09:00.993076 2785 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-44\" not found" Feb 13 15:09:01.012562 systemd[1]: Created slice kubepods-burstable-pode6feef02f1c57a271f0f27d322c0dd7a.slice - libcontainer container kubepods-burstable-pode6feef02f1c57a271f0f27d322c0dd7a.slice. Feb 13 15:09:01.041617 systemd[1]: Created slice kubepods-burstable-podc9bad5f6b56fcfe55193b55accad4bb9.slice - libcontainer container kubepods-burstable-podc9bad5f6b56fcfe55193b55accad4bb9.slice. Feb 13 15:09:01.050635 systemd[1]: Created slice kubepods-burstable-podb80e0c03e2b025a13edae0e1202dc89f.slice - libcontainer container kubepods-burstable-podb80e0c03e2b025a13edae0e1202dc89f.slice. 
Feb 13 15:09:01.060813 kubelet[2785]: E0213 15:09:01.060675 2785 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-44?timeout=10s\": dial tcp 172.31.21.44:6443: connect: connection refused" interval="400ms" Feb 13 15:09:01.086776 kubelet[2785]: I0213 15:09:01.086079 2785 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-44" Feb 13 15:09:01.086776 kubelet[2785]: E0213 15:09:01.086599 2785 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.21.44:6443/api/v1/nodes\": dial tcp 172.31.21.44:6443: connect: connection refused" node="ip-172-31-21-44" Feb 13 15:09:01.161377 kubelet[2785]: I0213 15:09:01.161232 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9bad5f6b56fcfe55193b55accad4bb9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-44\" (UID: \"c9bad5f6b56fcfe55193b55accad4bb9\") " pod="kube-system/kube-controller-manager-ip-172-31-21-44" Feb 13 15:09:01.161377 kubelet[2785]: I0213 15:09:01.161294 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b80e0c03e2b025a13edae0e1202dc89f-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-44\" (UID: \"b80e0c03e2b025a13edae0e1202dc89f\") " pod="kube-system/kube-scheduler-ip-172-31-21-44" Feb 13 15:09:01.161377 kubelet[2785]: I0213 15:09:01.161331 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9bad5f6b56fcfe55193b55accad4bb9-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-44\" (UID: \"c9bad5f6b56fcfe55193b55accad4bb9\") " pod="kube-system/kube-controller-manager-ip-172-31-21-44" Feb 13 15:09:01.161377 kubelet[2785]: 
I0213 15:09:01.161367 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9bad5f6b56fcfe55193b55accad4bb9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-44\" (UID: \"c9bad5f6b56fcfe55193b55accad4bb9\") " pod="kube-system/kube-controller-manager-ip-172-31-21-44" Feb 13 15:09:01.161681 kubelet[2785]: I0213 15:09:01.161406 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9bad5f6b56fcfe55193b55accad4bb9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-44\" (UID: \"c9bad5f6b56fcfe55193b55accad4bb9\") " pod="kube-system/kube-controller-manager-ip-172-31-21-44" Feb 13 15:09:01.161681 kubelet[2785]: I0213 15:09:01.161441 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9bad5f6b56fcfe55193b55accad4bb9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-44\" (UID: \"c9bad5f6b56fcfe55193b55accad4bb9\") " pod="kube-system/kube-controller-manager-ip-172-31-21-44" Feb 13 15:09:01.161681 kubelet[2785]: I0213 15:09:01.161479 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6feef02f1c57a271f0f27d322c0dd7a-ca-certs\") pod \"kube-apiserver-ip-172-31-21-44\" (UID: \"e6feef02f1c57a271f0f27d322c0dd7a\") " pod="kube-system/kube-apiserver-ip-172-31-21-44" Feb 13 15:09:01.161681 kubelet[2785]: I0213 15:09:01.161514 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6feef02f1c57a271f0f27d322c0dd7a-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-44\" (UID: \"e6feef02f1c57a271f0f27d322c0dd7a\") " 
pod="kube-system/kube-apiserver-ip-172-31-21-44" Feb 13 15:09:01.161681 kubelet[2785]: I0213 15:09:01.161548 2785 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6feef02f1c57a271f0f27d322c0dd7a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-44\" (UID: \"e6feef02f1c57a271f0f27d322c0dd7a\") " pod="kube-system/kube-apiserver-ip-172-31-21-44" Feb 13 15:09:01.289250 kubelet[2785]: I0213 15:09:01.289023 2785 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-44" Feb 13 15:09:01.289991 kubelet[2785]: E0213 15:09:01.289928 2785 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.21.44:6443/api/v1/nodes\": dial tcp 172.31.21.44:6443: connect: connection refused" node="ip-172-31-21-44" Feb 13 15:09:01.334564 containerd[1943]: time="2025-02-13T15:09:01.334494112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-44,Uid:e6feef02f1c57a271f0f27d322c0dd7a,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:01.352259 containerd[1943]: time="2025-02-13T15:09:01.352000480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-44,Uid:c9bad5f6b56fcfe55193b55accad4bb9,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:01.356372 containerd[1943]: time="2025-02-13T15:09:01.355829716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-44,Uid:b80e0c03e2b025a13edae0e1202dc89f,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:01.462611 kubelet[2785]: E0213 15:09:01.462547 2785 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-44?timeout=10s\": dial tcp 172.31.21.44:6443: connect: connection refused" interval="800ms" Feb 13 15:09:01.693780 kubelet[2785]: I0213 
15:09:01.693526 2785 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-44" Feb 13 15:09:01.694713 kubelet[2785]: E0213 15:09:01.694647 2785 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.21.44:6443/api/v1/nodes\": dial tcp 172.31.21.44:6443: connect: connection refused" node="ip-172-31-21-44" Feb 13 15:09:01.890752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2429434641.mount: Deactivated successfully. Feb 13 15:09:01.905062 containerd[1943]: time="2025-02-13T15:09:01.904953103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:09:01.911846 containerd[1943]: time="2025-02-13T15:09:01.911756227Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 15:09:01.914680 containerd[1943]: time="2025-02-13T15:09:01.913859719Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:09:01.916804 containerd[1943]: time="2025-02-13T15:09:01.916579303Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:09:01.920348 containerd[1943]: time="2025-02-13T15:09:01.920272999Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:09:01.922558 containerd[1943]: time="2025-02-13T15:09:01.922472203Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:09:01.924378 
containerd[1943]: time="2025-02-13T15:09:01.924283495Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:09:01.927034 containerd[1943]: time="2025-02-13T15:09:01.926882839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:09:01.931331 containerd[1943]: time="2025-02-13T15:09:01.930997771Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.064639ms" Feb 13 15:09:01.934675 containerd[1943]: time="2025-02-13T15:09:01.934583479Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 599.968467ms" Feb 13 15:09:01.942506 containerd[1943]: time="2025-02-13T15:09:01.942403999Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 586.450899ms" Feb 13 15:09:02.147126 containerd[1943]: time="2025-02-13T15:09:02.146764120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:02.147126 containerd[1943]: time="2025-02-13T15:09:02.146911816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:02.147707 containerd[1943]: time="2025-02-13T15:09:02.146949976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:02.147963 containerd[1943]: time="2025-02-13T15:09:02.147911776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:02.150753 containerd[1943]: time="2025-02-13T15:09:02.150486604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:02.150753 containerd[1943]: time="2025-02-13T15:09:02.150606316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:02.150753 containerd[1943]: time="2025-02-13T15:09:02.150638572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:02.151905 containerd[1943]: time="2025-02-13T15:09:02.151792624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:02.163906 containerd[1943]: time="2025-02-13T15:09:02.163708216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:02.164075 containerd[1943]: time="2025-02-13T15:09:02.163865488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:02.164075 containerd[1943]: time="2025-02-13T15:09:02.163901932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:02.164296 containerd[1943]: time="2025-02-13T15:09:02.164047804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:02.193122 systemd[1]: Started cri-containerd-9059feba800c225c907c3d6613f58342832856cc074e5089066ec211cd6bbee5.scope - libcontainer container 9059feba800c225c907c3d6613f58342832856cc074e5089066ec211cd6bbee5. Feb 13 15:09:02.197470 kubelet[2785]: W0213 15:09:02.197370 2785 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-44&limit=500&resourceVersion=0": dial tcp 172.31.21.44:6443: connect: connection refused Feb 13 15:09:02.198285 kubelet[2785]: E0213 15:09:02.197481 2785 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-44&limit=500&resourceVersion=0\": dial tcp 172.31.21.44:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:02.237367 systemd[1]: Started cri-containerd-9f4a974d9b93897bcde1a920e95d15cfed7c440e9f4ea598c82bcc5d3cffa858.scope - libcontainer container 9f4a974d9b93897bcde1a920e95d15cfed7c440e9f4ea598c82bcc5d3cffa858. Feb 13 15:09:02.257151 systemd[1]: Started cri-containerd-963df73bdbd551814506bf5c57453b48bd9f13da7716dc039beb74c44d96e670.scope - libcontainer container 963df73bdbd551814506bf5c57453b48bd9f13da7716dc039beb74c44d96e670. 
Feb 13 15:09:02.265247 kubelet[2785]: E0213 15:09:02.265166 2785 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-44?timeout=10s\": dial tcp 172.31.21.44:6443: connect: connection refused" interval="1.6s" Feb 13 15:09:02.276694 kubelet[2785]: W0213 15:09:02.276504 2785 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.44:6443: connect: connection refused Feb 13 15:09:02.276694 kubelet[2785]: E0213 15:09:02.276618 2785 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.44:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:02.356242 containerd[1943]: time="2025-02-13T15:09:02.355665305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-44,Uid:e6feef02f1c57a271f0f27d322c0dd7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9059feba800c225c907c3d6613f58342832856cc074e5089066ec211cd6bbee5\"" Feb 13 15:09:02.369981 containerd[1943]: time="2025-02-13T15:09:02.369902045Z" level=info msg="CreateContainer within sandbox \"9059feba800c225c907c3d6613f58342832856cc074e5089066ec211cd6bbee5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:09:02.374699 kubelet[2785]: W0213 15:09:02.374419 2785 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.44:6443: connect: connection refused Feb 13 15:09:02.374699 
kubelet[2785]: E0213 15:09:02.374518 2785 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.44:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:02.385513 containerd[1943]: time="2025-02-13T15:09:02.385252385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-44,Uid:b80e0c03e2b025a13edae0e1202dc89f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f4a974d9b93897bcde1a920e95d15cfed7c440e9f4ea598c82bcc5d3cffa858\"" Feb 13 15:09:02.395620 containerd[1943]: time="2025-02-13T15:09:02.395250437Z" level=info msg="CreateContainer within sandbox \"9f4a974d9b93897bcde1a920e95d15cfed7c440e9f4ea598c82bcc5d3cffa858\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:09:02.408876 containerd[1943]: time="2025-02-13T15:09:02.408667230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-44,Uid:c9bad5f6b56fcfe55193b55accad4bb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"963df73bdbd551814506bf5c57453b48bd9f13da7716dc039beb74c44d96e670\"" Feb 13 15:09:02.416288 containerd[1943]: time="2025-02-13T15:09:02.415812054Z" level=info msg="CreateContainer within sandbox \"963df73bdbd551814506bf5c57453b48bd9f13da7716dc039beb74c44d96e670\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:09:02.424463 containerd[1943]: time="2025-02-13T15:09:02.424404366Z" level=info msg="CreateContainer within sandbox \"9059feba800c225c907c3d6613f58342832856cc074e5089066ec211cd6bbee5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bf929892317f89dbe21b2e752bba16137c8a750d532ae3c8125013d15cd12007\"" Feb 13 15:09:02.425839 containerd[1943]: time="2025-02-13T15:09:02.425665086Z" 
level=info msg="StartContainer for \"bf929892317f89dbe21b2e752bba16137c8a750d532ae3c8125013d15cd12007\"" Feb 13 15:09:02.432653 kubelet[2785]: W0213 15:09:02.432515 2785 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.44:6443: connect: connection refused Feb 13 15:09:02.434941 kubelet[2785]: E0213 15:09:02.434828 2785 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.44:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:02.454682 containerd[1943]: time="2025-02-13T15:09:02.454584834Z" level=info msg="CreateContainer within sandbox \"963df73bdbd551814506bf5c57453b48bd9f13da7716dc039beb74c44d96e670\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"88506664c703420cea6f8342f97cbc4ffb5563673b381e7658fc7df08409a6c8\"" Feb 13 15:09:02.455638 containerd[1943]: time="2025-02-13T15:09:02.455585058Z" level=info msg="StartContainer for \"88506664c703420cea6f8342f97cbc4ffb5563673b381e7658fc7df08409a6c8\"" Feb 13 15:09:02.486067 systemd[1]: Started cri-containerd-bf929892317f89dbe21b2e752bba16137c8a750d532ae3c8125013d15cd12007.scope - libcontainer container bf929892317f89dbe21b2e752bba16137c8a750d532ae3c8125013d15cd12007. 
Feb 13 15:09:02.499585 kubelet[2785]: I0213 15:09:02.499514 2785 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-44" Feb 13 15:09:02.500283 kubelet[2785]: E0213 15:09:02.500095 2785 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.21.44:6443/api/v1/nodes\": dial tcp 172.31.21.44:6443: connect: connection refused" node="ip-172-31-21-44" Feb 13 15:09:02.507513 containerd[1943]: time="2025-02-13T15:09:02.507372906Z" level=info msg="CreateContainer within sandbox \"9f4a974d9b93897bcde1a920e95d15cfed7c440e9f4ea598c82bcc5d3cffa858\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"572edfc35a7ea69c214a0b2ac33368f75a9d73295ab5fb7e693b6bae7452dfe5\"" Feb 13 15:09:02.511779 containerd[1943]: time="2025-02-13T15:09:02.510584010Z" level=info msg="StartContainer for \"572edfc35a7ea69c214a0b2ac33368f75a9d73295ab5fb7e693b6bae7452dfe5\"" Feb 13 15:09:02.559093 systemd[1]: Started cri-containerd-88506664c703420cea6f8342f97cbc4ffb5563673b381e7658fc7df08409a6c8.scope - libcontainer container 88506664c703420cea6f8342f97cbc4ffb5563673b381e7658fc7df08409a6c8. Feb 13 15:09:02.603070 systemd[1]: Started cri-containerd-572edfc35a7ea69c214a0b2ac33368f75a9d73295ab5fb7e693b6bae7452dfe5.scope - libcontainer container 572edfc35a7ea69c214a0b2ac33368f75a9d73295ab5fb7e693b6bae7452dfe5. 
Feb 13 15:09:02.617255 containerd[1943]: time="2025-02-13T15:09:02.616978003Z" level=info msg="StartContainer for \"bf929892317f89dbe21b2e752bba16137c8a750d532ae3c8125013d15cd12007\" returns successfully" Feb 13 15:09:02.705984 containerd[1943]: time="2025-02-13T15:09:02.705832039Z" level=info msg="StartContainer for \"88506664c703420cea6f8342f97cbc4ffb5563673b381e7658fc7df08409a6c8\" returns successfully" Feb 13 15:09:02.802624 containerd[1943]: time="2025-02-13T15:09:02.802329392Z" level=info msg="StartContainer for \"572edfc35a7ea69c214a0b2ac33368f75a9d73295ab5fb7e693b6bae7452dfe5\" returns successfully" Feb 13 15:09:02.805302 kubelet[2785]: E0213 15:09:02.805129 2785 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.21.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.44:6443: connect: connection refused" logger="UnhandledError" Feb 13 15:09:04.103841 kubelet[2785]: I0213 15:09:04.103755 2785 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-44" Feb 13 15:09:06.464214 kubelet[2785]: E0213 15:09:06.464142 2785 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-44\" not found" node="ip-172-31-21-44" Feb 13 15:09:06.558870 kubelet[2785]: I0213 15:09:06.558806 2785 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-21-44" Feb 13 15:09:06.820434 kubelet[2785]: I0213 15:09:06.820072 2785 apiserver.go:52] "Watching apiserver" Feb 13 15:09:06.859107 kubelet[2785]: I0213 15:09:06.859043 2785 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:09:09.069578 systemd[1]: Reload requested from client PID 3060 ('systemctl') (unit session-5.scope)... Feb 13 15:09:09.069604 systemd[1]: Reloading... 
Feb 13 15:09:09.287158 zram_generator::config[3111]: No configuration found. Feb 13 15:09:09.534096 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:09:09.793623 systemd[1]: Reloading finished in 723 ms. Feb 13 15:09:09.838564 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:09:09.854070 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:09:09.854712 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:09:09.854823 systemd[1]: kubelet.service: Consumed 1.829s CPU time, 117.9M memory peak. Feb 13 15:09:09.863572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:09:10.197897 update_engine[1931]: I20250213 15:09:10.197611 1931 update_attempter.cc:509] Updating boot flags... Feb 13 15:09:10.220050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:09:10.239916 (kubelet)[3167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:09:10.413374 kubelet[3167]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:09:10.413374 kubelet[3167]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:09:10.413374 kubelet[3167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:09:10.413374 kubelet[3167]: I0213 15:09:10.411915 3167 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:09:10.419913 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3185) Feb 13 15:09:10.448813 kubelet[3167]: I0213 15:09:10.448670 3167 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 15:09:10.452254 kubelet[3167]: I0213 15:09:10.451150 3167 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:09:10.454789 kubelet[3167]: I0213 15:09:10.452941 3167 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 15:09:10.460559 kubelet[3167]: I0213 15:09:10.460513 3167 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:09:10.534127 kubelet[3167]: I0213 15:09:10.534067 3167 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:09:10.554210 kubelet[3167]: E0213 15:09:10.554044 3167 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 15:09:10.555007 kubelet[3167]: I0213 15:09:10.554702 3167 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 15:09:10.573934 kubelet[3167]: I0213 15:09:10.573457 3167 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:09:10.578345 kubelet[3167]: I0213 15:09:10.577029 3167 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 15:09:10.578345 kubelet[3167]: I0213 15:09:10.577384 3167 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:09:10.578345 kubelet[3167]: I0213 15:09:10.577433 3167 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-44","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerP
olicyOptions":null,"CgroupVersion":2} Feb 13 15:09:10.578345 kubelet[3167]: I0213 15:09:10.577802 3167 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:09:10.580382 kubelet[3167]: I0213 15:09:10.577827 3167 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 15:09:10.580382 kubelet[3167]: I0213 15:09:10.577899 3167 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:09:10.583420 kubelet[3167]: I0213 15:09:10.580980 3167 kubelet.go:408] "Attempting to sync node with API server" Feb 13 15:09:10.583420 kubelet[3167]: I0213 15:09:10.582516 3167 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:09:10.583420 kubelet[3167]: I0213 15:09:10.582584 3167 kubelet.go:314] "Adding apiserver pod source" Feb 13 15:09:10.583420 kubelet[3167]: I0213 15:09:10.582614 3167 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:09:10.593993 kubelet[3167]: I0213 15:09:10.593943 3167 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:09:10.595232 kubelet[3167]: I0213 15:09:10.595172 3167 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:09:10.598244 kubelet[3167]: I0213 15:09:10.598205 3167 server.go:1269] "Started kubelet" Feb 13 15:09:10.616553 kubelet[3167]: I0213 15:09:10.615858 3167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:09:10.631608 kubelet[3167]: I0213 15:09:10.619076 3167 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:09:10.687760 kubelet[3167]: I0213 15:09:10.685757 3167 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:09:10.688333 kubelet[3167]: I0213 15:09:10.688282 3167 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: 
connect: no such file or directory Feb 13 15:09:10.716102 kubelet[3167]: I0213 15:09:10.623442 3167 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 15:09:10.716102 kubelet[3167]: I0213 15:09:10.634922 3167 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 15:09:10.722792 kubelet[3167]: I0213 15:09:10.634967 3167 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 15:09:10.723379 kubelet[3167]: E0213 15:09:10.635308 3167 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-21-44\" not found" Feb 13 15:09:10.726987 kubelet[3167]: I0213 15:09:10.726931 3167 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:09:10.753439 kubelet[3167]: I0213 15:09:10.619187 3167 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:09:10.753439 kubelet[3167]: I0213 15:09:10.751363 3167 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:09:10.753439 kubelet[3167]: I0213 15:09:10.751532 3167 server.go:460] "Adding debug handlers to kubelet server" Feb 13 15:09:10.852774 kubelet[3167]: I0213 15:09:10.851630 3167 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:09:10.923535 kubelet[3167]: E0213 15:09:10.923461 3167 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:09:10.933265 kubelet[3167]: I0213 15:09:10.933147 3167 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:09:10.955056 kubelet[3167]: I0213 15:09:10.954378 3167 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:09:10.959884 kubelet[3167]: I0213 15:09:10.957709 3167 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:09:10.959884 kubelet[3167]: I0213 15:09:10.957824 3167 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 15:09:10.959884 kubelet[3167]: E0213 15:09:10.957923 3167 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:09:11.060876 kubelet[3167]: E0213 15:09:11.059321 3167 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:09:11.260541 kubelet[3167]: E0213 15:09:11.260383 3167 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:09:11.315665 kubelet[3167]: I0213 15:09:11.315619 3167 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:09:11.315665 kubelet[3167]: I0213 15:09:11.315654 3167 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:09:11.315968 kubelet[3167]: I0213 15:09:11.315692 3167 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:09:11.316458 kubelet[3167]: I0213 15:09:11.316092 3167 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:09:11.316458 kubelet[3167]: I0213 15:09:11.316127 3167 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:09:11.316458 kubelet[3167]: I0213 15:09:11.316167 3167 policy_none.go:49] "None policy: Start" Feb 13 15:09:11.318825 kubelet[3167]: I0213 15:09:11.318266 3167 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:09:11.318825 kubelet[3167]: I0213 15:09:11.318330 3167 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:09:11.319092 kubelet[3167]: I0213 15:09:11.319017 3167 state_mem.go:75] "Updated machine memory state" Feb 13 15:09:11.330275 kubelet[3167]: I0213 15:09:11.330196 3167 
manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:09:11.332252 kubelet[3167]: I0213 15:09:11.330638 3167 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 15:09:11.332252 kubelet[3167]: I0213 15:09:11.330676 3167 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:09:11.336753 kubelet[3167]: I0213 15:09:11.336116 3167 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:09:11.472511 kubelet[3167]: I0213 15:09:11.471931 3167 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-44" Feb 13 15:09:11.490609 kubelet[3167]: I0213 15:09:11.490553 3167 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-21-44" Feb 13 15:09:11.491902 kubelet[3167]: I0213 15:09:11.491814 3167 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-21-44" Feb 13 15:09:11.584769 kubelet[3167]: I0213 15:09:11.584520 3167 apiserver.go:52] "Watching apiserver" Feb 13 15:09:11.722935 kubelet[3167]: I0213 15:09:11.722879 3167 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 15:09:11.752887 kubelet[3167]: I0213 15:09:11.752060 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-44" podStartSLOduration=3.752036668 podStartE2EDuration="3.752036668s" podCreationTimestamp="2025-02-13 15:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:11.727147324 +0000 UTC m=+1.473768812" watchObservedRunningTime="2025-02-13 15:09:11.752036668 +0000 UTC m=+1.498658132" Feb 13 15:09:11.769859 kubelet[3167]: I0213 15:09:11.769787 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ip-172-31-21-44" podStartSLOduration=0.769762768 podStartE2EDuration="769.762768ms" podCreationTimestamp="2025-02-13 15:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:11.754010284 +0000 UTC m=+1.500631772" watchObservedRunningTime="2025-02-13 15:09:11.769762768 +0000 UTC m=+1.516384256" Feb 13 15:09:11.772841 kubelet[3167]: I0213 15:09:11.772603 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6feef02f1c57a271f0f27d322c0dd7a-ca-certs\") pod \"kube-apiserver-ip-172-31-21-44\" (UID: \"e6feef02f1c57a271f0f27d322c0dd7a\") " pod="kube-system/kube-apiserver-ip-172-31-21-44" Feb 13 15:09:11.772841 kubelet[3167]: I0213 15:09:11.772695 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6feef02f1c57a271f0f27d322c0dd7a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-44\" (UID: \"e6feef02f1c57a271f0f27d322c0dd7a\") " pod="kube-system/kube-apiserver-ip-172-31-21-44" Feb 13 15:09:11.772841 kubelet[3167]: I0213 15:09:11.772774 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9bad5f6b56fcfe55193b55accad4bb9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-44\" (UID: \"c9bad5f6b56fcfe55193b55accad4bb9\") " pod="kube-system/kube-controller-manager-ip-172-31-21-44" Feb 13 15:09:11.773678 kubelet[3167]: I0213 15:09:11.772931 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9bad5f6b56fcfe55193b55accad4bb9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-44\" (UID: 
\"c9bad5f6b56fcfe55193b55accad4bb9\") " pod="kube-system/kube-controller-manager-ip-172-31-21-44" Feb 13 15:09:11.773678 kubelet[3167]: I0213 15:09:11.772976 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6feef02f1c57a271f0f27d322c0dd7a-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-44\" (UID: \"e6feef02f1c57a271f0f27d322c0dd7a\") " pod="kube-system/kube-apiserver-ip-172-31-21-44" Feb 13 15:09:11.773678 kubelet[3167]: I0213 15:09:11.773574 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9bad5f6b56fcfe55193b55accad4bb9-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-44\" (UID: \"c9bad5f6b56fcfe55193b55accad4bb9\") " pod="kube-system/kube-controller-manager-ip-172-31-21-44" Feb 13 15:09:11.774283 kubelet[3167]: I0213 15:09:11.774013 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9bad5f6b56fcfe55193b55accad4bb9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-44\" (UID: \"c9bad5f6b56fcfe55193b55accad4bb9\") " pod="kube-system/kube-controller-manager-ip-172-31-21-44" Feb 13 15:09:11.774283 kubelet[3167]: I0213 15:09:11.774129 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9bad5f6b56fcfe55193b55accad4bb9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-44\" (UID: \"c9bad5f6b56fcfe55193b55accad4bb9\") " pod="kube-system/kube-controller-manager-ip-172-31-21-44" Feb 13 15:09:11.774534 kubelet[3167]: I0213 15:09:11.774232 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b80e0c03e2b025a13edae0e1202dc89f-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-44\" (UID: \"b80e0c03e2b025a13edae0e1202dc89f\") " pod="kube-system/kube-scheduler-ip-172-31-21-44" Feb 13 15:09:12.032939 kubelet[3167]: I0213 15:09:12.032757 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-44" podStartSLOduration=1.032712361 podStartE2EDuration="1.032712361s" podCreationTimestamp="2025-02-13 15:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:11.77282674 +0000 UTC m=+1.519448216" watchObservedRunningTime="2025-02-13 15:09:12.032712361 +0000 UTC m=+1.779333837" Feb 13 15:09:12.597692 sudo[2235]: pam_unix(sudo:session): session closed for user root Feb 13 15:09:12.623775 sshd[2234]: Connection closed by 139.178.68.195 port 41832 Feb 13 15:09:12.623099 sshd-session[2232]: pam_unix(sshd:session): session closed for user core Feb 13 15:09:12.631478 systemd[1]: sshd@4-172.31.21.44:22-139.178.68.195:41832.service: Deactivated successfully. Feb 13 15:09:12.633934 systemd-logind[1929]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:09:12.642970 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:09:12.644150 systemd[1]: session-5.scope: Consumed 7.864s CPU time, 222.1M memory peak. Feb 13 15:09:12.651282 systemd-logind[1929]: Removed session 5. Feb 13 15:09:15.120512 kubelet[3167]: I0213 15:09:15.120372 3167 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:09:15.123380 kubelet[3167]: I0213 15:09:15.122277 3167 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:09:15.123516 containerd[1943]: time="2025-02-13T15:09:15.121924493Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 15:09:16.027582 systemd[1]: Created slice kubepods-besteffort-poda37c666f_8460_42da_81da_b1bb6545e324.slice - libcontainer container kubepods-besteffort-poda37c666f_8460_42da_81da_b1bb6545e324.slice. Feb 13 15:09:16.091642 systemd[1]: Created slice kubepods-burstable-podab3a2d29_006a_4eb0_8f56_363c18c5aaae.slice - libcontainer container kubepods-burstable-podab3a2d29_006a_4eb0_8f56_363c18c5aaae.slice. Feb 13 15:09:16.104332 kubelet[3167]: I0213 15:09:16.102871 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a37c666f-8460-42da-81da-b1bb6545e324-kube-proxy\") pod \"kube-proxy-whlvn\" (UID: \"a37c666f-8460-42da-81da-b1bb6545e324\") " pod="kube-system/kube-proxy-whlvn" Feb 13 15:09:16.104332 kubelet[3167]: I0213 15:09:16.102983 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a37c666f-8460-42da-81da-b1bb6545e324-xtables-lock\") pod \"kube-proxy-whlvn\" (UID: \"a37c666f-8460-42da-81da-b1bb6545e324\") " pod="kube-system/kube-proxy-whlvn" Feb 13 15:09:16.104332 kubelet[3167]: I0213 15:09:16.103087 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a37c666f-8460-42da-81da-b1bb6545e324-lib-modules\") pod \"kube-proxy-whlvn\" (UID: \"a37c666f-8460-42da-81da-b1bb6545e324\") " pod="kube-system/kube-proxy-whlvn" Feb 13 15:09:16.104332 kubelet[3167]: I0213 15:09:16.103148 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tffsh\" (UniqueName: \"kubernetes.io/projected/a37c666f-8460-42da-81da-b1bb6545e324-kube-api-access-tffsh\") pod \"kube-proxy-whlvn\" (UID: \"a37c666f-8460-42da-81da-b1bb6545e324\") " pod="kube-system/kube-proxy-whlvn" Feb 13 15:09:16.206770 kubelet[3167]: I0213 15:09:16.203601 
3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab3a2d29-006a-4eb0-8f56-363c18c5aaae-xtables-lock\") pod \"kube-flannel-ds-gdmfb\" (UID: \"ab3a2d29-006a-4eb0-8f56-363c18c5aaae\") " pod="kube-flannel/kube-flannel-ds-gdmfb" Feb 13 15:09:16.206770 kubelet[3167]: I0213 15:09:16.203697 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/ab3a2d29-006a-4eb0-8f56-363c18c5aaae-flannel-cfg\") pod \"kube-flannel-ds-gdmfb\" (UID: \"ab3a2d29-006a-4eb0-8f56-363c18c5aaae\") " pod="kube-flannel/kube-flannel-ds-gdmfb" Feb 13 15:09:16.206770 kubelet[3167]: I0213 15:09:16.203767 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr8mc\" (UniqueName: \"kubernetes.io/projected/ab3a2d29-006a-4eb0-8f56-363c18c5aaae-kube-api-access-nr8mc\") pod \"kube-flannel-ds-gdmfb\" (UID: \"ab3a2d29-006a-4eb0-8f56-363c18c5aaae\") " pod="kube-flannel/kube-flannel-ds-gdmfb" Feb 13 15:09:16.206770 kubelet[3167]: I0213 15:09:16.203849 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/ab3a2d29-006a-4eb0-8f56-363c18c5aaae-cni\") pod \"kube-flannel-ds-gdmfb\" (UID: \"ab3a2d29-006a-4eb0-8f56-363c18c5aaae\") " pod="kube-flannel/kube-flannel-ds-gdmfb" Feb 13 15:09:16.206770 kubelet[3167]: I0213 15:09:16.203907 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/ab3a2d29-006a-4eb0-8f56-363c18c5aaae-cni-plugin\") pod \"kube-flannel-ds-gdmfb\" (UID: \"ab3a2d29-006a-4eb0-8f56-363c18c5aaae\") " pod="kube-flannel/kube-flannel-ds-gdmfb" Feb 13 15:09:16.207843 kubelet[3167]: I0213 15:09:16.204462 3167 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ab3a2d29-006a-4eb0-8f56-363c18c5aaae-run\") pod \"kube-flannel-ds-gdmfb\" (UID: \"ab3a2d29-006a-4eb0-8f56-363c18c5aaae\") " pod="kube-flannel/kube-flannel-ds-gdmfb" Feb 13 15:09:16.340209 containerd[1943]: time="2025-02-13T15:09:16.340095403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-whlvn,Uid:a37c666f-8460-42da-81da-b1bb6545e324,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:16.393386 containerd[1943]: time="2025-02-13T15:09:16.393017263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:16.393386 containerd[1943]: time="2025-02-13T15:09:16.393169351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:16.393386 containerd[1943]: time="2025-02-13T15:09:16.393225211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:16.396789 containerd[1943]: time="2025-02-13T15:09:16.394754575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:16.402591 containerd[1943]: time="2025-02-13T15:09:16.401103403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-gdmfb,Uid:ab3a2d29-006a-4eb0-8f56-363c18c5aaae,Namespace:kube-flannel,Attempt:0,}" Feb 13 15:09:16.448048 systemd[1]: Started cri-containerd-c65877d1e43926fcf5e9825bc804fe822e4a690109207c7e79e73cd2942bafa5.scope - libcontainer container c65877d1e43926fcf5e9825bc804fe822e4a690109207c7e79e73cd2942bafa5. Feb 13 15:09:16.481615 containerd[1943]: time="2025-02-13T15:09:16.481086679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:16.481615 containerd[1943]: time="2025-02-13T15:09:16.481199815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:16.481615 containerd[1943]: time="2025-02-13T15:09:16.481229647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:16.481615 containerd[1943]: time="2025-02-13T15:09:16.481383451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:16.518406 containerd[1943]: time="2025-02-13T15:09:16.518342348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-whlvn,Uid:a37c666f-8460-42da-81da-b1bb6545e324,Namespace:kube-system,Attempt:0,} returns sandbox id \"c65877d1e43926fcf5e9825bc804fe822e4a690109207c7e79e73cd2942bafa5\"" Feb 13 15:09:16.534297 containerd[1943]: time="2025-02-13T15:09:16.534128792Z" level=info msg="CreateContainer within sandbox \"c65877d1e43926fcf5e9825bc804fe822e4a690109207c7e79e73cd2942bafa5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:09:16.539212 systemd[1]: Started cri-containerd-4e1a71de8298b22f03c14cf1f667594187e1da1584a1c2f04e24e99c8cf7413a.scope - libcontainer container 4e1a71de8298b22f03c14cf1f667594187e1da1584a1c2f04e24e99c8cf7413a. 
Feb 13 15:09:16.588481 containerd[1943]: time="2025-02-13T15:09:16.588395672Z" level=info msg="CreateContainer within sandbox \"c65877d1e43926fcf5e9825bc804fe822e4a690109207c7e79e73cd2942bafa5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cabee179a6c036ec668974074c4987dcbcbad09e93284261d3dac5544f9c0e5f\"" Feb 13 15:09:16.591371 containerd[1943]: time="2025-02-13T15:09:16.591232556Z" level=info msg="StartContainer for \"cabee179a6c036ec668974074c4987dcbcbad09e93284261d3dac5544f9c0e5f\"" Feb 13 15:09:16.642500 containerd[1943]: time="2025-02-13T15:09:16.642416264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-gdmfb,Uid:ab3a2d29-006a-4eb0-8f56-363c18c5aaae,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"4e1a71de8298b22f03c14cf1f667594187e1da1584a1c2f04e24e99c8cf7413a\"" Feb 13 15:09:16.648819 containerd[1943]: time="2025-02-13T15:09:16.648768284Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 15:09:16.675145 systemd[1]: Started cri-containerd-cabee179a6c036ec668974074c4987dcbcbad09e93284261d3dac5544f9c0e5f.scope - libcontainer container cabee179a6c036ec668974074c4987dcbcbad09e93284261d3dac5544f9c0e5f. Feb 13 15:09:16.745112 containerd[1943]: time="2025-02-13T15:09:16.745046493Z" level=info msg="StartContainer for \"cabee179a6c036ec668974074c4987dcbcbad09e93284261d3dac5544f9c0e5f\" returns successfully" Feb 13 15:09:17.248974 systemd[1]: run-containerd-runc-k8s.io-c65877d1e43926fcf5e9825bc804fe822e4a690109207c7e79e73cd2942bafa5-runc.LKE3ow.mount: Deactivated successfully. Feb 13 15:09:19.233702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2034428369.mount: Deactivated successfully. 
Feb 13 15:09:19.300299 containerd[1943]: time="2025-02-13T15:09:19.300219717Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:09:19.303442 containerd[1943]: time="2025-02-13T15:09:19.303352401Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 15:09:19.306209 containerd[1943]: time="2025-02-13T15:09:19.306147873Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:09:19.311126 containerd[1943]: time="2025-02-13T15:09:19.310982674Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:09:19.313785 containerd[1943]: time="2025-02-13T15:09:19.312673090Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.663657066s" Feb 13 15:09:19.313785 containerd[1943]: time="2025-02-13T15:09:19.312752230Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 15:09:19.319038 containerd[1943]: time="2025-02-13T15:09:19.318642886Z" level=info msg="CreateContainer within sandbox \"4e1a71de8298b22f03c14cf1f667594187e1da1584a1c2f04e24e99c8cf7413a\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 15:09:19.349793 containerd[1943]: 
time="2025-02-13T15:09:19.349609270Z" level=info msg="CreateContainer within sandbox \"4e1a71de8298b22f03c14cf1f667594187e1da1584a1c2f04e24e99c8cf7413a\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"3cb481604fa21bd722cc7e6f13f51ec035432facd6cb2bf7e58a6c4dec44fc3e\"" Feb 13 15:09:19.352661 containerd[1943]: time="2025-02-13T15:09:19.351882094Z" level=info msg="StartContainer for \"3cb481604fa21bd722cc7e6f13f51ec035432facd6cb2bf7e58a6c4dec44fc3e\"" Feb 13 15:09:19.418856 systemd[1]: Started cri-containerd-3cb481604fa21bd722cc7e6f13f51ec035432facd6cb2bf7e58a6c4dec44fc3e.scope - libcontainer container 3cb481604fa21bd722cc7e6f13f51ec035432facd6cb2bf7e58a6c4dec44fc3e. Feb 13 15:09:19.425710 kubelet[3167]: I0213 15:09:19.425581 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-whlvn" podStartSLOduration=4.425537146 podStartE2EDuration="4.425537146s" podCreationTimestamp="2025-02-13 15:09:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:17.206301931 +0000 UTC m=+6.952923431" watchObservedRunningTime="2025-02-13 15:09:19.425537146 +0000 UTC m=+9.172158610" Feb 13 15:09:19.481271 containerd[1943]: time="2025-02-13T15:09:19.481133806Z" level=info msg="StartContainer for \"3cb481604fa21bd722cc7e6f13f51ec035432facd6cb2bf7e58a6c4dec44fc3e\" returns successfully" Feb 13 15:09:19.488192 systemd[1]: cri-containerd-3cb481604fa21bd722cc7e6f13f51ec035432facd6cb2bf7e58a6c4dec44fc3e.scope: Deactivated successfully. 
Feb 13 15:09:19.564177 containerd[1943]: time="2025-02-13T15:09:19.564090659Z" level=info msg="shim disconnected" id=3cb481604fa21bd722cc7e6f13f51ec035432facd6cb2bf7e58a6c4dec44fc3e namespace=k8s.io Feb 13 15:09:19.564177 containerd[1943]: time="2025-02-13T15:09:19.564169727Z" level=warning msg="cleaning up after shim disconnected" id=3cb481604fa21bd722cc7e6f13f51ec035432facd6cb2bf7e58a6c4dec44fc3e namespace=k8s.io Feb 13 15:09:19.564581 containerd[1943]: time="2025-02-13T15:09:19.564191123Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:09:20.089941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cb481604fa21bd722cc7e6f13f51ec035432facd6cb2bf7e58a6c4dec44fc3e-rootfs.mount: Deactivated successfully. Feb 13 15:09:20.175824 containerd[1943]: time="2025-02-13T15:09:20.174030502Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 15:09:22.431102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825884568.mount: Deactivated successfully. 
Feb 13 15:09:23.711683 containerd[1943]: time="2025-02-13T15:09:23.710869947Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:09:23.713239 containerd[1943]: time="2025-02-13T15:09:23.713140443Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Feb 13 15:09:23.715303 containerd[1943]: time="2025-02-13T15:09:23.715216815Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:09:23.724414 containerd[1943]: time="2025-02-13T15:09:23.724312575Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:09:23.726775 containerd[1943]: time="2025-02-13T15:09:23.726497895Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.552368357s" Feb 13 15:09:23.726775 containerd[1943]: time="2025-02-13T15:09:23.726564891Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 15:09:23.735354 containerd[1943]: time="2025-02-13T15:09:23.735021039Z" level=info msg="CreateContainer within sandbox \"4e1a71de8298b22f03c14cf1f667594187e1da1584a1c2f04e24e99c8cf7413a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:09:23.763834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3674036910.mount: Deactivated 
successfully. Feb 13 15:09:23.771814 containerd[1943]: time="2025-02-13T15:09:23.771691276Z" level=info msg="CreateContainer within sandbox \"4e1a71de8298b22f03c14cf1f667594187e1da1584a1c2f04e24e99c8cf7413a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4ae32e21709857f3997752463c3a6077e20720e9adc0caa7c316f2249e2c225f\"" Feb 13 15:09:23.773871 containerd[1943]: time="2025-02-13T15:09:23.773330380Z" level=info msg="StartContainer for \"4ae32e21709857f3997752463c3a6077e20720e9adc0caa7c316f2249e2c225f\"" Feb 13 15:09:23.835061 systemd[1]: Started cri-containerd-4ae32e21709857f3997752463c3a6077e20720e9adc0caa7c316f2249e2c225f.scope - libcontainer container 4ae32e21709857f3997752463c3a6077e20720e9adc0caa7c316f2249e2c225f. Feb 13 15:09:23.884633 systemd[1]: cri-containerd-4ae32e21709857f3997752463c3a6077e20720e9adc0caa7c316f2249e2c225f.scope: Deactivated successfully. Feb 13 15:09:23.892368 containerd[1943]: time="2025-02-13T15:09:23.892127236Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab3a2d29_006a_4eb0_8f56_363c18c5aaae.slice/cri-containerd-4ae32e21709857f3997752463c3a6077e20720e9adc0caa7c316f2249e2c225f.scope/memory.events\": no such file or directory" Feb 13 15:09:23.896137 containerd[1943]: time="2025-02-13T15:09:23.896061808Z" level=info msg="StartContainer for \"4ae32e21709857f3997752463c3a6077e20720e9adc0caa7c316f2249e2c225f\" returns successfully" Feb 13 15:09:23.915780 kubelet[3167]: I0213 15:09:23.913160 3167 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 15:09:24.008620 systemd[1]: Created slice kubepods-burstable-poda31c00ca_a7a8_49b1_88b8_fb0e77c17b47.slice - libcontainer container kubepods-burstable-poda31c00ca_a7a8_49b1_88b8_fb0e77c17b47.slice. 
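The "Pulled image docker.io/flannel/flannel:v0.22.0" entry above reports both the blob size ("size \"26863435\"") and the wall time ("in 3.552368357s"), which together imply a transfer rate. A quick check of that arithmetic, using only figures taken from the log (the rate itself is computed here, not logged):

```python
# Figures from the "Pulled image docker.io/flannel/flannel:v0.22.0" log entry.
size_bytes = 26_863_435   # repo-digest size reported by containerd
duration_s = 3.552368357  # wall-clock pull duration

# Effective transfer rate in MiB/s (1 MiB = 2**20 bytes).
rate_mib_per_s = size_bytes / duration_s / (1 << 20)
print(f"{rate_mib_per_s:.2f} MiB/s")  # roughly 7.21 MiB/s
```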
Feb 13 15:09:24.040871 systemd[1]: Created slice kubepods-burstable-pod809b83ee_8eb6_4267_94f5_8a1737b6f0e1.slice - libcontainer container kubepods-burstable-pod809b83ee_8eb6_4267_94f5_8a1737b6f0e1.slice. Feb 13 15:09:24.065705 kubelet[3167]: I0213 15:09:24.065636 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/809b83ee-8eb6-4267-94f5-8a1737b6f0e1-config-volume\") pod \"coredns-6f6b679f8f-z5clw\" (UID: \"809b83ee-8eb6-4267-94f5-8a1737b6f0e1\") " pod="kube-system/coredns-6f6b679f8f-z5clw" Feb 13 15:09:24.065946 kubelet[3167]: I0213 15:09:24.065713 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl4hm\" (UniqueName: \"kubernetes.io/projected/809b83ee-8eb6-4267-94f5-8a1737b6f0e1-kube-api-access-hl4hm\") pod \"coredns-6f6b679f8f-z5clw\" (UID: \"809b83ee-8eb6-4267-94f5-8a1737b6f0e1\") " pod="kube-system/coredns-6f6b679f8f-z5clw" Feb 13 15:09:24.065946 kubelet[3167]: I0213 15:09:24.065784 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpvhk\" (UniqueName: \"kubernetes.io/projected/a31c00ca-a7a8-49b1-88b8-fb0e77c17b47-kube-api-access-xpvhk\") pod \"coredns-6f6b679f8f-npxt6\" (UID: \"a31c00ca-a7a8-49b1-88b8-fb0e77c17b47\") " pod="kube-system/coredns-6f6b679f8f-npxt6" Feb 13 15:09:24.065946 kubelet[3167]: I0213 15:09:24.065826 3167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a31c00ca-a7a8-49b1-88b8-fb0e77c17b47-config-volume\") pod \"coredns-6f6b679f8f-npxt6\" (UID: \"a31c00ca-a7a8-49b1-88b8-fb0e77c17b47\") " pod="kube-system/coredns-6f6b679f8f-npxt6" Feb 13 15:09:24.109875 containerd[1943]: time="2025-02-13T15:09:24.109774081Z" level=info msg="shim disconnected" 
id=4ae32e21709857f3997752463c3a6077e20720e9adc0caa7c316f2249e2c225f namespace=k8s.io Feb 13 15:09:24.109875 containerd[1943]: time="2025-02-13T15:09:24.109872037Z" level=warning msg="cleaning up after shim disconnected" id=4ae32e21709857f3997752463c3a6077e20720e9adc0caa7c316f2249e2c225f namespace=k8s.io Feb 13 15:09:24.109875 containerd[1943]: time="2025-02-13T15:09:24.109893541Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:09:24.204179 containerd[1943]: time="2025-02-13T15:09:24.203161826Z" level=info msg="CreateContainer within sandbox \"4e1a71de8298b22f03c14cf1f667594187e1da1584a1c2f04e24e99c8cf7413a\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 15:09:24.227300 containerd[1943]: time="2025-02-13T15:09:24.227197082Z" level=info msg="CreateContainer within sandbox \"4e1a71de8298b22f03c14cf1f667594187e1da1584a1c2f04e24e99c8cf7413a\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"5ab34ad3a43570f7e641bebc08d5463a06e41649d96077d3bc292df49f06164a\"" Feb 13 15:09:24.229845 containerd[1943]: time="2025-02-13T15:09:24.229022822Z" level=info msg="StartContainer for \"5ab34ad3a43570f7e641bebc08d5463a06e41649d96077d3bc292df49f06164a\"" Feb 13 15:09:24.274291 systemd[1]: Started cri-containerd-5ab34ad3a43570f7e641bebc08d5463a06e41649d96077d3bc292df49f06164a.scope - libcontainer container 5ab34ad3a43570f7e641bebc08d5463a06e41649d96077d3bc292df49f06164a. 
Feb 13 15:09:24.328800 containerd[1943]: time="2025-02-13T15:09:24.328548386Z" level=info msg="StartContainer for \"5ab34ad3a43570f7e641bebc08d5463a06e41649d96077d3bc292df49f06164a\" returns successfully" Feb 13 15:09:24.348956 containerd[1943]: time="2025-02-13T15:09:24.348560751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-npxt6,Uid:a31c00ca-a7a8-49b1-88b8-fb0e77c17b47,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:24.353095 containerd[1943]: time="2025-02-13T15:09:24.352983807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z5clw,Uid:809b83ee-8eb6-4267-94f5-8a1737b6f0e1,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:24.467784 containerd[1943]: time="2025-02-13T15:09:24.466547835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-npxt6,Uid:a31c00ca-a7a8-49b1-88b8-fb0e77c17b47,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"13ad03756fb96b32e5ef03c9318d525cebae35c319b09471e847bdf24e27bd23\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:09:24.468139 kubelet[3167]: E0213 15:09:24.467037 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13ad03756fb96b32e5ef03c9318d525cebae35c319b09471e847bdf24e27bd23\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:09:24.468139 kubelet[3167]: E0213 15:09:24.467134 3167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13ad03756fb96b32e5ef03c9318d525cebae35c319b09471e847bdf24e27bd23\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-npxt6" 
Feb 13 15:09:24.468139 kubelet[3167]: E0213 15:09:24.467167 3167 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13ad03756fb96b32e5ef03c9318d525cebae35c319b09471e847bdf24e27bd23\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-npxt6" Feb 13 15:09:24.468139 kubelet[3167]: E0213 15:09:24.467226 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-npxt6_kube-system(a31c00ca-a7a8-49b1-88b8-fb0e77c17b47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-npxt6_kube-system(a31c00ca-a7a8-49b1-88b8-fb0e77c17b47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13ad03756fb96b32e5ef03c9318d525cebae35c319b09471e847bdf24e27bd23\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-npxt6" podUID="a31c00ca-a7a8-49b1-88b8-fb0e77c17b47" Feb 13 15:09:24.472330 containerd[1943]: time="2025-02-13T15:09:24.472205763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z5clw,Uid:809b83ee-8eb6-4267-94f5-8a1737b6f0e1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a4f49976e618f9453b8d0ba1c3dbe76beb4ee92b7a75129f6377a94eaf35d27\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:09:24.473241 kubelet[3167]: E0213 15:09:24.472668 3167 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4f49976e618f9453b8d0ba1c3dbe76beb4ee92b7a75129f6377a94eaf35d27\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv 
failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:09:24.473241 kubelet[3167]: E0213 15:09:24.472895 3167 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4f49976e618f9453b8d0ba1c3dbe76beb4ee92b7a75129f6377a94eaf35d27\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-z5clw" Feb 13 15:09:24.473241 kubelet[3167]: E0213 15:09:24.472934 3167 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a4f49976e618f9453b8d0ba1c3dbe76beb4ee92b7a75129f6377a94eaf35d27\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-z5clw" Feb 13 15:09:24.473241 kubelet[3167]: E0213 15:09:24.473027 3167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-z5clw_kube-system(809b83ee-8eb6-4267-94f5-8a1737b6f0e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-z5clw_kube-system(809b83ee-8eb6-4267-94f5-8a1737b6f0e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a4f49976e618f9453b8d0ba1c3dbe76beb4ee92b7a75129f6377a94eaf35d27\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-z5clw" podUID="809b83ee-8eb6-4267-94f5-8a1737b6f0e1" Feb 13 15:09:24.759327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ae32e21709857f3997752463c3a6077e20720e9adc0caa7c316f2249e2c225f-rootfs.mount: Deactivated successfully. 
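The CreatePodSandbox failures for both coredns pods above come from the flannel CNI plugin's loadFlannelSubnetEnv step: flanneld has not yet written /run/flannel/subnet.env (the errors stop once flannel.1 gains carrier a few entries later). A minimal sketch of parsing that file; the contents shown are assumptions consistent with the 192.168.0.0/24 subnet and MTU 8951 visible in the delegate netconf later in this log, not values captured from this host:

```python
# Hypothetical /run/flannel/subnet.env contents; flanneld writes this file
# once it holds a subnet lease, which unblocks the flannel CNI plugin.
SUBNET_ENV = """\
FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=8951
FLANNEL_IPMASQ=false
"""

def parse_subnet_env(text: str) -> dict:
    """Parse the KEY=VALUE lines the flannel CNI plugin reads."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            env[key] = value
    return env

print(parse_subnet_env(SUBNET_ENV)["FLANNEL_MTU"])  # 8951
```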
Feb 13 15:09:25.221979 kubelet[3167]: I0213 15:09:25.221762 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-gdmfb" podStartSLOduration=2.140330288 podStartE2EDuration="9.221707863s" podCreationTimestamp="2025-02-13 15:09:16 +0000 UTC" firstStartedPulling="2025-02-13 15:09:16.647552732 +0000 UTC m=+6.394174196" lastFinishedPulling="2025-02-13 15:09:23.728930307 +0000 UTC m=+13.475551771" observedRunningTime="2025-02-13 15:09:25.220951383 +0000 UTC m=+14.967572871" watchObservedRunningTime="2025-02-13 15:09:25.221707863 +0000 UTC m=+14.968329327" Feb 13 15:09:25.465115 (udev-worker)[3817]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:09:25.493219 systemd-networkd[1856]: flannel.1: Link UP Feb 13 15:09:25.493240 systemd-networkd[1856]: flannel.1: Gained carrier Feb 13 15:09:27.418069 systemd-networkd[1856]: flannel.1: Gained IPv6LL Feb 13 15:09:30.197924 ntpd[1924]: Listen normally on 7 flannel.1 192.168.0.0:123 Feb 13 15:09:30.198064 ntpd[1924]: Listen normally on 8 flannel.1 [fe80::d8f1:c9ff:fe4c:2608%4]:123 Feb 13 15:09:30.199381 ntpd[1924]: 13 Feb 15:09:30 ntpd[1924]: Listen normally on 7 flannel.1 192.168.0.0:123 Feb 13 15:09:30.199381 ntpd[1924]: 13 Feb 15:09:30 ntpd[1924]: Listen normally on 8 flannel.1 [fe80::d8f1:c9ff:fe4c:2608%4]:123 Feb 13 15:09:37.959179 containerd[1943]: time="2025-02-13T15:09:37.959084550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-npxt6,Uid:a31c00ca-a7a8-49b1-88b8-fb0e77c17b47,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:38.010093 systemd-networkd[1856]: cni0: Link UP Feb 13 15:09:38.010114 systemd-networkd[1856]: cni0: Gained carrier Feb 13 15:09:38.025660 kernel: cni0: port 1(vethbb12341c) entered blocking state Feb 13 15:09:38.025819 kernel: cni0: port 1(vethbb12341c) entered disabled state Feb 13 15:09:38.025865 kernel: vethbb12341c: entered allmulticast mode Feb 13 15:09:38.023671 systemd-networkd[1856]: 
vethbb12341c: Link UP Feb 13 15:09:38.028204 kernel: vethbb12341c: entered promiscuous mode Feb 13 15:09:38.029011 systemd-networkd[1856]: cni0: Lost carrier Feb 13 15:09:38.035015 (udev-worker)[3959]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:09:38.044007 kernel: cni0: port 1(vethbb12341c) entered blocking state Feb 13 15:09:38.044124 kernel: cni0: port 1(vethbb12341c) entered forwarding state Feb 13 15:09:38.044100 systemd-networkd[1856]: vethbb12341c: Gained carrier Feb 13 15:09:38.048477 systemd-networkd[1856]: cni0: Gained carrier Feb 13 15:09:38.049607 (udev-worker)[3965]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:09:38.055924 containerd[1943]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"} Feb 13 15:09:38.055924 containerd[1943]: delegateAdd: netconf sent to delegate plugin: Feb 13 15:09:38.107399 containerd[1943]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T15:09:38.107154687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:38.107761 containerd[1943]: time="2025-02-13T15:09:38.107459271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:38.107761 containerd[1943]: time="2025-02-13T15:09:38.107552055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:38.108188 containerd[1943]: time="2025-02-13T15:09:38.108069951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:38.151205 systemd[1]: Started cri-containerd-0eeb863c48b964e631320af8bbf99f2348a96952dd3d5cb74333f961f80b6558.scope - libcontainer container 0eeb863c48b964e631320af8bbf99f2348a96952dd3d5cb74333f961f80b6558. Feb 13 15:09:38.227533 containerd[1943]: time="2025-02-13T15:09:38.227281959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-npxt6,Uid:a31c00ca-a7a8-49b1-88b8-fb0e77c17b47,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eeb863c48b964e631320af8bbf99f2348a96952dd3d5cb74333f961f80b6558\"" Feb 13 15:09:38.237795 containerd[1943]: time="2025-02-13T15:09:38.237513088Z" level=info msg="CreateContainer within sandbox \"0eeb863c48b964e631320af8bbf99f2348a96952dd3d5cb74333f961f80b6558\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:09:38.266006 containerd[1943]: time="2025-02-13T15:09:38.265857940Z" level=info msg="CreateContainer within sandbox \"0eeb863c48b964e631320af8bbf99f2348a96952dd3d5cb74333f961f80b6558\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf6f323989a0e83cf67beb1548f3f9166ad8f7a07a0ecc6d4069fde259dbf94c\"" Feb 13 15:09:38.267256 containerd[1943]: time="2025-02-13T15:09:38.266706508Z" level=info msg="StartContainer for \"cf6f323989a0e83cf67beb1548f3f9166ad8f7a07a0ecc6d4069fde259dbf94c\"" Feb 13 15:09:38.315090 systemd[1]: Started cri-containerd-cf6f323989a0e83cf67beb1548f3f9166ad8f7a07a0ecc6d4069fde259dbf94c.scope - libcontainer container 
cf6f323989a0e83cf67beb1548f3f9166ad8f7a07a0ecc6d4069fde259dbf94c. Feb 13 15:09:38.371047 containerd[1943]: time="2025-02-13T15:09:38.369979528Z" level=info msg="StartContainer for \"cf6f323989a0e83cf67beb1548f3f9166ad8f7a07a0ecc6d4069fde259dbf94c\" returns successfully" Feb 13 15:09:38.961038 containerd[1943]: time="2025-02-13T15:09:38.960905779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z5clw,Uid:809b83ee-8eb6-4267-94f5-8a1737b6f0e1,Namespace:kube-system,Attempt:0,}" Feb 13 15:09:39.015095 (udev-worker)[3976]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:09:39.016183 systemd-networkd[1856]: vetha4d92369: Link UP Feb 13 15:09:39.019151 kernel: cni0: port 2(vetha4d92369) entered blocking state Feb 13 15:09:39.019285 kernel: cni0: port 2(vetha4d92369) entered disabled state Feb 13 15:09:39.019356 kernel: vetha4d92369: entered allmulticast mode Feb 13 15:09:39.021659 kernel: vetha4d92369: entered promiscuous mode Feb 13 15:09:39.022846 kernel: cni0: port 2(vetha4d92369) entered blocking state Feb 13 15:09:39.024797 kernel: cni0: port 2(vetha4d92369) entered forwarding state Feb 13 15:09:39.037926 systemd-networkd[1856]: vetha4d92369: Gained carrier Feb 13 15:09:39.041787 containerd[1943]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Feb 13 15:09:39.041787 containerd[1943]: delegateAdd: netconf sent to delegate plugin: Feb 13 15:09:39.087109 containerd[1943]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T15:09:39.086533600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:09:39.087109 containerd[1943]: time="2025-02-13T15:09:39.086653216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:09:39.087109 containerd[1943]: time="2025-02-13T15:09:39.086692384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:39.087109 containerd[1943]: time="2025-02-13T15:09:39.087005740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:09:39.144055 systemd[1]: Started cri-containerd-05102f22f22d3340b34d33311963e3e7e531ac47264cb5ef0d9ec11382933d72.scope - libcontainer container 05102f22f22d3340b34d33311963e3e7e531ac47264cb5ef0d9ec11382933d72. 
Feb 13 15:09:39.213514 containerd[1943]: time="2025-02-13T15:09:39.213242332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z5clw,Uid:809b83ee-8eb6-4267-94f5-8a1737b6f0e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"05102f22f22d3340b34d33311963e3e7e531ac47264cb5ef0d9ec11382933d72\"" Feb 13 15:09:39.223490 containerd[1943]: time="2025-02-13T15:09:39.222275200Z" level=info msg="CreateContainer within sandbox \"05102f22f22d3340b34d33311963e3e7e531ac47264cb5ef0d9ec11382933d72\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:09:39.254580 containerd[1943]: time="2025-02-13T15:09:39.254022773Z" level=info msg="CreateContainer within sandbox \"05102f22f22d3340b34d33311963e3e7e531ac47264cb5ef0d9ec11382933d72\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d7aed30cfbc0539c0b23deac52894e6dec0b89c5be82e671b0cf3ed6dff1693\"" Feb 13 15:09:39.258084 containerd[1943]: time="2025-02-13T15:09:39.255737993Z" level=info msg="StartContainer for \"9d7aed30cfbc0539c0b23deac52894e6dec0b89c5be82e671b0cf3ed6dff1693\"" Feb 13 15:09:39.275609 kubelet[3167]: I0213 15:09:39.275176 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-npxt6" podStartSLOduration=23.275146577 podStartE2EDuration="23.275146577s" podCreationTimestamp="2025-02-13 15:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:39.274294397 +0000 UTC m=+29.020915861" watchObservedRunningTime="2025-02-13 15:09:39.275146577 +0000 UTC m=+29.021768077" Feb 13 15:09:39.338565 systemd[1]: Started cri-containerd-9d7aed30cfbc0539c0b23deac52894e6dec0b89c5be82e671b0cf3ed6dff1693.scope - libcontainer container 9d7aed30cfbc0539c0b23deac52894e6dec0b89c5be82e671b0cf3ed6dff1693. 
Feb 13 15:09:39.416558 containerd[1943]: time="2025-02-13T15:09:39.416444501Z" level=info msg="StartContainer for \"9d7aed30cfbc0539c0b23deac52894e6dec0b89c5be82e671b0cf3ed6dff1693\" returns successfully" Feb 13 15:09:39.450556 systemd-networkd[1856]: vethbb12341c: Gained IPv6LL Feb 13 15:09:39.514121 systemd-networkd[1856]: cni0: Gained IPv6LL Feb 13 15:09:40.312809 kubelet[3167]: I0213 15:09:40.312066 3167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-z5clw" podStartSLOduration=24.31204185 podStartE2EDuration="24.31204185s" podCreationTimestamp="2025-02-13 15:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:09:40.287595318 +0000 UTC m=+30.034216818" watchObservedRunningTime="2025-02-13 15:09:40.31204185 +0000 UTC m=+30.058663386" Feb 13 15:09:40.730418 systemd-networkd[1856]: vetha4d92369: Gained IPv6LL Feb 13 15:09:43.197996 ntpd[1924]: Listen normally on 9 cni0 192.168.0.1:123 Feb 13 15:09:43.198850 ntpd[1924]: 13 Feb 15:09:43 ntpd[1924]: Listen normally on 9 cni0 192.168.0.1:123 Feb 13 15:09:43.198850 ntpd[1924]: 13 Feb 15:09:43 ntpd[1924]: Listen normally on 10 cni0 [fe80::348d:2eff:febe:c02e%5]:123 Feb 13 15:09:43.198850 ntpd[1924]: 13 Feb 15:09:43 ntpd[1924]: Listen normally on 11 vethbb12341c [fe80::28eb:b9ff:fe62:abc2%6]:123 Feb 13 15:09:43.198850 ntpd[1924]: 13 Feb 15:09:43 ntpd[1924]: Listen normally on 12 vetha4d92369 [fe80::b41f:cff:fe67:198b%7]:123 Feb 13 15:09:43.198131 ntpd[1924]: Listen normally on 10 cni0 [fe80::348d:2eff:febe:c02e%5]:123 Feb 13 15:09:43.198208 ntpd[1924]: Listen normally on 11 vethbb12341c [fe80::28eb:b9ff:fe62:abc2%6]:123 Feb 13 15:09:43.198274 ntpd[1924]: Listen normally on 12 vetha4d92369 [fe80::b41f:cff:fe67:198b%7]:123 Feb 13 15:09:56.463557 systemd[1]: Started sshd@5-172.31.21.44:22-139.178.68.195:34452.service - OpenSSH per-connection server daemon 
(139.178.68.195:34452). Feb 13 15:09:56.651063 sshd[4256]: Accepted publickey for core from 139.178.68.195 port 34452 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:09:56.654855 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:09:56.667338 systemd-logind[1929]: New session 6 of user core. Feb 13 15:09:56.677117 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:09:56.976015 sshd[4258]: Connection closed by 139.178.68.195 port 34452 Feb 13 15:09:56.977361 sshd-session[4256]: pam_unix(sshd:session): session closed for user core Feb 13 15:09:56.986994 systemd[1]: sshd@5-172.31.21.44:22-139.178.68.195:34452.service: Deactivated successfully. Feb 13 15:09:56.992208 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:09:56.998035 systemd-logind[1929]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:09:57.007462 systemd-logind[1929]: Removed session 6. Feb 13 15:10:02.021480 systemd[1]: Started sshd@6-172.31.21.44:22-139.178.68.195:34454.service - OpenSSH per-connection server daemon (139.178.68.195:34454). Feb 13 15:10:02.209407 sshd[4294]: Accepted publickey for core from 139.178.68.195 port 34454 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:02.213151 sshd-session[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:02.227310 systemd-logind[1929]: New session 7 of user core. Feb 13 15:10:02.235335 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:10:02.513141 sshd[4298]: Connection closed by 139.178.68.195 port 34454 Feb 13 15:10:02.512959 sshd-session[4294]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:02.522326 systemd[1]: sshd@6-172.31.21.44:22-139.178.68.195:34454.service: Deactivated successfully. Feb 13 15:10:02.528627 systemd[1]: session-7.scope: Deactivated successfully. 
Feb 13 15:10:02.530668 systemd-logind[1929]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:10:02.533291 systemd-logind[1929]: Removed session 7. Feb 13 15:10:07.563297 systemd[1]: Started sshd@7-172.31.21.44:22-139.178.68.195:60840.service - OpenSSH per-connection server daemon (139.178.68.195:60840). Feb 13 15:10:07.752186 sshd[4332]: Accepted publickey for core from 139.178.68.195 port 60840 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:07.755832 sshd-session[4332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:07.766806 systemd-logind[1929]: New session 8 of user core. Feb 13 15:10:07.777073 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:10:08.064906 sshd[4334]: Connection closed by 139.178.68.195 port 60840 Feb 13 15:10:08.067072 sshd-session[4332]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:08.075563 systemd[1]: sshd@7-172.31.21.44:22-139.178.68.195:60840.service: Deactivated successfully. Feb 13 15:10:08.082893 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:10:08.087795 systemd-logind[1929]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:10:08.119329 systemd[1]: Started sshd@8-172.31.21.44:22-139.178.68.195:60854.service - OpenSSH per-connection server daemon (139.178.68.195:60854). Feb 13 15:10:08.122916 systemd-logind[1929]: Removed session 8. Feb 13 15:10:08.328133 sshd[4346]: Accepted publickey for core from 139.178.68.195 port 60854 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:08.331533 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:08.340970 systemd-logind[1929]: New session 9 of user core. Feb 13 15:10:08.348103 systemd[1]: Started session-9.scope - Session 9 of User core. 
Feb 13 15:10:08.701774 sshd[4349]: Connection closed by 139.178.68.195 port 60854 Feb 13 15:10:08.701418 sshd-session[4346]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:08.716645 systemd[1]: sshd@8-172.31.21.44:22-139.178.68.195:60854.service: Deactivated successfully. Feb 13 15:10:08.725866 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:10:08.731891 systemd-logind[1929]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:10:08.762411 systemd[1]: Started sshd@9-172.31.21.44:22-139.178.68.195:60864.service - OpenSSH per-connection server daemon (139.178.68.195:60864). Feb 13 15:10:08.767335 systemd-logind[1929]: Removed session 9. Feb 13 15:10:08.976052 sshd[4358]: Accepted publickey for core from 139.178.68.195 port 60864 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:08.980977 sshd-session[4358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:09.003159 systemd-logind[1929]: New session 10 of user core. Feb 13 15:10:09.010867 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:10:09.306477 sshd[4361]: Connection closed by 139.178.68.195 port 60864 Feb 13 15:10:09.308080 sshd-session[4358]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:09.317141 systemd[1]: sshd@9-172.31.21.44:22-139.178.68.195:60864.service: Deactivated successfully. Feb 13 15:10:09.322290 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:10:09.324374 systemd-logind[1929]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:10:09.329335 systemd-logind[1929]: Removed session 10. Feb 13 15:10:14.360325 systemd[1]: Started sshd@10-172.31.21.44:22-139.178.68.195:60872.service - OpenSSH per-connection server daemon (139.178.68.195:60872). 
Feb 13 15:10:14.561626 sshd[4397]: Accepted publickey for core from 139.178.68.195 port 60872 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:14.564684 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:14.573234 systemd-logind[1929]: New session 11 of user core. Feb 13 15:10:14.580012 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:10:14.840686 sshd[4399]: Connection closed by 139.178.68.195 port 60872 Feb 13 15:10:14.841680 sshd-session[4397]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:14.847924 systemd-logind[1929]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:10:14.850282 systemd[1]: sshd@10-172.31.21.44:22-139.178.68.195:60872.service: Deactivated successfully. Feb 13 15:10:14.855273 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:10:14.859045 systemd-logind[1929]: Removed session 11. Feb 13 15:10:19.886308 systemd[1]: Started sshd@11-172.31.21.44:22-139.178.68.195:37008.service - OpenSSH per-connection server daemon (139.178.68.195:37008). Feb 13 15:10:20.079363 sshd[4434]: Accepted publickey for core from 139.178.68.195 port 37008 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:20.082664 sshd-session[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:20.092180 systemd-logind[1929]: New session 12 of user core. Feb 13 15:10:20.107014 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:10:20.373342 sshd[4436]: Connection closed by 139.178.68.195 port 37008 Feb 13 15:10:20.374260 sshd-session[4434]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:20.383038 systemd[1]: sshd@11-172.31.21.44:22-139.178.68.195:37008.service: Deactivated successfully. Feb 13 15:10:20.387057 systemd[1]: session-12.scope: Deactivated successfully. 
Feb 13 15:10:20.389306 systemd-logind[1929]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:10:20.391208 systemd-logind[1929]: Removed session 12. Feb 13 15:10:25.421524 systemd[1]: Started sshd@12-172.31.21.44:22-139.178.68.195:37020.service - OpenSSH per-connection server daemon (139.178.68.195:37020). Feb 13 15:10:25.618966 sshd[4469]: Accepted publickey for core from 139.178.68.195 port 37020 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:25.622687 sshd-session[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:25.636874 systemd-logind[1929]: New session 13 of user core. Feb 13 15:10:25.643132 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:10:25.918110 sshd[4471]: Connection closed by 139.178.68.195 port 37020 Feb 13 15:10:25.919162 sshd-session[4469]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:25.927425 systemd[1]: sshd@12-172.31.21.44:22-139.178.68.195:37020.service: Deactivated successfully. Feb 13 15:10:25.933419 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:10:25.935300 systemd-logind[1929]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:10:25.937802 systemd-logind[1929]: Removed session 13. Feb 13 15:10:30.959359 systemd[1]: Started sshd@13-172.31.21.44:22-139.178.68.195:39336.service - OpenSSH per-connection server daemon (139.178.68.195:39336). Feb 13 15:10:31.160773 sshd[4510]: Accepted publickey for core from 139.178.68.195 port 39336 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:31.164113 sshd-session[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:31.175225 systemd-logind[1929]: New session 14 of user core. Feb 13 15:10:31.183331 systemd[1]: Started session-14.scope - Session 14 of User core. 
Feb 13 15:10:31.486943 sshd[4517]: Connection closed by 139.178.68.195 port 39336 Feb 13 15:10:31.488253 sshd-session[4510]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:31.498433 systemd[1]: sshd@13-172.31.21.44:22-139.178.68.195:39336.service: Deactivated successfully. Feb 13 15:10:31.504847 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:10:31.507318 systemd-logind[1929]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:10:31.535477 systemd[1]: Started sshd@14-172.31.21.44:22-139.178.68.195:39338.service - OpenSSH per-connection server daemon (139.178.68.195:39338). Feb 13 15:10:31.537998 systemd-logind[1929]: Removed session 14. Feb 13 15:10:31.744446 sshd[4538]: Accepted publickey for core from 139.178.68.195 port 39338 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:31.748959 sshd-session[4538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:31.763699 systemd-logind[1929]: New session 15 of user core. Feb 13 15:10:31.774125 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:10:32.093661 sshd[4541]: Connection closed by 139.178.68.195 port 39338 Feb 13 15:10:32.094817 sshd-session[4538]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:32.102952 systemd[1]: sshd@14-172.31.21.44:22-139.178.68.195:39338.service: Deactivated successfully. Feb 13 15:10:32.108762 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:10:32.110604 systemd-logind[1929]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:10:32.113155 systemd-logind[1929]: Removed session 15. Feb 13 15:10:32.142279 systemd[1]: Started sshd@15-172.31.21.44:22-139.178.68.195:39344.service - OpenSSH per-connection server daemon (139.178.68.195:39344). 
Feb 13 15:10:32.327685 sshd[4551]: Accepted publickey for core from 139.178.68.195 port 39344 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:32.330266 sshd-session[4551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:32.338801 systemd-logind[1929]: New session 16 of user core. Feb 13 15:10:32.348180 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:10:35.059643 sshd[4553]: Connection closed by 139.178.68.195 port 39344 Feb 13 15:10:35.061659 sshd-session[4551]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:35.071761 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:10:35.072245 systemd[1]: session-16.scope: Consumed 1.042s CPU time, 56.4M memory peak. Feb 13 15:10:35.077081 systemd[1]: sshd@15-172.31.21.44:22-139.178.68.195:39344.service: Deactivated successfully. Feb 13 15:10:35.078596 systemd-logind[1929]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:10:35.124037 systemd[1]: Started sshd@16-172.31.21.44:22-139.178.68.195:39354.service - OpenSSH per-connection server daemon (139.178.68.195:39354). Feb 13 15:10:35.125354 systemd-logind[1929]: Removed session 16. Feb 13 15:10:35.339942 sshd[4569]: Accepted publickey for core from 139.178.68.195 port 39354 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:35.342840 sshd-session[4569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:35.353482 systemd-logind[1929]: New session 17 of user core. Feb 13 15:10:35.359160 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:10:35.883785 sshd[4572]: Connection closed by 139.178.68.195 port 39354 Feb 13 15:10:35.883139 sshd-session[4569]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:35.897791 systemd[1]: sshd@16-172.31.21.44:22-139.178.68.195:39354.service: Deactivated successfully. 
Feb 13 15:10:35.904372 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:10:35.907939 systemd-logind[1929]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:10:35.934326 systemd[1]: Started sshd@17-172.31.21.44:22-139.178.68.195:39362.service - OpenSSH per-connection server daemon (139.178.68.195:39362). Feb 13 15:10:35.937677 systemd-logind[1929]: Removed session 17. Feb 13 15:10:36.146385 sshd[4587]: Accepted publickey for core from 139.178.68.195 port 39362 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:36.147941 sshd-session[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:36.159342 systemd-logind[1929]: New session 18 of user core. Feb 13 15:10:36.173139 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:10:36.428338 sshd[4590]: Connection closed by 139.178.68.195 port 39362 Feb 13 15:10:36.429346 sshd-session[4587]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:36.441416 systemd[1]: sshd@17-172.31.21.44:22-139.178.68.195:39362.service: Deactivated successfully. Feb 13 15:10:36.447788 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:10:36.449546 systemd-logind[1929]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:10:36.451885 systemd-logind[1929]: Removed session 18. Feb 13 15:10:41.482429 systemd[1]: Started sshd@18-172.31.21.44:22-139.178.68.195:51492.service - OpenSSH per-connection server daemon (139.178.68.195:51492). Feb 13 15:10:41.677322 sshd[4639]: Accepted publickey for core from 139.178.68.195 port 51492 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:41.682678 sshd-session[4639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:41.697858 systemd-logind[1929]: New session 19 of user core. Feb 13 15:10:41.704405 systemd[1]: Started session-19.scope - Session 19 of User core. 
Feb 13 15:10:41.981205 sshd[4641]: Connection closed by 139.178.68.195 port 51492 Feb 13 15:10:41.982823 sshd-session[4639]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:41.991563 systemd[1]: sshd@18-172.31.21.44:22-139.178.68.195:51492.service: Deactivated successfully. Feb 13 15:10:41.998881 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:10:42.004638 systemd-logind[1929]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:10:42.009757 systemd-logind[1929]: Removed session 19. Feb 13 15:10:47.032305 systemd[1]: Started sshd@19-172.31.21.44:22-139.178.68.195:55902.service - OpenSSH per-connection server daemon (139.178.68.195:55902). Feb 13 15:10:47.213959 sshd[4680]: Accepted publickey for core from 139.178.68.195 port 55902 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:47.217205 sshd-session[4680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:47.227685 systemd-logind[1929]: New session 20 of user core. Feb 13 15:10:47.239038 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:10:47.486609 sshd[4682]: Connection closed by 139.178.68.195 port 55902 Feb 13 15:10:47.487764 sshd-session[4680]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:47.494702 systemd[1]: sshd@19-172.31.21.44:22-139.178.68.195:55902.service: Deactivated successfully. Feb 13 15:10:47.498983 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:10:47.502075 systemd-logind[1929]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:10:47.505599 systemd-logind[1929]: Removed session 20. Feb 13 15:10:52.528459 systemd[1]: Started sshd@20-172.31.21.44:22-139.178.68.195:55918.service - OpenSSH per-connection server daemon (139.178.68.195:55918). 
Feb 13 15:10:52.721404 sshd[4715]: Accepted publickey for core from 139.178.68.195 port 55918 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:52.724131 sshd-session[4715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:52.733536 systemd-logind[1929]: New session 21 of user core. Feb 13 15:10:52.751111 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:10:53.006413 sshd[4717]: Connection closed by 139.178.68.195 port 55918 Feb 13 15:10:53.006225 sshd-session[4715]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:53.016259 systemd[1]: sshd@20-172.31.21.44:22-139.178.68.195:55918.service: Deactivated successfully. Feb 13 15:10:53.021120 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:10:53.024433 systemd-logind[1929]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:10:53.028364 systemd-logind[1929]: Removed session 21. Feb 13 15:10:58.057254 systemd[1]: Started sshd@21-172.31.21.44:22-139.178.68.195:57714.service - OpenSSH per-connection server daemon (139.178.68.195:57714). Feb 13 15:10:58.243393 sshd[4750]: Accepted publickey for core from 139.178.68.195 port 57714 ssh2: RSA SHA256:3/htRDj1ntNL6MPpPyfsmj3hBKnexY2IDu6B20AGqLs Feb 13 15:10:58.246325 sshd-session[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:10:58.256350 systemd-logind[1929]: New session 22 of user core. Feb 13 15:10:58.264054 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 15:10:58.530249 sshd[4752]: Connection closed by 139.178.68.195 port 57714 Feb 13 15:10:58.530896 sshd-session[4750]: pam_unix(sshd:session): session closed for user core Feb 13 15:10:58.540473 systemd[1]: sshd@21-172.31.21.44:22-139.178.68.195:57714.service: Deactivated successfully. Feb 13 15:10:58.545634 systemd[1]: session-22.scope: Deactivated successfully. 
Feb 13 15:10:58.547914 systemd-logind[1929]: Session 22 logged out. Waiting for processes to exit. Feb 13 15:10:58.550379 systemd-logind[1929]: Removed session 22. Feb 13 15:11:12.935637 systemd[1]: cri-containerd-88506664c703420cea6f8342f97cbc4ffb5563673b381e7658fc7df08409a6c8.scope: Deactivated successfully. Feb 13 15:11:12.936373 systemd[1]: cri-containerd-88506664c703420cea6f8342f97cbc4ffb5563673b381e7658fc7df08409a6c8.scope: Consumed 4.578s CPU time, 51.5M memory peak. Feb 13 15:11:12.994567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88506664c703420cea6f8342f97cbc4ffb5563673b381e7658fc7df08409a6c8-rootfs.mount: Deactivated successfully. Feb 13 15:11:13.001336 containerd[1943]: time="2025-02-13T15:11:13.001236562Z" level=info msg="shim disconnected" id=88506664c703420cea6f8342f97cbc4ffb5563673b381e7658fc7df08409a6c8 namespace=k8s.io Feb 13 15:11:13.001336 containerd[1943]: time="2025-02-13T15:11:13.001322878Z" level=warning msg="cleaning up after shim disconnected" id=88506664c703420cea6f8342f97cbc4ffb5563673b381e7658fc7df08409a6c8 namespace=k8s.io Feb 13 15:11:13.002567 containerd[1943]: time="2025-02-13T15:11:13.001345882Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:13.131127 kubelet[3167]: E0213 15:11:13.130540 3167 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-44?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 15:11:13.541340 kubelet[3167]: I0213 15:11:13.541255 3167 scope.go:117] "RemoveContainer" containerID="88506664c703420cea6f8342f97cbc4ffb5563673b381e7658fc7df08409a6c8" Feb 13 15:11:13.546004 containerd[1943]: time="2025-02-13T15:11:13.545883961Z" level=info msg="CreateContainer within sandbox \"963df73bdbd551814506bf5c57453b48bd9f13da7716dc039beb74c44d96e670\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 
15:11:13.592787 containerd[1943]: time="2025-02-13T15:11:13.592577749Z" level=info msg="CreateContainer within sandbox \"963df73bdbd551814506bf5c57453b48bd9f13da7716dc039beb74c44d96e670\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8339248eb6137fdf0f817e08780f3360f7c50d45af5791abf99bf513a2eda704\"" Feb 13 15:11:13.594768 containerd[1943]: time="2025-02-13T15:11:13.594517333Z" level=info msg="StartContainer for \"8339248eb6137fdf0f817e08780f3360f7c50d45af5791abf99bf513a2eda704\"" Feb 13 15:11:13.655118 systemd[1]: Started cri-containerd-8339248eb6137fdf0f817e08780f3360f7c50d45af5791abf99bf513a2eda704.scope - libcontainer container 8339248eb6137fdf0f817e08780f3360f7c50d45af5791abf99bf513a2eda704. Feb 13 15:11:13.738484 containerd[1943]: time="2025-02-13T15:11:13.738330194Z" level=info msg="StartContainer for \"8339248eb6137fdf0f817e08780f3360f7c50d45af5791abf99bf513a2eda704\" returns successfully" Feb 13 15:11:17.645847 systemd[1]: cri-containerd-572edfc35a7ea69c214a0b2ac33368f75a9d73295ab5fb7e693b6bae7452dfe5.scope: Deactivated successfully. Feb 13 15:11:17.646560 systemd[1]: cri-containerd-572edfc35a7ea69c214a0b2ac33368f75a9d73295ab5fb7e693b6bae7452dfe5.scope: Consumed 2.055s CPU time, 18.7M memory peak. Feb 13 15:11:17.706210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-572edfc35a7ea69c214a0b2ac33368f75a9d73295ab5fb7e693b6bae7452dfe5-rootfs.mount: Deactivated successfully. 
Feb 13 15:11:17.717377 containerd[1943]: time="2025-02-13T15:11:17.717177318Z" level=info msg="shim disconnected" id=572edfc35a7ea69c214a0b2ac33368f75a9d73295ab5fb7e693b6bae7452dfe5 namespace=k8s.io Feb 13 15:11:17.717377 containerd[1943]: time="2025-02-13T15:11:17.717294954Z" level=warning msg="cleaning up after shim disconnected" id=572edfc35a7ea69c214a0b2ac33368f75a9d73295ab5fb7e693b6bae7452dfe5 namespace=k8s.io Feb 13 15:11:17.717377 containerd[1943]: time="2025-02-13T15:11:17.717321258Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:11:18.565867 kubelet[3167]: I0213 15:11:18.565608 3167 scope.go:117] "RemoveContainer" containerID="572edfc35a7ea69c214a0b2ac33368f75a9d73295ab5fb7e693b6bae7452dfe5" Feb 13 15:11:18.569410 containerd[1943]: time="2025-02-13T15:11:18.569067606Z" level=info msg="CreateContainer within sandbox \"9f4a974d9b93897bcde1a920e95d15cfed7c440e9f4ea598c82bcc5d3cffa858\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 13 15:11:18.604101 containerd[1943]: time="2025-02-13T15:11:18.603901434Z" level=info msg="CreateContainer within sandbox \"9f4a974d9b93897bcde1a920e95d15cfed7c440e9f4ea598c82bcc5d3cffa858\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"42c3ff10276a8dcd7ca6e4b5112cc3e21d751a20894afef4d991c07c7b1091ae\"" Feb 13 15:11:18.605191 containerd[1943]: time="2025-02-13T15:11:18.604889550Z" level=info msg="StartContainer for \"42c3ff10276a8dcd7ca6e4b5112cc3e21d751a20894afef4d991c07c7b1091ae\"" Feb 13 15:11:18.671098 systemd[1]: Started cri-containerd-42c3ff10276a8dcd7ca6e4b5112cc3e21d751a20894afef4d991c07c7b1091ae.scope - libcontainer container 42c3ff10276a8dcd7ca6e4b5112cc3e21d751a20894afef4d991c07c7b1091ae. Feb 13 15:11:18.763026 containerd[1943]: time="2025-02-13T15:11:18.762935299Z" level=info msg="StartContainer for \"42c3ff10276a8dcd7ca6e4b5112cc3e21d751a20894afef4d991c07c7b1091ae\" returns successfully"