Feb 13 16:05:46.196332 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Feb 13 16:05:46.196382 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:34:20 -00 2025 Feb 13 16:05:46.196407 kernel: KASLR disabled due to lack of seed Feb 13 16:05:46.196424 kernel: efi: EFI v2.7 by EDK II Feb 13 16:05:46.196440 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18 Feb 13 16:05:46.196456 kernel: ACPI: Early table checksum verification disabled Feb 13 16:05:46.196474 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Feb 13 16:05:46.196490 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Feb 13 16:05:46.196506 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 13 16:05:46.196521 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Feb 13 16:05:46.196544 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 13 16:05:46.196560 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Feb 13 16:05:46.196576 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Feb 13 16:05:46.196593 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Feb 13 16:05:46.196631 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 13 16:05:46.196655 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Feb 13 16:05:46.196674 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Feb 13 16:05:46.196690 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Feb 13 16:05:46.196707 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Feb 13 16:05:46.196723 kernel: printk: bootconsole [uart0] enabled Feb 13 16:05:46.196739 kernel: NUMA: Failed to initialise from firmware Feb 13 16:05:46.196756 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Feb 13 16:05:46.196774 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Feb 13 16:05:46.196790 kernel: Zone ranges: Feb 13 16:05:46.196807 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Feb 13 16:05:46.196823 kernel: DMA32 empty Feb 13 16:05:46.196845 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Feb 13 16:05:46.196862 kernel: Movable zone start for each node Feb 13 16:05:46.196878 kernel: Early memory node ranges Feb 13 16:05:46.196894 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Feb 13 16:05:46.196911 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Feb 13 16:05:46.196928 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Feb 13 16:05:46.196945 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Feb 13 16:05:46.196961 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Feb 13 16:05:46.196977 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Feb 13 16:05:46.196994 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Feb 13 16:05:46.197010 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Feb 13 16:05:46.197027 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Feb 13 16:05:46.197048 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Feb 13 16:05:46.197066 kernel: psci: probing for conduit method from ACPI. Feb 13 16:05:46.197091 kernel: psci: PSCIv1.0 detected in firmware. Feb 13 16:05:46.197142 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 16:05:46.197183 kernel: psci: Trusted OS migration not required Feb 13 16:05:46.197209 kernel: psci: SMC Calling Convention v1.1 Feb 13 16:05:46.197228 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 16:05:46.197246 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 16:05:46.197264 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 16:05:46.197282 kernel: Detected PIPT I-cache on CPU0 Feb 13 16:05:46.197300 kernel: CPU features: detected: GIC system register CPU interface Feb 13 16:05:46.197318 kernel: CPU features: detected: Spectre-v2 Feb 13 16:05:46.197335 kernel: CPU features: detected: Spectre-v3a Feb 13 16:05:46.197353 kernel: CPU features: detected: Spectre-BHB Feb 13 16:05:46.197370 kernel: CPU features: detected: ARM erratum 1742098 Feb 13 16:05:46.197388 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Feb 13 16:05:46.197410 kernel: alternatives: applying boot alternatives Feb 13 16:05:46.197431 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=55866785c450f887021047c4ba00d104a5882975060a5fc692d64491b0d81886 Feb 13 16:05:46.197450 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 16:05:46.197467 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 16:05:46.197485 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 16:05:46.197502 kernel: Fallback order for Node 0: 0 Feb 13 16:05:46.197519 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Feb 13 16:05:46.197537 kernel: Policy zone: Normal Feb 13 16:05:46.197554 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 16:05:46.197571 kernel: software IO TLB: area num 2. Feb 13 16:05:46.197589 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Feb 13 16:05:46.197612 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved) Feb 13 16:05:46.197630 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 16:05:46.197647 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 16:05:46.197666 kernel: rcu: RCU event tracing is enabled. Feb 13 16:05:46.197684 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 16:05:46.197702 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 16:05:46.197720 kernel: Tracing variant of Tasks RCU enabled. Feb 13 16:05:46.197737 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 16:05:46.197755 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 16:05:46.197772 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 16:05:46.197789 kernel: GICv3: 96 SPIs implemented Feb 13 16:05:46.197812 kernel: GICv3: 0 Extended SPIs implemented Feb 13 16:05:46.197830 kernel: Root IRQ handler: gic_handle_irq Feb 13 16:05:46.197847 kernel: GICv3: GICv3 features: 16 PPIs Feb 13 16:05:46.197865 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Feb 13 16:05:46.197882 kernel: ITS [mem 0x10080000-0x1009ffff] Feb 13 16:05:46.197900 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 16:05:46.197918 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Feb 13 16:05:46.197935 kernel: GICv3: using LPI property table @0x00000004000d0000 Feb 13 16:05:46.197952 kernel: ITS: Using hypervisor restricted LPI range [128] Feb 13 16:05:46.197970 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Feb 13 16:05:46.197987 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 16:05:46.198005 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Feb 13 16:05:46.198028 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Feb 13 16:05:46.198046 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Feb 13 16:05:46.198064 kernel: Console: colour dummy device 80x25 Feb 13 16:05:46.198082 kernel: printk: console [tty1] enabled Feb 13 16:05:46.198102 kernel: ACPI: Core revision 20230628 Feb 13 16:05:46.200200 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Feb 13 16:05:46.200222 kernel: pid_max: default: 32768 minimum: 301 Feb 13 16:05:46.200241 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 16:05:46.200260 kernel: landlock: Up and running. Feb 13 16:05:46.200288 kernel: SELinux: Initializing. Feb 13 16:05:46.200306 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 16:05:46.200325 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 16:05:46.200344 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 16:05:46.200362 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 16:05:46.200380 kernel: rcu: Hierarchical SRCU implementation. Feb 13 16:05:46.200400 kernel: rcu: Max phase no-delay instances is 400. Feb 13 16:05:46.200419 kernel: Platform MSI: ITS@0x10080000 domain created Feb 13 16:05:46.200437 kernel: PCI/MSI: ITS@0x10080000 domain created Feb 13 16:05:46.200460 kernel: Remapping and enabling EFI services. Feb 13 16:05:46.200479 kernel: smp: Bringing up secondary CPUs ... Feb 13 16:05:46.200497 kernel: Detected PIPT I-cache on CPU1 Feb 13 16:05:46.200516 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Feb 13 16:05:46.200534 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Feb 13 16:05:46.200552 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Feb 13 16:05:46.200570 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 16:05:46.200588 kernel: SMP: Total of 2 processors activated. 
Feb 13 16:05:46.200606 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 16:05:46.200629 kernel: CPU features: detected: 32-bit EL1 Support Feb 13 16:05:46.200647 kernel: CPU features: detected: CRC32 instructions Feb 13 16:05:46.200666 kernel: CPU: All CPU(s) started at EL1 Feb 13 16:05:46.200697 kernel: alternatives: applying system-wide alternatives Feb 13 16:05:46.200723 kernel: devtmpfs: initialized Feb 13 16:05:46.200742 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 16:05:46.200761 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 16:05:46.200780 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 16:05:46.200798 kernel: SMBIOS 3.0.0 present. Feb 13 16:05:46.200818 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Feb 13 16:05:46.200842 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 16:05:46.200861 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 16:05:46.200880 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 16:05:46.200899 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 16:05:46.200919 kernel: audit: initializing netlink subsys (disabled) Feb 13 16:05:46.200938 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1 Feb 13 16:05:46.200957 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 16:05:46.200981 kernel: cpuidle: using governor menu Feb 13 16:05:46.201000 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 16:05:46.201019 kernel: ASID allocator initialised with 65536 entries Feb 13 16:05:46.201038 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 16:05:46.201056 kernel: Serial: AMBA PL011 UART driver Feb 13 16:05:46.201075 kernel: Modules: 17520 pages in range for non-PLT usage Feb 13 16:05:46.201094 kernel: Modules: 509040 pages in range for PLT usage Feb 13 16:05:46.201137 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 16:05:46.201179 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 16:05:46.201209 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 16:05:46.201228 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 16:05:46.201247 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 16:05:46.201266 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 16:05:46.201284 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 16:05:46.201304 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 16:05:46.201322 kernel: ACPI: Added _OSI(Module Device) Feb 13 16:05:46.201341 kernel: ACPI: Added _OSI(Processor Device) Feb 13 16:05:46.201359 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 16:05:46.201383 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 16:05:46.201402 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 16:05:46.201421 kernel: ACPI: Interpreter enabled Feb 13 16:05:46.201440 kernel: ACPI: Using GIC for interrupt routing Feb 13 16:05:46.201459 kernel: ACPI: MCFG table detected, 1 entries Feb 13 16:05:46.201478 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Feb 13 16:05:46.201816 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 16:05:46.202053 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 16:05:46.204435 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 16:05:46.204703 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Feb 13 16:05:46.204915 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Feb 13 16:05:46.204943 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Feb 13 16:05:46.204963 kernel: acpiphp: Slot [1] registered Feb 13 16:05:46.204983 kernel: acpiphp: Slot [2] registered Feb 13 16:05:46.205002 kernel: acpiphp: Slot [3] registered Feb 13 16:05:46.205020 kernel: acpiphp: Slot [4] registered Feb 13 16:05:46.205051 kernel: acpiphp: Slot [5] registered Feb 13 16:05:46.205071 kernel: acpiphp: Slot [6] registered Feb 13 16:05:46.205090 kernel: acpiphp: Slot [7] registered Feb 13 16:05:46.205138 kernel: acpiphp: Slot [8] registered Feb 13 16:05:46.205179 kernel: acpiphp: Slot [9] registered Feb 13 16:05:46.205202 kernel: acpiphp: Slot [10] registered Feb 13 16:05:46.205221 kernel: acpiphp: Slot [11] registered Feb 13 16:05:46.205241 kernel: acpiphp: Slot [12] registered Feb 13 16:05:46.205259 kernel: acpiphp: Slot [13] registered Feb 13 16:05:46.205278 kernel: acpiphp: Slot [14] registered Feb 13 16:05:46.205318 kernel: acpiphp: Slot [15] registered Feb 13 16:05:46.205338 kernel: acpiphp: Slot [16] registered Feb 13 16:05:46.205356 kernel: acpiphp: Slot [17] registered Feb 13 16:05:46.205375 kernel: acpiphp: Slot [18] registered Feb 13 16:05:46.205393 kernel: acpiphp: Slot [19] registered Feb 13 16:05:46.205412 kernel: acpiphp: Slot [20] registered Feb 13 16:05:46.205431 kernel: acpiphp: Slot [21] registered Feb 13 16:05:46.205450 kernel: acpiphp: Slot [22] registered Feb 13 16:05:46.205468 kernel: acpiphp: Slot [23] registered Feb 13 16:05:46.205493 kernel: acpiphp: Slot [24] registered Feb 13 16:05:46.205512 kernel: acpiphp: Slot [25] registered Feb 13 16:05:46.205531 kernel: acpiphp: Slot [26] registered Feb 13 16:05:46.205549 kernel: acpiphp: Slot [27] registered Feb 13 16:05:46.205567 kernel: acpiphp: Slot [28] registered Feb 13 16:05:46.205586 kernel: acpiphp: Slot [29] registered Feb 13 16:05:46.205605 kernel: acpiphp: Slot [30] registered Feb 13 16:05:46.205624 kernel: acpiphp: Slot [31] registered Feb 13 16:05:46.205642 kernel: PCI host bridge to bus 0000:00 Feb 13 16:05:46.205880 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Feb 13 16:05:46.206084 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 16:05:46.208321 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Feb 13 16:05:46.208509 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Feb 13 16:05:46.208756 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Feb 13 16:05:46.208987 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Feb 13 16:05:46.209259 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Feb 13 16:05:46.209502 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 13 16:05:46.209717 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Feb 13 16:05:46.209932 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 16:05:46.212435 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 13 16:05:46.212724 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Feb 13 16:05:46.212930 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Feb 13 16:05:46.213181 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Feb 13 16:05:46.213396 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 16:05:46.213600 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Feb 13 16:05:46.213806 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Feb 13 16:05:46.214020 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Feb 13 16:05:46.218425 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Feb 13 16:05:46.218699 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Feb 13 16:05:46.218910 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Feb 13 16:05:46.219095 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 16:05:46.221405 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Feb 13 16:05:46.221436 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 16:05:46.221456 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 16:05:46.221476 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 16:05:46.221495 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 16:05:46.221520 kernel: iommu: Default domain type: Translated Feb 13 16:05:46.221539 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 16:05:46.221569 kernel: efivars: Registered efivars operations Feb 13 16:05:46.221588 kernel: vgaarb: loaded Feb 13 16:05:46.221607 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 16:05:46.221626 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 16:05:46.221645 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 16:05:46.221665 kernel: pnp: PnP ACPI init Feb 13 16:05:46.221887 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Feb 13 16:05:46.221916 kernel: pnp: PnP ACPI: found 1 devices Feb 13 16:05:46.221942 kernel: NET: Registered PF_INET protocol family Feb 13 16:05:46.221962 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 16:05:46.221982 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 16:05:46.222001 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 16:05:46.222020 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 16:05:46.222039 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 16:05:46.222057 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 16:05:46.222077 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 16:05:46.222095 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 16:05:46.222153 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 16:05:46.222173 kernel: PCI: CLS 0 bytes, default 64 Feb 13 16:05:46.222192 kernel: kvm [1]: HYP mode not available Feb 13 16:05:46.222211 kernel: Initialise system trusted keyrings Feb 13 16:05:46.222230 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 16:05:46.222248 kernel: Key type asymmetric registered Feb 13 16:05:46.222267 kernel: Asymmetric key parser 'x509' registered Feb 13 16:05:46.222286 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 16:05:46.222305 kernel: io scheduler mq-deadline registered Feb 13 
16:05:46.222330 kernel: io scheduler kyber registered Feb 13 16:05:46.222349 kernel: io scheduler bfq registered Feb 13 16:05:46.222578 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Feb 13 16:05:46.222607 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 16:05:46.222626 kernel: ACPI: button: Power Button [PWRB] Feb 13 16:05:46.222645 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Feb 13 16:05:46.222664 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 16:05:46.222682 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 16:05:46.222708 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Feb 13 16:05:46.222931 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Feb 13 16:05:46.222958 kernel: printk: console [ttyS0] disabled Feb 13 16:05:46.222977 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Feb 13 16:05:46.222996 kernel: printk: console [ttyS0] enabled Feb 13 16:05:46.223015 kernel: printk: bootconsole [uart0] disabled Feb 13 16:05:46.223034 kernel: thunder_xcv, ver 1.0 Feb 13 16:05:46.223052 kernel: thunder_bgx, ver 1.0 Feb 13 16:05:46.223070 kernel: nicpf, ver 1.0 Feb 13 16:05:46.223094 kernel: nicvf, ver 1.0 Feb 13 16:05:46.223334 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 16:05:46.223536 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T16:05:45 UTC (1739462745) Feb 13 16:05:46.223563 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 16:05:46.223583 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Feb 13 16:05:46.223602 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 16:05:46.223621 kernel: watchdog: Hard watchdog permanently disabled Feb 13 16:05:46.223640 kernel: NET: Registered PF_INET6 protocol family Feb 13 16:05:46.223665 kernel: Segment Routing with IPv6 Feb 13 16:05:46.223684 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 16:05:46.223703 kernel: NET: Registered PF_PACKET protocol family Feb 13 16:05:46.223722 kernel: Key type dns_resolver registered Feb 13 16:05:46.223740 kernel: registered taskstats version 1 Feb 13 16:05:46.223759 kernel: Loading compiled-in X.509 certificates Feb 13 16:05:46.223778 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: d3f151cc07005f6a29244b13ac54c8677429c8f5' Feb 13 16:05:46.223820 kernel: Key type .fscrypt registered Feb 13 16:05:46.223840 kernel: Key type fscrypt-provisioning registered Feb 13 16:05:46.223866 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 16:05:46.223886 kernel: ima: Allocated hash algorithm: sha1 Feb 13 16:05:46.223905 kernel: ima: No architecture policies found Feb 13 16:05:46.223924 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 16:05:46.223943 kernel: clk: Disabling unused clocks Feb 13 16:05:46.223961 kernel: Freeing unused kernel memory: 39360K Feb 13 16:05:46.223980 kernel: Run /init as init process Feb 13 16:05:46.223999 kernel: with arguments: Feb 13 16:05:46.224017 kernel: /init Feb 13 16:05:46.224036 kernel: with environment: Feb 13 16:05:46.224060 kernel: HOME=/ Feb 13 16:05:46.224079 kernel: TERM=linux Feb 13 16:05:46.224097 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 16:05:46.226183 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 16:05:46.226212 systemd[1]: Detected virtualization amazon. Feb 13 16:05:46.226234 systemd[1]: Detected architecture arm64. Feb 13 16:05:46.226254 systemd[1]: Running in initrd. Feb 13 16:05:46.226284 systemd[1]: No hostname configured, using default hostname. Feb 13 16:05:46.226305 systemd[1]: Hostname set to . Feb 13 16:05:46.226326 systemd[1]: Initializing machine ID from VM UUID. Feb 13 16:05:46.226347 systemd[1]: Queued start job for default target initrd.target. Feb 13 16:05:46.226367 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:05:46.226387 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:05:46.226410 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 16:05:46.226430 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 16:05:46.226456 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 16:05:46.226477 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 16:05:46.226501 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 16:05:46.226522 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 16:05:46.226542 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:05:46.226562 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:05:46.226582 systemd[1]: Reached target paths.target - Path Units. Feb 13 16:05:46.226620 systemd[1]: Reached target slices.target - Slice Units. Feb 13 16:05:46.226643 systemd[1]: Reached target swap.target - Swaps. Feb 13 16:05:46.226664 systemd[1]: Reached target timers.target - Timer Units. Feb 13 16:05:46.226684 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 16:05:46.226705 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 16:05:46.226725 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 16:05:46.226745 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 16:05:46.226766 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 16:05:46.226786 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 16:05:46.226812 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:05:46.226832 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 16:05:46.226853 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 16:05:46.226873 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 16:05:46.226894 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 16:05:46.226914 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 16:05:46.226934 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 16:05:46.226954 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 16:05:46.226982 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:05:46.227002 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 16:05:46.227069 systemd-journald[251]: Collecting audit messages is disabled. Feb 13 16:05:46.227147 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:05:46.227177 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 16:05:46.227200 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 16:05:46.227221 systemd-journald[251]: Journal started Feb 13 16:05:46.227264 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2e6a0373623ed0212a2a77b2982b61) is 8.0M, max 75.3M, 67.3M free. Feb 13 16:05:46.224731 systemd-modules-load[252]: Inserted module 'overlay' Feb 13 16:05:46.233194 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:05:46.240160 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 16:05:46.256141 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 16:05:46.257444 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 16:05:46.263816 kernel: Bridge firewalling registered Feb 13 16:05:46.261081 systemd-modules-load[252]: Inserted module 'br_netfilter' Feb 13 16:05:46.263909 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 16:05:46.281555 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 16:05:46.281997 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:05:46.290380 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:05:46.306480 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 16:05:46.331343 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:05:46.345569 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:05:46.354590 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 16:05:46.369587 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:05:46.376653 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 16:05:46.397925 dracut-cmdline[283]: dracut-dracut-053 Feb 13 16:05:46.403044 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 16:05:46.409489 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=55866785c450f887021047c4ba00d104a5882975060a5fc692d64491b0d81886 Feb 13 16:05:46.486975 systemd-resolved[296]: Positive Trust Anchors: Feb 13 16:05:46.487004 systemd-resolved[296]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:05:46.487066 systemd-resolved[296]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:05:46.561146 kernel: SCSI subsystem initialized Feb 13 16:05:46.568214 kernel: Loading iSCSI transport class v2.0-870. Feb 13 16:05:46.581245 kernel: iscsi: registered transport (tcp) Feb 13 16:05:46.603780 kernel: iscsi: registered transport (qla4xxx) Feb 13 16:05:46.603894 kernel: QLogic iSCSI HBA Driver Feb 13 16:05:46.693146 kernel: random: crng init done Feb 13 16:05:46.691523 systemd-resolved[296]: Defaulting to hostname 'linux'. Feb 13 16:05:46.695494 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:05:46.698071 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:05:46.726043 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 16:05:46.734394 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 16:05:46.778543 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 16:05:46.778625 kernel: device-mapper: uevent: version 1.0.3 Feb 13 16:05:46.780401 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 16:05:46.862128 kernel: raid6: neonx8 gen() 6703 MB/s Feb 13 16:05:46.875149 kernel: raid6: neonx4 gen() 6515 MB/s Feb 13 16:05:46.880138 kernel: raid6: neonx2 gen() 5430 MB/s Feb 13 16:05:46.897139 kernel: raid6: neonx1 gen() 3940 MB/s Feb 13 16:05:46.914138 kernel: raid6: int64x8 gen() 3776 MB/s Feb 13 16:05:46.931138 kernel: raid6: int64x4 gen() 3711 MB/s Feb 13 16:05:46.948138 kernel: raid6: int64x2 gen() 3590 MB/s Feb 13 16:05:46.965886 kernel: raid6: int64x1 gen() 2759 MB/s Feb 13 16:05:46.965929 kernel: raid6: using algorithm neonx8 gen() 6703 MB/s Feb 13 16:05:46.983871 kernel: raid6: .... 
xor() 4916 MB/s, rmw enabled Feb 13 16:05:46.983910 kernel: raid6: using neon recovery algorithm Feb 13 16:05:46.992298 kernel: xor: measuring software checksum speed Feb 13 16:05:46.992348 kernel: 8regs : 10684 MB/sec Feb 13 16:05:46.993388 kernel: 32regs : 11936 MB/sec Feb 13 16:05:46.994542 kernel: arm64_neon : 9582 MB/sec Feb 13 16:05:46.994575 kernel: xor: using function: 32regs (11936 MB/sec) Feb 13 16:05:47.078160 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 16:05:47.097615 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 16:05:47.105463 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:05:47.146099 systemd-udevd[472]: Using default interface naming scheme 'v255'. Feb 13 16:05:47.155574 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:05:47.169746 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 16:05:47.211046 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Feb 13 16:05:47.270380 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 16:05:47.281424 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 16:05:47.403330 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:05:47.417818 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 16:05:47.464505 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 16:05:47.483808 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 16:05:47.488459 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:05:47.497215 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 16:05:47.507493 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 16:05:47.545551 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 16:05:47.619439 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 16:05:47.619503 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 13 16:05:47.664533 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 13 16:05:47.664818 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 13 16:05:47.665057 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:8c:be:1a:ff:71 Feb 13 16:05:47.665765 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 16:05:47.665798 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 13 16:05:47.627947 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 16:05:47.628226 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:05:47.632588 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 16:05:47.634721 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:05:47.634989 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:05:47.681235 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 13 16:05:47.637301 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:05:47.657989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:05:47.692659 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Feb 13 16:05:47.692727 kernel: GPT:9289727 != 16777215 Feb 13 16:05:47.695510 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 16:05:47.695569 kernel: GPT:9289727 != 16777215 Feb 13 16:05:47.695596 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 16:05:47.696423 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 16:05:47.701818 (udev-worker)[522]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:05:47.717643 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:05:47.732471 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 16:05:47.788881 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:05:47.808164 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (541) Feb 13 16:05:47.831184 kernel: BTRFS: device fsid 39fc2625-8d65-490f-9a1f-39e365051e19 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (516) Feb 13 16:05:47.905774 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Feb 13 16:05:47.927574 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Feb 13 16:05:47.963020 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 16:05:47.978762 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Feb 13 16:05:47.981010 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Feb 13 16:05:48.007416 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 16:05:48.025211 disk-uuid[662]: Primary Header is updated. Feb 13 16:05:48.025211 disk-uuid[662]: Secondary Entries is updated. Feb 13 16:05:48.025211 disk-uuid[662]: Secondary Header is updated. Feb 13 16:05:48.035178 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 16:05:48.046150 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 16:05:48.053165 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 16:05:49.055194 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 16:05:49.055337 disk-uuid[663]: The operation has completed successfully. Feb 13 16:05:49.241312 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 16:05:49.241503 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 16:05:49.290424 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 16:05:49.306281 sh[1006]: Success Feb 13 16:05:49.325161 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 16:05:49.424435 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 16:05:49.445420 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 16:05:49.455288 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 16:05:49.482629 kernel: BTRFS info (device dm-0): first mount of filesystem 39fc2625-8d65-490f-9a1f-39e365051e19 Feb 13 16:05:49.482691 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 16:05:49.482719 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 16:05:49.483988 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 16:05:49.485089 kernel: BTRFS info (device dm-0): using free space tree Feb 13 16:05:49.601155 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 16:05:49.627636 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 16:05:49.630844 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 16:05:49.647535 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 16:05:49.652424 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 16:05:49.697136 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41 Feb 13 16:05:49.697230 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 16:05:49.697262 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 16:05:49.705191 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 16:05:49.722680 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 16:05:49.724819 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41 Feb 13 16:05:49.736587 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 16:05:49.746541 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 16:05:49.842228 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 16:05:49.856475 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:05:49.909858 systemd-networkd[1198]: lo: Link UP Feb 13 16:05:49.911579 systemd-networkd[1198]: lo: Gained carrier Feb 13 16:05:49.915444 systemd-networkd[1198]: Enumeration completed Feb 13 16:05:49.915593 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 16:05:49.917813 systemd[1]: Reached target network.target - Network. Feb 13 16:05:49.926996 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:05:49.927014 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 16:05:49.935884 systemd-networkd[1198]: eth0: Link UP Feb 13 16:05:49.935905 systemd-networkd[1198]: eth0: Gained carrier Feb 13 16:05:49.935924 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 16:05:49.962220 systemd-networkd[1198]: eth0: DHCPv4 address 172.31.25.78/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 16:05:50.085377 ignition[1125]: Ignition 2.19.0 Feb 13 16:05:50.085408 ignition[1125]: Stage: fetch-offline Feb 13 16:05:50.086988 ignition[1125]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:05:50.087015 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:05:50.090357 ignition[1125]: Ignition finished successfully Feb 13 16:05:50.095843 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 16:05:50.107418 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 16:05:50.138852 ignition[1208]: Ignition 2.19.0 Feb 13 16:05:50.139405 ignition[1208]: Stage: fetch Feb 13 16:05:50.140022 ignition[1208]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:05:50.140046 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:05:50.140246 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:05:50.154470 ignition[1208]: PUT result: OK Feb 13 16:05:50.157440 ignition[1208]: parsed url from cmdline: "" Feb 13 16:05:50.157569 ignition[1208]: no config URL provided Feb 13 16:05:50.157593 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 16:05:50.157618 ignition[1208]: no config at "/usr/lib/ignition/user.ign" Feb 13 16:05:50.157650 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:05:50.161204 ignition[1208]: PUT result: OK Feb 13 16:05:50.161278 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 13 16:05:50.165246 ignition[1208]: GET result: OK Feb 13 16:05:50.168125 ignition[1208]: parsing config with SHA512: 63b7e3390560f5608e24482ebf207df2abb5e5f17f03d3c575c63687af6d1806605502f47e2caaf301c3eddc4399fb0ccc6060b47aa8d8e95cc3c76445236d85 Feb 13 16:05:50.175253 unknown[1208]: fetched base config from "system" Feb 13 16:05:50.175902 ignition[1208]: fetch: fetch complete Feb 13 16:05:50.175276 unknown[1208]: fetched base config from "system" Feb 13 16:05:50.175913 ignition[1208]: fetch: fetch passed Feb 13 16:05:50.175289 unknown[1208]: fetched user config from "aws" Feb 13 16:05:50.175984 ignition[1208]: Ignition finished successfully Feb 13 16:05:50.186906 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 16:05:50.197468 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 16:05:50.226254 ignition[1214]: Ignition 2.19.0 Feb 13 16:05:50.226792 ignition[1214]: Stage: kargs Feb 13 16:05:50.227505 ignition[1214]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:05:50.227531 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:05:50.227696 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:05:50.231885 ignition[1214]: PUT result: OK Feb 13 16:05:50.240294 ignition[1214]: kargs: kargs passed Feb 13 16:05:50.240593 ignition[1214]: Ignition finished successfully Feb 13 16:05:50.248181 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 16:05:50.262013 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 16:05:50.284539 ignition[1220]: Ignition 2.19.0 Feb 13 16:05:50.284571 ignition[1220]: Stage: disks Feb 13 16:05:50.285378 ignition[1220]: no configs at "/usr/lib/ignition/base.d" Feb 13 16:05:50.285403 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:05:50.285571 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:05:50.293380 ignition[1220]: PUT result: OK Feb 13 16:05:50.297632 ignition[1220]: disks: disks passed Feb 13 16:05:50.297803 ignition[1220]: Ignition finished successfully Feb 13 16:05:50.301804 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 16:05:50.306152 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 16:05:50.308393 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 16:05:50.310822 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 16:05:50.314878 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:05:50.316825 systemd[1]: Reached target basic.target - Basic System. Feb 13 16:05:50.335503 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 16:05:50.383869 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 16:05:50.389232 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 16:05:50.400335 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 16:05:50.503154 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 1daf3470-d909-4a02-84d2-f6d9b0a5b55c r/w with ordered data mode. Quota mode: none. Feb 13 16:05:50.504706 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 16:05:50.508016 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 16:05:50.523299 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 16:05:50.537261 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 16:05:50.539482 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 16:05:50.539566 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 16:05:50.539614 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 16:05:50.548718 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 16:05:50.573170 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1247) Feb 13 16:05:50.573236 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41 Feb 13 16:05:50.573264 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 16:05:50.578346 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 16:05:50.575218 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 16:05:50.607151 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 16:05:50.609253 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 16:05:51.017571 initrd-setup-root[1271]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 16:05:51.027252 systemd-networkd[1198]: eth0: Gained IPv6LL Feb 13 16:05:51.037165 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory Feb 13 16:05:51.046776 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 16:05:51.055631 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 16:05:51.318297 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 16:05:51.326288 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 16:05:51.331544 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 16:05:51.357967 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 16:05:51.360552 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41 Feb 13 16:05:51.393245 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 16:05:51.406024 ignition[1359]: INFO : Ignition 2.19.0 Feb 13 16:05:51.406024 ignition[1359]: INFO : Stage: mount Feb 13 16:05:51.409299 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:05:51.409299 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:05:51.413372 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:05:51.416451 ignition[1359]: INFO : PUT result: OK Feb 13 16:05:51.421161 ignition[1359]: INFO : mount: mount passed Feb 13 16:05:51.422985 ignition[1359]: INFO : Ignition finished successfully Feb 13 16:05:51.426728 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 16:05:51.437341 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 16:05:51.511474 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 16:05:51.547843 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1371) Feb 13 16:05:51.547905 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41 Feb 13 16:05:51.547933 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 16:05:51.550574 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 16:05:51.556283 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 16:05:51.559044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 16:05:51.599418 ignition[1388]: INFO : Ignition 2.19.0 Feb 13 16:05:51.599418 ignition[1388]: INFO : Stage: files Feb 13 16:05:51.602630 ignition[1388]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:05:51.602630 ignition[1388]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:05:51.602630 ignition[1388]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:05:51.609514 ignition[1388]: INFO : PUT result: OK Feb 13 16:05:51.614616 ignition[1388]: DEBUG : files: compiled without relabeling support, skipping Feb 13 16:05:51.617751 ignition[1388]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 16:05:51.617751 ignition[1388]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 16:05:51.634971 ignition[1388]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 16:05:51.637938 ignition[1388]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 16:05:51.640926 unknown[1388]: wrote ssh authorized keys file for user: core Feb 13 16:05:51.644053 ignition[1388]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 16:05:51.644053 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 16:05:51.644053 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 16:05:51.751807 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 16:05:51.892671 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 16:05:51.896212 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 16:05:51.900042 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 16:05:51.903313 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 16:05:51.906596 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 16:05:51.909722 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 16:05:51.913437 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 16:05:51.913437 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 16:05:51.920211 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 16:05:51.920211 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 16:05:51.920211 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 16:05:51.920211 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 16:05:51.920211 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 16:05:51.920211 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 16:05:51.920211 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Feb 13 16:05:52.217812 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 16:05:52.556409 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Feb 13 16:05:52.560448 ignition[1388]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 16:05:52.562752 ignition[1388]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 16:05:52.562752 ignition[1388]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 16:05:52.562752 ignition[1388]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 16:05:52.562752 ignition[1388]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 16:05:52.562752 ignition[1388]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 16:05:52.562752 ignition[1388]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 16:05:52.562752 ignition[1388]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 16:05:52.562752 ignition[1388]: INFO : files: files passed Feb 13 16:05:52.562752 ignition[1388]: INFO : Ignition finished successfully Feb 13 16:05:52.588995 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 16:05:52.597388 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 16:05:52.617581 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 16:05:52.631571 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 16:05:52.633602 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 16:05:52.646012 initrd-setup-root-after-ignition[1416]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:05:52.646012 initrd-setup-root-after-ignition[1416]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:05:52.654312 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 16:05:52.663181 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:05:52.669003 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 16:05:52.681394 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Feb 13 16:05:52.742409 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 16:05:52.742828 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 16:05:52.749851 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 16:05:52.765604 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 16:05:52.769268 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 16:05:52.786494 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 16:05:52.815174 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:05:52.834491 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 16:05:52.859749 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:05:52.863835 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:05:52.866764 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 16:05:52.868609 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 16:05:52.868838 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 16:05:52.871634 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 16:05:52.873822 systemd[1]: Stopped target basic.target - Basic System. Feb 13 16:05:52.875780 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 16:05:52.878052 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 16:05:52.880361 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 16:05:52.882618 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 16:05:52.884706 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 16:05:52.887141 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 16:05:52.889260 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 16:05:52.891302 systemd[1]: Stopped target swap.target - Swaps. Feb 13 16:05:52.892944 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 16:05:52.893194 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 16:05:52.895678 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:05:52.897959 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:05:52.900344 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 16:05:52.902269 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:05:52.904605 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 16:05:52.904818 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 16:05:52.907127 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 16:05:52.907350 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 16:05:52.909819 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 16:05:52.910039 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 16:05:52.925242 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 16:05:52.958804 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Feb 13 16:05:52.969934 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 16:05:52.971946 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:05:52.979089 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 16:05:52.979342 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 16:05:52.998321 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 16:05:52.998575 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 16:05:53.028424 ignition[1440]: INFO : Ignition 2.19.0 Feb 13 16:05:53.032162 ignition[1440]: INFO : Stage: umount Feb 13 16:05:53.033836 ignition[1440]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 16:05:53.036286 ignition[1440]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 16:05:53.036286 ignition[1440]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 16:05:53.041740 ignition[1440]: INFO : PUT result: OK Feb 13 16:05:53.046928 ignition[1440]: INFO : umount: umount passed Feb 13 16:05:53.048760 ignition[1440]: INFO : Ignition finished successfully Feb 13 16:05:53.052588 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 16:05:53.055584 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 16:05:53.057600 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 16:05:53.061968 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 16:05:53.062153 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 16:05:53.065327 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 16:05:53.065437 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 16:05:53.067505 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 16:05:53.067593 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 16:05:53.070077 systemd[1]: Stopped target network.target - Network. Feb 13 16:05:53.075015 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 16:05:53.075138 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 16:05:53.077373 systemd[1]: Stopped target paths.target - Path Units. Feb 13 16:05:53.079016 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 16:05:53.098476 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:05:53.100995 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 16:05:53.106921 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 16:05:53.108801 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 16:05:53.108891 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 16:05:53.110837 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 16:05:53.110903 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 16:05:53.112840 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 16:05:53.112929 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 16:05:53.114878 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 16:05:53.114958 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 16:05:53.117252 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
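Both Ignition stages above (files and umount) begin with a "PUT http://169.254.169.254/latest/api/token" request: that is the IMDSv2 session-token handshake used before talking to the EC2 instance metadata service. A minimal standard-library sketch of the same two-step exchange follows; it only works from inside an EC2 instance, and the metadata path mirrors the ones coreos-metadata fetches later in this boot:

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds: int = 21600) -> str:
        """Request an IMDSv2 session token (the PUT seen in the Ignition log)."""
        req = urllib.request.Request(
            IMDS + "/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str) -> str:
        """Fetch a metadata path using the session token."""
        req = urllib.request.Request(
            IMDS + path,
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        token = imds_token()
        print(imds_get("/2021-01-03/meta-data/instance-id", token))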
Feb 13 16:05:53.119287 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 16:05:53.141738 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 16:05:53.142068 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 16:05:53.148063 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 16:05:53.148248 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 16:05:53.160258 systemd-networkd[1198]: eth0: DHCPv6 lease lost Feb 13 16:05:53.163947 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 16:05:53.164344 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 16:05:53.172562 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 16:05:53.174968 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 16:05:53.180733 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 16:05:53.180825 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:05:53.192368 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 16:05:53.196099 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 16:05:53.196295 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 16:05:53.199193 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 16:05:53.199277 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:05:53.201641 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 16:05:53.201720 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 16:05:53.203988 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 16:05:53.204062 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:05:53.207059 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:05:53.245769 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 16:05:53.247985 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:05:53.253886 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 16:05:53.254018 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 16:05:53.259675 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 16:05:53.259760 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:05:53.261771 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 16:05:53.261861 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 16:05:53.264132 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 16:05:53.264220 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 16:05:53.277000 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 16:05:53.277098 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 16:05:53.288507 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 16:05:53.291167 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 16:05:53.291325 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 16:05:53.297366 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 16:05:53.297473 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:05:53.300378 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 16:05:53.300461 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:05:53.303298 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:05:53.303383 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:05:53.308338 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 16:05:53.308796 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 16:05:53.347784 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 16:05:53.347981 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 16:05:53.351657 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 16:05:53.371000 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 16:05:53.388361 systemd[1]: Switching root. Feb 13 16:05:53.442261 systemd-journald[251]: Journal stopped Feb 13 16:05:55.877750 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Feb 13 16:05:55.877881 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 16:05:55.877925 kernel: SELinux: policy capability open_perms=1 Feb 13 16:05:55.877957 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 16:05:55.877988 kernel: SELinux: policy capability always_check_network=0 Feb 13 16:05:55.878019 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 16:05:55.878049 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 16:05:55.878078 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 16:05:55.878202 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 16:05:55.878242 kernel: audit: type=1403 audit(1739462753.985:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 16:05:55.878276 systemd[1]: Successfully loaded SELinux policy in 58.975ms. Feb 13 16:05:55.878321 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.360ms. Feb 13 16:05:55.878357 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 16:05:55.878390 systemd[1]: Detected virtualization amazon. Feb 13 16:05:55.878420 systemd[1]: Detected architecture arm64. Feb 13 16:05:55.878451 systemd[1]: Detected first boot. Feb 13 16:05:55.878483 systemd[1]: Initializing machine ID from VM UUID. Feb 13 16:05:55.878520 zram_generator::config[1482]: No configuration found. Feb 13 16:05:55.878563 systemd[1]: Populated /etc with preset unit settings. Feb 13 16:05:55.878595 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 16:05:55.878629 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 16:05:55.878661 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 16:05:55.878694 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Feb 13 16:05:55.878728 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 16:05:55.878758 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 16:05:55.878793 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 16:05:55.878827 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 16:05:55.878857 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 16:05:55.878887 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 16:05:55.878918 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 16:05:55.878953 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 16:05:55.878983 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 16:05:55.879013 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 16:05:55.879046 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 16:05:55.879081 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 16:05:55.879132 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 16:05:55.879167 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 16:05:55.879199 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 16:05:55.879231 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 16:05:55.879261 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 16:05:55.879292 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 16:05:55.879325 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 16:05:55.879361 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 16:05:55.879394 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 16:05:55.879426 systemd[1]: Reached target slices.target - Slice Units. Feb 13 16:05:55.879459 systemd[1]: Reached target swap.target - Swaps. Feb 13 16:05:55.879504 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 16:05:55.879539 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 16:05:55.879573 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 16:05:55.879603 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 16:05:55.879637 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 16:05:55.879673 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 16:05:55.879705 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 16:05:55.879737 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 16:05:55.879779 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 16:05:55.879811 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 16:05:55.879843 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 16:05:55.879873 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Feb 13 16:05:55.879907 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 16:05:55.879937 systemd[1]: Reached target machines.target - Containers. Feb 13 16:05:55.879971 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 16:05:55.880002 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:05:55.880034 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 16:05:55.880064 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 16:05:55.880095 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:05:55.880145 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:05:55.880177 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:05:55.880206 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 16:05:55.880241 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:05:55.880273 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 16:05:55.880302 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 16:05:55.880332 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 16:05:55.880362 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 16:05:55.880391 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 16:05:55.880423 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 16:05:55.880453 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 16:05:55.880482 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 16:05:55.880516 kernel: loop: module loaded Feb 13 16:05:55.880547 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 16:05:55.880575 kernel: fuse: init (API version 7.39) Feb 13 16:05:55.880607 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 16:05:55.880638 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 16:05:55.880668 systemd[1]: Stopped verity-setup.service. Feb 13 16:05:55.880698 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 16:05:55.880727 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 16:05:55.880756 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 16:05:55.880790 kernel: ACPI: bus type drm_connector registered Feb 13 16:05:55.880821 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 16:05:55.880853 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 16:05:55.880887 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 16:05:55.880920 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 16:05:55.880954 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 16:05:55.880984 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Feb 13 16:05:55.881014 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 16:05:55.881043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:05:55.881073 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:05:55.881147 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:05:55.881182 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:05:55.881256 systemd-journald[1567]: Collecting audit messages is disabled. Feb 13 16:05:55.881314 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:05:55.881346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:05:55.881461 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 16:05:55.881659 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 16:05:55.881698 systemd-journald[1567]: Journal started Feb 13 16:05:55.881845 systemd-journald[1567]: Runtime Journal (/run/log/journal/ec2e6a0373623ed0212a2a77b2982b61) is 8.0M, max 75.3M, 67.3M free. Feb 13 16:05:55.272788 systemd[1]: Queued start job for default target multi-user.target. Feb 13 16:05:55.331564 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 16:05:55.886062 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 16:05:55.332364 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 16:05:55.887759 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:05:55.889342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:05:55.895317 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 16:05:55.898834 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 16:05:55.905221 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 16:05:55.932334 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 16:05:55.943351 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 16:05:55.954802 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 16:05:55.957036 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 16:05:55.957101 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 16:05:55.963457 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 16:05:55.976529 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 16:05:55.990434 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 16:05:55.992575 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:05:56.002665 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 16:05:56.008424 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 16:05:56.012367 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:05:56.015549 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Feb 13 16:05:56.017679 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:05:56.030414 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 16:05:56.035412 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 16:05:56.043565 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 16:05:56.051140 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 16:05:56.054654 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 16:05:56.070516 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 16:05:56.108267 systemd-journald[1567]: Time spent on flushing to /var/log/journal/ec2e6a0373623ed0212a2a77b2982b61 is 183.152ms for 908 entries. Feb 13 16:05:56.108267 systemd-journald[1567]: System Journal (/var/log/journal/ec2e6a0373623ed0212a2a77b2982b61) is 8.0M, max 195.6M, 187.6M free. Feb 13 16:05:56.325549 systemd-journald[1567]: Received client request to flush runtime journal. Feb 13 16:05:56.325633 kernel: loop0: detected capacity change from 0 to 114432 Feb 13 16:05:56.325701 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 16:05:56.125197 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 16:05:56.128474 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 16:05:56.149501 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 16:05:56.175236 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 16:05:56.204802 systemd-tmpfiles[1612]: ACLs are not supported, ignoring. Feb 13 16:05:56.204827 systemd-tmpfiles[1612]: ACLs are not supported, ignoring. Feb 13 16:05:56.222229 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 16:05:56.234586 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 16:05:56.263800 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 16:05:56.278573 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 16:05:56.283372 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 16:05:56.285696 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 16:05:56.330001 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 16:05:56.355708 udevadm[1628]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 16:05:56.358447 kernel: loop1: detected capacity change from 0 to 52536 Feb 13 16:05:56.399273 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 16:05:56.411029 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 16:05:56.468611 systemd-tmpfiles[1635]: ACLs are not supported, ignoring. Feb 13 16:05:56.469195 systemd-tmpfiles[1635]: ACLs are not supported, ignoring. Feb 13 16:05:56.485810 kernel: loop2: detected capacity change from 0 to 114328 Feb 13 16:05:56.482746 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 16:05:56.576148 kernel: loop3: detected capacity change from 0 to 189592 Feb 13 16:05:56.684278 kernel: loop4: detected capacity change from 0 to 114432 Feb 13 16:05:56.700151 kernel: loop5: detected capacity change from 0 to 52536 Feb 13 16:05:56.713781 kernel: loop6: detected capacity change from 0 to 114328 Feb 13 16:05:56.731162 kernel: loop7: detected capacity change from 0 to 189592 Feb 13 16:05:56.769023 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 16:05:56.770684 (sd-merge)[1641]: Merged extensions into '/usr'. Feb 13 16:05:56.781702 systemd[1]: Reloading requested from client PID 1611 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 16:05:56.781900 systemd[1]: Reloading... Feb 13 16:05:56.979211 zram_generator::config[1667]: No configuration found. Feb 13 16:05:57.335370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:05:57.458666 systemd[1]: Reloading finished in 675 ms. Feb 13 16:05:57.494183 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 16:05:57.497478 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 16:05:57.515478 systemd[1]: Starting ensure-sysext.service... Feb 13 16:05:57.521037 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 16:05:57.531382 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 16:05:57.552413 systemd[1]: Reloading requested from client PID 1719 ('systemctl') (unit ensure-sysext.service)... Feb 13 16:05:57.552447 systemd[1]: Reloading... Feb 13 16:05:57.620929 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 16:05:57.625770 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 16:05:57.632830 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 16:05:57.634817 systemd-tmpfiles[1720]: ACLs are not supported, ignoring. Feb 13 16:05:57.636202 systemd-tmpfiles[1720]: ACLs are not supported, ignoring. Feb 13 16:05:57.646431 systemd-udevd[1721]: Using default interface naming scheme 'v255'. Feb 13 16:05:57.652583 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:05:57.652603 systemd-tmpfiles[1720]: Skipping /boot Feb 13 16:05:57.702033 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 16:05:57.703446 systemd-tmpfiles[1720]: Skipping /boot Feb 13 16:05:57.756562 ldconfig[1606]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 16:05:57.759088 zram_generator::config[1748]: No configuration found. Feb 13 16:05:57.981903 (udev-worker)[1761]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:05:58.169056 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
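The (sd-merge) lines show systemd-sysext folding four extension images into /usr: the kubernetes image that Ignition linked into /etc/extensions earlier, plus the Flatcar-supplied containerd, docker and OEM extensions picked up from vendor-owned search paths. As a rough illustration, the sketch below scans the directories systemd-sysext commonly consults for admin-provided images and lists what it finds; it mirrors, rather than reimplements, the discovery step, and `systemd-sysext status` is the real inspection tool:

    import os

    # Directories commonly searched for admin-provided extension images;
    # vendor-shipped images live elsewhere and are not covered by this sketch.
    SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def list_extensions():
        found = []
        for directory in SEARCH_DIRS:
            if not os.path.isdir(directory):
                continue
            for name in sorted(os.listdir(directory)):
                path = os.path.join(directory, name)
                if name.endswith(".raw") or os.path.isdir(path):
                    # e.g. kubernetes.raw -> extension name "kubernetes"
                    found.append((directory, name.removesuffix(".raw")))
        return found

    for directory, extension in list_extensions():
        print(f"{extension:20s} (from {directory})")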
Feb 13 16:05:58.181987 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1755) Feb 13 16:05:58.341260 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 16:05:58.343249 systemd[1]: Reloading finished in 790 ms. Feb 13 16:05:58.373640 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:05:58.377903 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 16:05:58.399733 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 16:05:58.509307 systemd[1]: Finished ensure-sysext.service. Feb 13 16:05:58.519589 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 16:05:58.524178 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 16:05:58.534451 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 16:05:58.545639 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 16:05:58.550624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:05:58.552877 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 16:05:58.560390 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:05:58.573516 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:05:58.583561 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:05:58.591553 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:05:58.593837 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:05:58.597564 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 16:05:58.608441 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 16:05:58.618471 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:05:58.628168 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 16:05:58.630282 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 16:05:58.638085 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 16:05:58.647525 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:05:58.658818 lvm[1920]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:05:58.718867 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:05:58.719362 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:05:58.734375 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:05:58.737222 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:05:58.745243 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 16:05:58.748749 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:05:58.763382 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
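The OEM volume appears as dev-disk-by\x2dlabel-OEM.device: systemd derives device unit names from /dev paths by turning "/" into "-" and hex-escaping other unsafe characters, which is why the literal "-" in "by-label" becomes \x2d. A simplified Python rendering of that path escaping (the real tool is `systemd-escape --path`; this sketch covers only the common rules):

    import string

    # Characters systemd keeps as-is when escaping; a literal '-' is *not*
    # safe, because '-' is reserved for the '/' separator.
    SAFE = set(string.ascii_letters + string.digits + ":_.")

    def escape_path(path: str) -> str:
        """Roughly what `systemd-escape --path` does."""
        trimmed = path.strip("/") or "/"
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")
            elif ch in SAFE and not (i == 0 and ch == "."):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    print(escape_path("/dev/disk/by-label/OEM") + ".device")
    # -> dev-disk-by\x2dlabel-OEM.device, matching the unit name in the log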
Feb 13 16:05:58.782833 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 16:05:58.793606 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:05:58.793938 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:05:58.799991 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:05:58.801597 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:05:58.806236 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 16:05:58.819163 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 16:05:58.834429 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:05:58.834662 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:05:58.841629 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 16:05:58.846765 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 16:05:58.849812 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 16:05:58.864145 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 16:05:58.869313 augenrules[1958]: No rules Feb 13 16:05:58.869560 lvm[1945]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:05:58.873928 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 16:05:58.925331 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 16:05:58.928639 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 16:05:58.943549 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 16:05:59.004274 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:05:59.069917 systemd-networkd[1934]: lo: Link UP Feb 13 16:05:59.070430 systemd-networkd[1934]: lo: Gained carrier Feb 13 16:05:59.073401 systemd-networkd[1934]: Enumeration completed Feb 13 16:05:59.073779 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 16:05:59.078016 systemd-networkd[1934]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:05:59.078310 systemd-resolved[1936]: Positive Trust Anchors: Feb 13 16:05:59.078332 systemd-resolved[1936]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:05:59.078394 systemd-resolved[1936]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:05:59.078806 systemd-networkd[1934]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 16:05:59.081471 systemd-networkd[1934]: eth0: Link UP Feb 13 16:05:59.081988 systemd-networkd[1934]: eth0: Gained carrier Feb 13 16:05:59.082042 systemd-networkd[1934]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 16:05:59.087575 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 16:05:59.092237 systemd-networkd[1934]: eth0: DHCPv4 address 172.31.25.78/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 16:05:59.095885 systemd-resolved[1936]: Defaulting to hostname 'linux'. Feb 13 16:05:59.109875 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:05:59.112488 systemd[1]: Reached target network.target - Network. Feb 13 16:05:59.114274 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:05:59.116461 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:05:59.118577 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 16:05:59.120999 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 16:05:59.123564 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 16:05:59.125710 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 16:05:59.128036 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 16:05:59.130383 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 16:05:59.130433 systemd[1]: Reached target paths.target - Path Units. Feb 13 16:05:59.132120 systemd[1]: Reached target timers.target - Timer Units. Feb 13 16:05:59.135055 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 16:05:59.139927 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 16:05:59.152625 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 16:05:59.156387 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 16:05:59.158722 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 16:05:59.160581 systemd[1]: Reached target basic.target - Basic System. Feb 13 16:05:59.162402 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:05:59.162458 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
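systemd-networkd's DHCPv4 lease puts eth0 at 172.31.25.78/20 with gateway 172.31.16.1. A quick standard-library check confirms the gateway is on-link and shows the address range that /20 implies:

    import ipaddress

    # Values from the DHCPv4 lease logged above.
    iface = ipaddress.ip_interface("172.31.25.78/20")
    gateway = ipaddress.ip_address("172.31.16.1")
    net = iface.network

    print(net)                                              # 172.31.16.0/20
    print(gateway in net)                                   # True: the gateway is on-link
    print(net.network_address, "-", net.broadcast_address)  # 172.31.16.0 - 172.31.31.255
    print(net.num_addresses)                                # 4096 addresses in a /20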
Feb 13 16:05:59.164824 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 16:05:59.171507 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 16:05:59.183898 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 16:05:59.191353 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 16:05:59.196515 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 16:05:59.199271 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 16:05:59.203471 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 16:05:59.211531 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 16:05:59.217700 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 16:05:59.226276 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 16:05:59.232515 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 16:05:59.241237 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 16:05:59.254503 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 16:05:59.257370 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 16:05:59.260325 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 16:05:59.264470 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 16:05:59.273364 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 16:05:59.306142 jq[1984]: false Feb 13 16:05:59.307901 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 16:05:59.308351 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 16:05:59.315851 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 16:05:59.318335 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 16:05:59.375448 jq[1996]: true Feb 13 16:05:59.410055 dbus-daemon[1983]: [system] SELinux support is enabled Feb 13 16:05:59.410672 (ntainerd)[2002]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 16:05:59.424708 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 16:05:59.436996 dbus-daemon[1983]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1934 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 16:05:59.438059 extend-filesystems[1985]: Found loop4 Feb 13 16:05:59.438059 extend-filesystems[1985]: Found loop5 Feb 13 16:05:59.438059 extend-filesystems[1985]: Found loop6 Feb 13 16:05:59.438059 extend-filesystems[1985]: Found loop7 Feb 13 16:05:59.438059 extend-filesystems[1985]: Found nvme0n1 Feb 13 16:05:59.438059 extend-filesystems[1985]: Found nvme0n1p1 Feb 13 16:05:59.438059 extend-filesystems[1985]: Found nvme0n1p2 Feb 13 16:05:59.438059 extend-filesystems[1985]: Found nvme0n1p3 Feb 13 16:05:59.438059 extend-filesystems[1985]: Found usr Feb 13 16:05:59.438059 extend-filesystems[1985]: Found nvme0n1p4 Feb 13 16:05:59.438059 extend-filesystems[1985]: Found nvme0n1p6 Feb 13 16:05:59.438059 extend-filesystems[1985]: Found nvme0n1p7 Feb 13 16:05:59.438059 extend-filesystems[1985]: Found nvme0n1p9 Feb 13 16:05:59.473991 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9 Feb 13 16:05:59.475811 jq[2005]: true Feb 13 16:05:59.440156 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 16:05:59.482546 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 16:05:59.440243 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 16:05:59.443755 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 16:05:59.443792 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 16:05:59.515705 update_engine[1995]: I20250213 16:05:59.497022 1995 main.cc:92] Flatcar Update Engine starting Feb 13 16:05:59.515705 update_engine[1995]: I20250213 16:05:59.503976 1995 update_check_scheduler.cc:74] Next update check in 6m54s Feb 13 16:05:59.510467 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 16:05:59.513476 systemd[1]: Started update-engine.service - Update Engine. Feb 13 16:05:59.524462 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 16:05:59.545388 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9 Feb 13 16:05:59.568825 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 16:05:59.571141 extend-filesystems[2032]: resize2fs 1.47.1 (20-May-2024) Feb 13 16:05:59.572037 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
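extend-filesystems then grows the root filesystem online with resize2fs; the kernel messages that follow report /dev/nvme0n1p9 going from 553472 to 1489915 blocks of 4 KiB. Converting those block counts gives a sense of the change:

    BLOCK_SIZE = 4096  # 4 KiB ext4 blocks, as reported by the kernel below

    for label, blocks in (("before", 553_472), ("after", 1_489_915)):
        size_bytes = blocks * BLOCK_SIZE
        print(f"{label}: {blocks} blocks = {size_bytes} bytes = {size_bytes / 2**30:.2f} GiB")
    # before: about 2.11 GiB, after: about 5.68 GiB,
    # so the online resize adds roughly 3.6 GiB to /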
Feb 13 16:05:59.591890 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 16:05:59.591988 tar[2001]: linux-arm64/helm Feb 13 16:05:59.604860 coreos-metadata[1982]: Feb 13 16:05:59.594 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 16:05:59.604860 coreos-metadata[1982]: Feb 13 16:05:59.598 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 16:05:59.604860 coreos-metadata[1982]: Feb 13 16:05:59.598 INFO Fetch successful Feb 13 16:05:59.604860 coreos-metadata[1982]: Feb 13 16:05:59.599 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 16:05:59.604860 coreos-metadata[1982]: Feb 13 16:05:59.601 INFO Fetch successful Feb 13 16:05:59.604860 coreos-metadata[1982]: Feb 13 16:05:59.601 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 16:05:59.604860 coreos-metadata[1982]: Feb 13 16:05:59.602 INFO Fetch successful Feb 13 16:05:59.604860 coreos-metadata[1982]: Feb 13 16:05:59.603 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 16:05:59.605651 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:58:42 UTC 2025 (1): Starting Feb 13 16:05:59.605651 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 16:05:59.605651 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: ---------------------------------------------------- Feb 13 16:05:59.605651 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Feb 13 16:05:59.605651 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 16:05:59.605651 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: corporation. Support and training for ntp-4 are Feb 13 16:05:59.605651 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: available at https://www.nwtime.org/support Feb 13 16:05:59.605651 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: ---------------------------------------------------- Feb 13 16:05:59.605651 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: proto: precision = 0.096 usec (-23) Feb 13 16:05:59.600293 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:58:42 UTC 2025 (1): Starting Feb 13 16:05:59.615278 coreos-metadata[1982]: Feb 13 16:05:59.608 INFO Fetch successful Feb 13 16:05:59.615278 coreos-metadata[1982]: Feb 13 16:05:59.608 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 16:05:59.615278 coreos-metadata[1982]: Feb 13 16:05:59.612 INFO Fetch failed with 404: resource not found Feb 13 16:05:59.615278 coreos-metadata[1982]: Feb 13 16:05:59.612 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 16:05:59.615516 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: basedate set to 2025-02-01 Feb 13 16:05:59.615516 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: gps base set to 2025-02-02 (week 2352) Feb 13 16:05:59.615516 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 16:05:59.615516 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 16:05:59.600341 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 16:05:59.600362 ntpd[1987]: ---------------------------------------------------- Feb 13 16:05:59.600381 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Feb 13 16:05:59.600401 ntpd[1987]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Feb 13 16:05:59.619355 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 16:05:59.619355 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: Listen normally on 3 eth0 172.31.25.78:123 Feb 13 16:05:59.619355 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: Listen normally on 4 lo [::1]:123 Feb 13 16:05:59.619355 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: bind(21) AF_INET6 fe80::48c:beff:fe1a:ff71%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 16:05:59.619355 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: unable to create socket on eth0 (5) for fe80::48c:beff:fe1a:ff71%2#123 Feb 13 16:05:59.619355 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: failed to init interface for address fe80::48c:beff:fe1a:ff71%2 Feb 13 16:05:59.619355 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Feb 13 16:05:59.619678 coreos-metadata[1982]: Feb 13 16:05:59.616 INFO Fetch successful Feb 13 16:05:59.619678 coreos-metadata[1982]: Feb 13 16:05:59.617 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 16:05:59.600420 ntpd[1987]: corporation. Support and training for ntp-4 are Feb 13 16:05:59.600439 ntpd[1987]: available at https://www.nwtime.org/support Feb 13 16:05:59.600458 ntpd[1987]: ---------------------------------------------------- Feb 13 16:05:59.604253 ntpd[1987]: proto: precision = 0.096 usec (-23) Feb 13 16:05:59.608288 ntpd[1987]: basedate set to 2025-02-01 Feb 13 16:05:59.608323 ntpd[1987]: gps base set to 2025-02-02 (week 2352) Feb 13 16:05:59.614233 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 16:05:59.624883 coreos-metadata[1982]: Feb 13 16:05:59.621 INFO Fetch successful Feb 13 16:05:59.614320 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 16:05:59.616611 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 16:05:59.616682 ntpd[1987]: Listen normally on 3 eth0 172.31.25.78:123 Feb 13 16:05:59.616748 ntpd[1987]: Listen normally on 4 lo [::1]:123 Feb 13 16:05:59.616827 ntpd[1987]: bind(21) AF_INET6 fe80::48c:beff:fe1a:ff71%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 16:05:59.616865 ntpd[1987]: unable to create socket on eth0 (5) for fe80::48c:beff:fe1a:ff71%2#123 Feb 13 16:05:59.616893 ntpd[1987]: failed to init interface for address fe80::48c:beff:fe1a:ff71%2 Feb 13 16:05:59.616950 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Feb 13 16:05:59.636192 coreos-metadata[1982]: Feb 13 16:05:59.626 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 16:05:59.636192 coreos-metadata[1982]: Feb 13 16:05:59.635 INFO Fetch successful Feb 13 16:05:59.636192 coreos-metadata[1982]: Feb 13 16:05:59.635 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 16:05:59.637481 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:05:59.638437 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:05:59.638437 ntpd[1987]: 13 Feb 16:05:59 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:05:59.637544 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 16:05:59.641638 coreos-metadata[1982]: Feb 13 16:05:59.640 INFO Fetch successful Feb 13 16:05:59.641638 coreos-metadata[1982]: Feb 13 16:05:59.640 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 
Feb 13 16:05:59.659979 coreos-metadata[1982]: Feb 13 16:05:59.648 INFO Fetch successful Feb 13 16:05:59.709166 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 16:05:59.732482 extend-filesystems[2032]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 16:05:59.732482 extend-filesystems[2032]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 16:05:59.732482 extend-filesystems[2032]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 16:05:59.757272 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9 Feb 13 16:05:59.735194 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 16:05:59.735553 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 16:05:59.773318 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 16:05:59.855211 bash[2067]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:05:59.873828 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1754) Feb 13 16:05:59.876151 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 16:05:59.890474 systemd[1]: Starting sshkeys.service... Feb 13 16:05:59.899348 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 16:05:59.902542 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 16:05:59.917990 systemd-logind[1993]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 16:05:59.918041 systemd-logind[1993]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 16:05:59.921221 systemd-logind[1993]: New seat seat0. Feb 13 16:05:59.927516 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 16:06:00.004642 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 16:06:00.066178 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 16:06:00.085435 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 16:06:00.170809 containerd[2002]: time="2025-02-13T16:06:00.168871316Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 16:06:00.279401 locksmithd[2027]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 16:06:00.288376 containerd[2002]: time="2025-02-13T16:06:00.285062205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:06:00.291992 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 16:06:00.292263 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 16:06:00.299566 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2025 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 16:06:00.311135 containerd[2002]: time="2025-02-13T16:06:00.311044365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:06:00.311135 containerd[2002]: time="2025-02-13T16:06:00.311127897Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 16:06:00.311293 containerd[2002]: time="2025-02-13T16:06:00.311166345Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 16:06:00.311519 containerd[2002]: time="2025-02-13T16:06:00.311475405Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 16:06:00.311576 containerd[2002]: time="2025-02-13T16:06:00.311521257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 16:06:00.311718 containerd[2002]: time="2025-02-13T16:06:00.311648373Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:06:00.311718 containerd[2002]: time="2025-02-13T16:06:00.311678109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:06:00.317149 containerd[2002]: time="2025-02-13T16:06:00.311966085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:06:00.317149 containerd[2002]: time="2025-02-13T16:06:00.312015513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 16:06:00.317149 containerd[2002]: time="2025-02-13T16:06:00.312049749Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:06:00.317149 containerd[2002]: time="2025-02-13T16:06:00.312077217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 16:06:00.317149 containerd[2002]: time="2025-02-13T16:06:00.312279261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:06:00.317149 containerd[2002]: time="2025-02-13T16:06:00.312696369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:06:00.317149 containerd[2002]: time="2025-02-13T16:06:00.312901713Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:06:00.317149 containerd[2002]: time="2025-02-13T16:06:00.312933933Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 16:06:00.321295 containerd[2002]: time="2025-02-13T16:06:00.321217305Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 16:06:00.323136 containerd[2002]: time="2025-02-13T16:06:00.321433857Z" level=info msg="metadata content store policy set" policy=shared Feb 13 16:06:00.328505 containerd[2002]: time="2025-02-13T16:06:00.328436121Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 16:06:00.328628 containerd[2002]: time="2025-02-13T16:06:00.328553937Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 16:06:00.331135 containerd[2002]: time="2025-02-13T16:06:00.328668765Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 16:06:00.331135 containerd[2002]: time="2025-02-13T16:06:00.328721109Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 16:06:00.331135 containerd[2002]: time="2025-02-13T16:06:00.328756101Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 16:06:00.331135 containerd[2002]: time="2025-02-13T16:06:00.329028117Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 16:06:00.331135 containerd[2002]: time="2025-02-13T16:06:00.331035069Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 16:06:00.333142 containerd[2002]: time="2025-02-13T16:06:00.331656057Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 16:06:00.333142 containerd[2002]: time="2025-02-13T16:06:00.332198385Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 16:06:00.333142 containerd[2002]: time="2025-02-13T16:06:00.332234253Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 16:06:00.333142 containerd[2002]: time="2025-02-13T16:06:00.332295033Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 16:06:00.333142 containerd[2002]: time="2025-02-13T16:06:00.332332485Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 16:06:00.333142 containerd[2002]: time="2025-02-13T16:06:00.332389953Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 16:06:00.333142 containerd[2002]: time="2025-02-13T16:06:00.332424765Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 16:06:00.333142 containerd[2002]: time="2025-02-13T16:06:00.332658345Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 16:06:00.333142 containerd[2002]: time="2025-02-13T16:06:00.332714553Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 16:06:00.333142 containerd[2002]: time="2025-02-13T16:06:00.332748105Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 16:06:00.333618 containerd[2002]: time="2025-02-13T16:06:00.333163257Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Feb 13 16:06:00.333618 containerd[2002]: time="2025-02-13T16:06:00.333241269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.333618 containerd[2002]: time="2025-02-13T16:06:00.333327261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.333618 containerd[2002]: time="2025-02-13T16:06:00.333364413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.333618 containerd[2002]: time="2025-02-13T16:06:00.333425649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.333618 containerd[2002]: time="2025-02-13T16:06:00.333458889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.337133 containerd[2002]: time="2025-02-13T16:06:00.335166741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.337133 containerd[2002]: time="2025-02-13T16:06:00.335241177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.337133 containerd[2002]: time="2025-02-13T16:06:00.335275953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.337133 containerd[2002]: time="2025-02-13T16:06:00.335332569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.337133 containerd[2002]: time="2025-02-13T16:06:00.335368965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.337133 containerd[2002]: time="2025-02-13T16:06:00.335423673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.337133 containerd[2002]: time="2025-02-13T16:06:00.335456793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.337133 containerd[2002]: time="2025-02-13T16:06:00.335515545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.337133 containerd[2002]: time="2025-02-13T16:06:00.335575053Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 16:06:00.337133 containerd[2002]: time="2025-02-13T16:06:00.335661753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.337133 containerd[2002]: time="2025-02-13T16:06:00.335703405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.337133 containerd[2002]: time="2025-02-13T16:06:00.335755833Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 16:06:00.337133 containerd[2002]: time="2025-02-13T16:06:00.336050481Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 16:06:00.337133 containerd[2002]: time="2025-02-13T16:06:00.336089397Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 16:06:00.337783 containerd[2002]: time="2025-02-13T16:06:00.336144813Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 16:06:00.337783 containerd[2002]: time="2025-02-13T16:06:00.336175425Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 16:06:00.337783 containerd[2002]: time="2025-02-13T16:06:00.336223977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.337783 containerd[2002]: time="2025-02-13T16:06:00.336258033Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 16:06:00.337783 containerd[2002]: time="2025-02-13T16:06:00.336304473Z" level=info msg="NRI interface is disabled by configuration." Feb 13 16:06:00.337783 containerd[2002]: time="2025-02-13T16:06:00.336341097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 16:06:00.338050 containerd[2002]: time="2025-02-13T16:06:00.337061721Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 16:06:00.338050 containerd[2002]: time="2025-02-13T16:06:00.337532745Z" level=info msg="Connect containerd service" Feb 13 16:06:00.338050 containerd[2002]: time="2025-02-13T16:06:00.337634169Z" level=info msg="using legacy CRI server" Feb 13 16:06:00.338050 containerd[2002]: time="2025-02-13T16:06:00.337654413Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 16:06:00.338407 containerd[2002]: time="2025-02-13T16:06:00.338346669Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 16:06:00.342131 containerd[2002]: time="2025-02-13T16:06:00.341487933Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 16:06:00.342131 containerd[2002]: time="2025-02-13T16:06:00.341734569Z" level=info msg="Start subscribing containerd event" Feb 13 16:06:00.342131 containerd[2002]: time="2025-02-13T16:06:00.341813217Z" level=info msg="Start recovering state" Feb 13 16:06:00.342131 containerd[2002]: time="2025-02-13T16:06:00.341953233Z" level=info msg="Start event monitor" Feb 13 16:06:00.342131 containerd[2002]: time="2025-02-13T16:06:00.341980005Z" level=info msg="Start snapshots syncer" Feb 13 16:06:00.342131 containerd[2002]: time="2025-02-13T16:06:00.342001185Z" level=info msg="Start cni network conf syncer for default" Feb 13 16:06:00.342131 containerd[2002]: time="2025-02-13T16:06:00.342019161Z" level=info msg="Start streaming server" Feb 13 16:06:00.343455 containerd[2002]: time="2025-02-13T16:06:00.343400901Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 16:06:00.343761 containerd[2002]: time="2025-02-13T16:06:00.343597929Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 16:06:00.348325 containerd[2002]: time="2025-02-13T16:06:00.345238665Z" level=info msg="containerd successfully booted in 0.179698s" Feb 13 16:06:00.367699 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 16:06:00.370597 systemd[1]: Started containerd.service - containerd container runtime. 
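At this point containerd reports that it is serving on /run/containerd/containerd.sock (plus the ttrpc socket); the CNI error above only means /etc/cni/net.d is still empty. A small sketch, assuming the containerd Go client module (github.com/containerd/containerd) is available and the socket is readable, of connecting to that socket and printing the daemon version the way any client of this service would:

```go
// containerd_version.go: sketch of talking to the containerd socket
// shown being served in the log above. Assumes the containerd Go
// client module is on the module path and the socket is accessible.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ver, err := client.Version(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	// Corresponds to the version/revision fields containerd logs at startup.
	fmt.Printf("containerd %s (revision %s)\n", ver.Version, ver.Revision)
}
```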
Feb 13 16:06:00.428893 polkitd[2162]: Started polkitd version 121 Feb 13 16:06:00.442045 coreos-metadata[2099]: Feb 13 16:06:00.441 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 16:06:00.442045 coreos-metadata[2099]: Feb 13 16:06:00.441 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 16:06:00.442045 coreos-metadata[2099]: Feb 13 16:06:00.441 INFO Fetch successful Feb 13 16:06:00.442045 coreos-metadata[2099]: Feb 13 16:06:00.441 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 16:06:00.451208 coreos-metadata[2099]: Feb 13 16:06:00.443 INFO Fetch successful Feb 13 16:06:00.454886 unknown[2099]: wrote ssh authorized keys file for user: core Feb 13 16:06:00.480710 polkitd[2162]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 16:06:00.480834 polkitd[2162]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 16:06:00.491183 polkitd[2162]: Finished loading, compiling and executing 2 rules Feb 13 16:06:00.495292 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 16:06:00.496766 polkitd[2162]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 16:06:00.502090 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 16:06:00.556718 update-ssh-keys[2185]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:06:00.560282 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 16:06:00.567504 systemd[1]: Finished sshkeys.service. Feb 13 16:06:00.593376 systemd-resolved[1936]: System hostname changed to 'ip-172-31-25-78'. Feb 13 16:06:00.595271 systemd-hostnamed[2025]: Hostname set to (transient) Feb 13 16:06:00.601053 ntpd[1987]: bind(24) AF_INET6 fe80::48c:beff:fe1a:ff71%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 16:06:00.601165 ntpd[1987]: unable to create socket on eth0 (6) for fe80::48c:beff:fe1a:ff71%2#123 Feb 13 16:06:00.601529 ntpd[1987]: 13 Feb 16:06:00 ntpd[1987]: bind(24) AF_INET6 fe80::48c:beff:fe1a:ff71%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 16:06:00.601529 ntpd[1987]: 13 Feb 16:06:00 ntpd[1987]: unable to create socket on eth0 (6) for fe80::48c:beff:fe1a:ff71%2#123 Feb 13 16:06:00.601529 ntpd[1987]: 13 Feb 16:06:00 ntpd[1987]: failed to init interface for address fe80::48c:beff:fe1a:ff71%2 Feb 13 16:06:00.601196 ntpd[1987]: failed to init interface for address fe80::48c:beff:fe1a:ff71%2 Feb 13 16:06:00.947313 systemd-networkd[1934]: eth0: Gained IPv6LL Feb 13 16:06:00.954803 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 16:06:00.961475 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 16:06:00.973834 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 16:06:00.987769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:00.994653 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 16:06:01.132870 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 16:06:01.140283 amazon-ssm-agent[2190]: Initializing new seelog logger Feb 13 16:06:01.141969 amazon-ssm-agent[2190]: New Seelog Logger Creation Complete Feb 13 16:06:01.141969 amazon-ssm-agent[2190]: 2025/02/13 16:06:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
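The ntpd lines above keep failing to bind to fe80::48c:beff:fe1a:ff71%eth0 until systemd-networkd reports "eth0: Gained IPv6LL"; once the link-local address is actually usable, ntpd opens the socket (visible later in this log at 16:06:03). A hedged sketch of checking that condition from Go with the standard library only; the interface name is taken from the log:

```go
// linklocal_check.go: sketch of testing whether eth0 already carries an
// IPv6 link-local address, i.e. the condition ntpd is waiting on above.
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	iface, err := net.InterfaceByName("eth0")
	if err != nil {
		log.Fatal(err)
	}
	addrs, err := iface.Addrs()
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range addrs {
		ipnet, ok := a.(*net.IPNet)
		if !ok {
			continue
		}
		if ipnet.IP.To4() == nil && ipnet.IP.IsLinkLocalUnicast() {
			fmt.Println("link-local IPv6 present:", ipnet.IP)
			return
		}
	}
	fmt.Println("no IPv6 link-local address yet; binding fe80::...%eth0 would fail")
}
```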
Feb 13 16:06:01.141969 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:06:01.141969 amazon-ssm-agent[2190]: 2025/02/13 16:06:01 processing appconfig overrides Feb 13 16:06:01.145696 amazon-ssm-agent[2190]: 2025/02/13 16:06:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:06:01.145696 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:06:01.145696 amazon-ssm-agent[2190]: 2025/02/13 16:06:01 processing appconfig overrides Feb 13 16:06:01.145696 amazon-ssm-agent[2190]: 2025/02/13 16:06:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:06:01.145696 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:06:01.145696 amazon-ssm-agent[2190]: 2025/02/13 16:06:01 processing appconfig overrides Feb 13 16:06:01.147204 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO Proxy environment variables: Feb 13 16:06:01.151909 amazon-ssm-agent[2190]: 2025/02/13 16:06:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:06:01.151909 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 16:06:01.152073 amazon-ssm-agent[2190]: 2025/02/13 16:06:01 processing appconfig overrides Feb 13 16:06:01.164132 tar[2001]: linux-arm64/LICENSE Feb 13 16:06:01.164132 tar[2001]: linux-arm64/README.md Feb 13 16:06:01.212248 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 16:06:01.246718 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO https_proxy: Feb 13 16:06:01.345013 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO http_proxy: Feb 13 16:06:01.444126 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO no_proxy: Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO Checking if agent identity type OnPrem can be assumed Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO Checking if agent identity type EC2 can be assumed Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO Agent will take identity from EC2 Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO [Registrar] Starting registrar module Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO [EC2Identity] EC2 registration was successful. 
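The amazon-ssm-agent lines above print empty https_proxy/http_proxy/no_proxy values before deciding how to reach its service endpoints. For reference, a Go program resolves the same environment variables through the standard library; a minimal sketch (the endpoint URL is only a hypothetical placeholder for illustration):

```go
// proxy_env.go: sketch showing how a Go HTTP client resolves the
// http_proxy/https_proxy/no_proxy variables the agent prints above.
// With all three unset, ProxyFromEnvironment returns nil.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Hypothetical request target, used only to evaluate proxy rules.
	req, err := http.NewRequest("GET", "https://example.invalid/", nil)
	if err != nil {
		log.Fatal(err)
	}
	proxyURL, err := http.ProxyFromEnvironment(req)
	if err != nil {
		log.Fatal(err)
	}
	if proxyURL == nil {
		fmt.Println("no proxy configured; connecting directly")
	} else {
		fmt.Println("using proxy:", proxyURL)
	}
}
```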
Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO [CredentialRefresher] credentialRefresher has started Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 16:06:01.538930 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 16:06:01.541707 amazon-ssm-agent[2190]: 2025-02-13 16:06:01 INFO [CredentialRefresher] Next credential rotation will be in 31.208318171833334 minutes Feb 13 16:06:01.712751 sshd_keygen[2022]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 16:06:01.754567 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 16:06:01.765948 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 16:06:01.776614 systemd[1]: Started sshd@0-172.31.25.78:22-139.178.68.195:36448.service - OpenSSH per-connection server daemon (139.178.68.195:36448). Feb 13 16:06:01.789249 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 16:06:01.791193 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 16:06:01.800815 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 16:06:01.836574 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 16:06:01.846728 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 16:06:01.853910 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 16:06:01.856572 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 16:06:02.046044 sshd[2221]: Accepted publickey for core from 139.178.68.195 port 36448 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:02.049036 sshd[2221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:02.071391 systemd-logind[1993]: New session 1 of user core. Feb 13 16:06:02.075622 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 16:06:02.086451 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 16:06:02.123363 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 16:06:02.136640 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 16:06:02.155740 (systemd)[2232]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 16:06:02.381085 systemd[2232]: Queued start job for default target default.target. Feb 13 16:06:02.394490 systemd[2232]: Created slice app.slice - User Application Slice. Feb 13 16:06:02.394557 systemd[2232]: Reached target paths.target - Paths. Feb 13 16:06:02.394590 systemd[2232]: Reached target timers.target - Timers. Feb 13 16:06:02.397091 systemd[2232]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 16:06:02.434127 systemd[2232]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 16:06:02.434578 systemd[2232]: Reached target sockets.target - Sockets. Feb 13 16:06:02.434722 systemd[2232]: Reached target basic.target - Basic System. Feb 13 16:06:02.434830 systemd[2232]: Reached target default.target - Main User Target. Feb 13 16:06:02.434897 systemd[2232]: Startup finished in 266ms. Feb 13 16:06:02.434911 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 16:06:02.444413 systemd[1]: Started session-1.scope - Session 1 of User core. 
Feb 13 16:06:02.584008 amazon-ssm-agent[2190]: 2025-02-13 16:06:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 16:06:02.624350 systemd[1]: Started sshd@1-172.31.25.78:22-139.178.68.195:36454.service - OpenSSH per-connection server daemon (139.178.68.195:36454). Feb 13 16:06:02.685213 amazon-ssm-agent[2190]: 2025-02-13 16:06:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2243) started Feb 13 16:06:02.786375 amazon-ssm-agent[2190]: 2025-02-13 16:06:02 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 16:06:02.846559 sshd[2245]: Accepted publickey for core from 139.178.68.195 port 36454 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:02.849429 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:02.858817 systemd-logind[1993]: New session 2 of user core. Feb 13 16:06:02.869398 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 16:06:03.004593 sshd[2245]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:03.012938 systemd[1]: sshd@1-172.31.25.78:22-139.178.68.195:36454.service: Deactivated successfully. Feb 13 16:06:03.016761 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 16:06:03.018356 systemd-logind[1993]: Session 2 logged out. Waiting for processes to exit. Feb 13 16:06:03.020374 systemd-logind[1993]: Removed session 2. Feb 13 16:06:03.043667 systemd[1]: Started sshd@2-172.31.25.78:22-139.178.68.195:36466.service - OpenSSH per-connection server daemon (139.178.68.195:36466). Feb 13 16:06:03.221449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:03.224675 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 16:06:03.228173 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:06:03.232508 systemd[1]: Startup finished in 1.221s (kernel) + 8.175s (initrd) + 9.304s (userspace) = 18.701s. Feb 13 16:06:03.238224 sshd[2260]: Accepted publickey for core from 139.178.68.195 port 36466 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:03.243141 sshd[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:03.260588 systemd-logind[1993]: New session 3 of user core. Feb 13 16:06:03.268690 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 16:06:03.403792 sshd[2260]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:03.413139 systemd[1]: sshd@2-172.31.25.78:22-139.178.68.195:36466.service: Deactivated successfully. Feb 13 16:06:03.419052 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 16:06:03.420534 systemd-logind[1993]: Session 3 logged out. Waiting for processes to exit. Feb 13 16:06:03.422436 systemd-logind[1993]: Removed session 3. 
Feb 13 16:06:03.601045 ntpd[1987]: Listen normally on 7 eth0 [fe80::48c:beff:fe1a:ff71%2]:123 Feb 13 16:06:03.601851 ntpd[1987]: 13 Feb 16:06:03 ntpd[1987]: Listen normally on 7 eth0 [fe80::48c:beff:fe1a:ff71%2]:123 Feb 13 16:06:04.368309 kubelet[2267]: E0213 16:06:04.368228 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:06:04.373204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:06:04.373563 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:06:04.374280 systemd[1]: kubelet.service: Consumed 1.265s CPU time. Feb 13 16:06:07.041848 systemd-resolved[1936]: Clock change detected. Flushing caches. Feb 13 16:06:13.888251 systemd[1]: Started sshd@3-172.31.25.78:22-139.178.68.195:60764.service - OpenSSH per-connection server daemon (139.178.68.195:60764). Feb 13 16:06:14.065570 sshd[2284]: Accepted publickey for core from 139.178.68.195 port 60764 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:14.068197 sshd[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:14.075448 systemd-logind[1993]: New session 4 of user core. Feb 13 16:06:14.081933 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 16:06:14.211523 sshd[2284]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:14.218341 systemd[1]: sshd@3-172.31.25.78:22-139.178.68.195:60764.service: Deactivated successfully. Feb 13 16:06:14.221291 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 16:06:14.222605 systemd-logind[1993]: Session 4 logged out. Waiting for processes to exit. Feb 13 16:06:14.224457 systemd-logind[1993]: Removed session 4. Feb 13 16:06:14.257106 systemd[1]: Started sshd@4-172.31.25.78:22-139.178.68.195:60772.service - OpenSSH per-connection server daemon (139.178.68.195:60772). Feb 13 16:06:14.426922 sshd[2291]: Accepted publickey for core from 139.178.68.195 port 60772 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:14.429550 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:14.438972 systemd-logind[1993]: New session 5 of user core. Feb 13 16:06:14.445943 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 16:06:14.568193 sshd[2291]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:14.573519 systemd-logind[1993]: Session 5 logged out. Waiting for processes to exit. Feb 13 16:06:14.574138 systemd[1]: sshd@4-172.31.25.78:22-139.178.68.195:60772.service: Deactivated successfully. Feb 13 16:06:14.578139 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 16:06:14.582051 systemd-logind[1993]: Removed session 5. Feb 13 16:06:14.610189 systemd[1]: Started sshd@5-172.31.25.78:22-139.178.68.195:60774.service - OpenSSH per-connection server daemon (139.178.68.195:60774). Feb 13 16:06:14.778567 sshd[2298]: Accepted publickey for core from 139.178.68.195 port 60774 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:14.781112 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:14.788495 systemd-logind[1993]: New session 6 of user core. 
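The kubelet failure above (and the identical ones that follow) all trace back to one condition: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits with status 1 and systemd keeps scheduling restarts until something such as kubeadm writes that file. A minimal sketch reproducing the failing check, under the assumption that only the file's presence matters here:

```go
// kubelet_config_check.go: sketch reproducing the failing step in the
// kubelet errors above: reading /var/lib/kubelet/config.yaml, which is
// absent until kubeadm (or an equivalent) generates it.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	data, err := os.ReadFile(path)
	if errors.Is(err, fs.ErrNotExist) {
		// Same root cause as "open /var/lib/kubelet/config.yaml:
		// no such file or directory" in the kubelet error above.
		fmt.Printf("kubelet config missing at %s; kubelet would exit 1\n", path)
		return
	}
	if err != nil {
		fmt.Println("unexpected error:", err)
		return
	}
	fmt.Printf("kubelet config present (%d bytes)\n", len(data))
}
```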
Feb 13 16:06:14.799961 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 16:06:14.905816 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 16:06:14.912192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:14.932960 sshd[2298]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:14.945163 systemd[1]: sshd@5-172.31.25.78:22-139.178.68.195:60774.service: Deactivated successfully. Feb 13 16:06:14.950575 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 16:06:14.953837 systemd-logind[1993]: Session 6 logged out. Waiting for processes to exit. Feb 13 16:06:14.984973 systemd[1]: Started sshd@6-172.31.25.78:22-139.178.68.195:60786.service - OpenSSH per-connection server daemon (139.178.68.195:60786). Feb 13 16:06:14.989086 systemd-logind[1993]: Removed session 6. Feb 13 16:06:15.155307 sshd[2308]: Accepted publickey for core from 139.178.68.195 port 60786 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:06:15.159137 sshd[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:06:15.170830 systemd-logind[1993]: New session 7 of user core. Feb 13 16:06:15.178970 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 16:06:15.241709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:15.258152 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:06:15.309829 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 16:06:15.310465 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:06:15.370776 kubelet[2316]: E0213 16:06:15.370370 2316 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:06:15.381932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:06:15.382260 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:06:15.768159 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 16:06:15.769961 (dockerd)[2338]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 16:06:16.153727 dockerd[2338]: time="2025-02-13T16:06:16.152154085Z" level=info msg="Starting up" Feb 13 16:06:16.293201 dockerd[2338]: time="2025-02-13T16:06:16.292763389Z" level=info msg="Loading containers: start." Feb 13 16:06:16.446852 kernel: Initializing XFRM netlink socket Feb 13 16:06:16.483169 (udev-worker)[2361]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:06:16.573103 systemd-networkd[1934]: docker0: Link UP Feb 13 16:06:16.597194 dockerd[2338]: time="2025-02-13T16:06:16.597045087Z" level=info msg="Loading containers: done." 
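dockerd comes up here: it finishes loading containers and, on the following lines, reports the API listening on /run/docker.sock. A small sketch, assuming the Docker Engine Go SDK (github.com/docker/docker/client) is available and the default socket path /var/run/docker.sock resolves to /run/docker.sock (as the docker.socket note later in this log indicates), of confirming the daemon the way a client would:

```go
// docker_ping.go: sketch of querying the Docker daemon whose startup is
// logged above. Assumes the Docker Engine Go SDK and a readable socket.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ver, err := cli.ServerVersion(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	// Corresponds to the "Docker daemon" commit/version line in the log.
	fmt.Printf("docker %s (API %s)\n", ver.Version, ver.APIVersion)
}
```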
Feb 13 16:06:16.620631 dockerd[2338]: time="2025-02-13T16:06:16.620551971Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 16:06:16.620974 dockerd[2338]: time="2025-02-13T16:06:16.620732655Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 16:06:16.620974 dockerd[2338]: time="2025-02-13T16:06:16.620939379Z" level=info msg="Daemon has completed initialization" Feb 13 16:06:16.682461 dockerd[2338]: time="2025-02-13T16:06:16.681097875Z" level=info msg="API listen on /run/docker.sock" Feb 13 16:06:16.682028 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 16:06:17.783710 containerd[2002]: time="2025-02-13T16:06:17.783388421Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 16:06:18.649064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1574014563.mount: Deactivated successfully. Feb 13 16:06:21.193003 containerd[2002]: time="2025-02-13T16:06:21.192936126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:21.195243 containerd[2002]: time="2025-02-13T16:06:21.195171990Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620375" Feb 13 16:06:21.197200 containerd[2002]: time="2025-02-13T16:06:21.197099022Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:21.202866 containerd[2002]: time="2025-02-13T16:06:21.202771326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:21.205710 containerd[2002]: time="2025-02-13T16:06:21.205216350Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 3.421764557s" Feb 13 16:06:21.205710 containerd[2002]: time="2025-02-13T16:06:21.205275258Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 16:06:21.206495 containerd[2002]: time="2025-02-13T16:06:21.206455506Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 16:06:23.781279 containerd[2002]: time="2025-02-13T16:06:23.781170971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:23.783491 containerd[2002]: time="2025-02-13T16:06:23.783383219Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471773" Feb 13 16:06:23.784240 containerd[2002]: time="2025-02-13T16:06:23.784131887Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:23.790031 containerd[2002]: time="2025-02-13T16:06:23.789933491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:23.792693 containerd[2002]: time="2025-02-13T16:06:23.792445847Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 2.585816533s" Feb 13 16:06:23.792693 containerd[2002]: time="2025-02-13T16:06:23.792509123Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 16:06:23.793807 containerd[2002]: time="2025-02-13T16:06:23.793366943Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 16:06:25.632618 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 16:06:25.642196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:25.690154 containerd[2002]: time="2025-02-13T16:06:25.690095856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:25.694077 containerd[2002]: time="2025-02-13T16:06:25.693739488Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024540" Feb 13 16:06:25.694618 containerd[2002]: time="2025-02-13T16:06:25.694536456Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:25.701687 containerd[2002]: time="2025-02-13T16:06:25.701586480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:25.710716 containerd[2002]: time="2025-02-13T16:06:25.710049084Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.916627073s" Feb 13 16:06:25.710716 containerd[2002]: time="2025-02-13T16:06:25.710128164Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 16:06:25.714101 containerd[2002]: time="2025-02-13T16:06:25.714034164Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 16:06:25.936580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 16:06:25.950161 (kubelet)[2548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:06:26.024110 kubelet[2548]: E0213 16:06:26.023970 2548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:06:26.028058 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:06:26.028711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:06:27.121625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4218837668.mount: Deactivated successfully. Feb 13 16:06:27.626925 containerd[2002]: time="2025-02-13T16:06:27.626860754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:27.628317 containerd[2002]: time="2025-02-13T16:06:27.628263314Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769256" Feb 13 16:06:27.629301 containerd[2002]: time="2025-02-13T16:06:27.629215394Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:27.634000 containerd[2002]: time="2025-02-13T16:06:27.633907406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:27.635322 containerd[2002]: time="2025-02-13T16:06:27.635263466Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.921159126s" Feb 13 16:06:27.635454 containerd[2002]: time="2025-02-13T16:06:27.635321150Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 16:06:27.636530 containerd[2002]: time="2025-02-13T16:06:27.636209750Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 16:06:28.229113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832744777.mount: Deactivated successfully. 
Feb 13 16:06:29.367392 containerd[2002]: time="2025-02-13T16:06:29.367310402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:29.370014 containerd[2002]: time="2025-02-13T16:06:29.369898610Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 16:06:29.372341 containerd[2002]: time="2025-02-13T16:06:29.372208994Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:29.381923 containerd[2002]: time="2025-02-13T16:06:29.381831218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:29.383755 containerd[2002]: time="2025-02-13T16:06:29.383511662Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.747241492s" Feb 13 16:06:29.383755 containerd[2002]: time="2025-02-13T16:06:29.383569946Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 16:06:29.385331 containerd[2002]: time="2025-02-13T16:06:29.384974930Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 16:06:30.019315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4232867163.mount: Deactivated successfully. 
Feb 13 16:06:30.031710 containerd[2002]: time="2025-02-13T16:06:30.031041794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:30.033363 containerd[2002]: time="2025-02-13T16:06:30.033309386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Feb 13 16:06:30.035463 containerd[2002]: time="2025-02-13T16:06:30.035423018Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:30.040584 containerd[2002]: time="2025-02-13T16:06:30.040519886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:30.042474 containerd[2002]: time="2025-02-13T16:06:30.042386258Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 657.330076ms" Feb 13 16:06:30.042689 containerd[2002]: time="2025-02-13T16:06:30.042630098Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 16:06:30.043448 containerd[2002]: time="2025-02-13T16:06:30.043307618Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 16:06:30.709865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount733667889.mount: Deactivated successfully. Feb 13 16:06:31.064834 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
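The pulls logged above (kube-apiserver through pause:3.10, with etcd still in flight) are all performed through containerd's client API. A hedged sketch of the same kind of pull using the containerd Go client, with registry.k8s.io/pause:3.10 as the reference since that is the image just pulled; the "k8s.io" namespace is the one the kubelet uses, assumed here for illustration:

```go
// pull_pause.go: sketch of pulling registry.k8s.io/pause:3.10 through
// containerd, roughly what produces the "PullImage ... returns image
// reference" lines above. Assumes the containerd Go client module and
// access to /run/containerd/containerd.sock.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The kubelet's images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	image, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", image.Name(), "digest:", image.Target().Digest)
}
```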
Feb 13 16:06:34.521726 containerd[2002]: time="2025-02-13T16:06:34.521332172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:34.523876 containerd[2002]: time="2025-02-13T16:06:34.523805156Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406425" Feb 13 16:06:34.526495 containerd[2002]: time="2025-02-13T16:06:34.526419512Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:34.533130 containerd[2002]: time="2025-02-13T16:06:34.533045768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:06:34.535931 containerd[2002]: time="2025-02-13T16:06:34.535712924Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.491896674s" Feb 13 16:06:34.535931 containerd[2002]: time="2025-02-13T16:06:34.535770464Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 16:06:36.114787 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 16:06:36.125311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:36.448953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:36.450333 (kubelet)[2695]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:06:36.541546 kubelet[2695]: E0213 16:06:36.541444 2695 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:06:36.545417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:06:36.545985 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:06:42.312415 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:42.322192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:42.391885 systemd[1]: Reloading requested from client PID 2710 ('systemctl') (unit session-7.scope)... Feb 13 16:06:42.391914 systemd[1]: Reloading... Feb 13 16:06:42.639059 zram_generator::config[2753]: No configuration found. Feb 13 16:06:42.871482 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:06:43.042070 systemd[1]: Reloading finished in 649 ms. Feb 13 16:06:43.131025 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 16:06:43.131209 systemd[1]: kubelet.service: Failed with result 'signal'. 
Feb 13 16:06:43.132156 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:43.142253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:43.416374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:43.432351 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:06:43.513407 kubelet[2813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:06:43.515692 kubelet[2813]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:06:43.515692 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:06:43.515692 kubelet[2813]: I0213 16:06:43.514114 2813 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:06:44.973795 update_engine[1995]: I20250213 16:06:44.973719 1995 update_attempter.cc:509] Updating boot flags... Feb 13 16:06:45.041947 kubelet[2813]: I0213 16:06:45.041881 2813 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 16:06:45.041947 kubelet[2813]: I0213 16:06:45.041937 2813 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:06:45.042531 kubelet[2813]: I0213 16:06:45.042383 2813 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 16:06:45.083724 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2835) Feb 13 16:06:45.102909 kubelet[2813]: E0213 16:06:45.102837 2813 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.25.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:06:45.107012 kubelet[2813]: I0213 16:06:45.106937 2813 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:06:45.130745 kubelet[2813]: E0213 16:06:45.130465 2813 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 16:06:45.130745 kubelet[2813]: I0213 16:06:45.130524 2813 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 16:06:45.140083 kubelet[2813]: I0213 16:06:45.140039 2813 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 16:06:45.140499 kubelet[2813]: I0213 16:06:45.140476 2813 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 16:06:45.142479 kubelet[2813]: I0213 16:06:45.140906 2813 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:06:45.142479 kubelet[2813]: I0213 16:06:45.140958 2813 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-78","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 16:06:45.142479 kubelet[2813]: I0213 16:06:45.141331 2813 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:06:45.142479 kubelet[2813]: I0213 16:06:45.141353 2813 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 16:06:45.142896 kubelet[2813]: I0213 16:06:45.141594 2813 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:06:45.147931 kubelet[2813]: I0213 16:06:45.147883 2813 kubelet.go:408] "Attempting to sync node with API server" Feb 13 16:06:45.148755 kubelet[2813]: I0213 16:06:45.148721 2813 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:06:45.148969 kubelet[2813]: I0213 16:06:45.148949 2813 kubelet.go:314] "Adding apiserver pod source" Feb 13 16:06:45.149695 kubelet[2813]: I0213 16:06:45.149642 2813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:06:45.159270 kubelet[2813]: W0213 16:06:45.157189 2813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-78&limit=500&resourceVersion=0": dial tcp 172.31.25.78:6443: connect: connection refused Feb 13 16:06:45.159270 kubelet[2813]: E0213 16:06:45.157297 2813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.25.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-78&limit=500&resourceVersion=0\": dial tcp 172.31.25.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:06:45.161995 kubelet[2813]: I0213 16:06:45.161719 2813 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 16:06:45.166760 kubelet[2813]: I0213 16:06:45.166692 2813 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:06:45.169269 kubelet[2813]: W0213 16:06:45.169220 2813 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 16:06:45.173700 kubelet[2813]: W0213 16:06:45.171863 2813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.78:6443: connect: connection refused Feb 13 16:06:45.173700 kubelet[2813]: E0213 16:06:45.171979 2813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:06:45.179950 kubelet[2813]: I0213 16:06:45.179885 2813 server.go:1269] "Started kubelet" Feb 13 16:06:45.195291 kubelet[2813]: I0213 16:06:45.195213 2813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:06:45.201017 kubelet[2813]: I0213 16:06:45.200934 2813 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:06:45.211484 kubelet[2813]: I0213 16:06:45.211337 2813 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 16:06:45.212458 kubelet[2813]: E0213 16:06:45.212406 2813 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-78\" not found" Feb 13 16:06:45.212565 kubelet[2813]: E0213 16:06:45.200456 2813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.78:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.78:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-78.1823d0348abfa235 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-78,UID:ip-172-31-25-78,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-78,},FirstTimestamp:2025-02-13 16:06:45.179834933 +0000 UTC m=+1.740220474,LastTimestamp:2025-02-13 16:06:45.179834933 +0000 UTC m=+1.740220474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-78,}" Feb 13 16:06:45.213225 kubelet[2813]: I0213 16:06:45.213181 2813 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 16:06:45.213360 kubelet[2813]: I0213 16:06:45.201975 2813 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 16:06:45.213549 kubelet[2813]: I0213 16:06:45.213511 2813 reconciler.go:26] "Reconciler: start to sync state" Feb 13 16:06:45.216514 
kubelet[2813]: I0213 16:06:45.214191 2813 server.go:460] "Adding debug handlers to kubelet server" Feb 13 16:06:45.220254 kubelet[2813]: I0213 16:06:45.201243 2813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:06:45.220254 kubelet[2813]: I0213 16:06:45.220155 2813 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:06:45.221809 kubelet[2813]: W0213 16:06:45.221447 2813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.78:6443: connect: connection refused Feb 13 16:06:45.221809 kubelet[2813]: E0213 16:06:45.221581 2813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:06:45.224049 kubelet[2813]: E0213 16:06:45.223882 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-78?timeout=10s\": dial tcp 172.31.25.78:6443: connect: connection refused" interval="200ms" Feb 13 16:06:45.227975 kubelet[2813]: I0213 16:06:45.227912 2813 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:06:45.228139 kubelet[2813]: I0213 16:06:45.228088 2813 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:06:45.235292 kubelet[2813]: I0213 16:06:45.234920 2813 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:06:45.262852 kubelet[2813]: E0213 16:06:45.260165 2813 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 16:06:45.296821 kubelet[2813]: I0213 16:06:45.296765 2813 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:06:45.296821 kubelet[2813]: I0213 16:06:45.296809 2813 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:06:45.297035 kubelet[2813]: I0213 16:06:45.296847 2813 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:06:45.301780 kubelet[2813]: I0213 16:06:45.301697 2813 policy_none.go:49] "None policy: Start" Feb 13 16:06:45.303921 kubelet[2813]: I0213 16:06:45.303869 2813 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:06:45.303921 kubelet[2813]: I0213 16:06:45.303922 2813 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:06:45.313734 kubelet[2813]: E0213 16:06:45.313612 2813 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-78\" not found" Feb 13 16:06:45.330685 kubelet[2813]: I0213 16:06:45.328507 2813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 16:06:45.347504 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 16:06:45.353973 kubelet[2813]: I0213 16:06:45.352585 2813 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 16:06:45.353973 kubelet[2813]: I0213 16:06:45.352637 2813 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:06:45.353973 kubelet[2813]: I0213 16:06:45.352694 2813 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 16:06:45.353973 kubelet[2813]: E0213 16:06:45.352765 2813 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:06:45.381245 kubelet[2813]: W0213 16:06:45.379186 2813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.78:6443: connect: connection refused Feb 13 16:06:45.381245 kubelet[2813]: E0213 16:06:45.379289 2813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:06:45.414717 kubelet[2813]: E0213 16:06:45.413870 2813 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-78\" not found" Feb 13 16:06:45.428112 kubelet[2813]: E0213 16:06:45.428055 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-78?timeout=10s\": dial tcp 172.31.25.78:6443: connect: connection refused" interval="400ms" Feb 13 16:06:45.446617 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 16:06:45.456978 kubelet[2813]: E0213 16:06:45.456920 2813 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 16:06:45.510699 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2839) Feb 13 16:06:45.514674 kubelet[2813]: E0213 16:06:45.514597 2813 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-78\" not found" Feb 13 16:06:45.514908 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 16:06:45.526569 kubelet[2813]: I0213 16:06:45.526504 2813 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:06:45.527026 kubelet[2813]: I0213 16:06:45.526817 2813 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 16:06:45.527026 kubelet[2813]: I0213 16:06:45.526848 2813 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 16:06:45.527597 kubelet[2813]: I0213 16:06:45.527448 2813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:06:45.536907 kubelet[2813]: E0213 16:06:45.535820 2813 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-78\" not found" Feb 13 16:06:45.632729 kubelet[2813]: I0213 16:06:45.631299 2813 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-78" Feb 13 16:06:45.632729 kubelet[2813]: E0213 16:06:45.631851 2813 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.78:6443/api/v1/nodes\": dial tcp 172.31.25.78:6443: connect: connection refused" node="ip-172-31-25-78" Feb 13 16:06:45.721196 kubelet[2813]: I0213 16:06:45.717879 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/714c83387e4f038b5e2eda58f6296db9-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-78\" (UID: \"714c83387e4f038b5e2eda58f6296db9\") " pod="kube-system/kube-apiserver-ip-172-31-25-78" Feb 13 16:06:45.721196 kubelet[2813]: I0213 16:06:45.717971 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/714c83387e4f038b5e2eda58f6296db9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-78\" (UID: \"714c83387e4f038b5e2eda58f6296db9\") " pod="kube-system/kube-apiserver-ip-172-31-25-78" Feb 13 16:06:45.721196 kubelet[2813]: I0213 16:06:45.718038 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fcb130508017cdec707fd34aeb373da5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-78\" (UID: \"fcb130508017cdec707fd34aeb373da5\") " pod="kube-system/kube-controller-manager-ip-172-31-25-78" Feb 13 16:06:45.721196 kubelet[2813]: I0213 16:06:45.718082 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fcb130508017cdec707fd34aeb373da5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-78\" (UID: \"fcb130508017cdec707fd34aeb373da5\") " pod="kube-system/kube-controller-manager-ip-172-31-25-78" Feb 13 16:06:45.721196 kubelet[2813]: I0213 16:06:45.718147 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fcb130508017cdec707fd34aeb373da5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-78\" (UID: \"fcb130508017cdec707fd34aeb373da5\") " pod="kube-system/kube-controller-manager-ip-172-31-25-78" Feb 13 16:06:45.721585 kubelet[2813]: I0213 16:06:45.718185 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/fcb130508017cdec707fd34aeb373da5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-78\" (UID: \"fcb130508017cdec707fd34aeb373da5\") " pod="kube-system/kube-controller-manager-ip-172-31-25-78" Feb 13 16:06:45.721585 kubelet[2813]: I0213 16:06:45.718252 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52c09372b612df435e65d5547ac2d48b-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-78\" (UID: \"52c09372b612df435e65d5547ac2d48b\") " pod="kube-system/kube-scheduler-ip-172-31-25-78" Feb 13 16:06:45.721585 kubelet[2813]: I0213 16:06:45.718312 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/714c83387e4f038b5e2eda58f6296db9-ca-certs\") pod \"kube-apiserver-ip-172-31-25-78\" (UID: \"714c83387e4f038b5e2eda58f6296db9\") " pod="kube-system/kube-apiserver-ip-172-31-25-78" Feb 13 16:06:45.721585 kubelet[2813]: I0213 16:06:45.718352 2813 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fcb130508017cdec707fd34aeb373da5-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-78\" (UID: \"fcb130508017cdec707fd34aeb373da5\") " pod="kube-system/kube-controller-manager-ip-172-31-25-78" Feb 13 16:06:45.769608 systemd[1]: Created slice kubepods-burstable-podfcb130508017cdec707fd34aeb373da5.slice - libcontainer container kubepods-burstable-podfcb130508017cdec707fd34aeb373da5.slice. Feb 13 16:06:45.790710 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2839) Feb 13 16:06:45.808886 systemd[1]: Created slice kubepods-burstable-pod52c09372b612df435e65d5547ac2d48b.slice - libcontainer container kubepods-burstable-pod52c09372b612df435e65d5547ac2d48b.slice. Feb 13 16:06:45.821515 systemd[1]: Created slice kubepods-burstable-pod714c83387e4f038b5e2eda58f6296db9.slice - libcontainer container kubepods-burstable-pod714c83387e4f038b5e2eda58f6296db9.slice. 
Feb 13 16:06:45.828957 kubelet[2813]: E0213 16:06:45.828873 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-78?timeout=10s\": dial tcp 172.31.25.78:6443: connect: connection refused" interval="800ms" Feb 13 16:06:45.829789 containerd[2002]: time="2025-02-13T16:06:45.829735916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-78,Uid:714c83387e4f038b5e2eda58f6296db9,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:45.837443 kubelet[2813]: I0213 16:06:45.836412 2813 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-78" Feb 13 16:06:45.837443 kubelet[2813]: E0213 16:06:45.836934 2813 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.78:6443/api/v1/nodes\": dial tcp 172.31.25.78:6443: connect: connection refused" node="ip-172-31-25-78" Feb 13 16:06:46.099634 containerd[2002]: time="2025-02-13T16:06:46.099094986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-78,Uid:fcb130508017cdec707fd34aeb373da5,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:46.118749 containerd[2002]: time="2025-02-13T16:06:46.118565046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-78,Uid:52c09372b612df435e65d5547ac2d48b,Namespace:kube-system,Attempt:0,}" Feb 13 16:06:46.239895 kubelet[2813]: I0213 16:06:46.239837 2813 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-78" Feb 13 16:06:46.240590 kubelet[2813]: E0213 16:06:46.240482 2813 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.78:6443/api/v1/nodes\": dial tcp 172.31.25.78:6443: connect: connection refused" node="ip-172-31-25-78" Feb 13 16:06:46.308310 kubelet[2813]: W0213 16:06:46.308211 2813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.25.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.25.78:6443: connect: connection refused Feb 13 16:06:46.308474 kubelet[2813]: E0213 16:06:46.308349 2813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.25.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.25.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:06:46.444435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773851314.mount: Deactivated successfully. 
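The repeated "Failed to ensure lease exists, will retry" errors target the kubelet's heartbeat Lease at kube-node-lease/ip-172-31-25-78. A small client-go sketch (admin kubeconfig path assumed) that reads that Lease once the API server is reachable:

```go
// Sketch, not kubelet code: fetch the node heartbeat Lease named in the log.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").
		Get(context.TODO(), "ip-172-31-25-78", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err) // connection refused until kube-apiserver is serving on :6443
	}
	fmt.Println("lease last renewed at:", lease.Spec.RenewTime)
}
```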
Feb 13 16:06:46.459793 containerd[2002]: time="2025-02-13T16:06:46.458815927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:06:46.465206 containerd[2002]: time="2025-02-13T16:06:46.465145687Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 16:06:46.468594 containerd[2002]: time="2025-02-13T16:06:46.467842339Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:06:46.470170 containerd[2002]: time="2025-02-13T16:06:46.469862395Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:06:46.473308 containerd[2002]: time="2025-02-13T16:06:46.473247247Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:06:46.475275 containerd[2002]: time="2025-02-13T16:06:46.475210243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:06:46.477022 containerd[2002]: time="2025-02-13T16:06:46.476531347Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 16:06:46.482029 containerd[2002]: time="2025-02-13T16:06:46.481969315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 16:06:46.486297 containerd[2002]: time="2025-02-13T16:06:46.486246991Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 367.553965ms" Feb 13 16:06:46.489889 containerd[2002]: time="2025-02-13T16:06:46.489808543Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 389.861221ms" Feb 13 16:06:46.491259 containerd[2002]: time="2025-02-13T16:06:46.491194747Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 660.080979ms" Feb 13 16:06:46.549697 kubelet[2813]: W0213 16:06:46.542703 2813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.25.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.78:6443: connect: connection refused Feb 13 16:06:46.549697 
kubelet[2813]: E0213 16:06:46.542811 2813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.25.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.25.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:06:46.630023 kubelet[2813]: E0213 16:06:46.629943 2813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-78?timeout=10s\": dial tcp 172.31.25.78:6443: connect: connection refused" interval="1.6s" Feb 13 16:06:46.642888 kubelet[2813]: W0213 16:06:46.641567 2813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.25.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-78&limit=500&resourceVersion=0": dial tcp 172.31.25.78:6443: connect: connection refused Feb 13 16:06:46.642888 kubelet[2813]: E0213 16:06:46.641763 2813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.25.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-78&limit=500&resourceVersion=0\": dial tcp 172.31.25.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:06:46.673972 containerd[2002]: time="2025-02-13T16:06:46.673787672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:46.673972 containerd[2002]: time="2025-02-13T16:06:46.673913468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:46.674437 containerd[2002]: time="2025-02-13T16:06:46.674194868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:46.674726 containerd[2002]: time="2025-02-13T16:06:46.674555768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:46.678361 containerd[2002]: time="2025-02-13T16:06:46.677968508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:46.678361 containerd[2002]: time="2025-02-13T16:06:46.678185936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:46.678361 containerd[2002]: time="2025-02-13T16:06:46.678224456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:46.679286 containerd[2002]: time="2025-02-13T16:06:46.678925688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:46.685376 containerd[2002]: time="2025-02-13T16:06:46.685129376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:06:46.686494 containerd[2002]: time="2025-02-13T16:06:46.686163032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:06:46.687098 containerd[2002]: time="2025-02-13T16:06:46.686576312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:46.688714 containerd[2002]: time="2025-02-13T16:06:46.687302252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:06:46.727088 systemd[1]: Started cri-containerd-0f1e1198cfd4aefe83cf2639ef55f46578d36402dc903810c1e4879a1767a03f.scope - libcontainer container 0f1e1198cfd4aefe83cf2639ef55f46578d36402dc903810c1e4879a1767a03f. Feb 13 16:06:46.747492 systemd[1]: Started cri-containerd-557424b9df31e0bddbfc695d82c48add36fa99aabc35202fb58438dd60435601.scope - libcontainer container 557424b9df31e0bddbfc695d82c48add36fa99aabc35202fb58438dd60435601. Feb 13 16:06:46.769026 systemd[1]: Started cri-containerd-c6036d9180955e7badb004caeda649848ffd34de1a8f14b182a769d86adcc545.scope - libcontainer container c6036d9180955e7badb004caeda649848ffd34de1a8f14b182a769d86adcc545. Feb 13 16:06:46.835842 kubelet[2813]: W0213 16:06:46.835485 2813 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.25.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.78:6443: connect: connection refused Feb 13 16:06:46.835842 kubelet[2813]: E0213 16:06:46.835619 2813 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.25.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.25.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:06:46.852302 containerd[2002]: time="2025-02-13T16:06:46.852073029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-78,Uid:fcb130508017cdec707fd34aeb373da5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f1e1198cfd4aefe83cf2639ef55f46578d36402dc903810c1e4879a1767a03f\"" Feb 13 16:06:46.861112 containerd[2002]: time="2025-02-13T16:06:46.860809077Z" level=info msg="CreateContainer within sandbox \"0f1e1198cfd4aefe83cf2639ef55f46578d36402dc903810c1e4879a1767a03f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 16:06:46.891246 containerd[2002]: time="2025-02-13T16:06:46.891023913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-78,Uid:714c83387e4f038b5e2eda58f6296db9,Namespace:kube-system,Attempt:0,} returns sandbox id \"557424b9df31e0bddbfc695d82c48add36fa99aabc35202fb58438dd60435601\"" Feb 13 16:06:46.897976 containerd[2002]: time="2025-02-13T16:06:46.897919005Z" level=info msg="CreateContainer within sandbox \"557424b9df31e0bddbfc695d82c48add36fa99aabc35202fb58438dd60435601\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 16:06:46.909503 containerd[2002]: time="2025-02-13T16:06:46.909413134Z" level=info msg="CreateContainer within sandbox \"0f1e1198cfd4aefe83cf2639ef55f46578d36402dc903810c1e4879a1767a03f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bf099f3ab5a0fa1e2fdb4b23b89da65a689b41740b834d8224884ad72203be83\"" Feb 13 16:06:46.918318 containerd[2002]: time="2025-02-13T16:06:46.918159682Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-78,Uid:52c09372b612df435e65d5547ac2d48b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6036d9180955e7badb004caeda649848ffd34de1a8f14b182a769d86adcc545\"" Feb 13 16:06:46.918450 containerd[2002]: time="2025-02-13T16:06:46.918336022Z" level=info msg="StartContainer for \"bf099f3ab5a0fa1e2fdb4b23b89da65a689b41740b834d8224884ad72203be83\"" Feb 13 16:06:46.925848 containerd[2002]: time="2025-02-13T16:06:46.925793722Z" level=info msg="CreateContainer within sandbox \"c6036d9180955e7badb004caeda649848ffd34de1a8f14b182a769d86adcc545\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 16:06:46.945124 containerd[2002]: time="2025-02-13T16:06:46.945049702Z" level=info msg="CreateContainer within sandbox \"557424b9df31e0bddbfc695d82c48add36fa99aabc35202fb58438dd60435601\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c96608ab2496d90ff525e0bb8d92d5e0dc9e141e34f55cdc2c8ad042048ca6d8\"" Feb 13 16:06:46.948722 containerd[2002]: time="2025-02-13T16:06:46.946851694Z" level=info msg="StartContainer for \"c96608ab2496d90ff525e0bb8d92d5e0dc9e141e34f55cdc2c8ad042048ca6d8\"" Feb 13 16:06:46.966500 containerd[2002]: time="2025-02-13T16:06:46.966425482Z" level=info msg="CreateContainer within sandbox \"c6036d9180955e7badb004caeda649848ffd34de1a8f14b182a769d86adcc545\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"940de31c6f85272b5775ffcfe59ee6c008ecc03062d8e383e79b025f919701df\"" Feb 13 16:06:46.967340 containerd[2002]: time="2025-02-13T16:06:46.967296850Z" level=info msg="StartContainer for \"940de31c6f85272b5775ffcfe59ee6c008ecc03062d8e383e79b025f919701df\"" Feb 13 16:06:46.982097 systemd[1]: Started cri-containerd-bf099f3ab5a0fa1e2fdb4b23b89da65a689b41740b834d8224884ad72203be83.scope - libcontainer container bf099f3ab5a0fa1e2fdb4b23b89da65a689b41740b834d8224884ad72203be83. Feb 13 16:06:47.027003 systemd[1]: Started cri-containerd-c96608ab2496d90ff525e0bb8d92d5e0dc9e141e34f55cdc2c8ad042048ca6d8.scope - libcontainer container c96608ab2496d90ff525e0bb8d92d5e0dc9e141e34f55cdc2c8ad042048ca6d8. Feb 13 16:06:47.047212 kubelet[2813]: I0213 16:06:47.047124 2813 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-78" Feb 13 16:06:47.048206 kubelet[2813]: E0213 16:06:47.048130 2813 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.25.78:6443/api/v1/nodes\": dial tcp 172.31.25.78:6443: connect: connection refused" node="ip-172-31-25-78" Feb 13 16:06:47.078983 systemd[1]: Started cri-containerd-940de31c6f85272b5775ffcfe59ee6c008ecc03062d8e383e79b025f919701df.scope - libcontainer container 940de31c6f85272b5775ffcfe59ee6c008ecc03062d8e383e79b025f919701df. 
Feb 13 16:06:47.128274 containerd[2002]: time="2025-02-13T16:06:47.128165239Z" level=info msg="StartContainer for \"bf099f3ab5a0fa1e2fdb4b23b89da65a689b41740b834d8224884ad72203be83\" returns successfully" Feb 13 16:06:47.180072 kubelet[2813]: E0213 16:06:47.179995 2813 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.25.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.25.78:6443: connect: connection refused" logger="UnhandledError" Feb 13 16:06:47.188705 containerd[2002]: time="2025-02-13T16:06:47.187786747Z" level=info msg="StartContainer for \"c96608ab2496d90ff525e0bb8d92d5e0dc9e141e34f55cdc2c8ad042048ca6d8\" returns successfully" Feb 13 16:06:47.208835 containerd[2002]: time="2025-02-13T16:06:47.208616443Z" level=info msg="StartContainer for \"940de31c6f85272b5775ffcfe59ee6c008ecc03062d8e383e79b025f919701df\" returns successfully" Feb 13 16:06:48.653032 kubelet[2813]: I0213 16:06:48.652957 2813 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-78" Feb 13 16:06:51.439229 kubelet[2813]: E0213 16:06:51.439157 2813 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-78\" not found" node="ip-172-31-25-78" Feb 13 16:06:51.616850 kubelet[2813]: I0213 16:06:51.616742 2813 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-25-78" Feb 13 16:06:51.991860 kubelet[2813]: E0213 16:06:51.991733 2813 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-25-78\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-25-78" Feb 13 16:06:52.162008 kubelet[2813]: I0213 16:06:52.160153 2813 apiserver.go:52] "Watching apiserver" Feb 13 16:06:52.214478 kubelet[2813]: I0213 16:06:52.214380 2813 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 16:06:53.914259 systemd[1]: Reloading requested from client PID 3362 ('systemctl') (unit session-7.scope)... Feb 13 16:06:53.914284 systemd[1]: Reloading... Feb 13 16:06:54.112746 zram_generator::config[3402]: No configuration found. Feb 13 16:06:54.339867 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:06:54.543359 systemd[1]: Reloading finished in 628 ms. Feb 13 16:06:54.621507 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:54.641384 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 16:06:54.642058 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:06:54.642214 systemd[1]: kubelet.service: Consumed 2.443s CPU time, 119.9M memory peak, 0B memory swap peak. Feb 13 16:06:54.649204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:06:54.968176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
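The "Failed creating a mirror pod" error above (no PriorityClass named system-node-critical) is transient: that class is normally created by the API server's bootstrap defaults shortly after it starts serving. A sketch (kubeconfig path assumed) that checks for it:

```go
// Sketch: look up the PriorityClass named in the mirror-pod error.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pc, err := cs.SchedulingV1().PriorityClasses().
		Get(context.TODO(), "system-node-critical", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err) // NotFound only in the brief window before apiserver bootstrap
	}
	fmt.Println("system-node-critical value:", pc.Value)
}
```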
Feb 13 16:06:54.982157 (kubelet)[3462]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:06:55.075571 kubelet[3462]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:06:55.075571 kubelet[3462]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:06:55.075571 kubelet[3462]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:06:55.076147 kubelet[3462]: I0213 16:06:55.075716 3462 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:06:55.097811 kubelet[3462]: I0213 16:06:55.097445 3462 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 16:06:55.097811 kubelet[3462]: I0213 16:06:55.097509 3462 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:06:55.098043 kubelet[3462]: I0213 16:06:55.097946 3462 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 16:06:55.102808 kubelet[3462]: I0213 16:06:55.102762 3462 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 16:06:55.107877 kubelet[3462]: I0213 16:06:55.107579 3462 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:06:55.116338 kubelet[3462]: E0213 16:06:55.116288 3462 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 16:06:55.117318 kubelet[3462]: I0213 16:06:55.117292 3462 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 16:06:55.124297 kubelet[3462]: I0213 16:06:55.124245 3462 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 16:06:55.124906 kubelet[3462]: I0213 16:06:55.124779 3462 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 16:06:55.126348 kubelet[3462]: I0213 16:06:55.125298 3462 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:06:55.126348 kubelet[3462]: I0213 16:06:55.125346 3462 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-25-78","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 16:06:55.126348 kubelet[3462]: I0213 16:06:55.125704 3462 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:06:55.126348 kubelet[3462]: I0213 16:06:55.125728 3462 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 16:06:55.127223 kubelet[3462]: I0213 16:06:55.125790 3462 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:06:55.127223 kubelet[3462]: I0213 16:06:55.126028 3462 kubelet.go:408] "Attempting to sync node with API server" Feb 13 16:06:55.127223 kubelet[3462]: I0213 16:06:55.126108 3462 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:06:55.127223 kubelet[3462]: I0213 16:06:55.126209 3462 kubelet.go:314] "Adding apiserver pod source" Feb 13 16:06:55.127223 kubelet[3462]: I0213 16:06:55.126233 3462 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:06:55.131745 kubelet[3462]: I0213 16:06:55.129625 3462 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 16:06:55.131745 kubelet[3462]: I0213 16:06:55.130634 3462 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:06:55.132366 kubelet[3462]: I0213 16:06:55.132317 3462 server.go:1269] "Started kubelet" Feb 13 16:06:55.141487 kubelet[3462]: I0213 16:06:55.139397 3462 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:06:55.150079 kubelet[3462]: I0213 
16:06:55.149955 3462 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:06:55.154364 kubelet[3462]: I0213 16:06:55.153580 3462 server.go:460] "Adding debug handlers to kubelet server" Feb 13 16:06:55.159009 kubelet[3462]: I0213 16:06:55.158908 3462 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:06:55.159324 kubelet[3462]: I0213 16:06:55.159282 3462 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:06:55.160159 kubelet[3462]: I0213 16:06:55.160073 3462 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 16:06:55.169066 kubelet[3462]: I0213 16:06:55.168579 3462 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 16:06:55.170238 kubelet[3462]: E0213 16:06:55.170074 3462 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-78\" not found" Feb 13 16:06:55.172548 kubelet[3462]: I0213 16:06:55.172447 3462 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 16:06:55.173636 kubelet[3462]: I0213 16:06:55.172908 3462 reconciler.go:26] "Reconciler: start to sync state" Feb 13 16:06:55.200750 kubelet[3462]: I0213 16:06:55.200636 3462 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:06:55.200960 kubelet[3462]: I0213 16:06:55.200872 3462 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 16:06:55.213034 kubelet[3462]: I0213 16:06:55.212785 3462 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 16:06:55.217031 kubelet[3462]: I0213 16:06:55.216988 3462 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 16:06:55.219876 kubelet[3462]: I0213 16:06:55.219739 3462 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 16:06:55.224710 kubelet[3462]: I0213 16:06:55.220057 3462 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 16:06:55.224710 kubelet[3462]: E0213 16:06:55.220142 3462 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 16:06:55.245009 kubelet[3462]: I0213 16:06:55.244933 3462 factory.go:221] Registration of the containerd container factory successfully Feb 13 16:06:55.273030 kubelet[3462]: E0213 16:06:55.272986 3462 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-25-78\" not found" Feb 13 16:06:55.321403 kubelet[3462]: E0213 16:06:55.321351 3462 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 16:06:55.403959 kubelet[3462]: I0213 16:06:55.403920 3462 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 16:06:55.404173 kubelet[3462]: I0213 16:06:55.404150 3462 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 16:06:55.404720 kubelet[3462]: I0213 16:06:55.404289 3462 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:06:55.404720 kubelet[3462]: I0213 16:06:55.404559 3462 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 16:06:55.404720 kubelet[3462]: I0213 16:06:55.404582 3462 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 16:06:55.404720 kubelet[3462]: I0213 16:06:55.404615 3462 policy_none.go:49] "None policy: Start" Feb 13 16:06:55.407177 kubelet[3462]: I0213 16:06:55.407128 3462 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 16:06:55.407177 kubelet[3462]: I0213 16:06:55.407180 3462 state_mem.go:35] "Initializing new in-memory state store" Feb 13 16:06:55.407515 kubelet[3462]: I0213 16:06:55.407488 3462 state_mem.go:75] "Updated machine memory state" Feb 13 16:06:55.422286 kubelet[3462]: I0213 16:06:55.422146 3462 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 16:06:55.423705 kubelet[3462]: I0213 16:06:55.422874 3462 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 16:06:55.423705 kubelet[3462]: I0213 16:06:55.422900 3462 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 16:06:55.423705 kubelet[3462]: I0213 16:06:55.423383 3462 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 16:06:55.551889 kubelet[3462]: I0213 16:06:55.550024 3462 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-25-78" Feb 13 16:06:55.576743 kubelet[3462]: I0213 16:06:55.576040 3462 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-25-78" Feb 13 16:06:55.576743 kubelet[3462]: I0213 16:06:55.576234 3462 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-25-78" Feb 13 16:06:55.674360 kubelet[3462]: I0213 16:06:55.674253 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/714c83387e4f038b5e2eda58f6296db9-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-78\" (UID: \"714c83387e4f038b5e2eda58f6296db9\") " pod="kube-system/kube-apiserver-ip-172-31-25-78" Feb 13 16:06:55.674360 kubelet[3462]: 
I0213 16:06:55.674356 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fcb130508017cdec707fd34aeb373da5-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-78\" (UID: \"fcb130508017cdec707fd34aeb373da5\") " pod="kube-system/kube-controller-manager-ip-172-31-25-78" Feb 13 16:06:55.674734 kubelet[3462]: I0213 16:06:55.674402 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fcb130508017cdec707fd34aeb373da5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-78\" (UID: \"fcb130508017cdec707fd34aeb373da5\") " pod="kube-system/kube-controller-manager-ip-172-31-25-78" Feb 13 16:06:55.674734 kubelet[3462]: I0213 16:06:55.674440 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52c09372b612df435e65d5547ac2d48b-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-78\" (UID: \"52c09372b612df435e65d5547ac2d48b\") " pod="kube-system/kube-scheduler-ip-172-31-25-78" Feb 13 16:06:55.674734 kubelet[3462]: I0213 16:06:55.674478 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/714c83387e4f038b5e2eda58f6296db9-ca-certs\") pod \"kube-apiserver-ip-172-31-25-78\" (UID: \"714c83387e4f038b5e2eda58f6296db9\") " pod="kube-system/kube-apiserver-ip-172-31-25-78" Feb 13 16:06:55.674734 kubelet[3462]: I0213 16:06:55.674520 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/714c83387e4f038b5e2eda58f6296db9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-78\" (UID: \"714c83387e4f038b5e2eda58f6296db9\") " pod="kube-system/kube-apiserver-ip-172-31-25-78" Feb 13 16:06:55.674734 kubelet[3462]: I0213 16:06:55.674575 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fcb130508017cdec707fd34aeb373da5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-78\" (UID: \"fcb130508017cdec707fd34aeb373da5\") " pod="kube-system/kube-controller-manager-ip-172-31-25-78" Feb 13 16:06:55.675010 kubelet[3462]: I0213 16:06:55.674619 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fcb130508017cdec707fd34aeb373da5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-78\" (UID: \"fcb130508017cdec707fd34aeb373da5\") " pod="kube-system/kube-controller-manager-ip-172-31-25-78" Feb 13 16:06:55.675010 kubelet[3462]: I0213 16:06:55.674692 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fcb130508017cdec707fd34aeb373da5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-78\" (UID: \"fcb130508017cdec707fd34aeb373da5\") " pod="kube-system/kube-controller-manager-ip-172-31-25-78" Feb 13 16:06:56.153633 kubelet[3462]: I0213 16:06:56.153563 3462 apiserver.go:52] "Watching apiserver" Feb 13 16:06:56.173397 kubelet[3462]: I0213 16:06:56.173328 3462 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 16:06:56.350635 
kubelet[3462]: E0213 16:06:56.350549 3462 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-25-78\" already exists" pod="kube-system/kube-scheduler-ip-172-31-25-78" Feb 13 16:06:56.418030 kubelet[3462]: I0213 16:06:56.417577 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-78" podStartSLOduration=1.4175537089999999 podStartE2EDuration="1.417553709s" podCreationTimestamp="2025-02-13 16:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:56.400169213 +0000 UTC m=+1.410372488" watchObservedRunningTime="2025-02-13 16:06:56.417553709 +0000 UTC m=+1.427756972" Feb 13 16:06:56.442615 kubelet[3462]: I0213 16:06:56.442487 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-78" podStartSLOduration=1.442451945 podStartE2EDuration="1.442451945s" podCreationTimestamp="2025-02-13 16:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:56.419367149 +0000 UTC m=+1.429570424" watchObservedRunningTime="2025-02-13 16:06:56.442451945 +0000 UTC m=+1.452655196" Feb 13 16:06:56.472189 kubelet[3462]: I0213 16:06:56.471970 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-78" podStartSLOduration=1.471948521 podStartE2EDuration="1.471948521s" podCreationTimestamp="2025-02-13 16:06:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:56.443993237 +0000 UTC m=+1.454196500" watchObservedRunningTime="2025-02-13 16:06:56.471948521 +0000 UTC m=+1.482151772" Feb 13 16:06:56.838884 sudo[2321]: pam_unix(sudo:session): session closed for user root Feb 13 16:06:56.861338 sshd[2308]: pam_unix(sshd:session): session closed for user core Feb 13 16:06:56.868836 systemd[1]: sshd@6-172.31.25.78:22-139.178.68.195:60786.service: Deactivated successfully. Feb 13 16:06:56.874502 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 16:06:56.876193 systemd[1]: session-7.scope: Consumed 9.649s CPU time, 154.3M memory peak, 0B memory swap peak. Feb 13 16:06:56.884502 systemd-logind[1993]: Session 7 logged out. Waiting for processes to exit. Feb 13 16:06:56.889548 systemd-logind[1993]: Removed session 7. Feb 13 16:07:00.207478 kubelet[3462]: I0213 16:07:00.207420 3462 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 16:07:00.208621 kubelet[3462]: I0213 16:07:00.208356 3462 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 16:07:00.208708 containerd[2002]: time="2025-02-13T16:07:00.208026440Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
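The "Updating Pod CIDR" and CRI runtime-config entries reflect the node's spec.podCIDR (192.168.0.0/24) having been assigned and pushed down to containerd. A sketch (kubeconfig path assumed) that reads it back from the Node object:

```go
// Sketch: read the PodCIDR the log shows being handed to the runtime.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ip-172-31-25-78", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("spec.podCIDR:", node.Spec.PodCIDR) // expected: 192.168.0.0/24
}
```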
Feb 13 16:07:01.217199 kubelet[3462]: I0213 16:07:01.216845 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb8216b2-808f-4661-b669-628fd0caf8d2-kube-proxy\") pod \"kube-proxy-8gzzw\" (UID: \"cb8216b2-808f-4661-b669-628fd0caf8d2\") " pod="kube-system/kube-proxy-8gzzw" Feb 13 16:07:01.217199 kubelet[3462]: I0213 16:07:01.216906 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb8216b2-808f-4661-b669-628fd0caf8d2-xtables-lock\") pod \"kube-proxy-8gzzw\" (UID: \"cb8216b2-808f-4661-b669-628fd0caf8d2\") " pod="kube-system/kube-proxy-8gzzw" Feb 13 16:07:01.217199 kubelet[3462]: I0213 16:07:01.216943 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb8216b2-808f-4661-b669-628fd0caf8d2-lib-modules\") pod \"kube-proxy-8gzzw\" (UID: \"cb8216b2-808f-4661-b669-628fd0caf8d2\") " pod="kube-system/kube-proxy-8gzzw" Feb 13 16:07:01.217199 kubelet[3462]: I0213 16:07:01.216982 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fmqq\" (UniqueName: \"kubernetes.io/projected/cb8216b2-808f-4661-b669-628fd0caf8d2-kube-api-access-6fmqq\") pod \"kube-proxy-8gzzw\" (UID: \"cb8216b2-808f-4661-b669-628fd0caf8d2\") " pod="kube-system/kube-proxy-8gzzw" Feb 13 16:07:01.223439 systemd[1]: Created slice kubepods-besteffort-podcb8216b2_808f_4661_b669_628fd0caf8d2.slice - libcontainer container kubepods-besteffort-podcb8216b2_808f_4661_b669_628fd0caf8d2.slice. Feb 13 16:07:01.255694 systemd[1]: Created slice kubepods-burstable-pod623d560e_eda0_4981_b0a6_89bec741e39d.slice - libcontainer container kubepods-burstable-pod623d560e_eda0_4981_b0a6_89bec741e39d.slice. 
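The reconciler entries above list the kube-proxy-8gzzw pod's volumes: the kube-proxy ConfigMap, the xtables-lock and lib-modules host paths, and the projected kube-api-access token. A sketch (kubeconfig path assumed) that prints the same volume names from the live pod:

```go
// Sketch: enumerate the volumes on the kube-proxy pod named in the log.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-8gzzw", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, v := range pod.Spec.Volumes {
		fmt.Println("volume:", v.Name) // kube-proxy, xtables-lock, lib-modules, kube-api-access-6fmqq
	}
}
```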
Feb 13 16:07:01.317745 kubelet[3462]: I0213 16:07:01.317689 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/623d560e-eda0-4981-b0a6-89bec741e39d-cni-plugin\") pod \"kube-flannel-ds-vph74\" (UID: \"623d560e-eda0-4981-b0a6-89bec741e39d\") " pod="kube-flannel/kube-flannel-ds-vph74" Feb 13 16:07:01.318218 kubelet[3462]: I0213 16:07:01.318190 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/623d560e-eda0-4981-b0a6-89bec741e39d-flannel-cfg\") pod \"kube-flannel-ds-vph74\" (UID: \"623d560e-eda0-4981-b0a6-89bec741e39d\") " pod="kube-flannel/kube-flannel-ds-vph74" Feb 13 16:07:01.318418 kubelet[3462]: I0213 16:07:01.318391 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/623d560e-eda0-4981-b0a6-89bec741e39d-xtables-lock\") pod \"kube-flannel-ds-vph74\" (UID: \"623d560e-eda0-4981-b0a6-89bec741e39d\") " pod="kube-flannel/kube-flannel-ds-vph74" Feb 13 16:07:01.321071 kubelet[3462]: I0213 16:07:01.320912 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntnj5\" (UniqueName: \"kubernetes.io/projected/623d560e-eda0-4981-b0a6-89bec741e39d-kube-api-access-ntnj5\") pod \"kube-flannel-ds-vph74\" (UID: \"623d560e-eda0-4981-b0a6-89bec741e39d\") " pod="kube-flannel/kube-flannel-ds-vph74" Feb 13 16:07:01.323573 kubelet[3462]: I0213 16:07:01.322957 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/623d560e-eda0-4981-b0a6-89bec741e39d-cni\") pod \"kube-flannel-ds-vph74\" (UID: \"623d560e-eda0-4981-b0a6-89bec741e39d\") " pod="kube-flannel/kube-flannel-ds-vph74" Feb 13 16:07:01.323871 kubelet[3462]: I0213 16:07:01.323153 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/623d560e-eda0-4981-b0a6-89bec741e39d-run\") pod \"kube-flannel-ds-vph74\" (UID: \"623d560e-eda0-4981-b0a6-89bec741e39d\") " pod="kube-flannel/kube-flannel-ds-vph74" Feb 13 16:07:01.543093 containerd[2002]: time="2025-02-13T16:07:01.542267302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8gzzw,Uid:cb8216b2-808f-4661-b669-628fd0caf8d2,Namespace:kube-system,Attempt:0,}" Feb 13 16:07:01.577378 containerd[2002]: time="2025-02-13T16:07:01.576345430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-vph74,Uid:623d560e-eda0-4981-b0a6-89bec741e39d,Namespace:kube-flannel,Attempt:0,}" Feb 13 16:07:01.585593 containerd[2002]: time="2025-02-13T16:07:01.585134050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:07:01.585593 containerd[2002]: time="2025-02-13T16:07:01.585266938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:07:01.585894 containerd[2002]: time="2025-02-13T16:07:01.585590386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:01.586007 containerd[2002]: time="2025-02-13T16:07:01.585889906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:01.645030 systemd[1]: Started cri-containerd-c953489df9ef6bda3e565483f1364be2688c23c6ce311b4ffbfb0916faa2fc18.scope - libcontainer container c953489df9ef6bda3e565483f1364be2688c23c6ce311b4ffbfb0916faa2fc18. Feb 13 16:07:01.654942 containerd[2002]: time="2025-02-13T16:07:01.652321703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:07:01.654942 containerd[2002]: time="2025-02-13T16:07:01.654324227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:07:01.655618 containerd[2002]: time="2025-02-13T16:07:01.654356039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:01.655618 containerd[2002]: time="2025-02-13T16:07:01.654517667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:01.698012 systemd[1]: Started cri-containerd-b9f6e74702995d88beca1e586d41e1de80e65c4ad9a5a2c4c98a91746a53446c.scope - libcontainer container b9f6e74702995d88beca1e586d41e1de80e65c4ad9a5a2c4c98a91746a53446c. Feb 13 16:07:01.716227 containerd[2002]: time="2025-02-13T16:07:01.716069663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8gzzw,Uid:cb8216b2-808f-4661-b669-628fd0caf8d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c953489df9ef6bda3e565483f1364be2688c23c6ce311b4ffbfb0916faa2fc18\"" Feb 13 16:07:01.730506 containerd[2002]: time="2025-02-13T16:07:01.730156811Z" level=info msg="CreateContainer within sandbox \"c953489df9ef6bda3e565483f1364be2688c23c6ce311b4ffbfb0916faa2fc18\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 16:07:01.755295 containerd[2002]: time="2025-02-13T16:07:01.755218919Z" level=info msg="CreateContainer within sandbox \"c953489df9ef6bda3e565483f1364be2688c23c6ce311b4ffbfb0916faa2fc18\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"366e23a9f3f9f387cd178fa6ed35cff7cd8d47aa4444abab9f132940175ee26a\"" Feb 13 16:07:01.758865 containerd[2002]: time="2025-02-13T16:07:01.757114691Z" level=info msg="StartContainer for \"366e23a9f3f9f387cd178fa6ed35cff7cd8d47aa4444abab9f132940175ee26a\"" Feb 13 16:07:01.786640 containerd[2002]: time="2025-02-13T16:07:01.786576563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-vph74,Uid:623d560e-eda0-4981-b0a6-89bec741e39d,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"b9f6e74702995d88beca1e586d41e1de80e65c4ad9a5a2c4c98a91746a53446c\"" Feb 13 16:07:01.795305 containerd[2002]: time="2025-02-13T16:07:01.795138395Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 16:07:01.829556 systemd[1]: Started cri-containerd-366e23a9f3f9f387cd178fa6ed35cff7cd8d47aa4444abab9f132940175ee26a.scope - libcontainer container 366e23a9f3f9f387cd178fa6ed35cff7cd8d47aa4444abab9f132940175ee26a. 
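At this point both pod sandboxes exist and the kube-proxy container has been created inside its sandbox; the repeated "loading plugin" lines appear to come from the runc v2 shim being launched for each new sandbox, which is why the same four lines recur throughout this log. To follow these IDs on the node, the standard crictl subcommands are enough (IDs can usually be abbreviated to a unique prefix):

  crictl pods                 shows the kube-proxy-8gzzw and kube-flannel-ds-vph74 sandboxes (c953489d..., b9f6e747...)
  crictl ps -a                shows container 366e23a9... once StartContainer returns
  crictl logs 366e23a9        kube-proxy output for that container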
Feb 13 16:07:01.880892 containerd[2002]: time="2025-02-13T16:07:01.880769832Z" level=info msg="StartContainer for \"366e23a9f3f9f387cd178fa6ed35cff7cd8d47aa4444abab9f132940175ee26a\" returns successfully" Feb 13 16:07:02.385752 kubelet[3462]: I0213 16:07:02.385362 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8gzzw" podStartSLOduration=1.385315762 podStartE2EDuration="1.385315762s" podCreationTimestamp="2025-02-13 16:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:07:02.38516533 +0000 UTC m=+7.395368581" watchObservedRunningTime="2025-02-13 16:07:02.385315762 +0000 UTC m=+7.395519025" Feb 13 16:07:04.870829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2804456131.mount: Deactivated successfully. Feb 13 16:07:04.925782 containerd[2002]: time="2025-02-13T16:07:04.925118919Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:04.927233 containerd[2002]: time="2025-02-13T16:07:04.926964243Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 16:07:04.929532 containerd[2002]: time="2025-02-13T16:07:04.928869435Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:04.934122 containerd[2002]: time="2025-02-13T16:07:04.934042455Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:04.936113 containerd[2002]: time="2025-02-13T16:07:04.935809419Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 3.140573608s" Feb 13 16:07:04.936113 containerd[2002]: time="2025-02-13T16:07:04.935908251Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 16:07:04.941489 containerd[2002]: time="2025-02-13T16:07:04.941395131Z" level=info msg="CreateContainer within sandbox \"b9f6e74702995d88beca1e586d41e1de80e65c4ad9a5a2c4c98a91746a53446c\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 16:07:04.972198 containerd[2002]: time="2025-02-13T16:07:04.972128895Z" level=info msg="CreateContainer within sandbox \"b9f6e74702995d88beca1e586d41e1de80e65c4ad9a5a2c4c98a91746a53446c\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"8fb4cd94e0ec721c97c0cf9d8fb96d0251bbba7a4dd0f09e313ac9e988367e91\"" Feb 13 16:07:04.973726 containerd[2002]: time="2025-02-13T16:07:04.973679775Z" level=info msg="StartContainer for \"8fb4cd94e0ec721c97c0cf9d8fb96d0251bbba7a4dd0f09e313ac9e988367e91\"" Feb 13 16:07:05.028062 systemd[1]: Started cri-containerd-8fb4cd94e0ec721c97c0cf9d8fb96d0251bbba7a4dd0f09e313ac9e988367e91.scope - libcontainer container 
8fb4cd94e0ec721c97c0cf9d8fb96d0251bbba7a4dd0f09e313ac9e988367e91. Feb 13 16:07:05.082701 containerd[2002]: time="2025-02-13T16:07:05.082489188Z" level=info msg="StartContainer for \"8fb4cd94e0ec721c97c0cf9d8fb96d0251bbba7a4dd0f09e313ac9e988367e91\" returns successfully" Feb 13 16:07:05.088835 systemd[1]: cri-containerd-8fb4cd94e0ec721c97c0cf9d8fb96d0251bbba7a4dd0f09e313ac9e988367e91.scope: Deactivated successfully. Feb 13 16:07:05.163929 containerd[2002]: time="2025-02-13T16:07:05.163593888Z" level=info msg="shim disconnected" id=8fb4cd94e0ec721c97c0cf9d8fb96d0251bbba7a4dd0f09e313ac9e988367e91 namespace=k8s.io Feb 13 16:07:05.163929 containerd[2002]: time="2025-02-13T16:07:05.163798740Z" level=warning msg="cleaning up after shim disconnected" id=8fb4cd94e0ec721c97c0cf9d8fb96d0251bbba7a4dd0f09e313ac9e988367e91 namespace=k8s.io Feb 13 16:07:05.163929 containerd[2002]: time="2025-02-13T16:07:05.163823880Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:07:05.364299 containerd[2002]: time="2025-02-13T16:07:05.363781993Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 16:07:05.720592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fb4cd94e0ec721c97c0cf9d8fb96d0251bbba7a4dd0f09e313ac9e988367e91-rootfs.mount: Deactivated successfully. Feb 13 16:07:07.719341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount114109228.mount: Deactivated successfully. Feb 13 16:07:08.952800 containerd[2002]: time="2025-02-13T16:07:08.952742911Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:08.955417 containerd[2002]: time="2025-02-13T16:07:08.955172035Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Feb 13 16:07:08.957044 containerd[2002]: time="2025-02-13T16:07:08.956943307Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:08.963062 containerd[2002]: time="2025-02-13T16:07:08.962961403Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:07:08.966562 containerd[2002]: time="2025-02-13T16:07:08.966480211Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.602558862s" Feb 13 16:07:08.966562 containerd[2002]: time="2025-02-13T16:07:08.966549739Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 16:07:08.973839 containerd[2002]: time="2025-02-13T16:07:08.973492171Z" level=info msg="CreateContainer within sandbox \"b9f6e74702995d88beca1e586d41e1de80e65c4ad9a5a2c4c98a91746a53446c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 16:07:09.003868 containerd[2002]: time="2025-02-13T16:07:09.003647547Z" level=info msg="CreateContainer within sandbox \"b9f6e74702995d88beca1e586d41e1de80e65c4ad9a5a2c4c98a91746a53446c\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b7fb0ef870165e75290cd11503abef003a276daae6ea4259134109aae8be2240\"" Feb 13 16:07:09.005896 containerd[2002]: time="2025-02-13T16:07:09.005495631Z" level=info msg="StartContainer for \"b7fb0ef870165e75290cd11503abef003a276daae6ea4259134109aae8be2240\"" Feb 13 16:07:09.078179 systemd[1]: Started cri-containerd-b7fb0ef870165e75290cd11503abef003a276daae6ea4259134109aae8be2240.scope - libcontainer container b7fb0ef870165e75290cd11503abef003a276daae6ea4259134109aae8be2240. Feb 13 16:07:09.126380 systemd[1]: cri-containerd-b7fb0ef870165e75290cd11503abef003a276daae6ea4259134109aae8be2240.scope: Deactivated successfully. Feb 13 16:07:09.133238 containerd[2002]: time="2025-02-13T16:07:09.133114084Z" level=info msg="StartContainer for \"b7fb0ef870165e75290cd11503abef003a276daae6ea4259134109aae8be2240\" returns successfully" Feb 13 16:07:09.170205 kubelet[3462]: I0213 16:07:09.170130 3462 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 16:07:09.176537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7fb0ef870165e75290cd11503abef003a276daae6ea4259134109aae8be2240-rootfs.mount: Deactivated successfully. Feb 13 16:07:09.273206 systemd[1]: Created slice kubepods-burstable-pod54e5ec74_959f_4976_be06_2bdb21b9e4f3.slice - libcontainer container kubepods-burstable-pod54e5ec74_959f_4976_be06_2bdb21b9e4f3.slice. Feb 13 16:07:09.281924 kubelet[3462]: I0213 16:07:09.281804 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnb24\" (UniqueName: \"kubernetes.io/projected/54e5ec74-959f-4976-be06-2bdb21b9e4f3-kube-api-access-fnb24\") pod \"coredns-6f6b679f8f-4d5gm\" (UID: \"54e5ec74-959f-4976-be06-2bdb21b9e4f3\") " pod="kube-system/coredns-6f6b679f8f-4d5gm" Feb 13 16:07:09.281924 kubelet[3462]: I0213 16:07:09.281896 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/208c7922-bcd9-44d0-a983-8968ceb62131-config-volume\") pod \"coredns-6f6b679f8f-jfg8g\" (UID: \"208c7922-bcd9-44d0-a983-8968ceb62131\") " pod="kube-system/coredns-6f6b679f8f-jfg8g" Feb 13 16:07:09.282179 kubelet[3462]: I0213 16:07:09.281939 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54e5ec74-959f-4976-be06-2bdb21b9e4f3-config-volume\") pod \"coredns-6f6b679f8f-4d5gm\" (UID: \"54e5ec74-959f-4976-be06-2bdb21b9e4f3\") " pod="kube-system/coredns-6f6b679f8f-4d5gm" Feb 13 16:07:09.282179 kubelet[3462]: I0213 16:07:09.281984 3462 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9mkv\" (UniqueName: \"kubernetes.io/projected/208c7922-bcd9-44d0-a983-8968ceb62131-kube-api-access-x9mkv\") pod \"coredns-6f6b679f8f-jfg8g\" (UID: \"208c7922-bcd9-44d0-a983-8968ceb62131\") " pod="kube-system/coredns-6f6b679f8f-jfg8g" Feb 13 16:07:09.288048 systemd[1]: Created slice kubepods-burstable-pod208c7922_bcd9_44d0_a983_8968ceb62131.slice - libcontainer container kubepods-burstable-pod208c7922_bcd9_44d0_a983_8968ceb62131.slice. 
Feb 13 16:07:09.324955 containerd[2002]: time="2025-02-13T16:07:09.324711293Z" level=info msg="shim disconnected" id=b7fb0ef870165e75290cd11503abef003a276daae6ea4259134109aae8be2240 namespace=k8s.io Feb 13 16:07:09.324955 containerd[2002]: time="2025-02-13T16:07:09.324811049Z" level=warning msg="cleaning up after shim disconnected" id=b7fb0ef870165e75290cd11503abef003a276daae6ea4259134109aae8be2240 namespace=k8s.io Feb 13 16:07:09.324955 containerd[2002]: time="2025-02-13T16:07:09.324831761Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:07:09.382061 containerd[2002]: time="2025-02-13T16:07:09.381307601Z" level=info msg="CreateContainer within sandbox \"b9f6e74702995d88beca1e586d41e1de80e65c4ad9a5a2c4c98a91746a53446c\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 16:07:09.408067 containerd[2002]: time="2025-02-13T16:07:09.408005381Z" level=info msg="CreateContainer within sandbox \"b9f6e74702995d88beca1e586d41e1de80e65c4ad9a5a2c4c98a91746a53446c\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"7bc8387a0a6599ad0e3539ff0d51b6409c6c654070dda2501a53b3cb07134522\"" Feb 13 16:07:09.414313 containerd[2002]: time="2025-02-13T16:07:09.413301257Z" level=info msg="StartContainer for \"7bc8387a0a6599ad0e3539ff0d51b6409c6c654070dda2501a53b3cb07134522\"" Feb 13 16:07:09.463993 systemd[1]: Started cri-containerd-7bc8387a0a6599ad0e3539ff0d51b6409c6c654070dda2501a53b3cb07134522.scope - libcontainer container 7bc8387a0a6599ad0e3539ff0d51b6409c6c654070dda2501a53b3cb07134522. Feb 13 16:07:09.513058 containerd[2002]: time="2025-02-13T16:07:09.512824218Z" level=info msg="StartContainer for \"7bc8387a0a6599ad0e3539ff0d51b6409c6c654070dda2501a53b3cb07134522\" returns successfully" Feb 13 16:07:09.585597 containerd[2002]: time="2025-02-13T16:07:09.584936754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4d5gm,Uid:54e5ec74-959f-4976-be06-2bdb21b9e4f3,Namespace:kube-system,Attempt:0,}" Feb 13 16:07:09.612905 containerd[2002]: time="2025-02-13T16:07:09.612527046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jfg8g,Uid:208c7922-bcd9-44d0-a983-8968ceb62131,Namespace:kube-system,Attempt:0,}" Feb 13 16:07:09.637562 containerd[2002]: time="2025-02-13T16:07:09.637487850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4d5gm,Uid:54e5ec74-959f-4976-be06-2bdb21b9e4f3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0988cf631eed0545cd6298afe89efd84701661012e13b62ee57e50ef9129cf52\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 16:07:09.638036 kubelet[3462]: E0213 16:07:09.637945 3462 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0988cf631eed0545cd6298afe89efd84701661012e13b62ee57e50ef9129cf52\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 16:07:09.638260 kubelet[3462]: E0213 16:07:09.638083 3462 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0988cf631eed0545cd6298afe89efd84701661012e13b62ee57e50ef9129cf52\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" 
pod="kube-system/coredns-6f6b679f8f-4d5gm" Feb 13 16:07:09.638260 kubelet[3462]: E0213 16:07:09.638120 3462 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0988cf631eed0545cd6298afe89efd84701661012e13b62ee57e50ef9129cf52\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-4d5gm" Feb 13 16:07:09.638260 kubelet[3462]: E0213 16:07:09.638198 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-4d5gm_kube-system(54e5ec74-959f-4976-be06-2bdb21b9e4f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-4d5gm_kube-system(54e5ec74-959f-4976-be06-2bdb21b9e4f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0988cf631eed0545cd6298afe89efd84701661012e13b62ee57e50ef9129cf52\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-4d5gm" podUID="54e5ec74-959f-4976-be06-2bdb21b9e4f3" Feb 13 16:07:09.655172 containerd[2002]: time="2025-02-13T16:07:09.655102831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jfg8g,Uid:208c7922-bcd9-44d0-a983-8968ceb62131,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90b1aa6d437fdf7dbbe418681cb9556c928cba7669f0079351cfd5db14399ab0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 16:07:09.655940 kubelet[3462]: E0213 16:07:09.655639 3462 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b1aa6d437fdf7dbbe418681cb9556c928cba7669f0079351cfd5db14399ab0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 16:07:09.655940 kubelet[3462]: E0213 16:07:09.655753 3462 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b1aa6d437fdf7dbbe418681cb9556c928cba7669f0079351cfd5db14399ab0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-jfg8g" Feb 13 16:07:09.655940 kubelet[3462]: E0213 16:07:09.655786 3462 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b1aa6d437fdf7dbbe418681cb9556c928cba7669f0079351cfd5db14399ab0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-jfg8g" Feb 13 16:07:09.655940 kubelet[3462]: E0213 16:07:09.655863 3462 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-jfg8g_kube-system(208c7922-bcd9-44d0-a983-8968ceb62131)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-jfg8g_kube-system(208c7922-bcd9-44d0-a983-8968ceb62131)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90b1aa6d437fdf7dbbe418681cb9556c928cba7669f0079351cfd5db14399ab0\\\": plugin type=\\\"flannel\\\" failed (add): 
loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-jfg8g" podUID="208c7922-bcd9-44d0-a983-8968ceb62131" Feb 13 16:07:10.589236 (udev-worker)[4007]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:07:10.609520 systemd-networkd[1934]: flannel.1: Link UP Feb 13 16:07:10.609539 systemd-networkd[1934]: flannel.1: Gained carrier Feb 13 16:07:11.723987 systemd-networkd[1934]: flannel.1: Gained IPv6LL Feb 13 16:07:14.041555 ntpd[1987]: Listen normally on 8 flannel.1 192.168.0.0:123 Feb 13 16:07:14.041723 ntpd[1987]: Listen normally on 9 flannel.1 [fe80::ccff:b5ff:feb4:cbc0%4]:123 Feb 13 16:07:14.042172 ntpd[1987]: 13 Feb 16:07:14 ntpd[1987]: Listen normally on 8 flannel.1 192.168.0.0:123 Feb 13 16:07:14.042172 ntpd[1987]: 13 Feb 16:07:14 ntpd[1987]: Listen normally on 9 flannel.1 [fe80::ccff:b5ff:feb4:cbc0%4]:123 Feb 13 16:07:20.222151 containerd[2002]: time="2025-02-13T16:07:20.222068871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jfg8g,Uid:208c7922-bcd9-44d0-a983-8968ceb62131,Namespace:kube-system,Attempt:0,}" Feb 13 16:07:20.259771 systemd-networkd[1934]: cni0: Link UP Feb 13 16:07:20.259791 systemd-networkd[1934]: cni0: Gained carrier Feb 13 16:07:20.266385 systemd-networkd[1934]: cni0: Lost carrier Feb 13 16:07:20.266985 (udev-worker)[4125]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:07:20.273967 systemd-networkd[1934]: veth3f944803: Link UP Feb 13 16:07:20.277612 kernel: cni0: port 1(veth3f944803) entered blocking state Feb 13 16:07:20.277914 kernel: cni0: port 1(veth3f944803) entered disabled state Feb 13 16:07:20.277961 kernel: veth3f944803: entered allmulticast mode Feb 13 16:07:20.281742 kernel: veth3f944803: entered promiscuous mode Feb 13 16:07:20.281861 kernel: cni0: port 1(veth3f944803) entered blocking state Feb 13 16:07:20.281900 kernel: cni0: port 1(veth3f944803) entered forwarding state Feb 13 16:07:20.284701 kernel: cni0: port 1(veth3f944803) entered disabled state Feb 13 16:07:20.285841 (udev-worker)[4130]: Network interface NamePolicy= disabled on kernel command line. Feb 13 16:07:20.299187 kernel: cni0: port 1(veth3f944803) entered blocking state Feb 13 16:07:20.299290 kernel: cni0: port 1(veth3f944803) entered forwarding state Feb 13 16:07:20.303817 systemd-networkd[1934]: veth3f944803: Gained carrier Feb 13 16:07:20.304220 systemd-networkd[1934]: cni0: Gained carrier Feb 13 16:07:20.307436 containerd[2002]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"} Feb 13 16:07:20.307436 containerd[2002]: delegateAdd: netconf sent to delegate plugin: Feb 13 16:07:20.346253 containerd[2002]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T16:07:20.346025296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:07:20.346253 containerd[2002]: time="2025-02-13T16:07:20.346140436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:07:20.346253 containerd[2002]: time="2025-02-13T16:07:20.346177888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:20.346572 containerd[2002]: time="2025-02-13T16:07:20.346334848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:20.387991 systemd[1]: Started cri-containerd-28d2d9b8ae263a61bbb39636aaf71bace8f2ebb2d043f477c5e2c4ee1f3ddd28.scope - libcontainer container 28d2d9b8ae263a61bbb39636aaf71bace8f2ebb2d043f477c5e2c4ee1f3ddd28. Feb 13 16:07:20.450882 containerd[2002]: time="2025-02-13T16:07:20.450749368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jfg8g,Uid:208c7922-bcd9-44d0-a983-8968ceb62131,Namespace:kube-system,Attempt:0,} returns sandbox id \"28d2d9b8ae263a61bbb39636aaf71bace8f2ebb2d043f477c5e2c4ee1f3ddd28\"" Feb 13 16:07:20.457545 containerd[2002]: time="2025-02-13T16:07:20.457290448Z" level=info msg="CreateContainer within sandbox \"28d2d9b8ae263a61bbb39636aaf71bace8f2ebb2d043f477c5e2c4ee1f3ddd28\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 16:07:20.476062 containerd[2002]: time="2025-02-13T16:07:20.475918360Z" level=info msg="CreateContainer within sandbox \"28d2d9b8ae263a61bbb39636aaf71bace8f2ebb2d043f477c5e2c4ee1f3ddd28\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"42ee209b54f92ce478b9ffd9c942da64038fc08613611fd3ddcc3954587f559e\"" Feb 13 16:07:20.478306 containerd[2002]: time="2025-02-13T16:07:20.478230580Z" level=info msg="StartContainer for \"42ee209b54f92ce478b9ffd9c942da64038fc08613611fd3ddcc3954587f559e\"" Feb 13 16:07:20.521005 systemd[1]: Started cri-containerd-42ee209b54f92ce478b9ffd9c942da64038fc08613611fd3ddcc3954587f559e.scope - libcontainer container 42ee209b54f92ce478b9ffd9c942da64038fc08613611fd3ddcc3954587f559e. 
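The failed RunPodSandbox attempts at 16:07:09 and the successful one at 16:07:20 bracket the moment the kube-flannel container wrote /run/flannel/subnet.env: the flannel CNI plugin reads that file on every ADD (the loadFlannelSubnetEnv error above is exactly that file being absent), and flannel only writes it once it has registered with the API server and received its per-node subnet, which lines up with flannel.1 gaining carrier at 16:07:10. An illustrative subnet.env consistent with the netconf that was eventually generated (values inferred from this log, not captured in it):

  FLANNEL_NETWORK=192.168.0.0/17
  FLANNEL_SUBNET=192.168.0.1/24
  FLANNEL_MTU=8951
  FLANNEL_IPMASQ=true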
Feb 13 16:07:20.572320 containerd[2002]: time="2025-02-13T16:07:20.572246129Z" level=info msg="StartContainer for \"42ee209b54f92ce478b9ffd9c942da64038fc08613611fd3ddcc3954587f559e\" returns successfully" Feb 13 16:07:21.425231 kubelet[3462]: I0213 16:07:21.423719 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-vph74" podStartSLOduration=13.248529909 podStartE2EDuration="20.423696413s" podCreationTimestamp="2025-02-13 16:07:01 +0000 UTC" firstStartedPulling="2025-02-13 16:07:01.793452863 +0000 UTC m=+6.803656114" lastFinishedPulling="2025-02-13 16:07:08.968619367 +0000 UTC m=+13.978822618" observedRunningTime="2025-02-13 16:07:10.401265174 +0000 UTC m=+15.411468449" watchObservedRunningTime="2025-02-13 16:07:21.423696413 +0000 UTC m=+26.433899676" Feb 13 16:07:21.425231 kubelet[3462]: I0213 16:07:21.423903 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-jfg8g" podStartSLOduration=20.423893069000002 podStartE2EDuration="20.423893069s" podCreationTimestamp="2025-02-13 16:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:07:21.423209285 +0000 UTC m=+26.433412572" watchObservedRunningTime="2025-02-13 16:07:21.423893069 +0000 UTC m=+26.434096344" Feb 13 16:07:21.452317 systemd-networkd[1934]: cni0: Gained IPv6LL Feb 13 16:07:22.222056 containerd[2002]: time="2025-02-13T16:07:22.221508041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4d5gm,Uid:54e5ec74-959f-4976-be06-2bdb21b9e4f3,Namespace:kube-system,Attempt:0,}" Feb 13 16:07:22.259378 systemd-networkd[1934]: vethe57faf5d: Link UP Feb 13 16:07:22.263478 kernel: cni0: port 2(vethe57faf5d) entered blocking state Feb 13 16:07:22.263588 kernel: cni0: port 2(vethe57faf5d) entered disabled state Feb 13 16:07:22.263629 kernel: vethe57faf5d: entered allmulticast mode Feb 13 16:07:22.265803 kernel: vethe57faf5d: entered promiscuous mode Feb 13 16:07:22.275297 kernel: cni0: port 2(vethe57faf5d) entered blocking state Feb 13 16:07:22.275421 kernel: cni0: port 2(vethe57faf5d) entered forwarding state Feb 13 16:07:22.277806 systemd-networkd[1934]: vethe57faf5d: Gained carrier Feb 13 16:07:22.283873 containerd[2002]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014678), "name":"cbr0", "type":"bridge"} Feb 13 16:07:22.283873 containerd[2002]: delegateAdd: netconf sent to delegate plugin: Feb 13 16:07:22.314292 containerd[2002]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T16:07:22.313969757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:07:22.314292 containerd[2002]: time="2025-02-13T16:07:22.314073773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:07:22.314292 containerd[2002]: time="2025-02-13T16:07:22.314119325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:22.314711 containerd[2002]: time="2025-02-13T16:07:22.314336117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:07:22.351310 systemd-networkd[1934]: veth3f944803: Gained IPv6LL Feb 13 16:07:22.364990 systemd[1]: Started cri-containerd-d971023620fbd17046bdfcb700a050eb17fa99c8d546a5df278aa1132f72f712.scope - libcontainer container d971023620fbd17046bdfcb700a050eb17fa99c8d546a5df278aa1132f72f712. Feb 13 16:07:22.423557 containerd[2002]: time="2025-02-13T16:07:22.423505458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4d5gm,Uid:54e5ec74-959f-4976-be06-2bdb21b9e4f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d971023620fbd17046bdfcb700a050eb17fa99c8d546a5df278aa1132f72f712\"" Feb 13 16:07:22.429474 containerd[2002]: time="2025-02-13T16:07:22.429404766Z" level=info msg="CreateContainer within sandbox \"d971023620fbd17046bdfcb700a050eb17fa99c8d546a5df278aa1132f72f712\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 16:07:22.464815 containerd[2002]: time="2025-02-13T16:07:22.464638242Z" level=info msg="CreateContainer within sandbox \"d971023620fbd17046bdfcb700a050eb17fa99c8d546a5df278aa1132f72f712\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a291af237a9708408437566b40545cff46a7b9da923c9cd0212589c9201d1229\"" Feb 13 16:07:22.468128 containerd[2002]: time="2025-02-13T16:07:22.467639034Z" level=info msg="StartContainer for \"a291af237a9708408437566b40545cff46a7b9da923c9cd0212589c9201d1229\"" Feb 13 16:07:22.519957 systemd[1]: Started cri-containerd-a291af237a9708408437566b40545cff46a7b9da923c9cd0212589c9201d1229.scope - libcontainer container a291af237a9708408437566b40545cff46a7b9da923c9cd0212589c9201d1229. 
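Both coredns sandboxes go through the same delegation: the flannel plugin fills in a bridge/host-local config from subnet.env and hands it to the bridge plugin, which is why the kernel log shows cni0 gaining port 1 (veth3f944803) and then port 2 (vethe57faf5d). Roughly, the generated values map back as follows (the MTU reasoning is an inference: the 9001-byte EC2 interface MTU minus the 50-byte VXLAN overhead of flannel.1):

  "subnet": "192.168.0.0/24"    from FLANNEL_SUBNET, this node's pod /24; cni0 takes the .1 gateway address
  "dst":    "192.168.0.0/17"    from FLANNEL_NETWORK, the in-pod route for the cluster-wide pod range via the cni0 gateway
  "mtu":    8951                from FLANNEL_MTU

Cross-node pod traffic then leaves the host through the flannel.1 VXLAN device.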
Feb 13 16:07:22.579074 containerd[2002]: time="2025-02-13T16:07:22.578993635Z" level=info msg="StartContainer for \"a291af237a9708408437566b40545cff46a7b9da923c9cd0212589c9201d1229\" returns successfully" Feb 13 16:07:23.563981 systemd-networkd[1934]: vethe57faf5d: Gained IPv6LL Feb 13 16:07:26.041769 ntpd[1987]: Listen normally on 10 cni0 192.168.0.1:123 Feb 13 16:07:26.041939 ntpd[1987]: Listen normally on 11 cni0 [fe80::ec2e:caff:fe09:d8a3%5]:123 Feb 13 16:07:26.042527 ntpd[1987]: 13 Feb 16:07:26 ntpd[1987]: Listen normally on 10 cni0 192.168.0.1:123 Feb 13 16:07:26.042527 ntpd[1987]: 13 Feb 16:07:26 ntpd[1987]: Listen normally on 11 cni0 [fe80::ec2e:caff:fe09:d8a3%5]:123 Feb 13 16:07:26.042527 ntpd[1987]: 13 Feb 16:07:26 ntpd[1987]: Listen normally on 12 veth3f944803 [fe80::6c06:13ff:feef:7274%6]:123 Feb 13 16:07:26.042527 ntpd[1987]: 13 Feb 16:07:26 ntpd[1987]: Listen normally on 13 vethe57faf5d [fe80::fc17:34ff:fea8:65f9%7]:123 Feb 13 16:07:26.042031 ntpd[1987]: Listen normally on 12 veth3f944803 [fe80::6c06:13ff:feef:7274%6]:123 Feb 13 16:07:26.042100 ntpd[1987]: Listen normally on 13 vethe57faf5d [fe80::fc17:34ff:fea8:65f9%7]:123 Feb 13 16:07:29.606696 kubelet[3462]: I0213 16:07:29.605762 3462 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4d5gm" podStartSLOduration=28.605738846 podStartE2EDuration="28.605738846s" podCreationTimestamp="2025-02-13 16:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:07:23.440809039 +0000 UTC m=+28.451012314" watchObservedRunningTime="2025-02-13 16:07:29.605738846 +0000 UTC m=+34.615942109" Feb 13 16:07:36.760582 systemd[1]: Started sshd@7-172.31.25.78:22-139.178.68.195:52064.service - OpenSSH per-connection server daemon (139.178.68.195:52064). Feb 13 16:07:36.933732 sshd[4421]: Accepted publickey for core from 139.178.68.195 port 52064 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:36.936570 sshd[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:36.945477 systemd-logind[1993]: New session 8 of user core. Feb 13 16:07:36.952961 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 16:07:37.213036 sshd[4421]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:37.218701 systemd-logind[1993]: Session 8 logged out. Waiting for processes to exit. Feb 13 16:07:37.220202 systemd[1]: sshd@7-172.31.25.78:22-139.178.68.195:52064.service: Deactivated successfully. Feb 13 16:07:37.223403 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 16:07:37.230390 systemd-logind[1993]: Removed session 8. Feb 13 16:07:42.252165 systemd[1]: Started sshd@8-172.31.25.78:22-139.178.68.195:52072.service - OpenSSH per-connection server daemon (139.178.68.195:52072). Feb 13 16:07:42.432733 sshd[4455]: Accepted publickey for core from 139.178.68.195 port 52072 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:42.435392 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:42.448888 systemd-logind[1993]: New session 9 of user core. Feb 13 16:07:42.459067 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 16:07:42.719276 sshd[4455]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:42.726035 systemd[1]: sshd@8-172.31.25.78:22-139.178.68.195:52072.service: Deactivated successfully. 
Feb 13 16:07:42.731259 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 16:07:42.734126 systemd-logind[1993]: Session 9 logged out. Waiting for processes to exit. Feb 13 16:07:42.737898 systemd-logind[1993]: Removed session 9. Feb 13 16:07:47.761210 systemd[1]: Started sshd@9-172.31.25.78:22-139.178.68.195:45566.service - OpenSSH per-connection server daemon (139.178.68.195:45566). Feb 13 16:07:47.939140 sshd[4490]: Accepted publickey for core from 139.178.68.195 port 45566 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:47.942466 sshd[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:47.949525 systemd-logind[1993]: New session 10 of user core. Feb 13 16:07:47.961923 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 16:07:48.201251 sshd[4490]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:48.206497 systemd[1]: sshd@9-172.31.25.78:22-139.178.68.195:45566.service: Deactivated successfully. Feb 13 16:07:48.210620 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 16:07:48.214052 systemd-logind[1993]: Session 10 logged out. Waiting for processes to exit. Feb 13 16:07:48.216447 systemd-logind[1993]: Removed session 10. Feb 13 16:07:53.242177 systemd[1]: Started sshd@10-172.31.25.78:22-139.178.68.195:45578.service - OpenSSH per-connection server daemon (139.178.68.195:45578). Feb 13 16:07:53.421499 sshd[4525]: Accepted publickey for core from 139.178.68.195 port 45578 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:53.424407 sshd[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:53.432998 systemd-logind[1993]: New session 11 of user core. Feb 13 16:07:53.442063 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 16:07:53.685917 sshd[4525]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:53.692327 systemd[1]: sshd@10-172.31.25.78:22-139.178.68.195:45578.service: Deactivated successfully. Feb 13 16:07:53.696668 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 16:07:53.699408 systemd-logind[1993]: Session 11 logged out. Waiting for processes to exit. Feb 13 16:07:53.701239 systemd-logind[1993]: Removed session 11. Feb 13 16:07:53.725395 systemd[1]: Started sshd@11-172.31.25.78:22-139.178.68.195:45590.service - OpenSSH per-connection server daemon (139.178.68.195:45590). Feb 13 16:07:53.907860 sshd[4539]: Accepted publickey for core from 139.178.68.195 port 45590 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:53.910496 sshd[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:53.918519 systemd-logind[1993]: New session 12 of user core. Feb 13 16:07:53.925965 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 16:07:54.250006 sshd[4539]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:54.256977 systemd-logind[1993]: Session 12 logged out. Waiting for processes to exit. Feb 13 16:07:54.257911 systemd[1]: sshd@11-172.31.25.78:22-139.178.68.195:45590.service: Deactivated successfully. Feb 13 16:07:54.262529 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 16:07:54.285819 systemd-logind[1993]: Removed session 12. Feb 13 16:07:54.292346 systemd[1]: Started sshd@12-172.31.25.78:22-139.178.68.195:45596.service - OpenSSH per-connection server daemon (139.178.68.195:45596). 
Feb 13 16:07:54.470340 sshd[4550]: Accepted publickey for core from 139.178.68.195 port 45596 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:54.473182 sshd[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:54.482045 systemd-logind[1993]: New session 13 of user core. Feb 13 16:07:54.488968 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 16:07:54.733341 sshd[4550]: pam_unix(sshd:session): session closed for user core Feb 13 16:07:54.740886 systemd[1]: sshd@12-172.31.25.78:22-139.178.68.195:45596.service: Deactivated successfully. Feb 13 16:07:54.744932 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 16:07:54.747015 systemd-logind[1993]: Session 13 logged out. Waiting for processes to exit. Feb 13 16:07:54.749866 systemd-logind[1993]: Removed session 13. Feb 13 16:07:59.774174 systemd[1]: Started sshd@13-172.31.25.78:22-139.178.68.195:38706.service - OpenSSH per-connection server daemon (139.178.68.195:38706). Feb 13 16:07:59.957337 sshd[4585]: Accepted publickey for core from 139.178.68.195 port 38706 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:07:59.959968 sshd[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:07:59.967719 systemd-logind[1993]: New session 14 of user core. Feb 13 16:07:59.981950 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 16:08:00.235024 sshd[4585]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:00.240706 systemd-logind[1993]: Session 14 logged out. Waiting for processes to exit. Feb 13 16:08:00.243147 systemd[1]: sshd@13-172.31.25.78:22-139.178.68.195:38706.service: Deactivated successfully. Feb 13 16:08:00.248402 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 16:08:00.250883 systemd-logind[1993]: Removed session 14. Feb 13 16:08:05.278288 systemd[1]: Started sshd@14-172.31.25.78:22-139.178.68.195:38710.service - OpenSSH per-connection server daemon (139.178.68.195:38710). Feb 13 16:08:05.453036 sshd[4623]: Accepted publickey for core from 139.178.68.195 port 38710 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:08:05.455896 sshd[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:08:05.466114 systemd-logind[1993]: New session 15 of user core. Feb 13 16:08:05.478948 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 16:08:05.744249 sshd[4623]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:05.751626 systemd[1]: sshd@14-172.31.25.78:22-139.178.68.195:38710.service: Deactivated successfully. Feb 13 16:08:05.759049 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 16:08:05.760645 systemd-logind[1993]: Session 15 logged out. Waiting for processes to exit. Feb 13 16:08:05.763037 systemd-logind[1993]: Removed session 15. Feb 13 16:08:05.785240 systemd[1]: Started sshd@15-172.31.25.78:22-139.178.68.195:38716.service - OpenSSH per-connection server daemon (139.178.68.195:38716). Feb 13 16:08:05.961461 sshd[4636]: Accepted publickey for core from 139.178.68.195 port 38716 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:08:05.964074 sshd[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:08:05.972701 systemd-logind[1993]: New session 16 of user core. Feb 13 16:08:05.982946 systemd[1]: Started session-16.scope - Session 16 of User core. 
Feb 13 16:08:06.299294 sshd[4636]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:06.304896 systemd-logind[1993]: Session 16 logged out. Waiting for processes to exit. Feb 13 16:08:06.305288 systemd[1]: sshd@15-172.31.25.78:22-139.178.68.195:38716.service: Deactivated successfully. Feb 13 16:08:06.308790 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 16:08:06.313933 systemd-logind[1993]: Removed session 16. Feb 13 16:08:06.341176 systemd[1]: Started sshd@16-172.31.25.78:22-139.178.68.195:38718.service - OpenSSH per-connection server daemon (139.178.68.195:38718). Feb 13 16:08:06.521629 sshd[4668]: Accepted publickey for core from 139.178.68.195 port 38718 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:08:06.524757 sshd[4668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:08:06.533172 systemd-logind[1993]: New session 17 of user core. Feb 13 16:08:06.540949 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 16:08:08.850575 sshd[4668]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:08.865301 systemd[1]: sshd@16-172.31.25.78:22-139.178.68.195:38718.service: Deactivated successfully. Feb 13 16:08:08.873057 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 16:08:08.876369 systemd-logind[1993]: Session 17 logged out. Waiting for processes to exit. Feb 13 16:08:08.897210 systemd[1]: Started sshd@17-172.31.25.78:22-139.178.68.195:38806.service - OpenSSH per-connection server daemon (139.178.68.195:38806). Feb 13 16:08:08.899871 systemd-logind[1993]: Removed session 17. Feb 13 16:08:09.071375 sshd[4687]: Accepted publickey for core from 139.178.68.195 port 38806 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:08:09.074404 sshd[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:08:09.083940 systemd-logind[1993]: New session 18 of user core. Feb 13 16:08:09.092069 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 16:08:09.560427 sshd[4687]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:09.569834 systemd[1]: sshd@17-172.31.25.78:22-139.178.68.195:38806.service: Deactivated successfully. Feb 13 16:08:09.574062 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 16:08:09.577620 systemd-logind[1993]: Session 18 logged out. Waiting for processes to exit. Feb 13 16:08:09.579770 systemd-logind[1993]: Removed session 18. Feb 13 16:08:09.603236 systemd[1]: Started sshd@18-172.31.25.78:22-139.178.68.195:38818.service - OpenSSH per-connection server daemon (139.178.68.195:38818). Feb 13 16:08:09.781700 sshd[4698]: Accepted publickey for core from 139.178.68.195 port 38818 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:08:09.785648 sshd[4698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:08:09.794785 systemd-logind[1993]: New session 19 of user core. Feb 13 16:08:09.804948 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 16:08:10.043333 sshd[4698]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:10.049877 systemd[1]: sshd@18-172.31.25.78:22-139.178.68.195:38818.service: Deactivated successfully. Feb 13 16:08:10.053905 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 16:08:10.056413 systemd-logind[1993]: Session 19 logged out. Waiting for processes to exit. Feb 13 16:08:10.058522 systemd-logind[1993]: Removed session 19. 
Feb 13 16:08:15.086197 systemd[1]: Started sshd@19-172.31.25.78:22-139.178.68.195:38834.service - OpenSSH per-connection server daemon (139.178.68.195:38834). Feb 13 16:08:15.261402 sshd[4731]: Accepted publickey for core from 139.178.68.195 port 38834 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:08:15.264103 sshd[4731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:08:15.272903 systemd-logind[1993]: New session 20 of user core. Feb 13 16:08:15.281932 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 16:08:15.516639 sshd[4731]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:15.522956 systemd[1]: sshd@19-172.31.25.78:22-139.178.68.195:38834.service: Deactivated successfully. Feb 13 16:08:15.527016 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 16:08:15.529020 systemd-logind[1993]: Session 20 logged out. Waiting for processes to exit. Feb 13 16:08:15.531423 systemd-logind[1993]: Removed session 20. Feb 13 16:08:20.556193 systemd[1]: Started sshd@20-172.31.25.78:22-139.178.68.195:35608.service - OpenSSH per-connection server daemon (139.178.68.195:35608). Feb 13 16:08:20.734918 sshd[4768]: Accepted publickey for core from 139.178.68.195 port 35608 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:08:20.737584 sshd[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:08:20.747010 systemd-logind[1993]: New session 21 of user core. Feb 13 16:08:20.755944 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 16:08:20.997033 sshd[4768]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:21.002084 systemd[1]: sshd@20-172.31.25.78:22-139.178.68.195:35608.service: Deactivated successfully. Feb 13 16:08:21.005130 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 16:08:21.009455 systemd-logind[1993]: Session 21 logged out. Waiting for processes to exit. Feb 13 16:08:21.011283 systemd-logind[1993]: Removed session 21. Feb 13 16:08:26.043178 systemd[1]: Started sshd@21-172.31.25.78:22-139.178.68.195:35616.service - OpenSSH per-connection server daemon (139.178.68.195:35616). Feb 13 16:08:26.221379 sshd[4808]: Accepted publickey for core from 139.178.68.195 port 35616 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:08:26.224126 sshd[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:08:26.231470 systemd-logind[1993]: New session 22 of user core. Feb 13 16:08:26.242947 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 16:08:26.481969 sshd[4808]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:26.487188 systemd[1]: sshd@21-172.31.25.78:22-139.178.68.195:35616.service: Deactivated successfully. Feb 13 16:08:26.490824 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 16:08:26.494375 systemd-logind[1993]: Session 22 logged out. Waiting for processes to exit. Feb 13 16:08:26.496591 systemd-logind[1993]: Removed session 22. Feb 13 16:08:31.522401 systemd[1]: Started sshd@22-172.31.25.78:22-139.178.68.195:57108.service - OpenSSH per-connection server daemon (139.178.68.195:57108). 
Feb 13 16:08:31.701804 sshd[4857]: Accepted publickey for core from 139.178.68.195 port 57108 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU Feb 13 16:08:31.704398 sshd[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:08:31.711908 systemd-logind[1993]: New session 23 of user core. Feb 13 16:08:31.723964 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 16:08:31.959422 sshd[4857]: pam_unix(sshd:session): session closed for user core Feb 13 16:08:31.965643 systemd[1]: sshd@22-172.31.25.78:22-139.178.68.195:57108.service: Deactivated successfully. Feb 13 16:08:31.968974 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 16:08:31.971049 systemd-logind[1993]: Session 23 logged out. Waiting for processes to exit. Feb 13 16:08:31.973500 systemd-logind[1993]: Removed session 23. Feb 13 16:08:45.865786 systemd[1]: cri-containerd-bf099f3ab5a0fa1e2fdb4b23b89da65a689b41740b834d8224884ad72203be83.scope: Deactivated successfully. Feb 13 16:08:45.866740 systemd[1]: cri-containerd-bf099f3ab5a0fa1e2fdb4b23b89da65a689b41740b834d8224884ad72203be83.scope: Consumed 5.051s CPU time, 17.7M memory peak, 0B memory swap peak. Feb 13 16:08:45.910483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf099f3ab5a0fa1e2fdb4b23b89da65a689b41740b834d8224884ad72203be83-rootfs.mount: Deactivated successfully. Feb 13 16:08:45.920830 containerd[2002]: time="2025-02-13T16:08:45.920714657Z" level=info msg="shim disconnected" id=bf099f3ab5a0fa1e2fdb4b23b89da65a689b41740b834d8224884ad72203be83 namespace=k8s.io Feb 13 16:08:45.920830 containerd[2002]: time="2025-02-13T16:08:45.920821865Z" level=warning msg="cleaning up after shim disconnected" id=bf099f3ab5a0fa1e2fdb4b23b89da65a689b41740b834d8224884ad72203be83 namespace=k8s.io Feb 13 16:08:45.922047 containerd[2002]: time="2025-02-13T16:08:45.920844533Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:08:46.623021 kubelet[3462]: I0213 16:08:46.622688 3462 scope.go:117] "RemoveContainer" containerID="bf099f3ab5a0fa1e2fdb4b23b89da65a689b41740b834d8224884ad72203be83" Feb 13 16:08:46.626577 containerd[2002]: time="2025-02-13T16:08:46.626264056Z" level=info msg="CreateContainer within sandbox \"0f1e1198cfd4aefe83cf2639ef55f46578d36402dc903810c1e4879a1767a03f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 16:08:46.661647 containerd[2002]: time="2025-02-13T16:08:46.661566328Z" level=info msg="CreateContainer within sandbox \"0f1e1198cfd4aefe83cf2639ef55f46578d36402dc903810c1e4879a1767a03f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"67270a710a2b2569522c9604c54453f90c5a846839aca080c8a66d7b2d6e03f8\"" Feb 13 16:08:46.663033 containerd[2002]: time="2025-02-13T16:08:46.662202916Z" level=info msg="StartContainer for \"67270a710a2b2569522c9604c54453f90c5a846839aca080c8a66d7b2d6e03f8\"" Feb 13 16:08:46.722000 systemd[1]: Started cri-containerd-67270a710a2b2569522c9604c54453f90c5a846839aca080c8a66d7b2d6e03f8.scope - libcontainer container 67270a710a2b2569522c9604c54453f90c5a846839aca080c8a66d7b2d6e03f8. 
Feb 13 16:08:46.792430 containerd[2002]: time="2025-02-13T16:08:46.792255641Z" level=info msg="StartContainer for \"67270a710a2b2569522c9604c54453f90c5a846839aca080c8a66d7b2d6e03f8\" returns successfully" Feb 13 16:08:47.631067 kubelet[3462]: E0213 16:08:47.630527 3462 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-78?timeout=10s\": context deadline exceeded" Feb 13 16:08:50.697625 systemd[1]: cri-containerd-940de31c6f85272b5775ffcfe59ee6c008ecc03062d8e383e79b025f919701df.scope: Deactivated successfully. Feb 13 16:08:50.699194 systemd[1]: cri-containerd-940de31c6f85272b5775ffcfe59ee6c008ecc03062d8e383e79b025f919701df.scope: Consumed 2.287s CPU time, 16.1M memory peak, 0B memory swap peak. Feb 13 16:08:50.739807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-940de31c6f85272b5775ffcfe59ee6c008ecc03062d8e383e79b025f919701df-rootfs.mount: Deactivated successfully. Feb 13 16:08:50.751637 containerd[2002]: time="2025-02-13T16:08:50.751554213Z" level=info msg="shim disconnected" id=940de31c6f85272b5775ffcfe59ee6c008ecc03062d8e383e79b025f919701df namespace=k8s.io Feb 13 16:08:50.751637 containerd[2002]: time="2025-02-13T16:08:50.751634229Z" level=warning msg="cleaning up after shim disconnected" id=940de31c6f85272b5775ffcfe59ee6c008ecc03062d8e383e79b025f919701df namespace=k8s.io Feb 13 16:08:50.752624 containerd[2002]: time="2025-02-13T16:08:50.751832697Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:08:51.646597 kubelet[3462]: I0213 16:08:51.646538 3462 scope.go:117] "RemoveContainer" containerID="940de31c6f85272b5775ffcfe59ee6c008ecc03062d8e383e79b025f919701df" Feb 13 16:08:51.650262 containerd[2002]: time="2025-02-13T16:08:51.649962441Z" level=info msg="CreateContainer within sandbox \"c6036d9180955e7badb004caeda649848ffd34de1a8f14b182a769d86adcc545\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 13 16:08:51.676708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3421824873.mount: Deactivated successfully. Feb 13 16:08:51.681804 containerd[2002]: time="2025-02-13T16:08:51.681729345Z" level=info msg="CreateContainer within sandbox \"c6036d9180955e7badb004caeda649848ffd34de1a8f14b182a769d86adcc545\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e872c33c078ba8858611518b9830a78c16ff331b9d2f3f386a0b893b085f6ace\"" Feb 13 16:08:51.682603 containerd[2002]: time="2025-02-13T16:08:51.682523481Z" level=info msg="StartContainer for \"e872c33c078ba8858611518b9830a78c16ff331b9d2f3f386a0b893b085f6ace\"" Feb 13 16:08:51.734988 systemd[1]: Started cri-containerd-e872c33c078ba8858611518b9830a78c16ff331b9d2f3f386a0b893b085f6ace.scope - libcontainer container e872c33c078ba8858611518b9830a78c16ff331b9d2f3f386a0b893b085f6ace. Feb 13 16:08:51.806365 containerd[2002]: time="2025-02-13T16:08:51.806290678Z" level=info msg="StartContainer for \"e872c33c078ba8858611518b9830a78c16ff331b9d2f3f386a0b893b085f6ace\" returns successfully"
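The last two blocks are the kubelet restarting control-plane containers on this node in place: each time a cri-containerd scope is deactivated (systemd reports the accumulated CPU time and memory peak as it tears the cgroup down), the kubelet logs RemoveContainer for the old ID and creates the same container again as Attempt:1 inside the existing sandbox, so the pods themselves are not recreated. The lease-update failure against https://172.31.25.78:6443 in between (context deadline exceeded) is consistent with temporary API-server slowness in the same window rather than a kubelet fault. A hedged way to confirm the restarts afterwards, assuming the usual static-pod mirror naming for this node:

  kubectl -n kube-system get pods kube-controller-manager-ip-172-31-25-78 kube-scheduler-ip-172-31-25-78

Both should report a restart count of 1, matching the Attempt:1 containers 67270a71... and e872c33c... created above.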