Feb 13 15:16:44.270618 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Feb 13 15:16:44.270667 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025 Feb 13 15:16:44.270692 kernel: KASLR disabled due to lack of seed Feb 13 15:16:44.270708 kernel: efi: EFI v2.7 by EDK II Feb 13 15:16:44.270724 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98 Feb 13 15:16:44.270767 kernel: secureboot: Secure boot disabled Feb 13 15:16:44.270789 kernel: ACPI: Early table checksum verification disabled Feb 13 15:16:44.270805 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Feb 13 15:16:44.270821 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Feb 13 15:16:44.270837 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 13 15:16:44.270859 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Feb 13 15:16:44.270874 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 13 15:16:44.270890 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Feb 13 15:16:44.270906 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Feb 13 15:16:44.270924 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Feb 13 15:16:44.270945 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 13 15:16:44.270962 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Feb 13 15:16:44.270979 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Feb 13 15:16:44.270996 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Feb 13 15:16:44.271012 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Feb 13 15:16:44.271029 kernel: printk: bootconsole [uart0] enabled Feb 13 15:16:44.271045 kernel: NUMA: Failed to initialise from firmware Feb 13 15:16:44.271062 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Feb 13 15:16:44.271079 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Feb 13 15:16:44.271095 kernel: Zone ranges: Feb 13 15:16:44.271112 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Feb 13 15:16:44.271132 kernel: DMA32 empty Feb 13 15:16:44.271149 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Feb 13 15:16:44.271165 kernel: Movable zone start for each node Feb 13 15:16:44.271182 kernel: Early memory node ranges Feb 13 15:16:44.271198 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Feb 13 15:16:44.271215 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Feb 13 15:16:44.271231 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Feb 13 15:16:44.271248 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Feb 13 15:16:44.271265 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Feb 13 15:16:44.271281 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Feb 13 15:16:44.271298 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Feb 13 15:16:44.271315 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Feb 13 15:16:44.273087 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Feb 13 15:16:44.273179 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Feb 13 15:16:44.273223 kernel: psci: probing for conduit method from ACPI. Feb 13 15:16:44.273243 kernel: psci: PSCIv1.0 detected in firmware. Feb 13 15:16:44.273262 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 15:16:44.273285 kernel: psci: Trusted OS migration not required Feb 13 15:16:44.273304 kernel: psci: SMC Calling Convention v1.1 Feb 13 15:16:44.273322 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 15:16:44.273340 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 15:16:44.273359 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 15:16:44.273378 kernel: Detected PIPT I-cache on CPU0 Feb 13 15:16:44.273397 kernel: CPU features: detected: GIC system register CPU interface Feb 13 15:16:44.273415 kernel: CPU features: detected: Spectre-v2 Feb 13 15:16:44.273432 kernel: CPU features: detected: Spectre-v3a Feb 13 15:16:44.273451 kernel: CPU features: detected: Spectre-BHB Feb 13 15:16:44.273469 kernel: CPU features: detected: ARM erratum 1742098 Feb 13 15:16:44.273487 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Feb 13 15:16:44.273515 kernel: alternatives: applying boot alternatives Feb 13 15:16:44.273536 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6 Feb 13 15:16:44.273558 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:16:44.273577 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:16:44.273595 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:16:44.273613 kernel: Fallback order for Node 0: 0 Feb 13 15:16:44.273632 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Feb 13 15:16:44.273652 kernel: Policy zone: Normal Feb 13 15:16:44.273680 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:16:44.273781 kernel: software IO TLB: area num 2. Feb 13 15:16:44.273832 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Feb 13 15:16:44.274499 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved) Feb 13 15:16:44.274535 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 15:16:44.274555 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:16:44.274575 kernel: rcu: RCU event tracing is enabled. Feb 13 15:16:44.274594 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 15:16:44.274613 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:16:44.274634 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:16:44.274655 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 15:16:44.274674 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 15:16:44.274693 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 15:16:44.274723 kernel: GICv3: 96 SPIs implemented Feb 13 15:16:44.274780 kernel: GICv3: 0 Extended SPIs implemented Feb 13 15:16:44.274801 kernel: Root IRQ handler: gic_handle_irq Feb 13 15:16:44.274819 kernel: GICv3: GICv3 features: 16 PPIs Feb 13 15:16:44.274837 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Feb 13 15:16:44.274855 kernel: ITS [mem 0x10080000-0x1009ffff] Feb 13 15:16:44.274873 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 15:16:44.274892 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Feb 13 15:16:44.274910 kernel: GICv3: using LPI property table @0x00000004000d0000 Feb 13 15:16:44.274928 kernel: ITS: Using hypervisor restricted LPI range [128] Feb 13 15:16:44.274946 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Feb 13 15:16:44.274965 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:16:44.274994 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Feb 13 15:16:44.275013 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Feb 13 15:16:44.275031 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Feb 13 15:16:44.275050 kernel: Console: colour dummy device 80x25 Feb 13 15:16:44.275068 kernel: printk: console [tty1] enabled Feb 13 15:16:44.275086 kernel: ACPI: Core revision 20230628 Feb 13 15:16:44.275105 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Feb 13 15:16:44.275124 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:16:44.275142 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:16:44.275165 kernel: landlock: Up and running. Feb 13 15:16:44.275184 kernel: SELinux: Initializing. Feb 13 15:16:44.275202 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:16:44.275221 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:16:44.275239 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:16:44.275257 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:16:44.275275 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:16:44.275295 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:16:44.275314 kernel: Platform MSI: ITS@0x10080000 domain created Feb 13 15:16:44.275340 kernel: PCI/MSI: ITS@0x10080000 domain created Feb 13 15:16:44.275361 kernel: Remapping and enabling EFI services. Feb 13 15:16:44.275383 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:16:44.275403 kernel: Detected PIPT I-cache on CPU1 Feb 13 15:16:44.275423 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Feb 13 15:16:44.275443 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Feb 13 15:16:44.275461 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Feb 13 15:16:44.275479 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 15:16:44.275497 kernel: SMP: Total of 2 processors activated. 
Feb 13 15:16:44.275521 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 15:16:44.275539 kernel: CPU features: detected: 32-bit EL1 Support Feb 13 15:16:44.275556 kernel: CPU features: detected: CRC32 instructions Feb 13 15:16:44.275586 kernel: CPU: All CPU(s) started at EL1 Feb 13 15:16:44.275609 kernel: alternatives: applying system-wide alternatives Feb 13 15:16:44.275627 kernel: devtmpfs: initialized Feb 13 15:16:44.275645 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:16:44.275663 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 15:16:44.275683 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:16:44.275701 kernel: SMBIOS 3.0.0 present. Feb 13 15:16:44.275723 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Feb 13 15:16:44.276263 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:16:44.276300 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 15:16:44.276319 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 15:16:44.276339 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 15:16:44.276376 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:16:44.276400 kernel: audit: type=2000 audit(0.236:1): state=initialized audit_enabled=0 res=1 Feb 13 15:16:44.276433 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:16:44.276453 kernel: cpuidle: using governor menu Feb 13 15:16:44.276472 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 15:16:44.276493 kernel: ASID allocator initialised with 65536 entries Feb 13 15:16:44.276516 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:16:44.276536 kernel: Serial: AMBA PL011 UART driver Feb 13 15:16:44.276555 kernel: Modules: 17440 pages in range for non-PLT usage Feb 13 15:16:44.276574 kernel: Modules: 508960 pages in range for PLT usage Feb 13 15:16:44.276593 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:16:44.276620 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:16:44.276640 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 15:16:44.276659 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 15:16:44.276677 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:16:44.276696 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:16:44.276715 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 15:16:44.276734 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 15:16:44.276791 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:16:44.276811 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:16:44.276839 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:16:44.276858 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:16:44.276877 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:16:44.276896 kernel: ACPI: Interpreter enabled Feb 13 15:16:44.276917 kernel: ACPI: Using GIC for interrupt routing Feb 13 15:16:44.276936 kernel: ACPI: MCFG table detected, 1 entries Feb 13 15:16:44.276955 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Feb 13 15:16:44.277277 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:16:44.277532 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 15:16:44.279856 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 15:16:44.280153 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Feb 13 15:16:44.280392 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Feb 13 15:16:44.280426 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Feb 13 15:16:44.280447 kernel: acpiphp: Slot [1] registered Feb 13 15:16:44.280467 kernel: acpiphp: Slot [2] registered Feb 13 15:16:44.280488 kernel: acpiphp: Slot [3] registered Feb 13 15:16:44.280561 kernel: acpiphp: Slot [4] registered Feb 13 15:16:44.280586 kernel: acpiphp: Slot [5] registered Feb 13 15:16:44.280606 kernel: acpiphp: Slot [6] registered Feb 13 15:16:44.280625 kernel: acpiphp: Slot [7] registered Feb 13 15:16:44.280644 kernel: acpiphp: Slot [8] registered Feb 13 15:16:44.280663 kernel: acpiphp: Slot [9] registered Feb 13 15:16:44.280681 kernel: acpiphp: Slot [10] registered Feb 13 15:16:44.280700 kernel: acpiphp: Slot [11] registered Feb 13 15:16:44.280719 kernel: acpiphp: Slot [12] registered Feb 13 15:16:44.280760 kernel: acpiphp: Slot [13] registered Feb 13 15:16:44.280830 kernel: acpiphp: Slot [14] registered Feb 13 15:16:44.280852 kernel: acpiphp: Slot [15] registered Feb 13 15:16:44.280872 kernel: acpiphp: Slot [16] registered Feb 13 15:16:44.280894 kernel: acpiphp: Slot [17] registered Feb 13 15:16:44.280914 kernel: acpiphp: Slot [18] registered Feb 13 15:16:44.280934 kernel: acpiphp: Slot [19] registered Feb 13 15:16:44.280955 kernel: acpiphp: Slot [20] registered Feb 13 15:16:44.280976 kernel: acpiphp: Slot [21] registered Feb 13 15:16:44.280996 kernel: acpiphp: Slot [22] registered Feb 13 15:16:44.281025 kernel: acpiphp: Slot [23] registered Feb 13 15:16:44.281046 kernel: acpiphp: Slot [24] registered Feb 13 15:16:44.281067 kernel: acpiphp: Slot [25] registered Feb 13 15:16:44.281088 kernel: acpiphp: Slot [26] registered Feb 13 15:16:44.281108 kernel: acpiphp: Slot [27] registered Feb 13 15:16:44.281129 kernel: acpiphp: Slot [28] registered Feb 13 15:16:44.281148 kernel: acpiphp: Slot [29] registered Feb 13 15:16:44.281168 kernel: acpiphp: Slot [30] registered Feb 13 15:16:44.281187 kernel: acpiphp: Slot [31] registered Feb 13 15:16:44.281206 kernel: PCI host bridge to bus 0000:00 Feb 13 15:16:44.281532 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Feb 13 15:16:44.282223 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 15:16:44.282479 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Feb 13 15:16:44.282680 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Feb 13 15:16:44.283007 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Feb 13 15:16:44.283266 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Feb 13 15:16:44.283530 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Feb 13 15:16:44.283878 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 13 15:16:44.284158 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Feb 13 15:16:44.284446 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 15:16:44.284715 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 13 15:16:44.286245 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Feb 13 15:16:44.286515 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Feb 13 15:16:44.287809 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Feb 13 15:16:44.288234 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 15:16:44.288545 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Feb 13 15:16:44.288855 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Feb 13 15:16:44.289111 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Feb 13 15:16:44.289340 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Feb 13 15:16:44.289572 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Feb 13 15:16:44.290956 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Feb 13 15:16:44.291233 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 15:16:44.291466 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Feb 13 15:16:44.291503 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 15:16:44.291528 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 15:16:44.291549 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 15:16:44.291569 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 15:16:44.291592 kernel: iommu: Default domain type: Translated Feb 13 15:16:44.291632 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 15:16:44.291655 kernel: efivars: Registered efivars operations Feb 13 15:16:44.291679 kernel: vgaarb: loaded Feb 13 15:16:44.291703 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 15:16:44.291728 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:16:44.291844 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:16:44.291877 kernel: pnp: PnP ACPI init Feb 13 15:16:44.292283 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Feb 13 15:16:44.292342 kernel: pnp: PnP ACPI: found 1 devices Feb 13 15:16:44.292385 kernel: NET: Registered PF_INET protocol family Feb 13 15:16:44.292408 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:16:44.292428 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:16:44.292447 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:16:44.292466 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:16:44.292486 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:16:44.292505 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:16:44.292524 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:16:44.292554 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:16:44.292575 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:16:44.292597 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:16:44.292616 kernel: kvm [1]: HYP mode not available Feb 13 15:16:44.292636 kernel: Initialise system trusted keyrings Feb 13 15:16:44.292656 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:16:44.292675 kernel: Key type asymmetric registered Feb 13 15:16:44.292694 kernel: Asymmetric key parser 'x509' registered Feb 13 15:16:44.292713 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 15:16:44.295845 kernel: io scheduler mq-deadline registered Feb 13 
15:16:44.295888 kernel: io scheduler kyber registered Feb 13 15:16:44.295907 kernel: io scheduler bfq registered Feb 13 15:16:44.296189 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Feb 13 15:16:44.296221 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 15:16:44.296241 kernel: ACPI: button: Power Button [PWRB] Feb 13 15:16:44.296261 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Feb 13 15:16:44.296280 kernel: ACPI: button: Sleep Button [SLPB] Feb 13 15:16:44.296309 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:16:44.296329 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Feb 13 15:16:44.296570 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Feb 13 15:16:44.296599 kernel: printk: console [ttyS0] disabled Feb 13 15:16:44.296619 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Feb 13 15:16:44.296637 kernel: printk: console [ttyS0] enabled Feb 13 15:16:44.296656 kernel: printk: bootconsole [uart0] disabled Feb 13 15:16:44.296675 kernel: thunder_xcv, ver 1.0 Feb 13 15:16:44.296693 kernel: thunder_bgx, ver 1.0 Feb 13 15:16:44.296718 kernel: nicpf, ver 1.0 Feb 13 15:16:44.296760 kernel: nicvf, ver 1.0 Feb 13 15:16:44.297032 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:16:44.297252 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:16:43 UTC (1739459803) Feb 13 15:16:44.297284 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:16:44.297304 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Feb 13 15:16:44.297324 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:16:44.297343 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:16:44.297374 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:16:44.297394 kernel: Segment Routing with IPv6 Feb 13 15:16:44.297413 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:16:44.297432 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:16:44.297451 kernel: Key type dns_resolver registered Feb 13 15:16:44.297469 kernel: registered taskstats version 1 Feb 13 15:16:44.297488 kernel: Loading compiled-in X.509 certificates Feb 13 15:16:44.297507 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51' Feb 13 15:16:44.297526 kernel: Key type .fscrypt registered Feb 13 15:16:44.297552 kernel: Key type fscrypt-provisioning registered Feb 13 15:16:44.297570 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 15:16:44.297589 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:16:44.297608 kernel: ima: No architecture policies found Feb 13 15:16:44.297627 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:16:44.297645 kernel: clk: Disabling unused clocks Feb 13 15:16:44.297664 kernel: Freeing unused kernel memory: 39680K Feb 13 15:16:44.297683 kernel: Run /init as init process Feb 13 15:16:44.297702 kernel: with arguments: Feb 13 15:16:44.297724 kernel: /init Feb 13 15:16:44.299969 kernel: with environment: Feb 13 15:16:44.299997 kernel: HOME=/ Feb 13 15:16:44.300017 kernel: TERM=linux Feb 13 15:16:44.300035 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:16:44.300061 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:16:44.300087 systemd[1]: Detected virtualization amazon. Feb 13 15:16:44.300108 systemd[1]: Detected architecture arm64. Feb 13 15:16:44.300143 systemd[1]: Running in initrd. Feb 13 15:16:44.300166 systemd[1]: No hostname configured, using default hostname. Feb 13 15:16:44.300186 systemd[1]: Hostname set to . Feb 13 15:16:44.300207 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:16:44.300228 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:16:44.300249 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:16:44.300270 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:16:44.300293 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:16:44.300323 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:16:44.300345 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:16:44.300390 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:16:44.300418 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:16:44.300439 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:16:44.300462 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:16:44.300483 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:16:44.300513 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:16:44.300534 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:16:44.300555 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:16:44.300576 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:16:44.300596 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:16:44.300618 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:16:44.300639 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:16:44.300660 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:16:44.300681 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 15:16:44.300708 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:16:44.300729 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:16:44.300784 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:16:44.300812 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:16:44.300839 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:16:44.300861 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:16:44.300883 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:16:44.300904 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:16:44.300936 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:16:44.300958 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:16:44.300979 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:16:44.301002 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:16:44.301087 systemd-journald[250]: Collecting audit messages is disabled. Feb 13 15:16:44.301152 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:16:44.301177 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:16:44.301205 systemd-journald[250]: Journal started Feb 13 15:16:44.301251 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2f97c48e0f4968525e20f3f9236ab7) is 8.0M, max 75.3M, 67.3M free. Feb 13 15:16:44.274683 systemd-modules-load[252]: Inserted module 'overlay' Feb 13 15:16:44.309362 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:16:44.319359 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:16:44.317212 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:44.327433 kernel: Bridge firewalling registered Feb 13 15:16:44.326445 systemd-modules-load[252]: Inserted module 'br_netfilter' Feb 13 15:16:44.330017 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:16:44.342176 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:16:44.356374 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:16:44.368326 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:16:44.372881 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:16:44.396478 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:16:44.426639 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:16:44.443884 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:16:44.448342 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:16:44.464218 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:16:44.474072 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:16:44.479960 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 15:16:44.501366 dracut-cmdline[286]: dracut-dracut-053 Feb 13 15:16:44.513463 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6 Feb 13 15:16:44.587359 systemd-resolved[287]: Positive Trust Anchors: Feb 13 15:16:44.589887 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:16:44.589961 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:16:44.676800 kernel: SCSI subsystem initialized Feb 13 15:16:44.684803 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:16:44.698892 kernel: iscsi: registered transport (tcp) Feb 13 15:16:44.722238 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:16:44.722314 kernel: QLogic iSCSI HBA Driver Feb 13 15:16:44.815148 kernel: random: crng init done Feb 13 15:16:44.815668 systemd-resolved[287]: Defaulting to hostname 'linux'. Feb 13 15:16:44.820667 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:16:44.825589 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:16:44.853640 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:16:44.867793 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:16:44.914846 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:16:44.914970 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:16:44.915001 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:16:44.989821 kernel: raid6: neonx8 gen() 6632 MB/s Feb 13 15:16:45.006809 kernel: raid6: neonx4 gen() 6432 MB/s Feb 13 15:16:45.023834 kernel: raid6: neonx2 gen() 5375 MB/s Feb 13 15:16:45.040803 kernel: raid6: neonx1 gen() 3892 MB/s Feb 13 15:16:45.057804 kernel: raid6: int64x8 gen() 3789 MB/s Feb 13 15:16:45.074802 kernel: raid6: int64x4 gen() 3676 MB/s Feb 13 15:16:45.091816 kernel: raid6: int64x2 gen() 3558 MB/s Feb 13 15:16:45.109664 kernel: raid6: int64x1 gen() 2734 MB/s Feb 13 15:16:45.109767 kernel: raid6: using algorithm neonx8 gen() 6632 MB/s Feb 13 15:16:45.127688 kernel: raid6: .... 
xor() 4773 MB/s, rmw enabled Feb 13 15:16:45.127818 kernel: raid6: using neon recovery algorithm Feb 13 15:16:45.136834 kernel: xor: measuring software checksum speed Feb 13 15:16:45.139014 kernel: 8regs : 9741 MB/sec Feb 13 15:16:45.139163 kernel: 32regs : 11953 MB/sec Feb 13 15:16:45.140223 kernel: arm64_neon : 9319 MB/sec Feb 13 15:16:45.140303 kernel: xor: using function: 32regs (11953 MB/sec) Feb 13 15:16:45.233807 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:16:45.259860 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:16:45.272427 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:16:45.323187 systemd-udevd[470]: Using default interface naming scheme 'v255'. Feb 13 15:16:45.333516 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:16:45.345038 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:16:45.387972 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Feb 13 15:16:45.458034 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:16:45.470101 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:16:45.597667 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:16:45.611583 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:16:45.666299 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:16:45.674104 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:16:45.680474 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:16:45.685940 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:16:45.696402 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:16:45.744023 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:16:45.820069 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:16:45.827092 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 15:16:45.827136 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 13 15:16:45.861488 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 13 15:16:45.864181 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 13 15:16:45.864506 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:c4:0c:b4:2b:cd Feb 13 15:16:45.820304 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:16:45.829100 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:16:45.831285 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:16:45.831421 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:45.833994 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:16:45.882108 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 15:16:45.882162 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 13 15:16:45.855460 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:16:45.867725 (udev-worker)[515]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:16:45.897788 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 13 15:16:45.910041 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:16:45.910113 kernel: GPT:9289727 != 16777215 Feb 13 15:16:45.910138 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:16:45.910162 kernel: GPT:9289727 != 16777215 Feb 13 15:16:45.910186 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:16:45.910209 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 15:16:45.923802 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:45.938115 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:16:45.980556 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:16:46.017436 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (534) Feb 13 15:16:46.053769 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (532) Feb 13 15:16:46.060572 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Feb 13 15:16:46.148384 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Feb 13 15:16:46.165573 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 15:16:46.181098 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Feb 13 15:16:46.183898 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Feb 13 15:16:46.230074 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:16:46.243811 disk-uuid[661]: Primary Header is updated. Feb 13 15:16:46.243811 disk-uuid[661]: Secondary Entries is updated. Feb 13 15:16:46.243811 disk-uuid[661]: Secondary Header is updated. Feb 13 15:16:46.252860 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 15:16:47.271846 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 13 15:16:47.272909 disk-uuid[662]: The operation has completed successfully. Feb 13 15:16:47.450443 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:16:47.450646 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:16:47.497071 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:16:47.506603 sh[925]: Success Feb 13 15:16:47.533776 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:16:47.648895 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:16:47.668978 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:16:47.675878 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 15:16:47.717453 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 Feb 13 15:16:47.717530 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:16:47.717557 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:16:47.718908 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:16:47.720042 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:16:47.839797 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 15:16:47.879388 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:16:47.883690 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:16:47.896045 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:16:47.901074 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:16:47.929913 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:16:47.929992 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:16:47.931942 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 15:16:47.938239 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 15:16:47.955826 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:16:47.960171 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:16:47.984808 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:16:47.998143 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:16:48.093720 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:16:48.103047 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:16:48.154874 systemd-networkd[1117]: lo: Link UP Feb 13 15:16:48.154897 systemd-networkd[1117]: lo: Gained carrier Feb 13 15:16:48.159873 systemd-networkd[1117]: Enumeration completed Feb 13 15:16:48.161542 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:16:48.163880 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:48.163888 systemd-networkd[1117]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:16:48.164930 systemd[1]: Reached target network.target - Network. Feb 13 15:16:48.177650 systemd-networkd[1117]: eth0: Link UP Feb 13 15:16:48.177669 systemd-networkd[1117]: eth0: Gained carrier Feb 13 15:16:48.177689 systemd-networkd[1117]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 15:16:48.197148 systemd-networkd[1117]: eth0: DHCPv4 address 172.31.29.130/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 15:16:48.544068 ignition[1041]: Ignition 2.20.0 Feb 13 15:16:48.544104 ignition[1041]: Stage: fetch-offline Feb 13 15:16:48.544686 ignition[1041]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:48.544712 ignition[1041]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:16:48.547722 ignition[1041]: Ignition finished successfully Feb 13 15:16:48.556397 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:16:48.569218 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 15:16:48.596017 ignition[1126]: Ignition 2.20.0 Feb 13 15:16:48.596045 ignition[1126]: Stage: fetch Feb 13 15:16:48.596717 ignition[1126]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:48.596789 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:16:48.596983 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:16:48.609287 ignition[1126]: PUT result: OK Feb 13 15:16:48.613695 ignition[1126]: parsed url from cmdline: "" Feb 13 15:16:48.613808 ignition[1126]: no config URL provided Feb 13 15:16:48.613838 ignition[1126]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:16:48.613892 ignition[1126]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:16:48.613969 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:16:48.618350 ignition[1126]: PUT result: OK Feb 13 15:16:48.618578 ignition[1126]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 13 15:16:48.623240 ignition[1126]: GET result: OK Feb 13 15:16:48.623946 ignition[1126]: parsing config with SHA512: 1cf869aeb565bf6fdf4ad4357f44c3ba25b4eeb5538bc5917896c282b2313ca04d039ade9d567c57ecea75ef3f19b8467c6cd62fcc73a90f0c4f9e9ae47296c6 Feb 13 15:16:48.637152 unknown[1126]: fetched base config from "system" Feb 13 15:16:48.637197 unknown[1126]: fetched base config from "system" Feb 13 15:16:48.638464 ignition[1126]: fetch: fetch complete Feb 13 15:16:48.637212 unknown[1126]: fetched user config from "aws" Feb 13 15:16:48.638479 ignition[1126]: fetch: fetch passed Feb 13 15:16:48.645404 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 15:16:48.638609 ignition[1126]: Ignition finished successfully Feb 13 15:16:48.665207 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:16:48.692824 ignition[1132]: Ignition 2.20.0 Feb 13 15:16:48.692850 ignition[1132]: Stage: kargs Feb 13 15:16:48.693626 ignition[1132]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:48.693659 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:16:48.693885 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:16:48.697328 ignition[1132]: PUT result: OK Feb 13 15:16:48.708603 ignition[1132]: kargs: kargs passed Feb 13 15:16:48.708898 ignition[1132]: Ignition finished successfully Feb 13 15:16:48.714988 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:16:48.732116 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 15:16:48.754879 ignition[1138]: Ignition 2.20.0 Feb 13 15:16:48.754901 ignition[1138]: Stage: disks Feb 13 15:16:48.755464 ignition[1138]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:48.755488 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:16:48.755640 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:16:48.758832 ignition[1138]: PUT result: OK Feb 13 15:16:48.769171 ignition[1138]: disks: disks passed Feb 13 15:16:48.769305 ignition[1138]: Ignition finished successfully Feb 13 15:16:48.774277 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:16:48.777707 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:16:48.781231 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:16:48.783791 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:16:48.787417 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:16:48.789411 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:16:48.817229 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:16:48.866596 systemd-fsck[1147]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:16:48.875809 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:16:48.888937 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:16:48.989780 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none. Feb 13 15:16:48.991328 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:16:48.995159 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:16:49.019008 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:16:49.025141 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:16:49.027622 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:16:49.027723 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:16:49.027804 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:16:49.046796 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1166) Feb 13 15:16:49.050337 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:16:49.050413 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:16:49.050440 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 15:16:49.056786 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 15:16:49.058867 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:16:49.066512 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:16:49.074092 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 15:16:49.354965 systemd-networkd[1117]: eth0: Gained IPv6LL Feb 13 15:16:49.677234 initrd-setup-root[1190]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:16:49.686952 initrd-setup-root[1197]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:16:49.712661 initrd-setup-root[1204]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:16:49.721806 initrd-setup-root[1211]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:16:50.085354 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:16:50.095983 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:16:50.106234 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:16:50.124946 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:16:50.127543 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:16:50.159019 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:16:50.181682 ignition[1280]: INFO : Ignition 2.20.0 Feb 13 15:16:50.184917 ignition[1280]: INFO : Stage: mount Feb 13 15:16:50.184917 ignition[1280]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:50.184917 ignition[1280]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:16:50.184917 ignition[1280]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:16:50.193443 ignition[1280]: INFO : PUT result: OK Feb 13 15:16:50.197043 ignition[1280]: INFO : mount: mount passed Feb 13 15:16:50.197043 ignition[1280]: INFO : Ignition finished successfully Feb 13 15:16:50.201072 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:16:50.224047 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:16:50.248847 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:16:50.265772 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1291) Feb 13 15:16:50.270100 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:16:50.270151 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:16:50.270177 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 13 15:16:50.275786 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 13 15:16:50.280052 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:16:50.321635 ignition[1308]: INFO : Ignition 2.20.0 Feb 13 15:16:50.321635 ignition[1308]: INFO : Stage: files Feb 13 15:16:50.325182 ignition[1308]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:50.325182 ignition[1308]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:16:50.325182 ignition[1308]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:16:50.332312 ignition[1308]: INFO : PUT result: OK Feb 13 15:16:50.338421 ignition[1308]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:16:50.354864 ignition[1308]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:16:50.354864 ignition[1308]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:16:50.406649 ignition[1308]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:16:50.409349 ignition[1308]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:16:50.412273 unknown[1308]: wrote ssh authorized keys file for user: core Feb 13 15:16:50.414446 ignition[1308]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:16:50.418678 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:16:50.422432 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 15:16:50.518582 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:16:50.694691 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:16:50.694691 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:16:50.703015 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:16:50.703015 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:16:50.703015 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:16:50.703015 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:16:50.703015 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:16:50.703015 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:16:50.703015 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:16:50.703015 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:16:50.703015 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:16:50.703015 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 15:16:50.703015 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 15:16:50.703015 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 15:16:50.703015 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Feb 13 15:16:51.059629 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 15:16:51.473793 ignition[1308]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 15:16:51.473793 ignition[1308]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 15:16:51.487509 ignition[1308]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:16:51.491234 ignition[1308]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:16:51.491234 ignition[1308]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 15:16:51.491234 ignition[1308]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:16:51.491234 ignition[1308]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:16:51.491234 ignition[1308]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:16:51.491234 ignition[1308]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:16:51.491234 ignition[1308]: INFO : files: files passed Feb 13 15:16:51.491234 ignition[1308]: INFO : Ignition finished successfully Feb 13 15:16:51.513462 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:16:51.534313 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:16:51.541236 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:16:51.549402 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:16:51.551900 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:16:51.585336 initrd-setup-root-after-ignition[1337]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:16:51.585336 initrd-setup-root-after-ignition[1337]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:16:51.593058 initrd-setup-root-after-ignition[1341]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:16:51.596774 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:16:51.604171 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:16:51.614122 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Feb 13 15:16:51.679535 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:16:51.681666 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:16:51.685562 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:16:51.690261 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:16:51.692479 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:16:51.705033 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:16:51.735163 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:16:51.744179 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:16:51.776148 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:16:51.780711 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:16:51.785193 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:16:51.788678 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:16:51.790164 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:16:51.795320 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:16:51.797678 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:16:51.801522 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:16:51.804513 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:16:51.813236 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:16:51.817865 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:16:51.820166 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:16:51.822823 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:16:51.826129 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:16:51.838606 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:16:51.845160 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:16:51.845421 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:16:51.868943 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:16:51.871451 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:16:51.875715 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:16:51.879726 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:16:51.885543 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:16:51.886046 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:16:51.891832 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:16:51.892921 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:16:51.898759 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:16:51.900713 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:16:51.915191 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:16:51.920710 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Feb 13 15:16:51.925286 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:16:51.926374 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:16:51.935607 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:16:51.938170 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:16:51.958134 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:16:51.958369 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:16:51.972399 ignition[1361]: INFO : Ignition 2.20.0 Feb 13 15:16:51.974272 ignition[1361]: INFO : Stage: umount Feb 13 15:16:51.976640 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:16:51.978734 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 13 15:16:51.981294 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 13 15:16:51.984940 ignition[1361]: INFO : PUT result: OK Feb 13 15:16:51.990426 ignition[1361]: INFO : umount: umount passed Feb 13 15:16:51.992288 ignition[1361]: INFO : Ignition finished successfully Feb 13 15:16:51.990544 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:16:51.998308 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:16:52.001871 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:16:52.006292 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:16:52.006634 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:16:52.010885 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:16:52.011054 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:16:52.014536 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:16:52.014625 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:16:52.016545 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:16:52.016621 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:16:52.018865 systemd[1]: Stopped target network.target - Network. Feb 13 15:16:52.025779 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:16:52.025900 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:16:52.026067 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:16:52.026298 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:16:52.031238 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:16:52.033563 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:16:52.035450 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:16:52.037617 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:16:52.037811 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:16:52.040053 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:16:52.040200 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:16:52.042305 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:16:52.042467 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:16:52.044597 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:16:52.044718 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Feb 13 15:16:52.051218 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:16:52.052303 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:16:52.062658 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:16:52.067136 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:16:52.071826 systemd-networkd[1117]: eth0: DHCPv6 lease lost Feb 13 15:16:52.077898 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:16:52.078165 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:16:52.081687 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:16:52.082710 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:16:52.120020 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:16:52.120115 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:16:52.134041 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:16:52.137238 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:16:52.137353 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:16:52.140047 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:16:52.140129 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:16:52.142689 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:16:52.142791 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:16:52.145055 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:16:52.145130 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:16:52.149683 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:16:52.184548 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:16:52.185138 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:16:52.202882 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:16:52.203037 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:16:52.207067 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:16:52.207160 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:16:52.215568 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:16:52.215716 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:16:52.221286 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:16:52.221414 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:16:52.223800 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:16:52.223917 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:16:52.245083 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:16:52.245266 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:16:52.245428 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:16:52.251912 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Feb 13 15:16:52.252031 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:16:52.269941 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:16:52.270059 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:16:52.291327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:16:52.291447 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:52.296489 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:16:52.296709 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:16:52.306847 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:16:52.307069 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:16:52.312119 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:16:52.333219 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:16:52.351215 systemd[1]: Switching root. Feb 13 15:16:52.404845 systemd-journald[250]: Journal stopped Feb 13 15:16:55.428318 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Feb 13 15:16:55.428499 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:16:55.428551 kernel: SELinux: policy capability open_perms=1 Feb 13 15:16:55.428585 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:16:55.428614 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:16:55.428649 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:16:55.428689 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:16:55.428725 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:16:55.428777 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:16:55.428821 kernel: audit: type=1403 audit(1739459813.527:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:16:55.428862 systemd[1]: Successfully loaded SELinux policy in 74.451ms. Feb 13 15:16:55.428909 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.468ms. Feb 13 15:16:55.428943 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:16:55.428976 systemd[1]: Detected virtualization amazon. Feb 13 15:16:55.429007 systemd[1]: Detected architecture arm64. Feb 13 15:16:55.429038 systemd[1]: Detected first boot. Feb 13 15:16:55.429067 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:16:55.429098 zram_generator::config[1404]: No configuration found. Feb 13 15:16:55.429140 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:16:55.429172 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:16:55.429204 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:16:55.429236 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:16:55.429268 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:16:55.429308 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
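Roughly three seconds separate "Journal stopped" at 15:16:52.404845 from the first message of the post-switch systemd at 15:16:55.428318, and systemd itself reports the SELinux policy load at 74.451ms. When reading a log like this, a tiny helper for computing deltas between the `HH:MM:SS.ffffff` timestamps is convenient; this is an illustrative sketch for log analysis, not part of the toolchain that produced the log.

```python
# Illustrative helper: elapsed seconds between two journal timestamps from
# this log (same-day timestamps assumed). Uses only the standard library.
from datetime import datetime

def delta_seconds(t_start: str, t_end: str) -> float:
    fmt = "%H:%M:%S.%f"
    return (datetime.strptime(t_end, fmt) - datetime.strptime(t_start, fmt)).total_seconds()

# Gap across the initrd -> real-root switch seen above:
print(delta_seconds("15:16:52.404845", "15:16:55.428318"))  # ~3.02 s
```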
Feb 13 15:16:55.429337 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:16:55.429370 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:16:55.429406 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:16:55.429438 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:16:55.429469 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:16:55.429498 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:16:55.429528 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:16:55.429558 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:16:55.429587 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:16:55.429616 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:16:55.429645 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:16:55.429679 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:16:55.429710 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:16:55.431765 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:16:55.431805 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:16:55.431838 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:16:55.431868 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:16:55.431900 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:16:55.431935 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:16:55.431967 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:16:55.431998 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:16:55.432030 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:16:55.432061 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:16:55.432093 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:16:55.432123 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:16:55.432155 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:16:55.432184 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:16:55.432212 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:16:55.432247 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:16:55.432287 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:16:55.432319 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:16:55.432439 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:16:55.432474 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:16:55.432508 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Feb 13 15:16:55.432540 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:16:55.432595 systemd[1]: Reached target machines.target - Containers. Feb 13 15:16:55.432630 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:16:55.432664 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:16:55.432701 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:16:55.432730 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:16:55.432803 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:16:55.432841 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:16:55.432873 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:16:55.432903 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:16:55.432934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:16:55.432986 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:16:55.433016 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:16:55.433048 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:16:55.433079 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:16:55.433108 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:16:55.433138 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:16:55.433168 kernel: loop: module loaded Feb 13 15:16:55.433196 kernel: fuse: init (API version 7.39) Feb 13 15:16:55.433225 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:16:55.433261 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:16:55.433291 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:16:55.433321 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:16:55.433353 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:16:55.433382 systemd[1]: Stopped verity-setup.service. Feb 13 15:16:55.433411 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:16:55.433440 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:16:55.433470 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:16:55.433499 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:16:55.433534 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:16:55.433565 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:16:55.433595 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:16:55.433623 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:16:55.433656 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:16:55.433685 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Feb 13 15:16:55.433714 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:16:55.435837 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:16:55.435889 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:16:55.435920 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:16:55.436031 kernel: ACPI: bus type drm_connector registered Feb 13 15:16:55.436162 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:16:55.436194 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:16:55.436231 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:16:55.436265 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:16:55.436296 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:16:55.436326 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:16:55.436446 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:16:55.436552 systemd-journald[1489]: Collecting audit messages is disabled. Feb 13 15:16:55.436621 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:16:55.436655 systemd-journald[1489]: Journal started Feb 13 15:16:55.436706 systemd-journald[1489]: Runtime Journal (/run/log/journal/ec2f97c48e0f4968525e20f3f9236ab7) is 8.0M, max 75.3M, 67.3M free. Feb 13 15:16:54.805437 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:16:55.439130 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:16:54.856647 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Feb 13 15:16:54.857435 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:16:55.460096 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:16:55.474516 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:16:55.485074 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:16:55.496583 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:16:55.500053 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:16:55.500135 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:16:55.507090 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:16:55.518673 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:16:55.529393 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:16:55.532990 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:16:55.541072 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:16:55.554110 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:16:55.556670 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:16:55.560397 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Feb 13 15:16:55.565080 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:16:55.568309 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:16:55.579214 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:16:55.594498 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:16:55.605509 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:16:55.610217 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:16:55.622978 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:16:55.673924 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:16:55.677408 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:16:55.692286 systemd-journald[1489]: Time spent on flushing to /var/log/journal/ec2f97c48e0f4968525e20f3f9236ab7 is 79.645ms for 909 entries. Feb 13 15:16:55.692286 systemd-journald[1489]: System Journal (/var/log/journal/ec2f97c48e0f4968525e20f3f9236ab7) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:16:55.802188 systemd-journald[1489]: Received client request to flush runtime journal. Feb 13 15:16:55.802308 kernel: loop0: detected capacity change from 0 to 194096 Feb 13 15:16:55.694128 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:16:55.762117 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:16:55.787002 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:16:55.799213 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:16:55.804795 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:16:55.828792 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:16:55.820154 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:16:55.822180 systemd-tmpfiles[1534]: ACLs are not supported, ignoring. Feb 13 15:16:55.822205 systemd-tmpfiles[1534]: ACLs are not supported, ignoring. Feb 13 15:16:55.833585 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:16:55.865299 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:16:55.879061 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:16:55.882843 kernel: loop1: detected capacity change from 0 to 113536 Feb 13 15:16:55.904629 udevadm[1547]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:16:55.943714 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:16:55.961000 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:16:56.012850 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Feb 13 15:16:56.012890 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Feb 13 15:16:56.022326 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 15:16:56.050789 kernel: loop2: detected capacity change from 0 to 116808 Feb 13 15:16:56.194782 kernel: loop3: detected capacity change from 0 to 53784 Feb 13 15:16:56.304866 kernel: loop4: detected capacity change from 0 to 194096 Feb 13 15:16:56.325945 kernel: loop5: detected capacity change from 0 to 113536 Feb 13 15:16:56.339165 kernel: loop6: detected capacity change from 0 to 116808 Feb 13 15:16:56.352855 kernel: loop7: detected capacity change from 0 to 53784 Feb 13 15:16:56.361892 (sd-merge)[1563]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Feb 13 15:16:56.363426 (sd-merge)[1563]: Merged extensions into '/usr'. Feb 13 15:16:56.373678 systemd[1]: Reloading requested from client PID 1533 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:16:56.373926 systemd[1]: Reloading... Feb 13 15:16:56.548460 zram_generator::config[1585]: No configuration found. Feb 13 15:16:56.911321 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:16:57.021731 systemd[1]: Reloading finished in 646 ms. Feb 13 15:16:57.069815 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:16:57.073810 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:16:57.089094 systemd[1]: Starting ensure-sysext.service... Feb 13 15:16:57.099307 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:16:57.110513 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:16:57.126294 systemd[1]: Reloading requested from client PID 1641 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:16:57.126330 systemd[1]: Reloading... Feb 13 15:16:57.181675 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:16:57.182352 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:16:57.188148 systemd-tmpfiles[1642]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:16:57.188881 systemd-tmpfiles[1642]: ACLs are not supported, ignoring. Feb 13 15:16:57.191115 systemd-tmpfiles[1642]: ACLs are not supported, ignoring. Feb 13 15:16:57.203899 systemd-tmpfiles[1642]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:16:57.204132 systemd-tmpfiles[1642]: Skipping /boot Feb 13 15:16:57.238129 systemd-tmpfiles[1642]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:16:57.238496 systemd-tmpfiles[1642]: Skipping /boot Feb 13 15:16:57.260060 systemd-udevd[1643]: Using default interface naming scheme 'v255'. Feb 13 15:16:57.342505 zram_generator::config[1673]: No configuration found. Feb 13 15:16:57.410018 ldconfig[1528]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:16:57.580682 (udev-worker)[1691]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:16:57.726625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
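The `(sd-merge)` entries above show systemd-sysext discovering the four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') and merging them into /usr, after which systemd reloads its unit set. Per systemd-sysext(8), images and symlinks are discovered in directories such as /etc/extensions, /run/extensions, and /var/lib/extensions; the kubernetes.raw symlink written by Ignition earlier lands in the first of these. The sketch below mimics only that discovery step, for illustration, not the actual overlay merge.

```python
# Rough illustration of the discovery step systemd-sysext performs before
# merging extension images into /usr and /opt (directories per
# systemd-sysext(8)); the overlay mount itself is not reproduced here.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_sysext_images():
    images = []
    for d in SEARCH_DIRS:
        p = Path(d)
        if p.is_dir():
            # Plain .raw images and symlinks (like kubernetes.raw above) both count.
            images.extend(sorted(str(f) for f in p.iterdir() if f.name.endswith(".raw")))
    return images

if __name__ == "__main__":
    print(list_sysext_images())
```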
Feb 13 15:16:57.873083 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:16:57.873469 systemd[1]: Reloading finished in 746 ms. Feb 13 15:16:57.876795 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1691) Feb 13 15:16:57.910990 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:16:57.915857 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:16:57.925801 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:16:57.997366 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:16:58.021858 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:16:58.038481 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:16:58.053249 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:16:58.086280 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:16:58.096318 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:16:58.106720 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:16:58.129313 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:16:58.138977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:16:58.150500 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:16:58.163621 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:16:58.166709 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:16:58.183854 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:16:58.230769 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:16:58.233308 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:16:58.243364 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 15:16:58.251603 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:16:58.252077 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:16:58.276163 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:16:58.288539 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:16:58.296426 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:16:58.314224 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:16:58.321337 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:16:58.323598 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:16:58.324054 systemd[1]: Reached target time-set.target - System Time Set. 
Feb 13 15:16:58.330970 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:16:58.334420 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:16:58.334847 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:16:58.338504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:16:58.338815 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:16:58.342167 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:16:58.344055 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:16:58.361871 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:16:58.376505 systemd[1]: Finished ensure-sysext.service. Feb 13 15:16:58.377920 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:16:58.378199 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:16:58.396873 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:16:58.416467 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:16:58.418365 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:16:58.418538 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:16:58.418599 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:16:58.421976 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:16:58.427055 augenrules[1884]: No rules Feb 13 15:16:58.437726 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:16:58.438540 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:16:58.440952 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:16:58.470899 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:16:58.473935 lvm[1882]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:16:58.493919 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:16:58.552840 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:16:58.557757 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:16:58.568351 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:16:58.588279 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:16:58.602255 lvm[1902]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:16:58.644521 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:16:58.654941 systemd-networkd[1832]: lo: Link UP Feb 13 15:16:58.654961 systemd-networkd[1832]: lo: Gained carrier Feb 13 15:16:58.657907 systemd-networkd[1832]: Enumeration completed Feb 13 15:16:58.658125 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Feb 13 15:16:58.661383 systemd-networkd[1832]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:58.661404 systemd-networkd[1832]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:16:58.665881 systemd-networkd[1832]: eth0: Link UP Feb 13 15:16:58.666236 systemd-networkd[1832]: eth0: Gained carrier Feb 13 15:16:58.666280 systemd-networkd[1832]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:16:58.676539 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:16:58.682453 systemd-networkd[1832]: eth0: DHCPv4 address 172.31.29.130/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 15:16:58.696287 systemd-resolved[1844]: Positive Trust Anchors: Feb 13 15:16:58.696876 systemd-resolved[1844]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:16:58.696946 systemd-resolved[1844]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:16:58.704722 systemd-resolved[1844]: Defaulting to hostname 'linux'. Feb 13 15:16:58.707936 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:16:58.710188 systemd[1]: Reached target network.target - Network. Feb 13 15:16:58.711901 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:16:58.714102 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:16:58.716215 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:16:58.719159 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:16:58.721813 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:16:58.724424 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:16:58.726827 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:16:58.729121 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:16:58.729172 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:16:58.730878 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:16:58.733284 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:16:58.738290 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:16:58.751060 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:16:58.754347 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:16:58.756838 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:16:58.758662 systemd[1]: Reached target basic.target - Basic System. 
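systemd-networkd reports eth0 coming up with a DHCPv4 lease of 172.31.29.130/20 and gateway 172.31.16.1, a typical VPC subnet layout. As a quick sanity check of those numbers, the standard library can derive the subnet bounds; purely illustrative:

```python
# Sanity-check the DHCPv4 lease reported above using only the stdlib.
import ipaddress

iface = ipaddress.ip_interface("172.31.29.130/20")
print(iface.network)                                          # 172.31.16.0/20
print(iface.network.broadcast_address)                        # 172.31.31.255
print(ipaddress.ip_address("172.31.16.1") in iface.network)   # True: gateway is in-subnet
```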
Feb 13 15:16:58.760462 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:16:58.760524 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:16:58.771814 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:16:58.778233 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:16:58.789255 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:16:58.795079 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:16:58.801126 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:16:58.803911 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:16:58.811154 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:16:58.817490 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 15:16:58.823972 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:16:58.852142 jq[1912]: false Feb 13 15:16:58.869819 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 15:16:58.875038 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:16:58.882061 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:16:58.898263 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:16:58.901164 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:16:58.902654 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:16:58.911381 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:16:58.918991 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:16:58.928694 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:16:58.929826 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:16:58.972134 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:16:58.973876 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 15:16:58.989476 extend-filesystems[1913]: Found loop4 Feb 13 15:16:58.989476 extend-filesystems[1913]: Found loop5 Feb 13 15:16:58.989476 extend-filesystems[1913]: Found loop6 Feb 13 15:16:58.989476 extend-filesystems[1913]: Found loop7 Feb 13 15:16:58.989476 extend-filesystems[1913]: Found nvme0n1 Feb 13 15:16:58.989476 extend-filesystems[1913]: Found nvme0n1p1 Feb 13 15:16:58.989476 extend-filesystems[1913]: Found nvme0n1p2 Feb 13 15:16:58.989476 extend-filesystems[1913]: Found nvme0n1p3 Feb 13 15:16:58.989476 extend-filesystems[1913]: Found usr Feb 13 15:16:58.989476 extend-filesystems[1913]: Found nvme0n1p4 Feb 13 15:16:58.989476 extend-filesystems[1913]: Found nvme0n1p6 Feb 13 15:16:58.989476 extend-filesystems[1913]: Found nvme0n1p7 Feb 13 15:16:58.989476 extend-filesystems[1913]: Found nvme0n1p9 Feb 13 15:16:58.989476 extend-filesystems[1913]: Checking size of /dev/nvme0n1p9 Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:47 UTC 2025 (1): Starting Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: ---------------------------------------------------- Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: corporation. Support and training for ntp-4 are Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: available at https://www.nwtime.org/support Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: ---------------------------------------------------- Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: proto: precision = 0.096 usec (-23) Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: basedate set to 2025-02-01 Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: gps base set to 2025-02-02 (week 2352) Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: Listen normally on 3 eth0 172.31.29.130:123 Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: Listen normally on 4 lo [::1]:123 Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: bind(21) AF_INET6 fe80::4c4:cff:feb4:2bcd%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: unable to create socket on eth0 (5) for fe80::4c4:cff:feb4:2bcd%2#123 Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: failed to init interface for address fe80::4c4:cff:feb4:2bcd%2 Feb 13 15:16:59.095977 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: Listening on routing socket on fd #21 for interface updates Feb 13 15:16:59.156534 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 15:16:58.995845 dbus-daemon[1911]: [system] SELinux support is enabled Feb 13 15:16:58.996554 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 15:16:59.157337 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:16:59.157337 ntpd[1915]: 13 Feb 15:16:59 ntpd[1915]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:16:59.157470 extend-filesystems[1913]: Resized partition /dev/nvme0n1p9 Feb 13 15:16:59.014895 ntpd[1915]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:47 UTC 2025 (1): Starting Feb 13 15:16:59.014215 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:16:59.181656 extend-filesystems[1952]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:16:59.014949 ntpd[1915]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:16:59.014266 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:16:59.195386 update_engine[1925]: I20250213 15:16:59.084163 1925 main.cc:92] Flatcar Update Engine starting Feb 13 15:16:59.195386 update_engine[1925]: I20250213 15:16:59.115010 1925 update_check_scheduler.cc:74] Next update check in 6m40s Feb 13 15:16:59.014970 ntpd[1915]: ---------------------------------------------------- Feb 13 15:16:59.034882 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:16:59.197274 tar[1930]: linux-arm64/helm Feb 13 15:16:59.014989 ntpd[1915]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:16:59.034940 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:16:59.015009 ntpd[1915]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:16:59.109074 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 15:16:59.210490 jq[1928]: true Feb 13 15:16:59.015026 ntpd[1915]: corporation. Support and training for ntp-4 are Feb 13 15:16:59.111889 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:16:59.273509 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 15:16:59.015044 ntpd[1915]: available at https://www.nwtime.org/support Feb 13 15:16:59.118224 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:16:59.015062 ntpd[1915]: ---------------------------------------------------- Feb 13 15:16:59.132695 systemd-logind[1924]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:16:59.017188 dbus-daemon[1911]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1832 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 15:16:59.138303 systemd-logind[1924]: Watching system buttons on /dev/input/event1 (Sleep Button) Feb 13 15:16:59.030711 ntpd[1915]: proto: precision = 0.096 usec (-23) Feb 13 15:16:59.138679 systemd-logind[1924]: New seat seat0. Feb 13 15:16:59.280266 jq[1957]: true Feb 13 15:16:59.048101 ntpd[1915]: basedate set to 2025-02-01 Feb 13 15:16:59.142138 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:16:59.048136 ntpd[1915]: gps base set to 2025-02-02 (week 2352) Feb 13 15:16:59.149714 systemd[1]: motdgen.service: Deactivated successfully. 
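Interleaved with the ntpd startup banner, the root filesystem on /dev/nvme0n1p9 is grown online: the kernel reports EXT4 resizing from 553472 to 1489915 4 KiB blocks, and resize2fs confirms the on-line resize just below. That is roughly 2.1 GiB growing to 5.7 GiB; the arithmetic, as a throwaway check:

```python
# Throwaway check of the EXT4 online-resize figures logged for nvme0n1p9.
BLOCK = 4096  # 4 KiB blocks, as reported by the kernel and resize2fs

for blocks in (553472, 1489915):
    print(blocks, "blocks =", round(blocks * BLOCK / 2**30, 2), "GiB")
# 553472 blocks = 2.11 GiB
# 1489915 blocks = 5.68 GiB
```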
Feb 13 15:16:59.292455 extend-filesystems[1952]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 15:16:59.292455 extend-filesystems[1952]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:16:59.292455 extend-filesystems[1952]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 15:16:59.074244 ntpd[1915]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:16:59.150089 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:16:59.323228 extend-filesystems[1913]: Resized filesystem in /dev/nvme0n1p9 Feb 13 15:16:59.074409 ntpd[1915]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:16:59.219439 (ntainerd)[1953]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:16:59.079888 ntpd[1915]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:16:59.297583 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:16:59.079972 ntpd[1915]: Listen normally on 3 eth0 172.31.29.130:123 Feb 13 15:16:59.298153 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:16:59.082272 ntpd[1915]: Listen normally on 4 lo [::1]:123 Feb 13 15:16:59.301717 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 15:16:59.082361 ntpd[1915]: bind(21) AF_INET6 fe80::4c4:cff:feb4:2bcd%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:16:59.333500 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:16:59.082401 ntpd[1915]: unable to create socket on eth0 (5) for fe80::4c4:cff:feb4:2bcd%2#123 Feb 13 15:16:59.082429 ntpd[1915]: failed to init interface for address fe80::4c4:cff:feb4:2bcd%2 Feb 13 15:16:59.082492 ntpd[1915]: Listening on routing socket on fd #21 for interface updates Feb 13 15:16:59.090203 dbus-daemon[1911]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 15:16:59.114640 ntpd[1915]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:16:59.114695 ntpd[1915]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:16:59.377198 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1692) Feb 13 15:16:59.515571 coreos-metadata[1910]: Feb 13 15:16:59.511 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:16:59.532376 coreos-metadata[1910]: Feb 13 15:16:59.532 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 15:16:59.533731 coreos-metadata[1910]: Feb 13 15:16:59.533 INFO Fetch successful Feb 13 15:16:59.534039 coreos-metadata[1910]: Feb 13 15:16:59.533 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 15:16:59.536549 coreos-metadata[1910]: Feb 13 15:16:59.536 INFO Fetch successful Feb 13 15:16:59.536549 coreos-metadata[1910]: Feb 13 15:16:59.536 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 15:16:59.539912 coreos-metadata[1910]: Feb 13 15:16:59.539 INFO Fetch successful Feb 13 15:16:59.539912 coreos-metadata[1910]: Feb 13 15:16:59.539 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 15:16:59.542892 coreos-metadata[1910]: Feb 13 15:16:59.542 INFO Fetch successful Feb 13 15:16:59.542892 coreos-metadata[1910]: Feb 13 15:16:59.542 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 15:16:59.546200 coreos-metadata[1910]: Feb 13 15:16:59.545 INFO Fetch failed with 404: 
resource not found Feb 13 15:16:59.546200 coreos-metadata[1910]: Feb 13 15:16:59.546 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 15:16:59.547065 coreos-metadata[1910]: Feb 13 15:16:59.546 INFO Fetch successful Feb 13 15:16:59.547065 coreos-metadata[1910]: Feb 13 15:16:59.546 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 15:16:59.558056 coreos-metadata[1910]: Feb 13 15:16:59.547 INFO Fetch successful Feb 13 15:16:59.558056 coreos-metadata[1910]: Feb 13 15:16:59.547 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 15:16:59.558056 coreos-metadata[1910]: Feb 13 15:16:59.548 INFO Fetch successful Feb 13 15:16:59.558056 coreos-metadata[1910]: Feb 13 15:16:59.548 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 15:16:59.558056 coreos-metadata[1910]: Feb 13 15:16:59.549 INFO Fetch successful Feb 13 15:16:59.558056 coreos-metadata[1910]: Feb 13 15:16:59.549 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 15:16:59.558056 coreos-metadata[1910]: Feb 13 15:16:59.550 INFO Fetch successful Feb 13 15:16:59.573944 bash[2027]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:16:59.612603 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:16:59.632642 systemd[1]: Starting sshkeys.service... Feb 13 15:16:59.698906 locksmithd[1950]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:16:59.738141 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:16:59.741485 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:16:59.764236 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:16:59.773381 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:16:59.786968 systemd-networkd[1832]: eth0: Gained IPv6LL Feb 13 15:16:59.798920 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:16:59.802683 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:16:59.811131 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 15:16:59.819284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:16:59.832302 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:16:59.872667 dbus-daemon[1911]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 15:16:59.875052 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 15:16:59.898029 dbus-daemon[1911]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1947 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 15:16:59.923705 systemd[1]: Starting polkit.service - Authorization Manager... 
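Both Ignition (earlier) and coreos-metadata (here) talk to the EC2 instance metadata service the same way: a PUT to http://169.254.169.254/latest/api/token obtains an IMDSv2 session token, which is then presented on the subsequent GETs, in this log against the 2021-01-03 metadata version. A minimal stdlib sketch of that flow, for illustration only; the header names follow the public IMDSv2 documentation and the snippet only works from inside an EC2 instance.

```python
# Minimal illustration of the IMDSv2 token + fetch flow seen in the Ignition
# and coreos-metadata entries above; not the code either agent actually runs.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_get(path: str) -> str:
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()
    req = urllib.request.Request(
        f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
    )
    return urllib.request.urlopen(req, timeout=2).read().decode()

# e.g. one of the endpoints coreos-metadata fetches above:
# print(imds_get("/2021-01-03/meta-data/local-ipv4"))
```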
Feb 13 15:16:59.979921 polkitd[2097]: Started polkitd version 121 Feb 13 15:17:00.001442 amazon-ssm-agent[2088]: Initializing new seelog logger Feb 13 15:17:00.001442 amazon-ssm-agent[2088]: New Seelog Logger Creation Complete Feb 13 15:17:00.001442 amazon-ssm-agent[2088]: 2025/02/13 15:16:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:17:00.001442 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:17:00.001442 amazon-ssm-agent[2088]: 2025/02/13 15:16:59 processing appconfig overrides Feb 13 15:17:00.001442 amazon-ssm-agent[2088]: 2025/02/13 15:16:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:17:00.001442 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:17:00.001442 amazon-ssm-agent[2088]: 2025/02/13 15:16:59 processing appconfig overrides Feb 13 15:17:00.001442 amazon-ssm-agent[2088]: 2025/02/13 15:16:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:17:00.001442 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:17:00.001442 amazon-ssm-agent[2088]: 2025/02/13 15:16:59 processing appconfig overrides Feb 13 15:17:00.001442 amazon-ssm-agent[2088]: 2025-02-13 15:16:59 INFO Proxy environment variables: Feb 13 15:17:00.028110 amazon-ssm-agent[2088]: 2025/02/13 15:17:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:17:00.028110 amazon-ssm-agent[2088]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:17:00.028110 amazon-ssm-agent[2088]: 2025/02/13 15:17:00 processing appconfig overrides Feb 13 15:17:00.033638 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:17:00.098706 polkitd[2097]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 15:17:00.098877 polkitd[2097]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 15:17:00.104796 amazon-ssm-agent[2088]: 2025-02-13 15:16:59 INFO https_proxy: Feb 13 15:17:00.121166 polkitd[2097]: Finished loading, compiling and executing 2 rules Feb 13 15:17:00.138190 dbus-daemon[1911]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 15:17:00.138864 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 15:17:00.144641 polkitd[2097]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 15:17:00.191964 containerd[1953]: time="2025-02-13T15:17:00.191825050Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:17:00.206220 amazon-ssm-agent[2088]: 2025-02-13 15:16:59 INFO http_proxy: Feb 13 15:17:00.227500 systemd-hostnamed[1947]: Hostname set to (transient) Feb 13 15:17:00.230000 systemd-resolved[1844]: System hostname changed to 'ip-172-31-29-130'. 
Feb 13 15:17:00.276158 coreos-metadata[2085]: Feb 13 15:17:00.275 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:17:00.286221 coreos-metadata[2085]: Feb 13 15:17:00.285 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 15:17:00.286911 coreos-metadata[2085]: Feb 13 15:17:00.286 INFO Fetch successful Feb 13 15:17:00.286911 coreos-metadata[2085]: Feb 13 15:17:00.286 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 15:17:00.289646 coreos-metadata[2085]: Feb 13 15:17:00.288 INFO Fetch successful Feb 13 15:17:00.292879 unknown[2085]: wrote ssh authorized keys file for user: core Feb 13 15:17:00.310872 amazon-ssm-agent[2088]: 2025-02-13 15:16:59 INFO no_proxy: Feb 13 15:17:00.378701 update-ssh-keys[2128]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:17:00.380423 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:17:00.391041 systemd[1]: Finished sshkeys.service. Feb 13 15:17:00.406779 amazon-ssm-agent[2088]: 2025-02-13 15:16:59 INFO Checking if agent identity type OnPrem can be assumed Feb 13 15:17:00.413761 containerd[1953]: time="2025-02-13T15:17:00.412631292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:00.428471 containerd[1953]: time="2025-02-13T15:17:00.428400672Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:17:00.431705 containerd[1953]: time="2025-02-13T15:17:00.430785804Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:17:00.431705 containerd[1953]: time="2025-02-13T15:17:00.430849656Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:17:00.431705 containerd[1953]: time="2025-02-13T15:17:00.431144508Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:17:00.431705 containerd[1953]: time="2025-02-13T15:17:00.431177484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:00.431705 containerd[1953]: time="2025-02-13T15:17:00.431301192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:17:00.431705 containerd[1953]: time="2025-02-13T15:17:00.431328876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:00.431705 containerd[1953]: time="2025-02-13T15:17:00.431622552Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:17:00.435785 containerd[1953]: time="2025-02-13T15:17:00.431652324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:17:00.436782 containerd[1953]: time="2025-02-13T15:17:00.435944004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:17:00.436782 containerd[1953]: time="2025-02-13T15:17:00.435988320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:00.436782 containerd[1953]: time="2025-02-13T15:17:00.436194180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:00.436782 containerd[1953]: time="2025-02-13T15:17:00.436639368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:17:00.437192 containerd[1953]: time="2025-02-13T15:17:00.437156724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:17:00.437296 containerd[1953]: time="2025-02-13T15:17:00.437268384Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:17:00.438962 containerd[1953]: time="2025-02-13T15:17:00.438926676Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:17:00.439167 containerd[1953]: time="2025-02-13T15:17:00.439140120Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:17:00.453227 containerd[1953]: time="2025-02-13T15:17:00.452464164Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:17:00.453227 containerd[1953]: time="2025-02-13T15:17:00.452578872Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:17:00.453227 containerd[1953]: time="2025-02-13T15:17:00.452617992Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:17:00.453227 containerd[1953]: time="2025-02-13T15:17:00.452655468Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:17:00.453227 containerd[1953]: time="2025-02-13T15:17:00.452693460Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:17:00.453227 containerd[1953]: time="2025-02-13T15:17:00.453006228Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:17:00.453549 containerd[1953]: time="2025-02-13T15:17:00.453467724Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:17:00.453728 containerd[1953]: time="2025-02-13T15:17:00.453688080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:17:00.453836 containerd[1953]: time="2025-02-13T15:17:00.453734424Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:17:00.453836 containerd[1953]: time="2025-02-13T15:17:00.453793728Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Feb 13 15:17:00.453836 containerd[1953]: time="2025-02-13T15:17:00.453827988Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:17:00.453956 containerd[1953]: time="2025-02-13T15:17:00.453867012Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:17:00.453956 containerd[1953]: time="2025-02-13T15:17:00.453897732Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:17:00.453956 containerd[1953]: time="2025-02-13T15:17:00.453927288Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:17:00.454081 containerd[1953]: time="2025-02-13T15:17:00.453959076Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:17:00.454081 containerd[1953]: time="2025-02-13T15:17:00.453990096Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:17:00.454081 containerd[1953]: time="2025-02-13T15:17:00.454019280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:17:00.454081 containerd[1953]: time="2025-02-13T15:17:00.454049292Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:17:00.454240 containerd[1953]: time="2025-02-13T15:17:00.454093104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:17:00.454240 containerd[1953]: time="2025-02-13T15:17:00.454124364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:17:00.454240 containerd[1953]: time="2025-02-13T15:17:00.454154808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:17:00.454240 containerd[1953]: time="2025-02-13T15:17:00.454185984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:17:00.454240 containerd[1953]: time="2025-02-13T15:17:00.454217592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:17:00.454443 containerd[1953]: time="2025-02-13T15:17:00.454249056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:17:00.454443 containerd[1953]: time="2025-02-13T15:17:00.454276968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:17:00.454443 containerd[1953]: time="2025-02-13T15:17:00.454307280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:17:00.454443 containerd[1953]: time="2025-02-13T15:17:00.454339692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:17:00.454443 containerd[1953]: time="2025-02-13T15:17:00.454375968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:17:00.454443 containerd[1953]: time="2025-02-13T15:17:00.454404648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Feb 13 15:17:00.454443 containerd[1953]: time="2025-02-13T15:17:00.454439064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:17:00.454756 containerd[1953]: time="2025-02-13T15:17:00.454467780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:17:00.454756 containerd[1953]: time="2025-02-13T15:17:00.454499148Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:17:00.454756 containerd[1953]: time="2025-02-13T15:17:00.454544112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:17:00.454756 containerd[1953]: time="2025-02-13T15:17:00.454586028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:17:00.454756 containerd[1953]: time="2025-02-13T15:17:00.454613688Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:17:00.461276 containerd[1953]: time="2025-02-13T15:17:00.458417784Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:17:00.461276 containerd[1953]: time="2025-02-13T15:17:00.458523432Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:17:00.461276 containerd[1953]: time="2025-02-13T15:17:00.458555976Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:17:00.461276 containerd[1953]: time="2025-02-13T15:17:00.458715420Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:17:00.461276 containerd[1953]: time="2025-02-13T15:17:00.458914836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:17:00.461276 containerd[1953]: time="2025-02-13T15:17:00.458957040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:17:00.461276 containerd[1953]: time="2025-02-13T15:17:00.459005952Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:17:00.461276 containerd[1953]: time="2025-02-13T15:17:00.459035256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:17:00.465553 containerd[1953]: time="2025-02-13T15:17:00.461954316Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:17:00.465553 containerd[1953]: time="2025-02-13T15:17:00.463691856Z" level=info msg="Connect containerd service" Feb 13 15:17:00.465553 containerd[1953]: time="2025-02-13T15:17:00.463799208Z" level=info msg="using legacy CRI server" Feb 13 15:17:00.465553 containerd[1953]: time="2025-02-13T15:17:00.463819428Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:17:00.465553 containerd[1953]: time="2025-02-13T15:17:00.464889936Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:17:00.470042 containerd[1953]: time="2025-02-13T15:17:00.469964880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:17:00.472219 
containerd[1953]: time="2025-02-13T15:17:00.471717648Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:17:00.472427 containerd[1953]: time="2025-02-13T15:17:00.472382112Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:17:00.474033 containerd[1953]: time="2025-02-13T15:17:00.472649736Z" level=info msg="Start subscribing containerd event" Feb 13 15:17:00.474033 containerd[1953]: time="2025-02-13T15:17:00.473126976Z" level=info msg="Start recovering state" Feb 13 15:17:00.474033 containerd[1953]: time="2025-02-13T15:17:00.473302896Z" level=info msg="Start event monitor" Feb 13 15:17:00.474033 containerd[1953]: time="2025-02-13T15:17:00.473327952Z" level=info msg="Start snapshots syncer" Feb 13 15:17:00.474033 containerd[1953]: time="2025-02-13T15:17:00.473457576Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:17:00.474033 containerd[1953]: time="2025-02-13T15:17:00.473483196Z" level=info msg="Start streaming server" Feb 13 15:17:00.489544 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:17:00.498823 sshd_keygen[1959]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:17:00.501255 containerd[1953]: time="2025-02-13T15:17:00.500344236Z" level=info msg="containerd successfully booted in 0.318103s" Feb 13 15:17:00.507335 amazon-ssm-agent[2088]: 2025-02-13 15:16:59 INFO Checking if agent identity type EC2 can be assumed Feb 13 15:17:00.609836 amazon-ssm-agent[2088]: 2025-02-13 15:17:00 INFO Agent will take identity from EC2 Feb 13 15:17:00.636589 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:17:00.656889 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:17:00.669264 systemd[1]: Started sshd@0-172.31.29.130:22-139.178.68.195:59074.service - OpenSSH per-connection server daemon (139.178.68.195:59074). Feb 13 15:17:00.705377 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:17:00.707492 amazon-ssm-agent[2088]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:17:00.707994 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:17:00.721426 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:17:00.793834 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:17:00.806866 amazon-ssm-agent[2088]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:17:00.810152 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:17:00.816971 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:17:00.819458 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:17:00.906373 amazon-ssm-agent[2088]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 15:17:00.987515 sshd[2145]: Accepted publickey for core from 139.178.68.195 port 59074 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:00.996054 sshd-session[2145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:01.006169 amazon-ssm-agent[2088]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 15:17:01.026987 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:17:01.044553 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
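[editor's note] At this point containerd is serving on /run/containerd/containerd.sock, but the CRI plugin warned earlier that no network config was found in /etc/cni/net.d; on a kubeadm-style node that directory is normally populated only after a CNI add-on is installed, so the warning is expected this early in boot. A small check sketch using only the paths taken from the log:

```python
# Quick health check for the two conditions the containerd log mentions:
# the CRI socket being present and a CNI config existing in /etc/cni/net.d.
import glob
import os

CONTAINERD_SOCK = "/run/containerd/containerd.sock"
CNI_CONF_DIR = "/etc/cni/net.d"

def check_node() -> None:
    if os.path.exists(CONTAINERD_SOCK):
        print(f"containerd socket present: {CONTAINERD_SOCK}")
    else:
        print("containerd socket missing")

    confs = glob.glob(os.path.join(CNI_CONF_DIR, "*.conf")) + \
            glob.glob(os.path.join(CNI_CONF_DIR, "*.conflist"))
    if confs:
        print("CNI config(s) found:", ", ".join(sorted(confs)))
    else:
        # Matches the "no network config found in /etc/cni/net.d" warning above.
        print("no CNI config yet; pod networking waits until one is installed")

if __name__ == "__main__":
    check_node()
```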
Feb 13 15:17:01.055528 systemd-logind[1924]: New session 1 of user core. Feb 13 15:17:01.089846 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:17:01.107774 amazon-ssm-agent[2088]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Feb 13 15:17:01.107965 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:17:01.132085 (systemd)[2156]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:17:01.207034 amazon-ssm-agent[2088]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] Starting Core Agent Feb 13 15:17:01.307771 amazon-ssm-agent[2088]: 2025-02-13 15:17:00 INFO [amazon-ssm-agent] registrar detected. Attempting registration Feb 13 15:17:01.414805 amazon-ssm-agent[2088]: 2025-02-13 15:17:00 INFO [Registrar] Starting registrar module Feb 13 15:17:01.417447 systemd[2156]: Queued start job for default target default.target. Feb 13 15:17:01.425497 systemd[2156]: Created slice app.slice - User Application Slice. Feb 13 15:17:01.425561 systemd[2156]: Reached target paths.target - Paths. Feb 13 15:17:01.425598 systemd[2156]: Reached target timers.target - Timers. Feb 13 15:17:01.434996 systemd[2156]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:17:01.462833 tar[1930]: linux-arm64/LICENSE Feb 13 15:17:01.465839 tar[1930]: linux-arm64/README.md Feb 13 15:17:01.480637 systemd[2156]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:17:01.482284 systemd[2156]: Reached target sockets.target - Sockets. Feb 13 15:17:01.482320 systemd[2156]: Reached target basic.target - Basic System. Feb 13 15:17:01.482534 systemd[2156]: Reached target default.target - Main User Target. Feb 13 15:17:01.482601 systemd[2156]: Startup finished in 331ms. Feb 13 15:17:01.484160 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:17:01.497034 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:17:01.510603 amazon-ssm-agent[2088]: 2025-02-13 15:17:00 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Feb 13 15:17:01.513124 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:17:01.680277 systemd[1]: Started sshd@1-172.31.29.130:22-139.178.68.195:58350.service - OpenSSH per-connection server daemon (139.178.68.195:58350). Feb 13 15:17:01.898109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:01.901817 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:17:01.906901 systemd[1]: Startup finished in 1.292s (kernel) + 9.688s (initrd) + 8.451s (userspace) = 19.432s. Feb 13 15:17:01.918373 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:01.926809 sshd[2171]: Accepted publickey for core from 139.178.68.195 port 58350 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:01.929476 sshd-session[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:01.958634 systemd-logind[1924]: New session 2 of user core. Feb 13 15:17:01.969041 systemd[1]: Started session-2.scope - Session 2 of User core. 
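[editor's note] One small reading note on the timing line above: the three rounded phases (1.292 s + 9.688 s + 8.451 s) add up to 19.431 s, one millisecond short of the printed 19.432 s total. This is almost certainly just independent millisecond rounding of the underlying microsecond timestamps; the toy values below are made up purely to show the effect:

```python
# The rounded phases sum to 19.431 s, not the printed 19.432 s total.
print(round(1.292 + 9.688 + 8.451, 3))          # 19.431

# Hypothetical unrounded values showing how per-phase rounding loses 1 ms:
kernel, initrd, userspace = 1.2923, 9.6884, 8.4514
print(round(kernel, 3), round(initrd, 3), round(userspace, 3))  # 1.292 9.688 8.451
print(round(kernel + initrd + userspace, 3))                    # 19.432
```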
Feb 13 15:17:02.015716 ntpd[1915]: Listen normally on 6 eth0 [fe80::4c4:cff:feb4:2bcd%2]:123 Feb 13 15:17:02.016807 ntpd[1915]: 13 Feb 15:17:02 ntpd[1915]: Listen normally on 6 eth0 [fe80::4c4:cff:feb4:2bcd%2]:123 Feb 13 15:17:02.104875 sshd[2183]: Connection closed by 139.178.68.195 port 58350 Feb 13 15:17:02.105508 sshd-session[2171]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:02.114905 systemd[1]: sshd@1-172.31.29.130:22-139.178.68.195:58350.service: Deactivated successfully. Feb 13 15:17:02.117212 amazon-ssm-agent[2088]: 2025-02-13 15:17:02 INFO [EC2Identity] EC2 registration was successful. Feb 13 15:17:02.121384 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:17:02.125482 systemd-logind[1924]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:17:02.145435 systemd[1]: Started sshd@2-172.31.29.130:22-139.178.68.195:58366.service - OpenSSH per-connection server daemon (139.178.68.195:58366). Feb 13 15:17:02.148658 systemd-logind[1924]: Removed session 2. Feb 13 15:17:02.160712 amazon-ssm-agent[2088]: 2025-02-13 15:17:02 INFO [CredentialRefresher] credentialRefresher has started Feb 13 15:17:02.160712 amazon-ssm-agent[2088]: 2025-02-13 15:17:02 INFO [CredentialRefresher] Starting credentials refresher loop Feb 13 15:17:02.160712 amazon-ssm-agent[2088]: 2025-02-13 15:17:02 INFO EC2RoleProvider Successfully connected with instance profile role credentials Feb 13 15:17:02.219421 amazon-ssm-agent[2088]: 2025-02-13 15:17:02 INFO [CredentialRefresher] Next credential rotation will be in 30.433325890633334 minutes Feb 13 15:17:02.328735 sshd[2192]: Accepted publickey for core from 139.178.68.195 port 58366 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:02.331358 sshd-session[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:02.341267 systemd-logind[1924]: New session 3 of user core. Feb 13 15:17:02.348147 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:17:02.469940 sshd[2194]: Connection closed by 139.178.68.195 port 58366 Feb 13 15:17:02.471551 sshd-session[2192]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:02.478599 systemd[1]: sshd@2-172.31.29.130:22-139.178.68.195:58366.service: Deactivated successfully. Feb 13 15:17:02.485097 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:17:02.488650 systemd-logind[1924]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:17:02.504765 systemd-logind[1924]: Removed session 3. Feb 13 15:17:02.510920 systemd[1]: Started sshd@3-172.31.29.130:22-139.178.68.195:58380.service - OpenSSH per-connection server daemon (139.178.68.195:58380). Feb 13 15:17:02.707329 sshd[2200]: Accepted publickey for core from 139.178.68.195 port 58380 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:02.711615 sshd-session[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:02.722571 systemd-logind[1924]: New session 4 of user core. Feb 13 15:17:02.730059 systemd[1]: Started session-4.scope - Session 4 of User core. 
Feb 13 15:17:02.757953 kubelet[2178]: E0213 15:17:02.757861 2178 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:02.763674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:02.764370 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:02.765482 systemd[1]: kubelet.service: Consumed 1.343s CPU time. Feb 13 15:17:02.865383 sshd[2203]: Connection closed by 139.178.68.195 port 58380 Feb 13 15:17:02.866322 sshd-session[2200]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:02.873464 systemd[1]: sshd@3-172.31.29.130:22-139.178.68.195:58380.service: Deactivated successfully. Feb 13 15:17:02.877114 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:17:02.880349 systemd-logind[1924]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:17:02.883191 systemd-logind[1924]: Removed session 4. Feb 13 15:17:02.906368 systemd[1]: Started sshd@4-172.31.29.130:22-139.178.68.195:58382.service - OpenSSH per-connection server daemon (139.178.68.195:58382). Feb 13 15:17:03.113773 sshd[2209]: Accepted publickey for core from 139.178.68.195 port 58382 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:17:03.116505 sshd-session[2209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:17:03.125370 systemd-logind[1924]: New session 5 of user core. Feb 13 15:17:03.136070 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:17:03.188385 amazon-ssm-agent[2088]: 2025-02-13 15:17:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Feb 13 15:17:03.289494 amazon-ssm-agent[2088]: 2025-02-13 15:17:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2213) started Feb 13 15:17:03.300009 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:17:03.300716 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:17:03.390037 amazon-ssm-agent[2088]: 2025-02-13 15:17:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Feb 13 15:17:04.009229 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:17:04.012546 (dockerd)[2241]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:17:04.531899 dockerd[2241]: time="2025-02-13T15:17:04.530336248Z" level=info msg="Starting up" Feb 13 15:17:04.844491 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport849937244-merged.mount: Deactivated successfully. Feb 13 15:17:05.418337 dockerd[2241]: time="2025-02-13T15:17:05.417804544Z" level=info msg="Loading containers: start." Feb 13 15:17:05.706798 kernel: Initializing XFRM netlink socket Feb 13 15:17:05.748567 (udev-worker)[2266]: Network interface NamePolicy= disabled on kernel command line. 
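[editor's note] dockerd is starting up here and, a few entries below, reports "API listen on /run/docker.sock". Once that message appears, the Engine API can be probed over the unix socket; /_ping is a standard Docker API route, everything else in this sketch is illustrative:

```python
# Minimal probe of the Docker Engine API over its unix socket.
# /_ping is expected to answer "OK" once the daemon has finished initializing.
import http.client
import socket

DOCKER_SOCK = "/run/docker.sock"

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix socket instead of TCP."""

    def __init__(self, path: str):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self) -> None:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

if __name__ == "__main__":
    conn = UnixHTTPConnection(DOCKER_SOCK)
    conn.request("GET", "/_ping")
    resp = conn.getresponse()
    print(resp.status, resp.read().decode())  # expect: 200 OK
```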
Feb 13 15:17:05.855024 systemd-networkd[1832]: docker0: Link UP Feb 13 15:17:05.913087 dockerd[2241]: time="2025-02-13T15:17:05.913014283Z" level=info msg="Loading containers: done." Feb 13 15:17:05.937656 dockerd[2241]: time="2025-02-13T15:17:05.937579879Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:17:05.937900 dockerd[2241]: time="2025-02-13T15:17:05.937725295Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:17:05.937962 dockerd[2241]: time="2025-02-13T15:17:05.937938043Z" level=info msg="Daemon has completed initialization" Feb 13 15:17:05.999834 dockerd[2241]: time="2025-02-13T15:17:05.999348679Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:17:06.000569 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:17:06.477611 systemd-resolved[1844]: Clock change detected. Flushing caches. Feb 13 15:17:07.690364 containerd[1953]: time="2025-02-13T15:17:07.690246610Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 15:17:08.341910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1365920354.mount: Deactivated successfully. Feb 13 15:17:10.557183 containerd[1953]: time="2025-02-13T15:17:10.557116320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:10.559215 containerd[1953]: time="2025-02-13T15:17:10.559148412Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865207" Feb 13 15:17:10.560447 containerd[1953]: time="2025-02-13T15:17:10.560348676Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:10.566015 containerd[1953]: time="2025-02-13T15:17:10.565935972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:10.569160 containerd[1953]: time="2025-02-13T15:17:10.568275456Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 2.877939758s" Feb 13 15:17:10.569160 containerd[1953]: time="2025-02-13T15:17:10.568337856Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 15:17:10.612298 containerd[1953]: time="2025-02-13T15:17:10.611854464Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 15:17:12.948136 containerd[1953]: time="2025-02-13T15:17:12.947681356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:12.949953 containerd[1953]: time="2025-02-13T15:17:12.949857580Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898594" Feb 13 15:17:12.951231 containerd[1953]: time="2025-02-13T15:17:12.951178240Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:12.956885 containerd[1953]: time="2025-02-13T15:17:12.956791012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:12.960817 containerd[1953]: time="2025-02-13T15:17:12.959226592Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 2.34726612s" Feb 13 15:17:12.960817 containerd[1953]: time="2025-02-13T15:17:12.959288440Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 15:17:13.002556 containerd[1953]: time="2025-02-13T15:17:13.002503332Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 15:17:13.471718 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:17:13.480505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:13.887389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:13.896603 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:13.995381 kubelet[2515]: E0213 15:17:13.995249 2515 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:14.004430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:14.004744 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
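[editor's note] The kubelet exits here (and again after the later scheduled restarts) because /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-provisioned node that file is written by `kubeadm init`/`kubeadm join`, so these early failures are expected until provisioning runs. Purely as an illustration of what such a KubeletConfiguration looks like, a sketch that writes a minimal one; the specific fields below are assumptions, not what kubeadm generated on this host:

```python
# Illustrative only: write a minimal KubeletConfiguration so the unit can load
# a config file. On a real kubeadm node this file comes from kubeadm init/join.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

MINIMAL_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd cgroup driver, matching the SystemdCgroup:true runc option in the
# containerd CRI config logged above.
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
"""

def write_minimal_config() -> None:
    KUBELET_CONFIG.parent.mkdir(parents=True, exist_ok=True)
    KUBELET_CONFIG.write_text(MINIMAL_CONFIG)
    print(f"wrote {KUBELET_CONFIG}")

if __name__ == "__main__":
    write_minimal_config()
```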
Feb 13 15:17:14.397991 containerd[1953]: time="2025-02-13T15:17:14.397922811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:14.400863 containerd[1953]: time="2025-02-13T15:17:14.400775823Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164934" Feb 13 15:17:14.402808 containerd[1953]: time="2025-02-13T15:17:14.402736779Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:14.409107 containerd[1953]: time="2025-02-13T15:17:14.408472395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:14.411000 containerd[1953]: time="2025-02-13T15:17:14.410813523Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.408249999s" Feb 13 15:17:14.411000 containerd[1953]: time="2025-02-13T15:17:14.410867211Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 15:17:14.450488 containerd[1953]: time="2025-02-13T15:17:14.450434631Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 15:17:15.747388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1018438295.mount: Deactivated successfully. 
Feb 13 15:17:16.212830 containerd[1953]: time="2025-02-13T15:17:16.212641516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:16.214537 containerd[1953]: time="2025-02-13T15:17:16.214434640Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370" Feb 13 15:17:16.216143 containerd[1953]: time="2025-02-13T15:17:16.216040840Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:16.221467 containerd[1953]: time="2025-02-13T15:17:16.221400664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:16.224396 containerd[1953]: time="2025-02-13T15:17:16.224303536Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.773811701s" Feb 13 15:17:16.224396 containerd[1953]: time="2025-02-13T15:17:16.224387800Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 15:17:16.267702 containerd[1953]: time="2025-02-13T15:17:16.267646864Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:17:16.843949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount452164867.mount: Deactivated successfully. 
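[editor's note] The pull records above report both bytes read and wall time, so an approximate pull throughput can be read straight off them; for kube-proxy, 25,663,370 bytes in 1.773811701 s works out to roughly 14.5 MB/s. This is only a rough figure, since the wall time also covers registry round-trips and layer unpacking:

```python
# Rough pull throughput from the containerd log fields above.
bytes_read = 25_663_370        # "bytes read" for registry.k8s.io/kube-proxy:v1.30.10
duration_s = 1.773_811_701     # "in 1.773811701s"

mb_per_s = bytes_read / duration_s / 1e6
print(f"{mb_per_s:.1f} MB/s")  # ~14.5 MB/s
```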
Feb 13 15:17:18.011183 containerd[1953]: time="2025-02-13T15:17:18.010661273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:18.012884 containerd[1953]: time="2025-02-13T15:17:18.012814265Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Feb 13 15:17:18.014849 containerd[1953]: time="2025-02-13T15:17:18.013932341Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:18.021776 containerd[1953]: time="2025-02-13T15:17:18.021722693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:18.024111 containerd[1953]: time="2025-02-13T15:17:18.024007325Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.756297917s" Feb 13 15:17:18.024111 containerd[1953]: time="2025-02-13T15:17:18.024113441Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:17:18.066026 containerd[1953]: time="2025-02-13T15:17:18.065901329Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:17:18.544020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3412588650.mount: Deactivated successfully. 
Feb 13 15:17:18.555209 containerd[1953]: time="2025-02-13T15:17:18.553893451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:18.556747 containerd[1953]: time="2025-02-13T15:17:18.556619419Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Feb 13 15:17:18.558595 containerd[1953]: time="2025-02-13T15:17:18.558516128Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:18.564717 containerd[1953]: time="2025-02-13T15:17:18.564592736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:18.566438 containerd[1953]: time="2025-02-13T15:17:18.566248952Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 499.982367ms" Feb 13 15:17:18.566438 containerd[1953]: time="2025-02-13T15:17:18.566301608Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:17:18.607541 containerd[1953]: time="2025-02-13T15:17:18.607479092Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:17:19.175984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3879552252.mount: Deactivated successfully. Feb 13 15:17:21.861576 containerd[1953]: time="2025-02-13T15:17:21.861467244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:21.863844 containerd[1953]: time="2025-02-13T15:17:21.863144316Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Feb 13 15:17:21.865891 containerd[1953]: time="2025-02-13T15:17:21.865800096Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:21.874354 containerd[1953]: time="2025-02-13T15:17:21.874277184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:21.877579 containerd[1953]: time="2025-02-13T15:17:21.877261668Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.269712784s" Feb 13 15:17:21.877579 containerd[1953]: time="2025-02-13T15:17:21.877347384Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 15:17:24.221887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 15:17:24.231559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:24.770171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:24.784936 (kubelet)[2708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:17:24.879646 kubelet[2708]: E0213 15:17:24.879585 2708 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:17:24.884809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:17:24.885497 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:17:29.638774 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:29.653911 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:29.691877 systemd[1]: Reloading requested from client PID 2722 ('systemctl') (unit session-5.scope)... Feb 13 15:17:29.691913 systemd[1]: Reloading... Feb 13 15:17:29.899127 zram_generator::config[2766]: No configuration found. Feb 13 15:17:30.160581 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:17:30.338288 systemd[1]: Reloading finished in 645 ms. Feb 13 15:17:30.437522 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:17:30.437805 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:17:30.438538 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:30.445968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:30.723399 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 15:17:30.937472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:30.950955 (kubelet)[2830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:17:31.042350 kubelet[2830]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:17:31.042350 kubelet[2830]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:17:31.042350 kubelet[2830]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:17:31.042350 kubelet[2830]: I0213 15:17:31.041096 2830 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:17:32.061510 kubelet[2830]: I0213 15:17:32.061451 2830 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:17:32.061510 kubelet[2830]: I0213 15:17:32.061503 2830 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:17:32.063020 kubelet[2830]: I0213 15:17:32.061969 2830 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:17:32.098624 kubelet[2830]: E0213 15:17:32.098550 2830 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:32.099326 kubelet[2830]: I0213 15:17:32.099036 2830 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:17:32.115217 kubelet[2830]: I0213 15:17:32.115161 2830 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:17:32.115890 kubelet[2830]: I0213 15:17:32.115787 2830 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:17:32.116392 kubelet[2830]: I0213 15:17:32.115891 2830 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:17:32.116694 kubelet[2830]: I0213 15:17:32.116458 2830 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:17:32.116694 kubelet[2830]: I0213 15:17:32.116494 2830 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:17:32.117022 kubelet[2830]: I0213 15:17:32.116963 2830 state_mem.go:36] "Initialized new in-memory state 
store" Feb 13 15:17:32.120535 kubelet[2830]: I0213 15:17:32.118628 2830 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:17:32.120535 kubelet[2830]: I0213 15:17:32.118715 2830 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:17:32.120535 kubelet[2830]: I0213 15:17:32.118880 2830 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:17:32.120535 kubelet[2830]: I0213 15:17:32.118972 2830 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:17:32.120932 kubelet[2830]: W0213 15:17:32.120822 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:32.121033 kubelet[2830]: E0213 15:17:32.120960 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:32.121264 kubelet[2830]: W0213 15:17:32.121178 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-130&limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:32.121383 kubelet[2830]: E0213 15:17:32.121287 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-130&limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:32.122480 kubelet[2830]: I0213 15:17:32.122382 2830 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:17:32.123196 kubelet[2830]: I0213 15:17:32.123133 2830 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:17:32.123540 kubelet[2830]: W0213 15:17:32.123510 2830 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 15:17:32.125364 kubelet[2830]: I0213 15:17:32.125301 2830 server.go:1264] "Started kubelet" Feb 13 15:17:32.133797 kubelet[2830]: I0213 15:17:32.133597 2830 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:17:32.137526 kubelet[2830]: E0213 15:17:32.136945 2830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.130:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.130:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-130.1823cd84fafa41a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-130,UID:ip-172-31-29-130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-130,},FirstTimestamp:2025-02-13 15:17:32.125225383 +0000 UTC m=+1.164051223,LastTimestamp:2025-02-13 15:17:32.125225383 +0000 UTC m=+1.164051223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-130,}" Feb 13 15:17:32.142060 kubelet[2830]: E0213 15:17:32.141885 2830 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:17:32.146024 kubelet[2830]: I0213 15:17:32.145921 2830 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:17:32.149862 kubelet[2830]: I0213 15:17:32.149805 2830 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:17:32.153190 kubelet[2830]: I0213 15:17:32.152061 2830 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:17:32.154193 kubelet[2830]: I0213 15:17:32.154007 2830 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:17:32.154584 kubelet[2830]: I0213 15:17:32.154515 2830 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:17:32.155660 kubelet[2830]: E0213 15:17:32.155583 2830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-130?timeout=10s\": dial tcp 172.31.29.130:6443: connect: connection refused" interval="200ms" Feb 13 15:17:32.155825 kubelet[2830]: I0213 15:17:32.155696 2830 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:17:32.156834 kubelet[2830]: W0213 15:17:32.156419 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:32.156834 kubelet[2830]: E0213 15:17:32.156563 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:32.157370 kubelet[2830]: I0213 15:17:32.156955 2830 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:17:32.157887 kubelet[2830]: I0213 15:17:32.157824 2830 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:17:32.163376 kubelet[2830]: I0213 15:17:32.163329 2830 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:17:32.165596 kubelet[2830]: I0213 15:17:32.163352 2830 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:17:32.192380 kubelet[2830]: I0213 15:17:32.192290 2830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:17:32.206377 kubelet[2830]: I0213 15:17:32.206299 2830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:17:32.206911 kubelet[2830]: I0213 15:17:32.206739 2830 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:17:32.206911 kubelet[2830]: I0213 15:17:32.206793 2830 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:17:32.207460 kubelet[2830]: E0213 15:17:32.207319 2830 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:17:32.210648 kubelet[2830]: W0213 15:17:32.210259 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:32.211439 kubelet[2830]: E0213 15:17:32.211199 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:32.221708 kubelet[2830]: I0213 15:17:32.221654 2830 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:17:32.221708 kubelet[2830]: I0213 15:17:32.221695 2830 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:17:32.221955 kubelet[2830]: I0213 15:17:32.221736 2830 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:17:32.236363 kubelet[2830]: I0213 15:17:32.236322 2830 policy_none.go:49] "None policy: Start" Feb 13 15:17:32.237738 kubelet[2830]: I0213 15:17:32.237700 2830 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:17:32.237890 kubelet[2830]: I0213 15:17:32.237750 2830 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:17:32.250013 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:17:32.254791 kubelet[2830]: I0213 15:17:32.254737 2830 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-130" Feb 13 15:17:32.255535 kubelet[2830]: E0213 15:17:32.255473 2830 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.130:6443/api/v1/nodes\": dial tcp 172.31.29.130:6443: connect: connection refused" node="ip-172-31-29-130" Feb 13 15:17:32.269247 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:17:32.276611 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:17:32.291736 kubelet[2830]: I0213 15:17:32.291393 2830 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:17:32.291909 kubelet[2830]: I0213 15:17:32.291743 2830 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:17:32.291986 kubelet[2830]: I0213 15:17:32.291912 2830 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:17:32.296455 kubelet[2830]: E0213 15:17:32.296401 2830 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-130\" not found" Feb 13 15:17:32.308330 kubelet[2830]: I0213 15:17:32.307878 2830 topology_manager.go:215] "Topology Admit Handler" podUID="8c4f404a718bd6da959d0fd4dc62eb34" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-130" Feb 13 15:17:32.310707 kubelet[2830]: I0213 15:17:32.310659 2830 topology_manager.go:215] "Topology Admit Handler" podUID="6bd35685d80fd357b9847308bc78e5a4" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-130" Feb 13 15:17:32.314359 kubelet[2830]: I0213 15:17:32.313359 2830 topology_manager.go:215] "Topology Admit Handler" podUID="8c4717e32cf1c9a05186994564410d4e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-130" Feb 13 15:17:32.329900 systemd[1]: Created slice kubepods-burstable-pod8c4f404a718bd6da959d0fd4dc62eb34.slice - libcontainer container kubepods-burstable-pod8c4f404a718bd6da959d0fd4dc62eb34.slice. Feb 13 15:17:32.355441 systemd[1]: Created slice kubepods-burstable-pod6bd35685d80fd357b9847308bc78e5a4.slice - libcontainer container kubepods-burstable-pod6bd35685d80fd357b9847308bc78e5a4.slice. Feb 13 15:17:32.357583 kubelet[2830]: E0213 15:17:32.357465 2830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-130?timeout=10s\": dial tcp 172.31.29.130:6443: connect: connection refused" interval="400ms" Feb 13 15:17:32.365514 kubelet[2830]: I0213 15:17:32.364904 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c4f404a718bd6da959d0fd4dc62eb34-ca-certs\") pod \"kube-apiserver-ip-172-31-29-130\" (UID: \"8c4f404a718bd6da959d0fd4dc62eb34\") " pod="kube-system/kube-apiserver-ip-172-31-29-130" Feb 13 15:17:32.365514 kubelet[2830]: I0213 15:17:32.364963 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c4f404a718bd6da959d0fd4dc62eb34-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-130\" (UID: \"8c4f404a718bd6da959d0fd4dc62eb34\") " pod="kube-system/kube-apiserver-ip-172-31-29-130" Feb 13 15:17:32.365514 kubelet[2830]: I0213 15:17:32.365033 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bd35685d80fd357b9847308bc78e5a4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-130\" (UID: \"6bd35685d80fd357b9847308bc78e5a4\") " pod="kube-system/kube-controller-manager-ip-172-31-29-130" Feb 13 15:17:32.365514 kubelet[2830]: I0213 15:17:32.365100 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/6bd35685d80fd357b9847308bc78e5a4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-130\" (UID: \"6bd35685d80fd357b9847308bc78e5a4\") " pod="kube-system/kube-controller-manager-ip-172-31-29-130" Feb 13 15:17:32.365514 kubelet[2830]: I0213 15:17:32.365146 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c4f404a718bd6da959d0fd4dc62eb34-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-130\" (UID: \"8c4f404a718bd6da959d0fd4dc62eb34\") " pod="kube-system/kube-apiserver-ip-172-31-29-130" Feb 13 15:17:32.365195 systemd[1]: Created slice kubepods-burstable-pod8c4717e32cf1c9a05186994564410d4e.slice - libcontainer container kubepods-burstable-pod8c4717e32cf1c9a05186994564410d4e.slice. Feb 13 15:17:32.365992 kubelet[2830]: I0213 15:17:32.365201 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bd35685d80fd357b9847308bc78e5a4-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-130\" (UID: \"6bd35685d80fd357b9847308bc78e5a4\") " pod="kube-system/kube-controller-manager-ip-172-31-29-130" Feb 13 15:17:32.365992 kubelet[2830]: I0213 15:17:32.365240 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6bd35685d80fd357b9847308bc78e5a4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-130\" (UID: \"6bd35685d80fd357b9847308bc78e5a4\") " pod="kube-system/kube-controller-manager-ip-172-31-29-130" Feb 13 15:17:32.365992 kubelet[2830]: I0213 15:17:32.365279 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6bd35685d80fd357b9847308bc78e5a4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-130\" (UID: \"6bd35685d80fd357b9847308bc78e5a4\") " pod="kube-system/kube-controller-manager-ip-172-31-29-130" Feb 13 15:17:32.365992 kubelet[2830]: I0213 15:17:32.365317 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c4717e32cf1c9a05186994564410d4e-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-130\" (UID: \"8c4717e32cf1c9a05186994564410d4e\") " pod="kube-system/kube-scheduler-ip-172-31-29-130" Feb 13 15:17:32.458333 kubelet[2830]: I0213 15:17:32.458290 2830 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-130" Feb 13 15:17:32.458857 kubelet[2830]: E0213 15:17:32.458793 2830 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.130:6443/api/v1/nodes\": dial tcp 172.31.29.130:6443: connect: connection refused" node="ip-172-31-29-130" Feb 13 15:17:32.649951 containerd[1953]: time="2025-02-13T15:17:32.649746886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-130,Uid:8c4f404a718bd6da959d0fd4dc62eb34,Namespace:kube-system,Attempt:0,}" Feb 13 15:17:32.663242 containerd[1953]: time="2025-02-13T15:17:32.662323762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-130,Uid:6bd35685d80fd357b9847308bc78e5a4,Namespace:kube-system,Attempt:0,}" Feb 13 15:17:32.670897 containerd[1953]: time="2025-02-13T15:17:32.670770262Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-130,Uid:8c4717e32cf1c9a05186994564410d4e,Namespace:kube-system,Attempt:0,}" Feb 13 15:17:32.759231 kubelet[2830]: E0213 15:17:32.759136 2830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-130?timeout=10s\": dial tcp 172.31.29.130:6443: connect: connection refused" interval="800ms" Feb 13 15:17:32.862145 kubelet[2830]: I0213 15:17:32.862094 2830 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-130" Feb 13 15:17:32.862693 kubelet[2830]: E0213 15:17:32.862647 2830 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.130:6443/api/v1/nodes\": dial tcp 172.31.29.130:6443: connect: connection refused" node="ip-172-31-29-130" Feb 13 15:17:32.992254 kubelet[2830]: W0213 15:17:32.992123 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-130&limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:32.992254 kubelet[2830]: E0213 15:17:32.992217 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-130&limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:33.061597 kubelet[2830]: W0213 15:17:33.061442 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:33.061597 kubelet[2830]: E0213 15:17:33.061536 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:33.176988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1451363196.mount: Deactivated successfully. 
Feb 13 15:17:33.189035 containerd[1953]: time="2025-02-13T15:17:33.188228804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:17:33.191851 containerd[1953]: time="2025-02-13T15:17:33.191768384Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:17:33.194917 containerd[1953]: time="2025-02-13T15:17:33.194802668Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Feb 13 15:17:33.196607 containerd[1953]: time="2025-02-13T15:17:33.196507904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:17:33.200319 containerd[1953]: time="2025-02-13T15:17:33.200230172Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:17:33.203182 containerd[1953]: time="2025-02-13T15:17:33.203111312Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:17:33.204633 containerd[1953]: time="2025-02-13T15:17:33.204531260Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:17:33.212128 containerd[1953]: time="2025-02-13T15:17:33.210248648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:17:33.215737 containerd[1953]: time="2025-02-13T15:17:33.215643200Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.754882ms" Feb 13 15:17:33.220108 containerd[1953]: time="2025-02-13T15:17:33.219981116Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.520638ms" Feb 13 15:17:33.226283 containerd[1953]: time="2025-02-13T15:17:33.226194152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.30935ms" Feb 13 15:17:33.318576 kubelet[2830]: W0213 15:17:33.318385 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:33.319194 kubelet[2830]: E0213 
15:17:33.319149 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.130:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:33.490767 containerd[1953]: time="2025-02-13T15:17:33.489888178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:17:33.490767 containerd[1953]: time="2025-02-13T15:17:33.490116898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:17:33.490767 containerd[1953]: time="2025-02-13T15:17:33.490162078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:33.490767 containerd[1953]: time="2025-02-13T15:17:33.490382890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:33.505233 containerd[1953]: time="2025-02-13T15:17:33.504761578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:17:33.505233 containerd[1953]: time="2025-02-13T15:17:33.504902818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:17:33.505233 containerd[1953]: time="2025-02-13T15:17:33.504942490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:33.506146 containerd[1953]: time="2025-02-13T15:17:33.505979482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:33.507302 containerd[1953]: time="2025-02-13T15:17:33.507043090Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:17:33.507302 containerd[1953]: time="2025-02-13T15:17:33.507241474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:17:33.508449 containerd[1953]: time="2025-02-13T15:17:33.507286270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:33.508449 containerd[1953]: time="2025-02-13T15:17:33.507540514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:33.553505 systemd[1]: Started cri-containerd-2b78b561c246832dafaed4697e55dd7430b831087dd8857c72bffc7f86b60ea4.scope - libcontainer container 2b78b561c246832dafaed4697e55dd7430b831087dd8857c72bffc7f86b60ea4. Feb 13 15:17:33.560651 kubelet[2830]: E0213 15:17:33.560538 2830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-130?timeout=10s\": dial tcp 172.31.29.130:6443: connect: connection refused" interval="1.6s" Feb 13 15:17:33.569268 systemd[1]: Started cri-containerd-b216840a192c2ef013bc396bbfb2218130847aa993363a7a6b303e27f3215df2.scope - libcontainer container b216840a192c2ef013bc396bbfb2218130847aa993363a7a6b303e27f3215df2. 
Feb 13 15:17:33.595429 systemd[1]: Started cri-containerd-c02f9f94cf3e0217f73a522ec350f7c831c1e19cedc7a8df20cbabab7fe140cb.scope - libcontainer container c02f9f94cf3e0217f73a522ec350f7c831c1e19cedc7a8df20cbabab7fe140cb. Feb 13 15:17:33.650230 kubelet[2830]: W0213 15:17:33.649909 2830 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:33.650230 kubelet[2830]: E0213 15:17:33.650028 2830 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:33.670176 kubelet[2830]: I0213 15:17:33.668803 2830 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-130" Feb 13 15:17:33.672853 kubelet[2830]: E0213 15:17:33.672036 2830 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.29.130:6443/api/v1/nodes\": dial tcp 172.31.29.130:6443: connect: connection refused" node="ip-172-31-29-130" Feb 13 15:17:33.716002 containerd[1953]: time="2025-02-13T15:17:33.715589387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-130,Uid:6bd35685d80fd357b9847308bc78e5a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b78b561c246832dafaed4697e55dd7430b831087dd8857c72bffc7f86b60ea4\"" Feb 13 15:17:33.732251 containerd[1953]: time="2025-02-13T15:17:33.731694599Z" level=info msg="CreateContainer within sandbox \"2b78b561c246832dafaed4697e55dd7430b831087dd8857c72bffc7f86b60ea4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:17:33.733157 containerd[1953]: time="2025-02-13T15:17:33.732956591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-130,Uid:8c4f404a718bd6da959d0fd4dc62eb34,Namespace:kube-system,Attempt:0,} returns sandbox id \"c02f9f94cf3e0217f73a522ec350f7c831c1e19cedc7a8df20cbabab7fe140cb\"" Feb 13 15:17:33.745214 containerd[1953]: time="2025-02-13T15:17:33.744676739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-130,Uid:8c4717e32cf1c9a05186994564410d4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b216840a192c2ef013bc396bbfb2218130847aa993363a7a6b303e27f3215df2\"" Feb 13 15:17:33.752259 containerd[1953]: time="2025-02-13T15:17:33.752202695Z" level=info msg="CreateContainer within sandbox \"c02f9f94cf3e0217f73a522ec350f7c831c1e19cedc7a8df20cbabab7fe140cb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:17:33.757259 containerd[1953]: time="2025-02-13T15:17:33.757000583Z" level=info msg="CreateContainer within sandbox \"b216840a192c2ef013bc396bbfb2218130847aa993363a7a6b303e27f3215df2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:17:33.778403 containerd[1953]: time="2025-02-13T15:17:33.778342643Z" level=info msg="CreateContainer within sandbox \"2b78b561c246832dafaed4697e55dd7430b831087dd8857c72bffc7f86b60ea4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4672473e41b3d655b9d48cc20cdfd9e2e54c8369ec3767964caaf4a63fd936dc\"" Feb 13 15:17:33.781157 containerd[1953]: time="2025-02-13T15:17:33.779925947Z" level=info msg="StartContainer for 
\"4672473e41b3d655b9d48cc20cdfd9e2e54c8369ec3767964caaf4a63fd936dc\"" Feb 13 15:17:33.782778 containerd[1953]: time="2025-02-13T15:17:33.782695247Z" level=info msg="CreateContainer within sandbox \"c02f9f94cf3e0217f73a522ec350f7c831c1e19cedc7a8df20cbabab7fe140cb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9414764b655ed6aca0e33746798df27b8def4fe8ebf0d5096604fafbdbd0efe1\"" Feb 13 15:17:33.784287 containerd[1953]: time="2025-02-13T15:17:33.784230095Z" level=info msg="StartContainer for \"9414764b655ed6aca0e33746798df27b8def4fe8ebf0d5096604fafbdbd0efe1\"" Feb 13 15:17:33.799759 containerd[1953]: time="2025-02-13T15:17:33.799548899Z" level=info msg="CreateContainer within sandbox \"b216840a192c2ef013bc396bbfb2218130847aa993363a7a6b303e27f3215df2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"52dfef469c8a9ff0e1dcccebdbeb12958a5228e8efc322e9d6259275c38e87e5\"" Feb 13 15:17:33.801833 containerd[1953]: time="2025-02-13T15:17:33.801411443Z" level=info msg="StartContainer for \"52dfef469c8a9ff0e1dcccebdbeb12958a5228e8efc322e9d6259275c38e87e5\"" Feb 13 15:17:33.856954 systemd[1]: Started cri-containerd-4672473e41b3d655b9d48cc20cdfd9e2e54c8369ec3767964caaf4a63fd936dc.scope - libcontainer container 4672473e41b3d655b9d48cc20cdfd9e2e54c8369ec3767964caaf4a63fd936dc. Feb 13 15:17:33.892353 systemd[1]: Started cri-containerd-52dfef469c8a9ff0e1dcccebdbeb12958a5228e8efc322e9d6259275c38e87e5.scope - libcontainer container 52dfef469c8a9ff0e1dcccebdbeb12958a5228e8efc322e9d6259275c38e87e5. Feb 13 15:17:33.914194 systemd[1]: Started cri-containerd-9414764b655ed6aca0e33746798df27b8def4fe8ebf0d5096604fafbdbd0efe1.scope - libcontainer container 9414764b655ed6aca0e33746798df27b8def4fe8ebf0d5096604fafbdbd0efe1. 
Feb 13 15:17:33.928699 kubelet[2830]: E0213 15:17:33.923674 2830 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.130:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.130:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-130.1823cd84fafa41a7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-130,UID:ip-172-31-29-130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-130,},FirstTimestamp:2025-02-13 15:17:32.125225383 +0000 UTC m=+1.164051223,LastTimestamp:2025-02-13 15:17:32.125225383 +0000 UTC m=+1.164051223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-130,}" Feb 13 15:17:34.038720 containerd[1953]: time="2025-02-13T15:17:34.038509448Z" level=info msg="StartContainer for \"4672473e41b3d655b9d48cc20cdfd9e2e54c8369ec3767964caaf4a63fd936dc\" returns successfully" Feb 13 15:17:34.060357 containerd[1953]: time="2025-02-13T15:17:34.060287829Z" level=info msg="StartContainer for \"9414764b655ed6aca0e33746798df27b8def4fe8ebf0d5096604fafbdbd0efe1\" returns successfully" Feb 13 15:17:34.098971 containerd[1953]: time="2025-02-13T15:17:34.098595141Z" level=info msg="StartContainer for \"52dfef469c8a9ff0e1dcccebdbeb12958a5228e8efc322e9d6259275c38e87e5\" returns successfully" Feb 13 15:17:34.214848 kubelet[2830]: E0213 15:17:34.214684 2830 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.130:6443: connect: connection refused Feb 13 15:17:35.280323 kubelet[2830]: I0213 15:17:35.279322 2830 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-130" Feb 13 15:17:37.984666 kubelet[2830]: E0213 15:17:37.984584 2830 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-130\" not found" node="ip-172-31-29-130" Feb 13 15:17:38.060388 kubelet[2830]: I0213 15:17:38.060055 2830 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-130" Feb 13 15:17:38.123576 kubelet[2830]: I0213 15:17:38.123501 2830 apiserver.go:52] "Watching apiserver" Feb 13 15:17:38.156667 kubelet[2830]: I0213 15:17:38.156561 2830 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:17:40.420622 systemd[1]: Reloading requested from client PID 3106 ('systemctl') (unit session-5.scope)... Feb 13 15:17:40.420665 systemd[1]: Reloading... Feb 13 15:17:40.686359 zram_generator::config[3149]: No configuration found. Feb 13 15:17:41.005377 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:17:41.249492 systemd[1]: Reloading finished in 827 ms. Feb 13 15:17:41.344313 kubelet[2830]: I0213 15:17:41.343932 2830 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:17:41.345294 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 15:17:41.358910 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:17:41.359516 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:41.359607 systemd[1]: kubelet.service: Consumed 1.951s CPU time, 113.6M memory peak, 0B memory swap peak. Feb 13 15:17:41.375420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:17:41.880406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:17:41.896724 (kubelet)[3206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:17:42.016122 kubelet[3206]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:17:42.016122 kubelet[3206]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:17:42.016122 kubelet[3206]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:17:42.016122 kubelet[3206]: I0213 15:17:42.014640 3206 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:17:42.039544 kubelet[3206]: I0213 15:17:42.039479 3206 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:17:42.039671 kubelet[3206]: I0213 15:17:42.039531 3206 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:17:42.040718 kubelet[3206]: I0213 15:17:42.040561 3206 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:17:42.044158 kubelet[3206]: I0213 15:17:42.043840 3206 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:17:42.048968 kubelet[3206]: I0213 15:17:42.047925 3206 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:17:42.065690 kubelet[3206]: I0213 15:17:42.065646 3206 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:17:42.066551 kubelet[3206]: I0213 15:17:42.066495 3206 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:17:42.067230 kubelet[3206]: I0213 15:17:42.066718 3206 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:17:42.068201 kubelet[3206]: I0213 15:17:42.067590 3206 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:17:42.068201 kubelet[3206]: I0213 15:17:42.067629 3206 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:17:42.068201 kubelet[3206]: I0213 15:17:42.067713 3206 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:17:42.068201 kubelet[3206]: I0213 15:17:42.067949 3206 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:17:42.068201 kubelet[3206]: I0213 15:17:42.067978 3206 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:17:42.068201 kubelet[3206]: I0213 15:17:42.068029 3206 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:17:42.068201 kubelet[3206]: I0213 15:17:42.068060 3206 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:17:42.072902 kubelet[3206]: I0213 15:17:42.072844 3206 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:17:42.073264 kubelet[3206]: I0213 15:17:42.073220 3206 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:17:42.082417 kubelet[3206]: I0213 15:17:42.082365 3206 server.go:1264] "Started kubelet" Feb 13 15:17:42.083063 kubelet[3206]: I0213 15:17:42.082918 3206 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:17:42.089146 kubelet[3206]: I0213 15:17:42.086678 3206 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:17:42.089146 kubelet[3206]: I0213 15:17:42.087117 3206 fs_resource_analyzer.go:67] "Starting 
FS ResourceAnalyzer" Feb 13 15:17:42.096109 kubelet[3206]: I0213 15:17:42.095370 3206 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:17:42.096109 kubelet[3206]: I0213 15:17:42.095769 3206 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:17:42.121680 kubelet[3206]: I0213 15:17:42.121636 3206 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:17:42.123717 kubelet[3206]: I0213 15:17:42.123676 3206 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:17:42.124240 kubelet[3206]: I0213 15:17:42.124205 3206 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:17:42.155154 kubelet[3206]: I0213 15:17:42.153402 3206 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:17:42.155154 kubelet[3206]: I0213 15:17:42.153607 3206 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:17:42.193974 kubelet[3206]: I0213 15:17:42.193326 3206 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:17:42.197193 kubelet[3206]: I0213 15:17:42.196808 3206 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:17:42.197193 kubelet[3206]: I0213 15:17:42.196888 3206 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:17:42.197193 kubelet[3206]: I0213 15:17:42.196920 3206 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:17:42.197193 kubelet[3206]: E0213 15:17:42.196997 3206 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:17:42.199762 kubelet[3206]: I0213 15:17:42.199700 3206 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:17:42.235052 kubelet[3206]: E0213 15:17:42.234621 3206 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Feb 13 15:17:42.255143 kubelet[3206]: I0213 15:17:42.251830 3206 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-29-130" Feb 13 15:17:42.273925 kubelet[3206]: I0213 15:17:42.273853 3206 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-29-130" Feb 13 15:17:42.276991 kubelet[3206]: I0213 15:17:42.276944 3206 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-29-130" Feb 13 15:17:42.297799 kubelet[3206]: E0213 15:17:42.297689 3206 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:17:42.405383 kubelet[3206]: I0213 15:17:42.403505 3206 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:17:42.405383 kubelet[3206]: I0213 15:17:42.403757 3206 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:17:42.405383 kubelet[3206]: I0213 15:17:42.403803 3206 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:17:42.406818 kubelet[3206]: I0213 15:17:42.406512 3206 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:17:42.406818 kubelet[3206]: I0213 15:17:42.406587 3206 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:17:42.406818 kubelet[3206]: I0213 15:17:42.406638 3206 policy_none.go:49] 
"None policy: Start" Feb 13 15:17:42.412799 kubelet[3206]: I0213 15:17:42.411246 3206 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:17:42.412799 kubelet[3206]: I0213 15:17:42.411295 3206 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:17:42.412799 kubelet[3206]: I0213 15:17:42.411585 3206 state_mem.go:75] "Updated machine memory state" Feb 13 15:17:42.434126 kubelet[3206]: I0213 15:17:42.433097 3206 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:17:42.437167 kubelet[3206]: I0213 15:17:42.435793 3206 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:17:42.441764 kubelet[3206]: I0213 15:17:42.438140 3206 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:17:42.498169 kubelet[3206]: I0213 15:17:42.497959 3206 topology_manager.go:215] "Topology Admit Handler" podUID="8c4f404a718bd6da959d0fd4dc62eb34" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-130" Feb 13 15:17:42.498419 kubelet[3206]: I0213 15:17:42.498297 3206 topology_manager.go:215] "Topology Admit Handler" podUID="6bd35685d80fd357b9847308bc78e5a4" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-130" Feb 13 15:17:42.498419 kubelet[3206]: I0213 15:17:42.498397 3206 topology_manager.go:215] "Topology Admit Handler" podUID="8c4717e32cf1c9a05186994564410d4e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-130" Feb 13 15:17:42.514629 kubelet[3206]: E0213 15:17:42.514572 3206 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-29-130\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-29-130" Feb 13 15:17:42.528203 kubelet[3206]: I0213 15:17:42.528034 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8c4f404a718bd6da959d0fd4dc62eb34-ca-certs\") pod \"kube-apiserver-ip-172-31-29-130\" (UID: \"8c4f404a718bd6da959d0fd4dc62eb34\") " pod="kube-system/kube-apiserver-ip-172-31-29-130" Feb 13 15:17:42.528203 kubelet[3206]: I0213 15:17:42.528190 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8c4f404a718bd6da959d0fd4dc62eb34-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-130\" (UID: \"8c4f404a718bd6da959d0fd4dc62eb34\") " pod="kube-system/kube-apiserver-ip-172-31-29-130" Feb 13 15:17:42.528563 kubelet[3206]: I0213 15:17:42.528244 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bd35685d80fd357b9847308bc78e5a4-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-130\" (UID: \"6bd35685d80fd357b9847308bc78e5a4\") " pod="kube-system/kube-controller-manager-ip-172-31-29-130" Feb 13 15:17:42.528563 kubelet[3206]: I0213 15:17:42.528283 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bd35685d80fd357b9847308bc78e5a4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-130\" (UID: \"6bd35685d80fd357b9847308bc78e5a4\") " pod="kube-system/kube-controller-manager-ip-172-31-29-130" Feb 13 15:17:42.528563 kubelet[3206]: I0213 15:17:42.528330 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c4717e32cf1c9a05186994564410d4e-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-130\" (UID: \"8c4717e32cf1c9a05186994564410d4e\") " pod="kube-system/kube-scheduler-ip-172-31-29-130" Feb 13 15:17:42.528563 kubelet[3206]: I0213 15:17:42.528377 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8c4f404a718bd6da959d0fd4dc62eb34-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-130\" (UID: \"8c4f404a718bd6da959d0fd4dc62eb34\") " pod="kube-system/kube-apiserver-ip-172-31-29-130" Feb 13 15:17:42.528563 kubelet[3206]: I0213 15:17:42.528440 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6bd35685d80fd357b9847308bc78e5a4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-130\" (UID: \"6bd35685d80fd357b9847308bc78e5a4\") " pod="kube-system/kube-controller-manager-ip-172-31-29-130" Feb 13 15:17:42.529214 kubelet[3206]: I0213 15:17:42.528484 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6bd35685d80fd357b9847308bc78e5a4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-130\" (UID: \"6bd35685d80fd357b9847308bc78e5a4\") " pod="kube-system/kube-controller-manager-ip-172-31-29-130" Feb 13 15:17:42.529214 kubelet[3206]: I0213 15:17:42.528525 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bd35685d80fd357b9847308bc78e5a4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-130\" (UID: \"6bd35685d80fd357b9847308bc78e5a4\") " pod="kube-system/kube-controller-manager-ip-172-31-29-130" Feb 13 15:17:43.070599 kubelet[3206]: I0213 15:17:43.070538 3206 apiserver.go:52] "Watching apiserver" Feb 13 15:17:43.126239 kubelet[3206]: I0213 15:17:43.126171 3206 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:17:43.127772 kubelet[3206]: I0213 15:17:43.127673 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-130" podStartSLOduration=1.127652142 podStartE2EDuration="1.127652142s" podCreationTimestamp="2025-02-13 15:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:43.124980726 +0000 UTC m=+1.219228184" watchObservedRunningTime="2025-02-13 15:17:43.127652142 +0000 UTC m=+1.221899588" Feb 13 15:17:43.145808 kubelet[3206]: I0213 15:17:43.145499 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-130" podStartSLOduration=4.145478598 podStartE2EDuration="4.145478598s" podCreationTimestamp="2025-02-13 15:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:43.145462026 +0000 UTC m=+1.239709496" watchObservedRunningTime="2025-02-13 15:17:43.145478598 +0000 UTC m=+1.239726056" Feb 13 15:17:43.327909 kubelet[3206]: E0213 15:17:43.326425 3206 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-130\" already exists" 
pod="kube-system/kube-apiserver-ip-172-31-29-130" Feb 13 15:17:43.334893 kubelet[3206]: I0213 15:17:43.334784 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-130" podStartSLOduration=1.334756783 podStartE2EDuration="1.334756783s" podCreationTimestamp="2025-02-13 15:17:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:43.165910746 +0000 UTC m=+1.260158228" watchObservedRunningTime="2025-02-13 15:17:43.334756783 +0000 UTC m=+1.429004241" Feb 13 15:17:43.831261 sudo[2217]: pam_unix(sudo:session): session closed for user root Feb 13 15:17:43.855132 sshd[2211]: Connection closed by 139.178.68.195 port 58382 Feb 13 15:17:43.856051 sshd-session[2209]: pam_unix(sshd:session): session closed for user core Feb 13 15:17:43.863376 systemd[1]: sshd@4-172.31.29.130:22-139.178.68.195:58382.service: Deactivated successfully. Feb 13 15:17:43.868018 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:17:43.868486 systemd[1]: session-5.scope: Consumed 10.047s CPU time, 192.1M memory peak, 0B memory swap peak. Feb 13 15:17:43.870713 systemd-logind[1924]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:17:43.873259 systemd-logind[1924]: Removed session 5. Feb 13 15:17:44.937692 update_engine[1925]: I20250213 15:17:44.937594 1925 update_attempter.cc:509] Updating boot flags... Feb 13 15:17:45.035310 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3279) Feb 13 15:17:45.359200 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3283) Feb 13 15:17:45.621141 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3283) Feb 13 15:17:55.436243 kubelet[3206]: I0213 15:17:55.436187 3206 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:17:55.436954 containerd[1953]: time="2025-02-13T15:17:55.436905727Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:17:55.440229 kubelet[3206]: I0213 15:17:55.437314 3206 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:17:55.515296 kubelet[3206]: I0213 15:17:55.514554 3206 topology_manager.go:215] "Topology Admit Handler" podUID="92ce547d-7353-4faa-9465-e5d1996888e8" podNamespace="kube-system" podName="kube-proxy-s2wh4" Feb 13 15:17:55.544528 systemd[1]: Created slice kubepods-besteffort-pod92ce547d_7353_4faa_9465_e5d1996888e8.slice - libcontainer container kubepods-besteffort-pod92ce547d_7353_4faa_9465_e5d1996888e8.slice. Feb 13 15:17:55.571494 kubelet[3206]: I0213 15:17:55.571373 3206 topology_manager.go:215] "Topology Admit Handler" podUID="3288a184-d608-4c67-ba06-165bcbe1e001" podNamespace="kube-flannel" podName="kube-flannel-ds-rxqj2" Feb 13 15:17:55.595629 systemd[1]: Created slice kubepods-burstable-pod3288a184_d608_4c67_ba06_165bcbe1e001.slice - libcontainer container kubepods-burstable-pod3288a184_d608_4c67_ba06_165bcbe1e001.slice. 
Feb 13 15:17:55.614429 kubelet[3206]: I0213 15:17:55.614361 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/92ce547d-7353-4faa-9465-e5d1996888e8-kube-proxy\") pod \"kube-proxy-s2wh4\" (UID: \"92ce547d-7353-4faa-9465-e5d1996888e8\") " pod="kube-system/kube-proxy-s2wh4" Feb 13 15:17:55.614847 kubelet[3206]: I0213 15:17:55.614803 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92ce547d-7353-4faa-9465-e5d1996888e8-xtables-lock\") pod \"kube-proxy-s2wh4\" (UID: \"92ce547d-7353-4faa-9465-e5d1996888e8\") " pod="kube-system/kube-proxy-s2wh4" Feb 13 15:17:55.614950 kubelet[3206]: I0213 15:17:55.614873 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/3288a184-d608-4c67-ba06-165bcbe1e001-cni-plugin\") pod \"kube-flannel-ds-rxqj2\" (UID: \"3288a184-d608-4c67-ba06-165bcbe1e001\") " pod="kube-flannel/kube-flannel-ds-rxqj2" Feb 13 15:17:55.614950 kubelet[3206]: I0213 15:17:55.614915 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/3288a184-d608-4c67-ba06-165bcbe1e001-cni\") pod \"kube-flannel-ds-rxqj2\" (UID: \"3288a184-d608-4c67-ba06-165bcbe1e001\") " pod="kube-flannel/kube-flannel-ds-rxqj2" Feb 13 15:17:55.615059 kubelet[3206]: I0213 15:17:55.614953 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92ce547d-7353-4faa-9465-e5d1996888e8-lib-modules\") pod \"kube-proxy-s2wh4\" (UID: \"92ce547d-7353-4faa-9465-e5d1996888e8\") " pod="kube-system/kube-proxy-s2wh4" Feb 13 15:17:55.615059 kubelet[3206]: I0213 15:17:55.614991 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh97q\" (UniqueName: \"kubernetes.io/projected/92ce547d-7353-4faa-9465-e5d1996888e8-kube-api-access-zh97q\") pod \"kube-proxy-s2wh4\" (UID: \"92ce547d-7353-4faa-9465-e5d1996888e8\") " pod="kube-system/kube-proxy-s2wh4" Feb 13 15:17:55.615059 kubelet[3206]: I0213 15:17:55.615032 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/3288a184-d608-4c67-ba06-165bcbe1e001-flannel-cfg\") pod \"kube-flannel-ds-rxqj2\" (UID: \"3288a184-d608-4c67-ba06-165bcbe1e001\") " pod="kube-flannel/kube-flannel-ds-rxqj2" Feb 13 15:17:55.615258 kubelet[3206]: I0213 15:17:55.615066 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wkgt\" (UniqueName: \"kubernetes.io/projected/3288a184-d608-4c67-ba06-165bcbe1e001-kube-api-access-5wkgt\") pod \"kube-flannel-ds-rxqj2\" (UID: \"3288a184-d608-4c67-ba06-165bcbe1e001\") " pod="kube-flannel/kube-flannel-ds-rxqj2" Feb 13 15:17:55.615258 kubelet[3206]: I0213 15:17:55.615126 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3288a184-d608-4c67-ba06-165bcbe1e001-run\") pod \"kube-flannel-ds-rxqj2\" (UID: \"3288a184-d608-4c67-ba06-165bcbe1e001\") " pod="kube-flannel/kube-flannel-ds-rxqj2" Feb 13 15:17:55.615258 kubelet[3206]: I0213 15:17:55.615160 3206 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3288a184-d608-4c67-ba06-165bcbe1e001-xtables-lock\") pod \"kube-flannel-ds-rxqj2\" (UID: \"3288a184-d608-4c67-ba06-165bcbe1e001\") " pod="kube-flannel/kube-flannel-ds-rxqj2" Feb 13 15:17:55.617459 kubelet[3206]: W0213 15:17:55.617394 3206 reflector.go:547] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ip-172-31-29-130" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-29-130' and this object Feb 13 15:17:55.617710 kubelet[3206]: E0213 15:17:55.617474 3206 reflector.go:150] object-"kube-flannel"/"kube-flannel-cfg": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ip-172-31-29-130" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-29-130' and this object Feb 13 15:17:55.617710 kubelet[3206]: W0213 15:17:55.617394 3206 reflector.go:547] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-29-130" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-29-130' and this object Feb 13 15:17:55.617710 kubelet[3206]: E0213 15:17:55.617525 3206 reflector.go:150] object-"kube-flannel"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-29-130" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-29-130' and this object Feb 13 15:17:55.860459 containerd[1953]: time="2025-02-13T15:17:55.860235273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s2wh4,Uid:92ce547d-7353-4faa-9465-e5d1996888e8,Namespace:kube-system,Attempt:0,}" Feb 13 15:17:55.910417 containerd[1953]: time="2025-02-13T15:17:55.910291941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:17:55.910881 containerd[1953]: time="2025-02-13T15:17:55.910655085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:17:55.911057 containerd[1953]: time="2025-02-13T15:17:55.910760325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:55.911570 containerd[1953]: time="2025-02-13T15:17:55.911341905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:55.951595 systemd[1]: run-containerd-runc-k8s.io-649a9ca73bfab6e9d26503823b909455965031c74d6fc6488e1b47e8168da3db-runc.bDErpb.mount: Deactivated successfully. Feb 13 15:17:55.967708 systemd[1]: Started cri-containerd-649a9ca73bfab6e9d26503823b909455965031c74d6fc6488e1b47e8168da3db.scope - libcontainer container 649a9ca73bfab6e9d26503823b909455965031c74d6fc6488e1b47e8168da3db. 
Feb 13 15:17:56.015179 containerd[1953]: time="2025-02-13T15:17:56.015057570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s2wh4,Uid:92ce547d-7353-4faa-9465-e5d1996888e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"649a9ca73bfab6e9d26503823b909455965031c74d6fc6488e1b47e8168da3db\"" Feb 13 15:17:56.024686 containerd[1953]: time="2025-02-13T15:17:56.024316554Z" level=info msg="CreateContainer within sandbox \"649a9ca73bfab6e9d26503823b909455965031c74d6fc6488e1b47e8168da3db\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:17:56.058635 containerd[1953]: time="2025-02-13T15:17:56.058549050Z" level=info msg="CreateContainer within sandbox \"649a9ca73bfab6e9d26503823b909455965031c74d6fc6488e1b47e8168da3db\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f5ff9696cd7483621bcc879042248607fb677f754bfedb4b07714af1767a57ee\"" Feb 13 15:17:56.061824 containerd[1953]: time="2025-02-13T15:17:56.059627586Z" level=info msg="StartContainer for \"f5ff9696cd7483621bcc879042248607fb677f754bfedb4b07714af1767a57ee\"" Feb 13 15:17:56.119439 systemd[1]: Started cri-containerd-f5ff9696cd7483621bcc879042248607fb677f754bfedb4b07714af1767a57ee.scope - libcontainer container f5ff9696cd7483621bcc879042248607fb677f754bfedb4b07714af1767a57ee. Feb 13 15:17:56.181956 containerd[1953]: time="2025-02-13T15:17:56.181889802Z" level=info msg="StartContainer for \"f5ff9696cd7483621bcc879042248607fb677f754bfedb4b07714af1767a57ee\" returns successfully" Feb 13 15:17:56.745544 kubelet[3206]: E0213 15:17:56.745476 3206 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:17:56.745544 kubelet[3206]: E0213 15:17:56.745537 3206 projected.go:200] Error preparing data for projected volume kube-api-access-5wkgt for pod kube-flannel/kube-flannel-ds-rxqj2: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:17:56.746212 kubelet[3206]: E0213 15:17:56.745659 3206 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3288a184-d608-4c67-ba06-165bcbe1e001-kube-api-access-5wkgt podName:3288a184-d608-4c67-ba06-165bcbe1e001 nodeName:}" failed. No retries permitted until 2025-02-13 15:17:57.245611645 +0000 UTC m=+15.339859091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5wkgt" (UniqueName: "kubernetes.io/projected/3288a184-d608-4c67-ba06-165bcbe1e001-kube-api-access-5wkgt") pod "kube-flannel-ds-rxqj2" (UID: "3288a184-d608-4c67-ba06-165bcbe1e001") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:17:57.404255 containerd[1953]: time="2025-02-13T15:17:57.404137856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rxqj2,Uid:3288a184-d608-4c67-ba06-165bcbe1e001,Namespace:kube-flannel,Attempt:0,}" Feb 13 15:17:57.453993 containerd[1953]: time="2025-02-13T15:17:57.453561033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:17:57.453993 containerd[1953]: time="2025-02-13T15:17:57.453669897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:17:57.453993 containerd[1953]: time="2025-02-13T15:17:57.453699453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:57.454415 containerd[1953]: time="2025-02-13T15:17:57.453883701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:17:57.494369 systemd[1]: Started cri-containerd-c83a2d8b3723894405d4465c477e439d0b954e864e14cbfd928a4e0629f40823.scope - libcontainer container c83a2d8b3723894405d4465c477e439d0b954e864e14cbfd928a4e0629f40823. Feb 13 15:17:57.561484 containerd[1953]: time="2025-02-13T15:17:57.561286425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rxqj2,Uid:3288a184-d608-4c67-ba06-165bcbe1e001,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"c83a2d8b3723894405d4465c477e439d0b954e864e14cbfd928a4e0629f40823\"" Feb 13 15:17:57.565726 containerd[1953]: time="2025-02-13T15:17:57.565664805Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 15:17:59.651213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount980101210.mount: Deactivated successfully. Feb 13 15:17:59.718645 containerd[1953]: time="2025-02-13T15:17:59.718578588Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:59.720399 containerd[1953]: time="2025-02-13T15:17:59.720283032Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Feb 13 15:17:59.721959 containerd[1953]: time="2025-02-13T15:17:59.721854252Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:59.726411 containerd[1953]: time="2025-02-13T15:17:59.726329832Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:17:59.728303 containerd[1953]: time="2025-02-13T15:17:59.728041884Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.162317895s" Feb 13 15:17:59.728303 containerd[1953]: time="2025-02-13T15:17:59.728115588Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Feb 13 15:17:59.732739 containerd[1953]: time="2025-02-13T15:17:59.732669072Z" level=info msg="CreateContainer within sandbox \"c83a2d8b3723894405d4465c477e439d0b954e864e14cbfd928a4e0629f40823\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 15:17:59.768323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount920011286.mount: Deactivated successfully. 
Feb 13 15:17:59.768762 containerd[1953]: time="2025-02-13T15:17:59.768425388Z" level=info msg="CreateContainer within sandbox \"c83a2d8b3723894405d4465c477e439d0b954e864e14cbfd928a4e0629f40823\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"9afe6cb038e0b17f329b8a0adeac160c8e41ef3d11dd92654521fdfdc6b13c54\"" Feb 13 15:17:59.771124 containerd[1953]: time="2025-02-13T15:17:59.770212224Z" level=info msg="StartContainer for \"9afe6cb038e0b17f329b8a0adeac160c8e41ef3d11dd92654521fdfdc6b13c54\"" Feb 13 15:17:59.824380 systemd[1]: Started cri-containerd-9afe6cb038e0b17f329b8a0adeac160c8e41ef3d11dd92654521fdfdc6b13c54.scope - libcontainer container 9afe6cb038e0b17f329b8a0adeac160c8e41ef3d11dd92654521fdfdc6b13c54. Feb 13 15:17:59.881460 systemd[1]: cri-containerd-9afe6cb038e0b17f329b8a0adeac160c8e41ef3d11dd92654521fdfdc6b13c54.scope: Deactivated successfully. Feb 13 15:17:59.883182 containerd[1953]: time="2025-02-13T15:17:59.880860577Z" level=info msg="StartContainer for \"9afe6cb038e0b17f329b8a0adeac160c8e41ef3d11dd92654521fdfdc6b13c54\" returns successfully" Feb 13 15:17:59.980240 containerd[1953]: time="2025-02-13T15:17:59.980109385Z" level=info msg="shim disconnected" id=9afe6cb038e0b17f329b8a0adeac160c8e41ef3d11dd92654521fdfdc6b13c54 namespace=k8s.io Feb 13 15:17:59.980240 containerd[1953]: time="2025-02-13T15:17:59.980215573Z" level=warning msg="cleaning up after shim disconnected" id=9afe6cb038e0b17f329b8a0adeac160c8e41ef3d11dd92654521fdfdc6b13c54 namespace=k8s.io Feb 13 15:17:59.980240 containerd[1953]: time="2025-02-13T15:17:59.980239633Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:00.359385 containerd[1953]: time="2025-02-13T15:18:00.358807487Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 15:18:00.377430 kubelet[3206]: I0213 15:18:00.377029 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s2wh4" podStartSLOduration=5.377005583 podStartE2EDuration="5.377005583s" podCreationTimestamp="2025-02-13 15:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:56.356233255 +0000 UTC m=+14.450480701" watchObservedRunningTime="2025-02-13 15:18:00.377005583 +0000 UTC m=+18.471253029" Feb 13 15:18:00.501430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9afe6cb038e0b17f329b8a0adeac160c8e41ef3d11dd92654521fdfdc6b13c54-rootfs.mount: Deactivated successfully. Feb 13 15:18:02.811708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149526584.mount: Deactivated successfully. 
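The install-cni-plugin container above runs and exits almost immediately (its .scope is deactivated just before the "returns successfully" entry is flushed), which matches its role as an init container that copies the flannel CNI binary into the host's plugin directory. In the upstream kube-flannel manifest for this image it is declared roughly as sketched below; treat the exact fields as illustrative, since the manifest actually applied to this cluster is not shown in the log:

    initContainers:
    - name: install-cni-plugin
      image: docker.io/flannel/flannel-cni-plugin:v1.1.2
      command: ["cp"]
      args: ["-f", "/flannel", "/opt/cni/bin/flannel"]
      volumeMounts:
      - name: cni-plugin            # hostPath volume also visible in the attach entries at 15:17:55
        mountPath: /opt/cni/bin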
Feb 13 15:18:04.077990 containerd[1953]: time="2025-02-13T15:18:04.076560926Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:04.079745 containerd[1953]: time="2025-02-13T15:18:04.079663442Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874260" Feb 13 15:18:04.081662 containerd[1953]: time="2025-02-13T15:18:04.081603206Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:04.093369 containerd[1953]: time="2025-02-13T15:18:04.093296474Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:18:04.101251 containerd[1953]: time="2025-02-13T15:18:04.101168186Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.741890491s" Feb 13 15:18:04.101420 containerd[1953]: time="2025-02-13T15:18:04.101269346Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Feb 13 15:18:04.112055 containerd[1953]: time="2025-02-13T15:18:04.111966374Z" level=info msg="CreateContainer within sandbox \"c83a2d8b3723894405d4465c477e439d0b954e864e14cbfd928a4e0629f40823\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:18:04.141609 containerd[1953]: time="2025-02-13T15:18:04.141524714Z" level=info msg="CreateContainer within sandbox \"c83a2d8b3723894405d4465c477e439d0b954e864e14cbfd928a4e0629f40823\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c10c0dddac9ed21acba70e56f481df04c0b5a2da4eb6b4e6611135e8eb1cbc5d\"" Feb 13 15:18:04.144209 containerd[1953]: time="2025-02-13T15:18:04.142757990Z" level=info msg="StartContainer for \"c10c0dddac9ed21acba70e56f481df04c0b5a2da4eb6b4e6611135e8eb1cbc5d\"" Feb 13 15:18:04.202472 systemd[1]: Started cri-containerd-c10c0dddac9ed21acba70e56f481df04c0b5a2da4eb6b4e6611135e8eb1cbc5d.scope - libcontainer container c10c0dddac9ed21acba70e56f481df04c0b5a2da4eb6b4e6611135e8eb1cbc5d. Feb 13 15:18:04.254586 containerd[1953]: time="2025-02-13T15:18:04.254528018Z" level=info msg="StartContainer for \"c10c0dddac9ed21acba70e56f481df04c0b5a2da4eb6b4e6611135e8eb1cbc5d\" returns successfully" Feb 13 15:18:04.255836 systemd[1]: cri-containerd-c10c0dddac9ed21acba70e56f481df04c0b5a2da4eb6b4e6611135e8eb1cbc5d.scope: Deactivated successfully. Feb 13 15:18:04.299315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c10c0dddac9ed21acba70e56f481df04c0b5a2da4eb6b4e6611135e8eb1cbc5d-rootfs.mount: Deactivated successfully. 
Feb 13 15:18:04.308025 kubelet[3206]: I0213 15:18:04.307665 3206 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:18:04.350314 kubelet[3206]: I0213 15:18:04.347753 3206 topology_manager.go:215] "Topology Admit Handler" podUID="b919846a-cf4b-4e4c-8590-9cbb8a6993dc" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jb59s" Feb 13 15:18:04.353642 kubelet[3206]: I0213 15:18:04.353589 3206 topology_manager.go:215] "Topology Admit Handler" podUID="d1e2bc38-d6a9-43c7-98fd-158578c639e6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8dmtf" Feb 13 15:18:04.378623 kubelet[3206]: I0213 15:18:04.378052 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b919846a-cf4b-4e4c-8590-9cbb8a6993dc-config-volume\") pod \"coredns-7db6d8ff4d-jb59s\" (UID: \"b919846a-cf4b-4e4c-8590-9cbb8a6993dc\") " pod="kube-system/coredns-7db6d8ff4d-jb59s" Feb 13 15:18:04.378623 kubelet[3206]: I0213 15:18:04.378138 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1e2bc38-d6a9-43c7-98fd-158578c639e6-config-volume\") pod \"coredns-7db6d8ff4d-8dmtf\" (UID: \"d1e2bc38-d6a9-43c7-98fd-158578c639e6\") " pod="kube-system/coredns-7db6d8ff4d-8dmtf" Feb 13 15:18:04.378623 kubelet[3206]: I0213 15:18:04.378179 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bb4t\" (UniqueName: \"kubernetes.io/projected/d1e2bc38-d6a9-43c7-98fd-158578c639e6-kube-api-access-7bb4t\") pod \"coredns-7db6d8ff4d-8dmtf\" (UID: \"d1e2bc38-d6a9-43c7-98fd-158578c639e6\") " pod="kube-system/coredns-7db6d8ff4d-8dmtf" Feb 13 15:18:04.378623 kubelet[3206]: I0213 15:18:04.378221 3206 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s78x\" (UniqueName: \"kubernetes.io/projected/b919846a-cf4b-4e4c-8590-9cbb8a6993dc-kube-api-access-7s78x\") pod \"coredns-7db6d8ff4d-jb59s\" (UID: \"b919846a-cf4b-4e4c-8590-9cbb8a6993dc\") " pod="kube-system/coredns-7db6d8ff4d-jb59s" Feb 13 15:18:04.385554 systemd[1]: Created slice kubepods-burstable-podb919846a_cf4b_4e4c_8590_9cbb8a6993dc.slice - libcontainer container kubepods-burstable-podb919846a_cf4b_4e4c_8590_9cbb8a6993dc.slice. Feb 13 15:18:04.407523 systemd[1]: Created slice kubepods-burstable-podd1e2bc38_d6a9_43c7_98fd_158578c639e6.slice - libcontainer container kubepods-burstable-podd1e2bc38_d6a9_43c7_98fd_158578c639e6.slice. 
Feb 13 15:18:04.635677 containerd[1953]: time="2025-02-13T15:18:04.634888696Z" level=info msg="shim disconnected" id=c10c0dddac9ed21acba70e56f481df04c0b5a2da4eb6b4e6611135e8eb1cbc5d namespace=k8s.io Feb 13 15:18:04.635677 containerd[1953]: time="2025-02-13T15:18:04.634967080Z" level=warning msg="cleaning up after shim disconnected" id=c10c0dddac9ed21acba70e56f481df04c0b5a2da4eb6b4e6611135e8eb1cbc5d namespace=k8s.io Feb 13 15:18:04.635677 containerd[1953]: time="2025-02-13T15:18:04.634988296Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:18:04.698574 containerd[1953]: time="2025-02-13T15:18:04.698383673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jb59s,Uid:b919846a-cf4b-4e4c-8590-9cbb8a6993dc,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:04.716118 containerd[1953]: time="2025-02-13T15:18:04.715848269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8dmtf,Uid:d1e2bc38-d6a9-43c7-98fd-158578c639e6,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:04.753136 containerd[1953]: time="2025-02-13T15:18:04.752913005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jb59s,Uid:b919846a-cf4b-4e4c-8590-9cbb8a6993dc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ddb80a711b7d65649aabeab7f82dcc9e56b36157dada014c0acb6bb88f607af\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:18:04.754467 kubelet[3206]: E0213 15:18:04.753724 3206 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ddb80a711b7d65649aabeab7f82dcc9e56b36157dada014c0acb6bb88f607af\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:18:04.754467 kubelet[3206]: E0213 15:18:04.753917 3206 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ddb80a711b7d65649aabeab7f82dcc9e56b36157dada014c0acb6bb88f607af\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-jb59s" Feb 13 15:18:04.754467 kubelet[3206]: E0213 15:18:04.753954 3206 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ddb80a711b7d65649aabeab7f82dcc9e56b36157dada014c0acb6bb88f607af\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-jb59s" Feb 13 15:18:04.754467 kubelet[3206]: E0213 15:18:04.754102 3206 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-jb59s_kube-system(b919846a-cf4b-4e4c-8590-9cbb8a6993dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-jb59s_kube-system(b919846a-cf4b-4e4c-8590-9cbb8a6993dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ddb80a711b7d65649aabeab7f82dcc9e56b36157dada014c0acb6bb88f607af\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-jb59s" podUID="b919846a-cf4b-4e4c-8590-9cbb8a6993dc" Feb 13 
15:18:04.770030 containerd[1953]: time="2025-02-13T15:18:04.769898321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8dmtf,Uid:d1e2bc38-d6a9-43c7-98fd-158578c639e6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e3fae8aa790e7413aeedf0ae146c4e3aae39d7373cab4062e3e0891328d7264\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:18:04.770396 kubelet[3206]: E0213 15:18:04.770232 3206 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e3fae8aa790e7413aeedf0ae146c4e3aae39d7373cab4062e3e0891328d7264\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:18:04.770396 kubelet[3206]: E0213 15:18:04.770305 3206 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e3fae8aa790e7413aeedf0ae146c4e3aae39d7373cab4062e3e0891328d7264\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-8dmtf" Feb 13 15:18:04.770396 kubelet[3206]: E0213 15:18:04.770336 3206 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e3fae8aa790e7413aeedf0ae146c4e3aae39d7373cab4062e3e0891328d7264\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-8dmtf" Feb 13 15:18:04.770820 kubelet[3206]: E0213 15:18:04.770410 3206 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8dmtf_kube-system(d1e2bc38-d6a9-43c7-98fd-158578c639e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8dmtf_kube-system(d1e2bc38-d6a9-43c7-98fd-158578c639e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e3fae8aa790e7413aeedf0ae146c4e3aae39d7373cab4062e3e0891328d7264\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-8dmtf" podUID="d1e2bc38-d6a9-43c7-98fd-158578c639e6" Feb 13 15:18:05.398031 containerd[1953]: time="2025-02-13T15:18:05.397943800Z" level=info msg="CreateContainer within sandbox \"c83a2d8b3723894405d4465c477e439d0b954e864e14cbfd928a4e0629f40823\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 15:18:05.425624 containerd[1953]: time="2025-02-13T15:18:05.425550532Z" level=info msg="CreateContainer within sandbox \"c83a2d8b3723894405d4465c477e439d0b954e864e14cbfd928a4e0629f40823\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"f62fc1a306a074a3fdc25eb9692436cf452519885ce80c1da95d49490c72cb60\"" Feb 13 15:18:05.426792 containerd[1953]: time="2025-02-13T15:18:05.426580264Z" level=info msg="StartContainer for \"f62fc1a306a074a3fdc25eb9692436cf452519885ce80c1da95d49490c72cb60\"" Feb 13 15:18:05.485390 systemd[1]: Started cri-containerd-f62fc1a306a074a3fdc25eb9692436cf452519885ce80c1da95d49490c72cb60.scope - libcontainer container f62fc1a306a074a3fdc25eb9692436cf452519885ce80c1da95d49490c72cb60. 
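The two RunPodSandbox failures above ("open /run/flannel/subnet.env: no such file or directory") are expected at this point: the flannel CNI plugin reads that file on every ADD, and it is only written by the kube-flannel container, which is just being started in the entry above. Once flanneld is up the file looks roughly like the sketch below; the network, subnet and MTU values are inferred from the netconf and routes logged at 15:18:17, while the ip-masq flag is an illustrative guess:

    # /run/flannel/subnet.env (illustrative)
    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=8951
    FLANNEL_IPMASQ=true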
Feb 13 15:18:05.539452 containerd[1953]: time="2025-02-13T15:18:05.539374865Z" level=info msg="StartContainer for \"f62fc1a306a074a3fdc25eb9692436cf452519885ce80c1da95d49490c72cb60\" returns successfully" Feb 13 15:18:06.599982 (udev-worker)[4011]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:18:06.624597 systemd-networkd[1832]: flannel.1: Link UP Feb 13 15:18:06.624619 systemd-networkd[1832]: flannel.1: Gained carrier Feb 13 15:18:08.344370 systemd-networkd[1832]: flannel.1: Gained IPv6LL Feb 13 15:18:10.476973 ntpd[1915]: Listen normally on 7 flannel.1 192.168.0.0:123 Feb 13 15:18:10.477631 ntpd[1915]: 13 Feb 15:18:10 ntpd[1915]: Listen normally on 7 flannel.1 192.168.0.0:123 Feb 13 15:18:10.477631 ntpd[1915]: 13 Feb 15:18:10 ntpd[1915]: Listen normally on 8 flannel.1 [fe80::d034:e2ff:fe7f:80c1%4]:123 Feb 13 15:18:10.477141 ntpd[1915]: Listen normally on 8 flannel.1 [fe80::d034:e2ff:fe7f:80c1%4]:123 Feb 13 15:18:17.198761 containerd[1953]: time="2025-02-13T15:18:17.198530175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jb59s,Uid:b919846a-cf4b-4e4c-8590-9cbb8a6993dc,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:17.233170 systemd-networkd[1832]: cni0: Link UP Feb 13 15:18:17.233190 systemd-networkd[1832]: cni0: Gained carrier Feb 13 15:18:17.239460 (udev-worker)[4148]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:18:17.240509 systemd-networkd[1832]: cni0: Lost carrier Feb 13 15:18:17.248841 systemd-networkd[1832]: vethe4402451: Link UP Feb 13 15:18:17.257129 kernel: cni0: port 1(vethe4402451) entered blocking state Feb 13 15:18:17.257275 kernel: cni0: port 1(vethe4402451) entered disabled state Feb 13 15:18:17.258465 kernel: vethe4402451: entered allmulticast mode Feb 13 15:18:17.262149 kernel: vethe4402451: entered promiscuous mode Feb 13 15:18:17.264629 kernel: cni0: port 1(vethe4402451) entered blocking state Feb 13 15:18:17.264690 kernel: cni0: port 1(vethe4402451) entered forwarding state Feb 13 15:18:17.267112 kernel: cni0: port 1(vethe4402451) entered disabled state Feb 13 15:18:17.268632 (udev-worker)[4154]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:18:17.279725 kernel: cni0: port 1(vethe4402451) entered blocking state Feb 13 15:18:17.279859 kernel: cni0: port 1(vethe4402451) entered forwarding state Feb 13 15:18:17.280316 systemd-networkd[1832]: vethe4402451: Gained carrier Feb 13 15:18:17.281235 systemd-networkd[1832]: cni0: Gained carrier Feb 13 15:18:17.287200 containerd[1953]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000016938), "name":"cbr0", "type":"bridge"} Feb 13 15:18:17.287200 containerd[1953]: delegateAdd: netconf sent to delegate plugin: Feb 13 15:18:17.323537 containerd[1953]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T15:18:17.323357799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:17.323537 containerd[1953]: time="2025-02-13T15:18:17.323486487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:17.323537 containerd[1953]: time="2025-02-13T15:18:17.323527131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:17.323882 containerd[1953]: time="2025-02-13T15:18:17.323690343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:17.381443 systemd[1]: Started cri-containerd-12346a3f3aa9ef69754f9eeeaa55fa71dbfcf8528a2e89681b3f997a9ac18f83.scope - libcontainer container 12346a3f3aa9ef69754f9eeeaa55fa71dbfcf8528a2e89681b3f997a9ac18f83. Feb 13 15:18:17.447517 containerd[1953]: time="2025-02-13T15:18:17.447462460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jb59s,Uid:b919846a-cf4b-4e4c-8590-9cbb8a6993dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"12346a3f3aa9ef69754f9eeeaa55fa71dbfcf8528a2e89681b3f997a9ac18f83\"" Feb 13 15:18:17.457205 containerd[1953]: time="2025-02-13T15:18:17.454462408Z" level=info msg="CreateContainer within sandbox \"12346a3f3aa9ef69754f9eeeaa55fa71dbfcf8528a2e89681b3f997a9ac18f83\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:18:17.478754 containerd[1953]: time="2025-02-13T15:18:17.478532536Z" level=info msg="CreateContainer within sandbox \"12346a3f3aa9ef69754f9eeeaa55fa71dbfcf8528a2e89681b3f997a9ac18f83\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d91d2f580b8744ec8fae32ded195e56a07ca5ea53be414f288020281a54e13c\"" Feb 13 15:18:17.480114 containerd[1953]: time="2025-02-13T15:18:17.479850040Z" level=info msg="StartContainer for \"5d91d2f580b8744ec8fae32ded195e56a07ca5ea53be414f288020281a54e13c\"" Feb 13 15:18:17.528059 systemd[1]: Started cri-containerd-5d91d2f580b8744ec8fae32ded195e56a07ca5ea53be414f288020281a54e13c.scope - libcontainer container 5d91d2f580b8744ec8fae32ded195e56a07ca5ea53be414f288020281a54e13c. 
Feb 13 15:18:17.585644 containerd[1953]: time="2025-02-13T15:18:17.585486845Z" level=info msg="StartContainer for \"5d91d2f580b8744ec8fae32ded195e56a07ca5ea53be414f288020281a54e13c\" returns successfully" Feb 13 15:18:18.199806 containerd[1953]: time="2025-02-13T15:18:18.199059544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8dmtf,Uid:d1e2bc38-d6a9-43c7-98fd-158578c639e6,Namespace:kube-system,Attempt:0,}" Feb 13 15:18:18.239857 systemd-networkd[1832]: veth58c57aa6: Link UP Feb 13 15:18:18.244603 kernel: cni0: port 2(veth58c57aa6) entered blocking state Feb 13 15:18:18.244726 kernel: cni0: port 2(veth58c57aa6) entered disabled state Feb 13 15:18:18.244770 kernel: veth58c57aa6: entered allmulticast mode Feb 13 15:18:18.244898 kernel: veth58c57aa6: entered promiscuous mode Feb 13 15:18:18.255121 kernel: cni0: port 2(veth58c57aa6) entered blocking state Feb 13 15:18:18.255225 kernel: cni0: port 2(veth58c57aa6) entered forwarding state Feb 13 15:18:18.255357 systemd-networkd[1832]: veth58c57aa6: Gained carrier Feb 13 15:18:18.263049 containerd[1953]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000012938), "name":"cbr0", "type":"bridge"} Feb 13 15:18:18.263049 containerd[1953]: delegateAdd: netconf sent to delegate plugin: Feb 13 15:18:18.300029 containerd[1953]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T15:18:18.299624812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:18:18.300029 containerd[1953]: time="2025-02-13T15:18:18.299715076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:18:18.300029 containerd[1953]: time="2025-02-13T15:18:18.299754412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:18.300029 containerd[1953]: time="2025-02-13T15:18:18.299912596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:18:18.333339 systemd-networkd[1832]: vethe4402451: Gained IPv6LL Feb 13 15:18:18.347532 systemd[1]: Started cri-containerd-65836eaa1b1a2c06345b66cb0709cd81db9fcd7c9d629ce3bda6e6759fbf8f7c.scope - libcontainer container 65836eaa1b1a2c06345b66cb0709cd81db9fcd7c9d629ce3bda6e6759fbf8f7c. 
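The map[string]interface{} / delegateAdd dumps at 15:18:17 and 15:18:18 are the per-ADD netconf that the flannel CNI plugin hands to the bridge plugin (name cbr0, host-local IPAM on this node's 192.168.0.0/24, default-gateway bridge). That netconf is produced by combining /run/flannel/subnet.env with the static CNI config the DaemonSet installs on the host, conventionally /etc/cni/net.d/10-flannel.conflist, roughly of the form below (the portmap entry is the upstream default and is not directly visible in this log):

    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }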
Feb 13 15:18:18.412092 containerd[1953]: time="2025-02-13T15:18:18.411993665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8dmtf,Uid:d1e2bc38-d6a9-43c7-98fd-158578c639e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"65836eaa1b1a2c06345b66cb0709cd81db9fcd7c9d629ce3bda6e6759fbf8f7c\"" Feb 13 15:18:18.420496 containerd[1953]: time="2025-02-13T15:18:18.420317129Z" level=info msg="CreateContainer within sandbox \"65836eaa1b1a2c06345b66cb0709cd81db9fcd7c9d629ce3bda6e6759fbf8f7c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:18:18.479331 containerd[1953]: time="2025-02-13T15:18:18.479176217Z" level=info msg="CreateContainer within sandbox \"65836eaa1b1a2c06345b66cb0709cd81db9fcd7c9d629ce3bda6e6759fbf8f7c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"84c3818e728e88e95b79069c4998dc110ba7c30b32ebd8eb00702c76497cf005\"" Feb 13 15:18:18.486685 containerd[1953]: time="2025-02-13T15:18:18.483948677Z" level=info msg="StartContainer for \"84c3818e728e88e95b79069c4998dc110ba7c30b32ebd8eb00702c76497cf005\"" Feb 13 15:18:18.560744 systemd[1]: Started cri-containerd-84c3818e728e88e95b79069c4998dc110ba7c30b32ebd8eb00702c76497cf005.scope - libcontainer container 84c3818e728e88e95b79069c4998dc110ba7c30b32ebd8eb00702c76497cf005. Feb 13 15:18:18.574687 kubelet[3206]: I0213 15:18:18.573598 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-rxqj2" podStartSLOduration=17.032853285 podStartE2EDuration="23.573573978s" podCreationTimestamp="2025-02-13 15:17:55 +0000 UTC" firstStartedPulling="2025-02-13 15:17:57.563936385 +0000 UTC m=+15.658183831" lastFinishedPulling="2025-02-13 15:18:04.10465709 +0000 UTC m=+22.198904524" observedRunningTime="2025-02-13 15:18:06.430779533 +0000 UTC m=+24.525026991" watchObservedRunningTime="2025-02-13 15:18:18.573573978 +0000 UTC m=+36.667821436" Feb 13 15:18:18.577831 kubelet[3206]: I0213 15:18:18.576289 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jb59s" podStartSLOduration=23.576254766 podStartE2EDuration="23.576254766s" podCreationTimestamp="2025-02-13 15:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:18.576022638 +0000 UTC m=+36.670270072" watchObservedRunningTime="2025-02-13 15:18:18.576254766 +0000 UTC m=+36.670502224" Feb 13 15:18:18.680712 containerd[1953]: time="2025-02-13T15:18:18.680645622Z" level=info msg="StartContainer for \"84c3818e728e88e95b79069c4998dc110ba7c30b32ebd8eb00702c76497cf005\" returns successfully" Feb 13 15:18:18.840784 systemd-networkd[1832]: cni0: Gained IPv6LL Feb 13 15:18:19.465009 kubelet[3206]: I0213 15:18:19.464576 3206 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8dmtf" podStartSLOduration=24.464552574 podStartE2EDuration="24.464552574s" podCreationTimestamp="2025-02-13 15:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:19.462697158 +0000 UTC m=+37.556944616" watchObservedRunningTime="2025-02-13 15:18:19.464552574 +0000 UTC m=+37.558800032" Feb 13 15:18:19.622618 systemd[1]: Started sshd@5-172.31.29.130:22-139.178.68.195:37594.service - OpenSSH per-connection server daemon (139.178.68.195:37594). 
Feb 13 15:18:19.800564 systemd-networkd[1832]: veth58c57aa6: Gained IPv6LL Feb 13 15:18:19.816799 sshd[4351]: Accepted publickey for core from 139.178.68.195 port 37594 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:19.819385 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:19.826928 systemd-logind[1924]: New session 6 of user core. Feb 13 15:18:19.835497 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:18:20.117233 sshd[4356]: Connection closed by 139.178.68.195 port 37594 Feb 13 15:18:20.118050 sshd-session[4351]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:20.124389 systemd[1]: sshd@5-172.31.29.130:22-139.178.68.195:37594.service: Deactivated successfully. Feb 13 15:18:20.128261 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:18:20.132396 systemd-logind[1924]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:18:20.134972 systemd-logind[1924]: Removed session 6. Feb 13 15:18:22.477064 ntpd[1915]: Listen normally on 9 cni0 192.168.0.1:123 Feb 13 15:18:22.477250 ntpd[1915]: Listen normally on 10 cni0 [fe80::649a:2cff:fe02:dcfb%5]:123 Feb 13 15:18:22.477725 ntpd[1915]: 13 Feb 15:18:22 ntpd[1915]: Listen normally on 9 cni0 192.168.0.1:123 Feb 13 15:18:22.477725 ntpd[1915]: 13 Feb 15:18:22 ntpd[1915]: Listen normally on 10 cni0 [fe80::649a:2cff:fe02:dcfb%5]:123 Feb 13 15:18:22.477725 ntpd[1915]: 13 Feb 15:18:22 ntpd[1915]: Listen normally on 11 vethe4402451 [fe80::4cea:e9ff:fe18:3749%6]:123 Feb 13 15:18:22.477725 ntpd[1915]: 13 Feb 15:18:22 ntpd[1915]: Listen normally on 12 veth58c57aa6 [fe80::f841:12ff:fe24:cbe9%7]:123 Feb 13 15:18:22.477332 ntpd[1915]: Listen normally on 11 vethe4402451 [fe80::4cea:e9ff:fe18:3749%6]:123 Feb 13 15:18:22.477414 ntpd[1915]: Listen normally on 12 veth58c57aa6 [fe80::f841:12ff:fe24:cbe9%7]:123 Feb 13 15:18:25.157662 systemd[1]: Started sshd@6-172.31.29.130:22-139.178.68.195:37610.service - OpenSSH per-connection server daemon (139.178.68.195:37610). Feb 13 15:18:25.357635 sshd[4395]: Accepted publickey for core from 139.178.68.195 port 37610 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:25.360206 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:25.369418 systemd-logind[1924]: New session 7 of user core. Feb 13 15:18:25.374421 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:18:25.622777 sshd[4397]: Connection closed by 139.178.68.195 port 37610 Feb 13 15:18:25.623374 sshd-session[4395]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:25.630950 systemd[1]: sshd@6-172.31.29.130:22-139.178.68.195:37610.service: Deactivated successfully. Feb 13 15:18:25.636493 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:18:25.638251 systemd-logind[1924]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:18:25.640699 systemd-logind[1924]: Removed session 7. Feb 13 15:18:30.663571 systemd[1]: Started sshd@7-172.31.29.130:22-139.178.68.195:45056.service - OpenSSH per-connection server daemon (139.178.68.195:45056). Feb 13 15:18:30.857168 sshd[4433]: Accepted publickey for core from 139.178.68.195 port 45056 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:30.859699 sshd-session[4433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:30.868424 systemd-logind[1924]: New session 8 of user core. 
Feb 13 15:18:30.878380 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:18:31.130428 sshd[4435]: Connection closed by 139.178.68.195 port 45056 Feb 13 15:18:31.131397 sshd-session[4433]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:31.139643 systemd[1]: sshd@7-172.31.29.130:22-139.178.68.195:45056.service: Deactivated successfully. Feb 13 15:18:31.147222 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:18:31.149298 systemd-logind[1924]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:18:31.151283 systemd-logind[1924]: Removed session 8. Feb 13 15:18:36.168611 systemd[1]: Started sshd@8-172.31.29.130:22-139.178.68.195:45066.service - OpenSSH per-connection server daemon (139.178.68.195:45066). Feb 13 15:18:36.360252 sshd[4468]: Accepted publickey for core from 139.178.68.195 port 45066 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:36.362855 sshd-session[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:36.371898 systemd-logind[1924]: New session 9 of user core. Feb 13 15:18:36.377442 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:18:36.633890 sshd[4470]: Connection closed by 139.178.68.195 port 45066 Feb 13 15:18:36.635393 sshd-session[4468]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:36.642860 systemd[1]: sshd@8-172.31.29.130:22-139.178.68.195:45066.service: Deactivated successfully. Feb 13 15:18:36.647712 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:18:36.649214 systemd-logind[1924]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:18:36.651812 systemd-logind[1924]: Removed session 9. Feb 13 15:18:36.671665 systemd[1]: Started sshd@9-172.31.29.130:22-139.178.68.195:38480.service - OpenSSH per-connection server daemon (139.178.68.195:38480). Feb 13 15:18:36.863674 sshd[4482]: Accepted publickey for core from 139.178.68.195 port 38480 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:36.866286 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:36.874722 systemd-logind[1924]: New session 10 of user core. Feb 13 15:18:36.882347 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:18:37.200270 sshd[4490]: Connection closed by 139.178.68.195 port 38480 Feb 13 15:18:37.203530 sshd-session[4482]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:37.212914 systemd[1]: sshd@9-172.31.29.130:22-139.178.68.195:38480.service: Deactivated successfully. Feb 13 15:18:37.219604 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:18:37.228402 systemd-logind[1924]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:18:37.250601 systemd[1]: Started sshd@10-172.31.29.130:22-139.178.68.195:38492.service - OpenSSH per-connection server daemon (139.178.68.195:38492). Feb 13 15:18:37.252714 systemd-logind[1924]: Removed session 10. Feb 13 15:18:37.444108 sshd[4514]: Accepted publickey for core from 139.178.68.195 port 38492 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:37.447306 sshd-session[4514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:37.456012 systemd-logind[1924]: New session 11 of user core. Feb 13 15:18:37.464569 systemd[1]: Started session-11.scope - Session 11 of User core. 
Feb 13 15:18:37.720819 sshd[4516]: Connection closed by 139.178.68.195 port 38492 Feb 13 15:18:37.722051 sshd-session[4514]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:37.729639 systemd[1]: sshd@10-172.31.29.130:22-139.178.68.195:38492.service: Deactivated successfully. Feb 13 15:18:37.733419 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:18:37.735972 systemd-logind[1924]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:18:37.738747 systemd-logind[1924]: Removed session 11. Feb 13 15:18:42.762891 systemd[1]: Started sshd@11-172.31.29.130:22-139.178.68.195:38508.service - OpenSSH per-connection server daemon (139.178.68.195:38508). Feb 13 15:18:42.958100 sshd[4549]: Accepted publickey for core from 139.178.68.195 port 38508 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:42.960705 sshd-session[4549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:42.969173 systemd-logind[1924]: New session 12 of user core. Feb 13 15:18:42.978339 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:18:43.241107 sshd[4551]: Connection closed by 139.178.68.195 port 38508 Feb 13 15:18:43.239609 sshd-session[4549]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:43.251641 systemd[1]: sshd@11-172.31.29.130:22-139.178.68.195:38508.service: Deactivated successfully. Feb 13 15:18:43.263169 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:18:43.268878 systemd-logind[1924]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:18:43.274808 systemd-logind[1924]: Removed session 12. Feb 13 15:18:48.278583 systemd[1]: Started sshd@12-172.31.29.130:22-139.178.68.195:40364.service - OpenSSH per-connection server daemon (139.178.68.195:40364). Feb 13 15:18:48.477689 sshd[4583]: Accepted publickey for core from 139.178.68.195 port 40364 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:48.480240 sshd-session[4583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:48.488764 systemd-logind[1924]: New session 13 of user core. Feb 13 15:18:48.498340 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:18:48.746733 sshd[4585]: Connection closed by 139.178.68.195 port 40364 Feb 13 15:18:48.748063 sshd-session[4583]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:48.753730 systemd-logind[1924]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:18:48.754579 systemd[1]: sshd@12-172.31.29.130:22-139.178.68.195:40364.service: Deactivated successfully. Feb 13 15:18:48.759322 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:18:48.764262 systemd-logind[1924]: Removed session 13. Feb 13 15:18:48.787652 systemd[1]: Started sshd@13-172.31.29.130:22-139.178.68.195:40372.service - OpenSSH per-connection server daemon (139.178.68.195:40372). Feb 13 15:18:48.979507 sshd[4596]: Accepted publickey for core from 139.178.68.195 port 40372 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:48.982149 sshd-session[4596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:48.991135 systemd-logind[1924]: New session 14 of user core. Feb 13 15:18:48.998352 systemd[1]: Started session-14.scope - Session 14 of User core. 
Feb 13 15:18:49.314739 sshd[4598]: Connection closed by 139.178.68.195 port 40372 Feb 13 15:18:49.315971 sshd-session[4596]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:49.323061 systemd[1]: sshd@13-172.31.29.130:22-139.178.68.195:40372.service: Deactivated successfully. Feb 13 15:18:49.327866 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:18:49.329533 systemd-logind[1924]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:18:49.331823 systemd-logind[1924]: Removed session 14. Feb 13 15:18:49.354641 systemd[1]: Started sshd@14-172.31.29.130:22-139.178.68.195:40378.service - OpenSSH per-connection server daemon (139.178.68.195:40378). Feb 13 15:18:49.543371 sshd[4607]: Accepted publickey for core from 139.178.68.195 port 40378 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:49.546155 sshd-session[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:49.554227 systemd-logind[1924]: New session 15 of user core. Feb 13 15:18:49.569441 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:18:52.025583 sshd[4609]: Connection closed by 139.178.68.195 port 40378 Feb 13 15:18:52.026179 sshd-session[4607]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:52.040444 systemd[1]: sshd@14-172.31.29.130:22-139.178.68.195:40378.service: Deactivated successfully. Feb 13 15:18:52.047956 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:18:52.053243 systemd-logind[1924]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:18:52.083396 systemd[1]: Started sshd@15-172.31.29.130:22-139.178.68.195:40390.service - OpenSSH per-connection server daemon (139.178.68.195:40390). Feb 13 15:18:52.088811 systemd-logind[1924]: Removed session 15. Feb 13 15:18:52.281756 sshd[4633]: Accepted publickey for core from 139.178.68.195 port 40390 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:52.284814 sshd-session[4633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:52.296490 systemd-logind[1924]: New session 16 of user core. Feb 13 15:18:52.304356 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:18:52.811318 sshd[4648]: Connection closed by 139.178.68.195 port 40390 Feb 13 15:18:52.813449 sshd-session[4633]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:52.820883 systemd[1]: sshd@15-172.31.29.130:22-139.178.68.195:40390.service: Deactivated successfully. Feb 13 15:18:52.825736 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:18:52.827561 systemd-logind[1924]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:18:52.830190 systemd-logind[1924]: Removed session 16. Feb 13 15:18:52.851646 systemd[1]: Started sshd@16-172.31.29.130:22-139.178.68.195:40400.service - OpenSSH per-connection server daemon (139.178.68.195:40400). Feb 13 15:18:53.040977 sshd[4658]: Accepted publickey for core from 139.178.68.195 port 40400 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:53.043489 sshd-session[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:53.051462 systemd-logind[1924]: New session 17 of user core. Feb 13 15:18:53.058361 systemd[1]: Started session-17.scope - Session 17 of User core. 
Feb 13 15:18:53.310658 sshd[4660]: Connection closed by 139.178.68.195 port 40400 Feb 13 15:18:53.310461 sshd-session[4658]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:53.318789 systemd[1]: sshd@16-172.31.29.130:22-139.178.68.195:40400.service: Deactivated successfully. Feb 13 15:18:53.322757 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:18:53.324663 systemd-logind[1924]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:18:53.327480 systemd-logind[1924]: Removed session 17. Feb 13 15:18:58.351711 systemd[1]: Started sshd@17-172.31.29.130:22-139.178.68.195:59944.service - OpenSSH per-connection server daemon (139.178.68.195:59944). Feb 13 15:18:58.537831 sshd[4695]: Accepted publickey for core from 139.178.68.195 port 59944 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:18:58.540778 sshd-session[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:18:58.549654 systemd-logind[1924]: New session 18 of user core. Feb 13 15:18:58.561430 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:18:58.803130 sshd[4697]: Connection closed by 139.178.68.195 port 59944 Feb 13 15:18:58.803949 sshd-session[4695]: pam_unix(sshd:session): session closed for user core Feb 13 15:18:58.809298 systemd-logind[1924]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:18:58.812007 systemd[1]: sshd@17-172.31.29.130:22-139.178.68.195:59944.service: Deactivated successfully. Feb 13 15:18:58.816010 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:18:58.818685 systemd-logind[1924]: Removed session 18. Feb 13 15:19:03.843591 systemd[1]: Started sshd@18-172.31.29.130:22-139.178.68.195:59948.service - OpenSSH per-connection server daemon (139.178.68.195:59948). Feb 13 15:19:04.024690 sshd[4732]: Accepted publickey for core from 139.178.68.195 port 59948 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:19:04.027384 sshd-session[4732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:04.035034 systemd-logind[1924]: New session 19 of user core. Feb 13 15:19:04.043347 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:19:04.284283 sshd[4734]: Connection closed by 139.178.68.195 port 59948 Feb 13 15:19:04.285459 sshd-session[4732]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:04.290774 systemd[1]: sshd@18-172.31.29.130:22-139.178.68.195:59948.service: Deactivated successfully. Feb 13 15:19:04.294884 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:19:04.299812 systemd-logind[1924]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:19:04.302357 systemd-logind[1924]: Removed session 19. Feb 13 15:19:09.330599 systemd[1]: Started sshd@19-172.31.29.130:22-139.178.68.195:49578.service - OpenSSH per-connection server daemon (139.178.68.195:49578). Feb 13 15:19:09.516424 sshd[4767]: Accepted publickey for core from 139.178.68.195 port 49578 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:19:09.518919 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:09.528209 systemd-logind[1924]: New session 20 of user core. Feb 13 15:19:09.533390 systemd[1]: Started session-20.scope - Session 20 of User core. 
Feb 13 15:19:09.773457 sshd[4769]: Connection closed by 139.178.68.195 port 49578 Feb 13 15:19:09.774661 sshd-session[4767]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:09.782625 systemd[1]: sshd@19-172.31.29.130:22-139.178.68.195:49578.service: Deactivated successfully. Feb 13 15:19:09.787994 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:19:09.790262 systemd-logind[1924]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:19:09.792629 systemd-logind[1924]: Removed session 20. Feb 13 15:19:14.813605 systemd[1]: Started sshd@20-172.31.29.130:22-139.178.68.195:49580.service - OpenSSH per-connection server daemon (139.178.68.195:49580). Feb 13 15:19:15.003856 sshd[4800]: Accepted publickey for core from 139.178.68.195 port 49580 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o Feb 13 15:19:15.007154 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:19:15.022292 systemd-logind[1924]: New session 21 of user core. Feb 13 15:19:15.030524 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 15:19:15.284504 sshd[4802]: Connection closed by 139.178.68.195 port 49580 Feb 13 15:19:15.285547 sshd-session[4800]: pam_unix(sshd:session): session closed for user core Feb 13 15:19:15.292911 systemd[1]: sshd@20-172.31.29.130:22-139.178.68.195:49580.service: Deactivated successfully. Feb 13 15:19:15.293343 systemd-logind[1924]: Session 21 logged out. Waiting for processes to exit. Feb 13 15:19:15.300333 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 15:19:15.306210 systemd-logind[1924]: Removed session 21. Feb 13 15:19:29.322889 systemd[1]: cri-containerd-4672473e41b3d655b9d48cc20cdfd9e2e54c8369ec3767964caaf4a63fd936dc.scope: Deactivated successfully. Feb 13 15:19:29.324163 systemd[1]: cri-containerd-4672473e41b3d655b9d48cc20cdfd9e2e54c8369ec3767964caaf4a63fd936dc.scope: Consumed 4.021s CPU time, 21.9M memory peak, 0B memory swap peak. Feb 13 15:19:29.367472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4672473e41b3d655b9d48cc20cdfd9e2e54c8369ec3767964caaf4a63fd936dc-rootfs.mount: Deactivated successfully. 
Feb 13 15:19:29.385913 containerd[1953]: time="2025-02-13T15:19:29.385820785Z" level=info msg="shim disconnected" id=4672473e41b3d655b9d48cc20cdfd9e2e54c8369ec3767964caaf4a63fd936dc namespace=k8s.io Feb 13 15:19:29.385913 containerd[1953]: time="2025-02-13T15:19:29.385899697Z" level=warning msg="cleaning up after shim disconnected" id=4672473e41b3d655b9d48cc20cdfd9e2e54c8369ec3767964caaf4a63fd936dc namespace=k8s.io Feb 13 15:19:29.386708 containerd[1953]: time="2025-02-13T15:19:29.385924189Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:19:29.627178 kubelet[3206]: I0213 15:19:29.625807 3206 scope.go:117] "RemoveContainer" containerID="4672473e41b3d655b9d48cc20cdfd9e2e54c8369ec3767964caaf4a63fd936dc" Feb 13 15:19:29.632718 containerd[1953]: time="2025-02-13T15:19:29.632646735Z" level=info msg="CreateContainer within sandbox \"2b78b561c246832dafaed4697e55dd7430b831087dd8857c72bffc7f86b60ea4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 15:19:29.660098 containerd[1953]: time="2025-02-13T15:19:29.660017487Z" level=info msg="CreateContainer within sandbox \"2b78b561c246832dafaed4697e55dd7430b831087dd8857c72bffc7f86b60ea4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d22d2fd411e804be7b678e996b7f4e0770b362d704adb5940b881ca50c4375bd\"" Feb 13 15:19:29.660823 containerd[1953]: time="2025-02-13T15:19:29.660762255Z" level=info msg="StartContainer for \"d22d2fd411e804be7b678e996b7f4e0770b362d704adb5940b881ca50c4375bd\"" Feb 13 15:19:29.713428 systemd[1]: Started cri-containerd-d22d2fd411e804be7b678e996b7f4e0770b362d704adb5940b881ca50c4375bd.scope - libcontainer container d22d2fd411e804be7b678e996b7f4e0770b362d704adb5940b881ca50c4375bd. Feb 13 15:19:29.785201 containerd[1953]: time="2025-02-13T15:19:29.785132523Z" level=info msg="StartContainer for \"d22d2fd411e804be7b678e996b7f4e0770b362d704adb5940b881ca50c4375bd\" returns successfully" Feb 13 15:19:33.825186 kubelet[3206]: E0213 15:19:33.824752 3206 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-130?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 13 15:19:34.286477 systemd[1]: cri-containerd-52dfef469c8a9ff0e1dcccebdbeb12958a5228e8efc322e9d6259275c38e87e5.scope: Deactivated successfully. Feb 13 15:19:34.286970 systemd[1]: cri-containerd-52dfef469c8a9ff0e1dcccebdbeb12958a5228e8efc322e9d6259275c38e87e5.scope: Consumed 3.185s CPU time, 16.7M memory peak, 0B memory swap peak. Feb 13 15:19:34.333875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52dfef469c8a9ff0e1dcccebdbeb12958a5228e8efc322e9d6259275c38e87e5-rootfs.mount: Deactivated successfully. 
Feb 13 15:19:34.349161 containerd[1953]: time="2025-02-13T15:19:34.349009338Z" level=info msg="shim disconnected" id=52dfef469c8a9ff0e1dcccebdbeb12958a5228e8efc322e9d6259275c38e87e5 namespace=k8s.io Feb 13 15:19:34.349161 containerd[1953]: time="2025-02-13T15:19:34.349145418Z" level=warning msg="cleaning up after shim disconnected" id=52dfef469c8a9ff0e1dcccebdbeb12958a5228e8efc322e9d6259275c38e87e5 namespace=k8s.io Feb 13 15:19:34.349161 containerd[1953]: time="2025-02-13T15:19:34.349169310Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:19:34.648132 kubelet[3206]: I0213 15:19:34.646460 3206 scope.go:117] "RemoveContainer" containerID="52dfef469c8a9ff0e1dcccebdbeb12958a5228e8efc322e9d6259275c38e87e5" Feb 13 15:19:34.650533 containerd[1953]: time="2025-02-13T15:19:34.650484127Z" level=info msg="CreateContainer within sandbox \"b216840a192c2ef013bc396bbfb2218130847aa993363a7a6b303e27f3215df2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 13 15:19:34.675522 containerd[1953]: time="2025-02-13T15:19:34.675389792Z" level=info msg="CreateContainer within sandbox \"b216840a192c2ef013bc396bbfb2218130847aa993363a7a6b303e27f3215df2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4a5e4a57a53571d6f8321faecd479fb0bb1fe1d82b6a0aba68c21b72cacbbfcc\"" Feb 13 15:19:34.676519 containerd[1953]: time="2025-02-13T15:19:34.676461512Z" level=info msg="StartContainer for \"4a5e4a57a53571d6f8321faecd479fb0bb1fe1d82b6a0aba68c21b72cacbbfcc\"" Feb 13 15:19:34.738587 systemd[1]: Started cri-containerd-4a5e4a57a53571d6f8321faecd479fb0bb1fe1d82b6a0aba68c21b72cacbbfcc.scope - libcontainer container 4a5e4a57a53571d6f8321faecd479fb0bb1fe1d82b6a0aba68c21b72cacbbfcc. Feb 13 15:19:34.801984 containerd[1953]: time="2025-02-13T15:19:34.801894116Z" level=info msg="StartContainer for \"4a5e4a57a53571d6f8321faecd479fb0bb1fe1d82b6a0aba68c21b72cacbbfcc\" returns successfully"
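The two restart sequences at 15:19:29 (kube-controller-manager) and 15:19:34 (kube-scheduler) each end with a new container created at Attempt:1 inside the existing sandbox, i.e. the kubelet replaced the exited container in place rather than recreating the static pods. Assuming crictl on the node is pointed at this containerd socket, the restarts can be confirmed from the ATTEMPT column:

    crictl ps --name 'kube-(controller-manager|scheduler)'
    # ATTEMPT should now read 1 for both containers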