Dec 13 13:13:23.158135 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Dec 13 13:13:23.158186 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri Dec 13 11:56:07 -00 2024 Dec 13 13:13:23.158212 kernel: KASLR disabled due to lack of seed Dec 13 13:13:23.158411 kernel: efi: EFI v2.7 by EDK II Dec 13 13:13:23.158434 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598 Dec 13 13:13:23.158451 kernel: secureboot: Secure boot disabled Dec 13 13:13:23.158469 kernel: ACPI: Early table checksum verification disabled Dec 13 13:13:23.158484 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Dec 13 13:13:23.158500 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Dec 13 13:13:23.158515 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 13 13:13:23.158538 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Dec 13 13:13:23.158554 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 13 13:13:23.158569 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Dec 13 13:13:23.158584 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Dec 13 13:13:23.158602 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Dec 13 13:13:23.158623 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 13 13:13:23.158640 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Dec 13 13:13:23.158656 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Dec 13 13:13:23.158671 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Dec 13 13:13:23.158688 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Dec 13 13:13:23.158704 kernel: printk: bootconsole [uart0] enabled Dec 13 13:13:23.158719 kernel: NUMA: Failed to initialise from firmware Dec 13 13:13:23.158736 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Dec 13 13:13:23.158752 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Dec 13 13:13:23.158767 kernel: Zone ranges: Dec 13 13:13:23.158783 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Dec 13 13:13:23.158803 kernel: DMA32 empty Dec 13 13:13:23.158820 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Dec 13 13:13:23.158836 kernel: Movable zone start for each node Dec 13 13:13:23.158851 kernel: Early memory node ranges Dec 13 13:13:23.158867 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Dec 13 13:13:23.158883 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Dec 13 13:13:23.158899 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Dec 13 13:13:23.158914 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Dec 13 13:13:23.158930 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Dec 13 13:13:23.158946 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Dec 13 13:13:23.158962 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Dec 13 13:13:23.158978 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Dec 13 13:13:23.158998 kernel: Initmem setup node 0 [mem 
0x0000000040000000-0x00000004b5ffffff] Dec 13 13:13:23.159015 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Dec 13 13:13:23.159037 kernel: psci: probing for conduit method from ACPI. Dec 13 13:13:23.159054 kernel: psci: PSCIv1.0 detected in firmware. Dec 13 13:13:23.159071 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 13:13:23.159092 kernel: psci: Trusted OS migration not required Dec 13 13:13:23.159109 kernel: psci: SMC Calling Convention v1.1 Dec 13 13:13:23.159125 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 13:13:23.159142 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 13:13:23.159159 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 13:13:23.159176 kernel: Detected PIPT I-cache on CPU0 Dec 13 13:13:23.159193 kernel: CPU features: detected: GIC system register CPU interface Dec 13 13:13:23.159209 kernel: CPU features: detected: Spectre-v2 Dec 13 13:13:23.159289 kernel: CPU features: detected: Spectre-v3a Dec 13 13:13:23.159312 kernel: CPU features: detected: Spectre-BHB Dec 13 13:13:23.159329 kernel: CPU features: detected: ARM erratum 1742098 Dec 13 13:13:23.159347 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Dec 13 13:13:23.159371 kernel: alternatives: applying boot alternatives Dec 13 13:13:23.159390 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 13:13:23.159409 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 13:13:23.159426 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 13:13:23.159443 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 13:13:23.159459 kernel: Fallback order for Node 0: 0 Dec 13 13:13:23.159476 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Dec 13 13:13:23.159492 kernel: Policy zone: Normal Dec 13 13:13:23.159509 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 13:13:23.159525 kernel: software IO TLB: area num 2. Dec 13 13:13:23.159547 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Dec 13 13:13:23.159564 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved) Dec 13 13:13:23.159584 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 13:13:23.159601 kernel: trace event string verifier disabled Dec 13 13:13:23.159617 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 13:13:23.159635 kernel: rcu: RCU event tracing is enabled. Dec 13 13:13:23.159652 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 13:13:23.159669 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 13:13:23.159686 kernel: Tracing variant of Tasks RCU enabled. Dec 13 13:13:23.159703 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
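The kernel command line logged above is where Flatcar's boot knobs live (flatcar.first_boot, flatcar.oem.id, verity.usrhash, the duplicated console= entries), and dracut echoes the same string back further down. A minimal sketch, in Python, of splitting such a cmdline into bare flags and key=value options the way a userspace helper might before acting on them; the cmdline literal is copied from the log, while parse_cmdline is an illustrative name, not a real Flatcar or dracut API. On a live system the same string is available from /proc/cmdline.

```python
# Minimal sketch: split a kernel command line (as logged above) into bare
# flags and key=value options. parse_cmdline is illustrative, not a real API.
import shlex

CMDLINE = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 "
    "console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force "
    "flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 "
    "nvme_core.io_timeout=4294967295 "
    "verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472"
)

def parse_cmdline(cmdline: str):
    """Return (flags, options): bare words and key=value pairs.

    Repeated keys (e.g. console=) keep every value, in order.
    """
    flags, options = [], {}
    for token in shlex.split(cmdline):
        if "=" in token:
            key, value = token.split("=", 1)
            options.setdefault(key, []).append(value)
        else:
            flags.append(token)
    return flags, options

flags, options = parse_cmdline(CMDLINE)
print(flags)                       # ['earlycon']
print(options["console"])          # ['tty1', 'ttyS0,115200n8']
print(options["flatcar.oem.id"])   # ['ec2']
```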
Dec 13 13:13:23.159720 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 13:13:23.159743 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 13:13:23.159761 kernel: GICv3: 96 SPIs implemented Dec 13 13:13:23.159778 kernel: GICv3: 0 Extended SPIs implemented Dec 13 13:13:23.159795 kernel: Root IRQ handler: gic_handle_irq Dec 13 13:13:23.159811 kernel: GICv3: GICv3 features: 16 PPIs Dec 13 13:13:23.159827 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Dec 13 13:13:23.159844 kernel: ITS [mem 0x10080000-0x1009ffff] Dec 13 13:13:23.159861 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 13:13:23.159878 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Dec 13 13:13:23.159895 kernel: GICv3: using LPI property table @0x00000004000d0000 Dec 13 13:13:23.159912 kernel: ITS: Using hypervisor restricted LPI range [128] Dec 13 13:13:23.159928 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Dec 13 13:13:23.159950 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 13:13:23.159967 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Dec 13 13:13:23.159985 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Dec 13 13:13:23.160001 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Dec 13 13:13:23.160018 kernel: Console: colour dummy device 80x25 Dec 13 13:13:23.160036 kernel: printk: console [tty1] enabled Dec 13 13:13:23.160053 kernel: ACPI: Core revision 20230628 Dec 13 13:13:23.160071 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Dec 13 13:13:23.160088 kernel: pid_max: default: 32768 minimum: 301 Dec 13 13:13:23.160106 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 13:13:23.160127 kernel: landlock: Up and running. Dec 13 13:13:23.160144 kernel: SELinux: Initializing. Dec 13 13:13:23.160162 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 13:13:23.160179 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 13:13:23.160196 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:13:23.160214 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:13:23.160277 kernel: rcu: Hierarchical SRCU implementation. Dec 13 13:13:23.160299 kernel: rcu: Max phase no-delay instances is 400. Dec 13 13:13:23.160317 kernel: Platform MSI: ITS@0x10080000 domain created Dec 13 13:13:23.160341 kernel: PCI/MSI: ITS@0x10080000 domain created Dec 13 13:13:23.160358 kernel: Remapping and enabling EFI services. Dec 13 13:13:23.160375 kernel: smp: Bringing up secondary CPUs ... Dec 13 13:13:23.160392 kernel: Detected PIPT I-cache on CPU1 Dec 13 13:13:23.160410 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Dec 13 13:13:23.160427 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Dec 13 13:13:23.160444 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Dec 13 13:13:23.160460 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 13:13:23.160477 kernel: SMP: Total of 2 processors activated. 
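The timer lines above are internally consistent: the 83.33 MHz arch timer gives the 12 ns sched_clock resolution, the skipped delay-loop calibration derives lpj=83333 from that same rate (which implies a 1000 Hz tick; the HZ value is inferred here, not stated in the log), and 166.66 BogoMIPS is just lpj/500. A quick arithmetic cross-check over the values as logged:

```python
# Sanity-check the timer arithmetic reported above (pure arithmetic on values
# taken from the log; nothing here queries the running system).
RATE_HZ = 83_333_333         # "arch_timer: cp15 timer(s) running at 83.33MHz"
LPJ = 83_333                 # "lpj=83333"
WRAP_NS = 4_398_046_511_100  # "wraps every 4398046511100ns"

# Resolution: one timer tick in nanoseconds -> ~12 ns, as logged.
print(round(1e9 / RATE_HZ))          # 12

# With calibration skipped, lpj comes from the timer: rate / HZ.
# The logged (rate, lpj) pair therefore implies a 1000 Hz tick.
print(round(RATE_HZ / LPJ))          # 1000

# BogoMIPS is printed as lpj / (500000 / HZ) = 83333 / 500.
print(f"{LPJ / 500:.2f}")            # 166.67 (the kernel truncates to 166.66)

# The sched_clock wrap interval is ~2**42 ns, i.e. a bit over 73 minutes.
print(round(WRAP_NS / 1e9 / 60, 1))  # 73.3
```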
Dec 13 13:13:23.160499 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 13:13:23.160516 kernel: CPU features: detected: 32-bit EL1 Support Dec 13 13:13:23.160544 kernel: CPU features: detected: CRC32 instructions Dec 13 13:13:23.160566 kernel: CPU: All CPU(s) started at EL1 Dec 13 13:13:23.160584 kernel: alternatives: applying system-wide alternatives Dec 13 13:13:23.160601 kernel: devtmpfs: initialized Dec 13 13:13:23.160619 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 13:13:23.160637 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 13:13:23.160655 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 13:13:23.160677 kernel: SMBIOS 3.0.0 present. Dec 13 13:13:23.160695 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Dec 13 13:13:23.160713 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 13:13:23.160731 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 13:13:23.160749 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 13:13:23.160767 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 13:13:23.160785 kernel: audit: initializing netlink subsys (disabled) Dec 13 13:13:23.160807 kernel: audit: type=2000 audit(0.220:1): state=initialized audit_enabled=0 res=1 Dec 13 13:13:23.160825 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 13:13:23.160843 kernel: cpuidle: using governor menu Dec 13 13:13:23.160860 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 13:13:23.160878 kernel: ASID allocator initialised with 65536 entries Dec 13 13:13:23.160896 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 13:13:23.160915 kernel: Serial: AMBA PL011 UART driver Dec 13 13:13:23.160933 kernel: Modules: 17360 pages in range for non-PLT usage Dec 13 13:13:23.160951 kernel: Modules: 508880 pages in range for PLT usage Dec 13 13:13:23.160968 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 13:13:23.160990 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 13:13:23.161008 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 13:13:23.161026 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 13:13:23.161044 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 13:13:23.161062 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 13:13:23.161080 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 13:13:23.161097 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 13:13:23.161115 kernel: ACPI: Added _OSI(Module Device) Dec 13 13:13:23.161133 kernel: ACPI: Added _OSI(Processor Device) Dec 13 13:13:23.161154 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 13:13:23.161172 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 13:13:23.161190 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 13:13:23.161207 kernel: ACPI: Interpreter enabled Dec 13 13:13:23.161286 kernel: ACPI: Using GIC for interrupt routing Dec 13 13:13:23.161311 kernel: ACPI: MCFG table detected, 1 entries Dec 13 13:13:23.161330 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Dec 13 13:13:23.161618 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 13:13:23.161830 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 13:13:23.162048 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 13:13:23.162268 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Dec 13 13:13:23.164487 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Dec 13 13:13:23.164531 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Dec 13 13:13:23.164551 kernel: acpiphp: Slot [1] registered Dec 13 13:13:23.164570 kernel: acpiphp: Slot [2] registered Dec 13 13:13:23.164588 kernel: acpiphp: Slot [3] registered Dec 13 13:13:23.164615 kernel: acpiphp: Slot [4] registered Dec 13 13:13:23.164633 kernel: acpiphp: Slot [5] registered Dec 13 13:13:23.164651 kernel: acpiphp: Slot [6] registered Dec 13 13:13:23.164669 kernel: acpiphp: Slot [7] registered Dec 13 13:13:23.164686 kernel: acpiphp: Slot [8] registered Dec 13 13:13:23.164704 kernel: acpiphp: Slot [9] registered Dec 13 13:13:23.164722 kernel: acpiphp: Slot [10] registered Dec 13 13:13:23.164740 kernel: acpiphp: Slot [11] registered Dec 13 13:13:23.164757 kernel: acpiphp: Slot [12] registered Dec 13 13:13:23.164779 kernel: acpiphp: Slot [13] registered Dec 13 13:13:23.164798 kernel: acpiphp: Slot [14] registered Dec 13 13:13:23.164817 kernel: acpiphp: Slot [15] registered Dec 13 13:13:23.164834 kernel: acpiphp: Slot [16] registered Dec 13 13:13:23.164852 kernel: acpiphp: Slot [17] registered Dec 13 13:13:23.164870 kernel: acpiphp: Slot [18] registered Dec 13 13:13:23.164889 kernel: acpiphp: Slot [19] registered Dec 13 13:13:23.164906 kernel: acpiphp: Slot [20] registered Dec 13 13:13:23.164924 kernel: acpiphp: Slot [21] registered Dec 13 13:13:23.164943 kernel: acpiphp: Slot [22] registered Dec 13 13:13:23.164965 kernel: acpiphp: Slot [23] registered Dec 13 13:13:23.164983 kernel: acpiphp: Slot [24] registered Dec 13 13:13:23.165001 kernel: acpiphp: Slot [25] registered Dec 13 13:13:23.165018 kernel: acpiphp: Slot [26] registered Dec 13 13:13:23.165036 kernel: acpiphp: Slot [27] registered Dec 13 13:13:23.165053 kernel: acpiphp: Slot [28] registered Dec 13 13:13:23.165071 kernel: acpiphp: Slot [29] registered Dec 13 13:13:23.165091 kernel: acpiphp: Slot [30] registered Dec 13 13:13:23.165109 kernel: acpiphp: Slot [31] registered Dec 13 13:13:23.165130 kernel: PCI host bridge to bus 0000:00 Dec 13 13:13:23.165390 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Dec 13 13:13:23.165585 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 13:13:23.165774 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Dec 13 13:13:23.165959 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Dec 13 13:13:23.168296 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Dec 13 13:13:23.168638 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Dec 13 13:13:23.168860 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Dec 13 13:13:23.169079 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Dec 13 13:13:23.169324 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Dec 13 13:13:23.169544 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 13:13:23.169767 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Dec 13 13:13:23.169970 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Dec 13 13:13:23.170212 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Dec 13 13:13:23.172932 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Dec 13 13:13:23.173141 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 13:13:23.173402 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Dec 13 13:13:23.173616 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Dec 13 13:13:23.173820 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Dec 13 13:13:23.174039 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Dec 13 13:13:23.176369 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Dec 13 13:13:23.176611 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Dec 13 13:13:23.176797 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 13:13:23.176980 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Dec 13 13:13:23.177006 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 13:13:23.177025 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 13:13:23.177044 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 13:13:23.177062 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 13:13:23.177090 kernel: iommu: Default domain type: Translated Dec 13 13:13:23.177109 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 13:13:23.177128 kernel: efivars: Registered efivars operations Dec 13 13:13:23.177146 kernel: vgaarb: loaded Dec 13 13:13:23.177165 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 13:13:23.177183 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 13:13:23.177201 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 13:13:23.177219 kernel: pnp: PnP ACPI init Dec 13 13:13:23.177509 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Dec 13 13:13:23.177544 kernel: pnp: PnP ACPI: found 1 devices Dec 13 13:13:23.177562 kernel: NET: Registered PF_INET protocol family Dec 13 13:13:23.177580 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 13:13:23.177598 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 13:13:23.177616 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 13:13:23.177634 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 13:13:23.177652 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 13:13:23.177670 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 13:13:23.177693 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 13:13:23.177712 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 13:13:23.177730 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 13:13:23.177773 kernel: PCI: CLS 0 bytes, default 64 Dec 13 13:13:23.177816 kernel: kvm [1]: HYP mode not available Dec 13 13:13:23.177836 kernel: Initialise system trusted keyrings Dec 13 13:13:23.177855 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 13:13:23.177873 kernel: Key type asymmetric registered Dec 13 13:13:23.177890 kernel: Asymmetric key parser 'x509' registered Dec 13 13:13:23.177916 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 13:13:23.177934 kernel: io scheduler mq-deadline registered Dec 13 
13:13:23.177952 kernel: io scheduler kyber registered Dec 13 13:13:23.177971 kernel: io scheduler bfq registered Dec 13 13:13:23.178209 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Dec 13 13:13:23.179082 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 13:13:23.179677 kernel: ACPI: button: Power Button [PWRB] Dec 13 13:13:23.179976 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Dec 13 13:13:23.180283 kernel: ACPI: button: Sleep Button [SLPB] Dec 13 13:13:23.180352 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:13:23.180372 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 13 13:13:23.180609 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Dec 13 13:13:23.180635 kernel: printk: console [ttyS0] disabled Dec 13 13:13:23.180654 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Dec 13 13:13:23.180672 kernel: printk: console [ttyS0] enabled Dec 13 13:13:23.180690 kernel: printk: bootconsole [uart0] disabled Dec 13 13:13:23.180708 kernel: thunder_xcv, ver 1.0 Dec 13 13:13:23.180725 kernel: thunder_bgx, ver 1.0 Dec 13 13:13:23.180749 kernel: nicpf, ver 1.0 Dec 13 13:13:23.180767 kernel: nicvf, ver 1.0 Dec 13 13:13:23.180989 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 13:13:23.181178 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T13:13:22 UTC (1734095602) Dec 13 13:13:23.184825 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 13:13:23.184858 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Dec 13 13:13:23.184877 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 13:13:23.184896 kernel: watchdog: Hard watchdog permanently disabled Dec 13 13:13:23.184924 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:13:23.184942 kernel: Segment Routing with IPv6 Dec 13 13:13:23.184959 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:13:23.184977 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:13:23.184995 kernel: Key type dns_resolver registered Dec 13 13:13:23.185013 kernel: registered taskstats version 1 Dec 13 13:13:23.185031 kernel: Loading compiled-in X.509 certificates Dec 13 13:13:23.185049 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78' Dec 13 13:13:23.185066 kernel: Key type .fscrypt registered Dec 13 13:13:23.185089 kernel: Key type fscrypt-provisioning registered Dec 13 13:13:23.185106 kernel: ima: No TPM chip found, activating TPM-bypass! 
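The rtc-efi line above reports the same instant in two forms; a one-liner confirms that epoch 1734095602 is indeed 2024-12-13T13:13:22 UTC, about a second behind the journal timestamps around it:

```python
# Cross-check the RTC line above: epoch 1734095602 should be
# 2024-12-13T13:13:22 UTC.
from datetime import datetime, timezone

epoch = 1_734_095_602
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2024-12-13T13:13:22+00:00
```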
Dec 13 13:13:23.185124 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:13:23.185142 kernel: ima: No architecture policies found Dec 13 13:13:23.185160 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 13:13:23.185178 kernel: clk: Disabling unused clocks Dec 13 13:13:23.185195 kernel: Freeing unused kernel memory: 39936K Dec 13 13:13:23.185213 kernel: Run /init as init process Dec 13 13:13:23.185298 kernel: with arguments: Dec 13 13:13:23.185326 kernel: /init Dec 13 13:13:23.185344 kernel: with environment: Dec 13 13:13:23.185361 kernel: HOME=/ Dec 13 13:13:23.185379 kernel: TERM=linux Dec 13 13:13:23.185396 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 13:13:23.185419 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:13:23.185442 systemd[1]: Detected virtualization amazon. Dec 13 13:13:23.185462 systemd[1]: Detected architecture arm64. Dec 13 13:13:23.185487 systemd[1]: Running in initrd. Dec 13 13:13:23.185506 systemd[1]: No hostname configured, using default hostname. Dec 13 13:13:23.185525 systemd[1]: Hostname set to . Dec 13 13:13:23.185545 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:13:23.185564 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:13:23.185584 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:13:23.185603 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:13:23.185624 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 13:13:23.185648 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:13:23.185668 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 13:13:23.185688 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 13:13:23.185710 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 13:13:23.185729 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 13:13:23.185749 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:13:23.185773 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:13:23.185793 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:13:23.185812 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:13:23.185832 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:13:23.185851 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:13:23.185870 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:13:23.185890 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:13:23.185910 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 13:13:23.185929 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 13:13:23.185952 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
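"Initializing machine ID from VM UUID" means systemd seeded /etc/machine-id from the hypervisor-provided product UUID rather than generating a random one; the journal directory named a few lines below (/run/log/journal/ec2ff3bf0ce65563b58c3964a5e0e948) is that machine ID. A rough sketch of reading the same identifier: the DMI path is the usual location but typically needs root, and the lower-cased, dash-stripped formatting only mirrors the machine-id style, not systemd's exact code path.

```python
# Read the hypervisor-provided product UUID that systemd can use to seed the
# machine ID. The path and the formatting are illustrative simplifications.
from pathlib import Path

def vm_uuid_as_machine_id(path: str = "/sys/class/dmi/id/product_uuid") -> str:
    raw = Path(path).read_text().strip()   # e.g. "EC2FF3BF-...."
    return raw.replace("-", "").lower()    # 32 hex chars, machine-id style

if __name__ == "__main__":
    try:
        print(vm_uuid_as_machine_id())
    except (FileNotFoundError, PermissionError) as err:
        print(f"product UUID not readable here: {err}")
```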
Dec 13 13:13:23.185972 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:13:23.185992 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:13:23.186032 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:13:23.186053 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:13:23.186072 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:13:23.186092 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:13:23.186111 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:13:23.186130 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:13:23.186156 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:13:23.186175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:13:23.186195 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 13:13:23.186214 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:13:23.186298 systemd-journald[251]: Collecting audit messages is disabled. Dec 13 13:13:23.186346 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 13:13:23.186368 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:13:23.186388 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:13:23.186412 systemd-journald[251]: Journal started Dec 13 13:13:23.186457 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2ff3bf0ce65563b58c3964a5e0e948) is 8.0M, max 75.3M, 67.3M free. Dec 13 13:13:23.174764 systemd-modules-load[252]: Inserted module 'overlay' Dec 13 13:13:23.196273 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:13:23.205448 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:13:23.211582 kernel: Bridge firewalling registered Dec 13 13:13:23.210553 systemd-modules-load[252]: Inserted module 'br_netfilter' Dec 13 13:13:23.215510 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:13:23.228088 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:13:23.234458 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:13:23.236914 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:13:23.255586 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:13:23.260538 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:13:23.278802 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:13:23.295578 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:13:23.300785 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:13:23.322654 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 13:13:23.334579 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:13:23.350829 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 13:13:23.378263 dracut-cmdline[286]: dracut-dracut-053 Dec 13 13:13:23.381714 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 13:13:23.423300 systemd-resolved[288]: Positive Trust Anchors: Dec 13 13:13:23.425077 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:13:23.425144 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:13:23.522266 kernel: SCSI subsystem initialized Dec 13 13:13:23.532259 kernel: Loading iSCSI transport class v2.0-870. Dec 13 13:13:23.542276 kernel: iscsi: registered transport (tcp) Dec 13 13:13:23.564265 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:13:23.564336 kernel: QLogic iSCSI HBA Driver Dec 13 13:13:23.641269 kernel: random: crng init done Dec 13 13:13:23.641488 systemd-resolved[288]: Defaulting to hostname 'linux'. Dec 13 13:13:23.644911 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:13:23.647121 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:13:23.674519 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:13:23.683530 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:13:23.724949 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 13:13:23.725022 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:13:23.726751 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:13:23.807252 kernel: raid6: neonx8 gen() 6566 MB/s Dec 13 13:13:23.808280 kernel: raid6: neonx4 gen() 6534 MB/s Dec 13 13:13:23.825260 kernel: raid6: neonx2 gen() 5436 MB/s Dec 13 13:13:23.842259 kernel: raid6: neonx1 gen() 3953 MB/s Dec 13 13:13:23.859260 kernel: raid6: int64x8 gen() 3603 MB/s Dec 13 13:13:23.876259 kernel: raid6: int64x4 gen() 3708 MB/s Dec 13 13:13:23.893259 kernel: raid6: int64x2 gen() 3603 MB/s Dec 13 13:13:23.911017 kernel: raid6: int64x1 gen() 2764 MB/s Dec 13 13:13:23.911053 kernel: raid6: using algorithm neonx8 gen() 6566 MB/s Dec 13 13:13:23.929029 kernel: raid6: .... 
xor() 4806 MB/s, rmw enabled Dec 13 13:13:23.929098 kernel: raid6: using neon recovery algorithm Dec 13 13:13:23.936264 kernel: xor: measuring software checksum speed Dec 13 13:13:23.937259 kernel: 8regs : 11945 MB/sec Dec 13 13:13:23.938263 kernel: 32regs : 12037 MB/sec Dec 13 13:13:23.940273 kernel: arm64_neon : 8849 MB/sec Dec 13 13:13:23.940307 kernel: xor: using function: 32regs (12037 MB/sec) Dec 13 13:13:24.023280 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:13:24.041951 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:13:24.050490 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:13:24.091471 systemd-udevd[471]: Using default interface naming scheme 'v255'. Dec 13 13:13:24.100879 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:13:24.111590 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 13:13:24.147805 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Dec 13 13:13:24.203295 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:13:24.215527 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:13:24.336299 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:13:24.351770 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:13:24.386081 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:13:24.389248 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:13:24.391986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:13:24.394379 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:13:24.419515 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:13:24.462786 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:13:24.541750 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 13:13:24.541862 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Dec 13 13:13:24.560672 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 13:13:24.560931 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 13:13:24.561163 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:1b:ae:01:9c:99 Dec 13 13:13:24.542736 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:13:24.542989 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:13:24.550330 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:13:24.552475 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:13:24.553304 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:13:24.555573 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:13:24.582593 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:13:24.610381 (udev-worker)[526]: Network interface NamePolicy= disabled on kernel command line. 
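The raid6 and xor benchmark lines above show the kernel timing every candidate routine and keeping the fastest (neonx8 for RAID6 parity generation, 32regs for checksum xor). The same selection, replayed over the throughput figures copied from the log:

```python
# Replay the kernel's "benchmark each routine, keep the fastest" choice using
# the MB/s figures from the log above.
raid6_gen = {
    "neonx8": 6566, "neonx4": 6534, "neonx2": 5436, "neonx1": 3953,
    "int64x8": 3603, "int64x4": 3708, "int64x2": 3603, "int64x1": 2764,
}
xor_funcs = {"8regs": 11945, "32regs": 12037, "arm64_neon": 8849}

def pick(scores: dict) -> str:
    return max(scores, key=scores.get)

print(pick(raid6_gen))   # neonx8  (matches "using algorithm neonx8")
print(pick(xor_funcs))   # 32regs  (matches "using function: 32regs")
```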
Dec 13 13:13:24.618351 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 13 13:13:24.618391 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 13:13:24.624292 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 13:13:24.637444 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 13:13:24.637513 kernel: GPT:9289727 != 16777215 Dec 13 13:13:24.637538 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 13:13:24.639273 kernel: GPT:9289727 != 16777215 Dec 13 13:13:24.639307 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 13:13:24.640255 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 13:13:24.645323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:13:24.655548 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:13:24.704165 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:13:24.737105 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (531) Dec 13 13:13:24.784918 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (543) Dec 13 13:13:24.823077 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 13 13:13:24.870319 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 13 13:13:24.899377 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 13:13:24.913312 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Dec 13 13:13:24.919184 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 13 13:13:24.931549 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 13:13:24.947315 disk-uuid[662]: Primary Header is updated. Dec 13 13:13:24.947315 disk-uuid[662]: Secondary Entries is updated. Dec 13 13:13:24.947315 disk-uuid[662]: Secondary Header is updated. Dec 13 13:13:24.959269 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 13:13:24.967263 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 13:13:25.975487 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 13:13:25.980042 disk-uuid[663]: The operation has completed successfully. Dec 13 13:13:26.155594 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:13:26.157458 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:13:26.210644 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 13:13:26.219495 sh[921]: Success Dec 13 13:13:26.242787 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 13:13:26.362899 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:13:26.368858 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:13:26.373373 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
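The GPT warnings above compare two LBAs in 512-byte sectors: where the primary header claims the backup header sits (9289727, presumably the end of the original disk image) versus the true last sector of the 8 GiB EBS volume (16777215). The disk-uuid service logged just afterwards rewrites both headers ("Primary Header is updated … Secondary Header is updated"), and the follow-up partition rescans no longer warn. The size arithmetic, spelled out:

```python
# Convert the two LBAs from the GPT warning into sizes (512-byte sectors).
SECTOR = 512
claimed_last_lba = 9_289_727    # from "GPT:9289727 != 16777215"
actual_last_lba = 16_777_215

def lba_to_gib(last_lba: int) -> float:
    return (last_lba + 1) * SECTOR / 2**30

print(f"image-sized disk: {lba_to_gib(claimed_last_lba):.2f} GiB")  # ~4.43 GiB
print(f"actual volume:    {lba_to_gib(actual_last_lba):.2f} GiB")   # 8.00 GiB
```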
Dec 13 13:13:26.412897 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614 Dec 13 13:13:26.412973 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:13:26.413012 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:13:26.414607 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:13:26.415803 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:13:26.512266 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 13:13:26.527890 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:13:26.531062 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:13:26.546576 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:13:26.552795 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:13:26.587899 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:13:26.587969 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:13:26.587995 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 13:13:26.596265 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 13:13:26.615809 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:13:26.618220 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:13:26.628689 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 13:13:26.639559 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 13:13:26.734349 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:13:26.746978 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:13:26.806718 systemd-networkd[1113]: lo: Link UP Dec 13 13:13:26.808287 systemd-networkd[1113]: lo: Gained carrier Dec 13 13:13:26.812104 systemd-networkd[1113]: Enumeration completed Dec 13 13:13:26.814107 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:13:26.814164 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:13:26.814171 systemd-networkd[1113]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:13:26.824329 systemd[1]: Reached target network.target - Network. Dec 13 13:13:26.829738 systemd-networkd[1113]: eth0: Link UP Dec 13 13:13:26.829758 systemd-networkd[1113]: eth0: Gained carrier Dec 13 13:13:26.829776 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
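The /usr tree mounted above sits on /dev/mapper/usr, the dm-verity device configured by verity-setup.service from the verity.usrhash= value on the kernel command line ("device-mapper: verity: sha256 using implementation sha256-ce"). Conceptually, dm-verity hashes the partition in fixed-size blocks and folds the digests into a tree whose root must match that hash. The snippet below is a toy illustration of that idea only; real dm-verity adds a salt, a superblock and a fixed tree fan-out, so it is not byte-compatible with veritysetup output.

```python
# Toy version of the dm-verity root-hash idea: hash fixed-size data blocks,
# then hash groups of digests upward until a single root remains.
import hashlib

BLOCK = 4096

def toy_verity_root(data: bytes) -> str:
    level = [hashlib.sha256(data[i:i + BLOCK]).digest()
             for i in range(0, len(data), BLOCK)] or [hashlib.sha256(b"").digest()]
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + 32])).digest()
                 for i in range(0, len(level), 32)]
    return level[0].hex()

# Any change to the underlying data changes this root, which is what lets the
# kernel reject tampered /usr blocks at read time.
print(toy_verity_root(b"\0" * (8 * BLOCK)))
```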
Dec 13 13:13:26.853327 systemd-networkd[1113]: eth0: DHCPv4 address 172.31.27.111/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 13:13:27.023474 ignition[1030]: Ignition 2.20.0 Dec 13 13:13:27.023982 ignition[1030]: Stage: fetch-offline Dec 13 13:13:27.024485 ignition[1030]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:13:27.024509 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:13:27.024977 ignition[1030]: Ignition finished successfully Dec 13 13:13:27.033853 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:13:27.051634 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 13:13:27.074351 ignition[1127]: Ignition 2.20.0 Dec 13 13:13:27.074833 ignition[1127]: Stage: fetch Dec 13 13:13:27.075506 ignition[1127]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:13:27.075563 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:13:27.075739 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:13:27.097844 ignition[1127]: PUT result: OK Dec 13 13:13:27.101008 ignition[1127]: parsed url from cmdline: "" Dec 13 13:13:27.101030 ignition[1127]: no config URL provided Dec 13 13:13:27.101049 ignition[1127]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:13:27.101075 ignition[1127]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:13:27.101107 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:13:27.104707 ignition[1127]: PUT result: OK Dec 13 13:13:27.104782 ignition[1127]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 13:13:27.108654 ignition[1127]: GET result: OK Dec 13 13:13:27.108984 ignition[1127]: parsing config with SHA512: 3cd61ec796785c6ce2627e7feeacac967a46ad4e4b2b608b1cc450bdd0fa07a0dbd51328e6d412ae2e480ce7ad203d8916bb0c7786d154accacd801052285d92 Dec 13 13:13:27.118354 unknown[1127]: fetched base config from "system" Dec 13 13:13:27.118809 unknown[1127]: fetched base config from "system" Dec 13 13:13:27.119620 ignition[1127]: fetch: fetch complete Dec 13 13:13:27.118823 unknown[1127]: fetched user config from "aws" Dec 13 13:13:27.119632 ignition[1127]: fetch: fetch passed Dec 13 13:13:27.119718 ignition[1127]: Ignition finished successfully Dec 13 13:13:27.131283 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 13:13:27.145434 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 13:13:27.173629 ignition[1134]: Ignition 2.20.0 Dec 13 13:13:27.173650 ignition[1134]: Stage: kargs Dec 13 13:13:27.174992 ignition[1134]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:13:27.175018 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:13:27.175658 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:13:27.178737 ignition[1134]: PUT result: OK Dec 13 13:13:27.188797 ignition[1134]: kargs: kargs passed Dec 13 13:13:27.188895 ignition[1134]: Ignition finished successfully Dec 13 13:13:27.193445 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:13:27.201541 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
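Ignition's fetch stage above is a plain IMDSv2 exchange: a PUT to http://169.254.169.254/latest/api/token for a session token, then a GET of /2019-10-01/user-data (the path the log shows) carrying that token, after which the SHA512 of the fetched config is recorded. A minimal replay of the same exchange with urllib; it only works from inside an EC2 instance, since 169.254.169.254 is link-local, and the GET returns 404 if the instance has no user data.

```python
# Replay Ignition's IMDSv2 exchange from the log: PUT for a token, then GET
# the user-data with that token. Only works on an EC2 instance.
from urllib.request import Request, urlopen

IMDS = "http://169.254.169.254"

def fetch_user_data(ttl_seconds: int = 60) -> bytes:
    token_req = Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urlopen(token_req, timeout=5) as resp:
        token = resp.read().decode()

    data_req = Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urlopen(data_req, timeout=5) as resp:
        return resp.read()

if __name__ == "__main__":
    print(fetch_user_data()[:200])
```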
Dec 13 13:13:27.228814 ignition[1140]: Ignition 2.20.0 Dec 13 13:13:27.228840 ignition[1140]: Stage: disks Dec 13 13:13:27.230444 ignition[1140]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:13:27.230500 ignition[1140]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:13:27.231532 ignition[1140]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:13:27.237569 ignition[1140]: PUT result: OK Dec 13 13:13:27.242334 ignition[1140]: disks: disks passed Dec 13 13:13:27.242437 ignition[1140]: Ignition finished successfully Dec 13 13:13:27.244690 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:13:27.251584 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:13:27.253670 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:13:27.255911 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:13:27.257743 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:13:27.259659 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:13:27.282250 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 13:13:27.324580 systemd-fsck[1148]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 13:13:27.329393 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:13:27.340431 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 13:13:27.426273 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none. Dec 13 13:13:27.427212 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:13:27.431083 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:13:27.453496 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:13:27.458885 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:13:27.461278 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 13:13:27.461373 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:13:27.462900 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:13:27.487260 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1168) Dec 13 13:13:27.490982 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:13:27.491051 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:13:27.492893 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:13:27.496593 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 13:13:27.504396 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 13:13:27.504602 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 13:13:27.514871 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:13:27.863100 initrd-setup-root[1192]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:13:27.880333 initrd-setup-root[1199]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:13:27.898167 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:13:27.905754 initrd-setup-root[1213]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:13:28.158420 systemd-networkd[1113]: eth0: Gained IPv6LL Dec 13 13:13:28.243593 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:13:28.252478 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:13:28.262659 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 13:13:28.279533 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 13:13:28.284267 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:13:28.317003 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 13:13:28.334700 ignition[1281]: INFO : Ignition 2.20.0 Dec 13 13:13:28.337890 ignition[1281]: INFO : Stage: mount Dec 13 13:13:28.337890 ignition[1281]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:13:28.337890 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:13:28.337890 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:13:28.347385 ignition[1281]: INFO : PUT result: OK Dec 13 13:13:28.350786 ignition[1281]: INFO : mount: mount passed Dec 13 13:13:28.352305 ignition[1281]: INFO : Ignition finished successfully Dec 13 13:13:28.354567 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:13:28.365504 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:13:28.433588 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:13:28.469271 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1293) Dec 13 13:13:28.472843 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:13:28.472900 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:13:28.472927 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 13:13:28.479425 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 13:13:28.481943 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:13:28.516016 ignition[1310]: INFO : Ignition 2.20.0 Dec 13 13:13:28.517875 ignition[1310]: INFO : Stage: files Dec 13 13:13:28.519608 ignition[1310]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:13:28.521567 ignition[1310]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:13:28.521567 ignition[1310]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:13:28.526293 ignition[1310]: INFO : PUT result: OK Dec 13 13:13:28.530477 ignition[1310]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:13:28.533036 ignition[1310]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:13:28.533036 ignition[1310]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:13:28.554430 ignition[1310]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:13:28.557073 ignition[1310]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:13:28.559952 unknown[1310]: wrote ssh authorized keys file for user: core Dec 13 13:13:28.562202 ignition[1310]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:13:28.565441 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 13:13:28.568948 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 13:13:28.674415 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 13:13:28.795722 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 13:13:28.799384 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:13:28.799384 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 13:13:28.799384 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:13:28.799384 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:13:28.799384 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:13:28.799384 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:13:28.799384 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:13:28.799384 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:13:28.824214 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:13:28.824214 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:13:28.824214 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 13:13:28.824214 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 13:13:28.824214 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 13:13:28.824214 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 13:13:29.291383 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 13:13:29.739487 ignition[1310]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 13:13:29.739487 ignition[1310]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 13:13:29.746116 ignition[1310]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:13:29.750030 ignition[1310]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:13:29.750030 ignition[1310]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 13:13:29.755722 ignition[1310]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 13:13:29.758864 ignition[1310]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 13:13:29.758864 ignition[1310]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:13:29.764514 ignition[1310]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:13:29.764514 ignition[1310]: INFO : files: files passed Dec 13 13:13:29.764514 ignition[1310]: INFO : Ignition finished successfully Dec 13 13:13:29.761965 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:13:29.781509 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:13:29.789447 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:13:29.801509 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:13:29.801710 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 13:13:29.825322 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:13:29.825322 initrd-setup-root-after-ignition[1339]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:13:29.833102 initrd-setup-root-after-ignition[1343]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:13:29.840281 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:13:29.843462 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:13:29.863093 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Dec 13 13:13:29.911128 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:13:29.911579 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:13:29.919209 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:13:29.922908 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:13:29.923164 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:13:29.935597 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:13:29.958921 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:13:29.976620 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:13:30.000427 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:13:30.004503 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:13:30.007207 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:13:30.009041 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:13:30.009284 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:13:30.012047 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:13:30.014209 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:13:30.016155 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 13:13:30.018368 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:13:30.020604 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 13:13:30.022850 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 13:13:30.024880 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:13:30.027299 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:13:30.029345 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:13:30.031378 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:13:30.033024 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:13:30.033259 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:13:30.035687 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:13:30.037924 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:13:30.040256 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:13:30.042387 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:13:30.044711 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:13:30.044928 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:13:30.047245 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:13:30.047461 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:13:30.049969 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:13:30.050179 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:13:30.061318 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Dec 13 13:13:30.076524 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:13:30.076796 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:13:30.126356 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:13:30.128074 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 13:13:30.130771 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:13:30.142548 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:13:30.146684 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:13:30.161122 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:13:30.162308 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:13:30.175461 ignition[1363]: INFO : Ignition 2.20.0 Dec 13 13:13:30.175461 ignition[1363]: INFO : Stage: umount Dec 13 13:13:30.175461 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:13:30.175461 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 13:13:30.175461 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 13:13:30.186011 ignition[1363]: INFO : PUT result: OK Dec 13 13:13:30.190675 ignition[1363]: INFO : umount: umount passed Dec 13 13:13:30.192440 ignition[1363]: INFO : Ignition finished successfully Dec 13 13:13:30.196014 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:13:30.199138 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:13:30.204055 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:13:30.204203 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:13:30.207848 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:13:30.207941 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 13:13:30.209702 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 13:13:30.209786 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 13:13:30.209927 systemd[1]: Stopped target network.target - Network. Dec 13 13:13:30.210197 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:13:30.210295 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:13:30.210838 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:13:30.211090 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:13:30.216580 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:13:30.216678 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:13:30.216732 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 13:13:30.216840 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:13:30.216914 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:13:30.217024 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:13:30.217090 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:13:30.217177 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:13:30.217280 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:13:30.241750 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Dec 13 13:13:30.241857 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:13:30.244472 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:13:30.278068 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:13:30.281765 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:13:30.283136 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:13:30.283647 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:13:30.293969 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:13:30.294153 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:13:30.303315 systemd-networkd[1113]: eth0: DHCPv6 lease lost Dec 13 13:13:30.303604 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:13:30.303834 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:13:30.313859 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 13:13:30.314727 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:13:30.321105 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:13:30.321491 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:13:30.326854 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:13:30.326967 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:13:30.345495 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:13:30.347344 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:13:30.347460 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:13:30.350052 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:13:30.350132 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:13:30.353930 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 13:13:30.354033 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:13:30.367849 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:13:30.389326 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:13:30.391532 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:13:30.403388 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 13:13:30.403868 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:13:30.410882 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:13:30.410971 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:13:30.413850 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:13:30.413915 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:13:30.416177 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:13:30.416282 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:13:30.418513 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 13:13:30.418600 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:13:30.434857 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Dec 13 13:13:30.434951 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:13:30.456665 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:13:30.461532 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:13:30.461652 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:13:30.464779 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:13:30.464876 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:13:30.485402 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 13:13:30.487744 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:13:30.492388 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:13:30.500581 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:13:30.526316 systemd[1]: Switching root. Dec 13 13:13:30.571949 systemd-journald[251]: Journal stopped Dec 13 13:13:32.938844 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Dec 13 13:13:32.938982 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 13:13:32.939033 kernel: SELinux: policy capability open_perms=1 Dec 13 13:13:32.939074 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 13:13:32.939104 kernel: SELinux: policy capability always_check_network=0 Dec 13 13:13:32.939133 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 13:13:32.939162 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 13:13:32.939190 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 13:13:32.939222 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 13:13:32.939281 kernel: audit: type=1403 audit(1734095611.090:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 13:13:32.939320 systemd[1]: Successfully loaded SELinux policy in 65.075ms. Dec 13 13:13:32.939369 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.158ms. Dec 13 13:13:32.939408 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:13:32.939439 systemd[1]: Detected virtualization amazon. Dec 13 13:13:32.939467 systemd[1]: Detected architecture arm64. Dec 13 13:13:32.939497 systemd[1]: Detected first boot. Dec 13 13:13:32.939527 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:13:32.939556 zram_generator::config[1406]: No configuration found. Dec 13 13:13:32.939591 systemd[1]: Populated /etc with preset unit settings. Dec 13 13:13:32.939622 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 13:13:32.939655 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 13:13:32.939686 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 13:13:32.939719 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 13:13:32.939750 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 13:13:32.939782 systemd[1]: Created slice system-getty.slice - Slice /system/getty. 
Dec 13 13:13:32.939811 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 13:13:32.939841 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 13:13:32.939870 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 13:13:32.939900 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 13:13:32.939934 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 13:13:32.939965 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:13:32.939994 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:13:32.940023 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 13:13:32.940054 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 13:13:32.940093 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 13:13:32.940122 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:13:32.940155 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 13:13:32.940185 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:13:32.940218 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 13:13:32.942303 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 13:13:32.942340 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 13:13:32.942370 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 13:13:32.942412 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:13:32.942443 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:13:32.942473 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:13:32.942511 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:13:32.942541 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 13:13:32.942572 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 13:13:32.942600 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:13:32.942630 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:13:32.942661 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:13:32.942690 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 13:13:32.942722 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 13:13:32.942753 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 13:13:32.942784 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 13:13:32.942818 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 13:13:32.942850 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 13:13:32.942879 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Dec 13 13:13:32.942909 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 13:13:32.942938 systemd[1]: Reached target machines.target - Containers. Dec 13 13:13:32.942970 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 13:13:32.943000 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:13:32.943031 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:13:32.943064 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 13:13:32.943093 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:13:32.943121 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:13:32.943150 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:13:32.943182 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 13:13:32.943214 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:13:32.943273 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 13:13:32.943305 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 13:13:32.943342 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 13:13:32.943374 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 13:13:32.943406 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 13:13:32.943438 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:13:32.943468 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:13:32.943498 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 13:13:32.943533 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 13:13:32.943566 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:13:32.943599 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 13:13:32.943634 systemd[1]: Stopped verity-setup.service. Dec 13 13:13:32.943663 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 13:13:32.943692 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 13:13:32.943723 kernel: loop: module loaded Dec 13 13:13:32.943752 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 13:13:32.943784 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 13:13:32.943818 kernel: fuse: init (API version 7.39) Dec 13 13:13:32.943850 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 13:13:32.943882 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 13:13:32.943915 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:13:32.943945 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 13:13:32.943977 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 13:13:32.944009 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 13 13:13:32.944038 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:13:32.944071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:13:32.944101 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:13:32.944129 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 13:13:32.944157 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 13:13:32.944192 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:13:32.944220 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:13:32.946406 systemd-journald[1491]: Collecting audit messages is disabled. Dec 13 13:13:32.946469 systemd-journald[1491]: Journal started Dec 13 13:13:32.946526 systemd-journald[1491]: Runtime Journal (/run/log/journal/ec2ff3bf0ce65563b58c3964a5e0e948) is 8.0M, max 75.3M, 67.3M free. Dec 13 13:13:32.360503 systemd[1]: Queued start job for default target multi-user.target. Dec 13 13:13:32.413602 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 13 13:13:32.414430 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 13:13:32.955607 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 13:13:32.955678 kernel: ACPI: bus type drm_connector registered Dec 13 13:13:32.971255 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 13:13:32.981849 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:13:32.981932 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:13:32.986331 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:13:32.987060 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:13:32.990628 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 13:13:32.994023 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 13:13:32.997436 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 13:13:33.003107 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 13:13:33.011027 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 13:13:33.014016 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:13:33.048721 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 13:13:33.051830 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 13:13:33.051903 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:13:33.056572 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 13:13:33.067680 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 13:13:33.074510 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 13:13:33.078604 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:13:33.082033 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Dec 13 13:13:33.097157 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 13:13:33.099380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:13:33.102101 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 13:13:33.108542 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:13:33.117511 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 13:13:33.122658 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 13:13:33.128354 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 13:13:33.169046 systemd-journald[1491]: Time spent on flushing to /var/log/journal/ec2ff3bf0ce65563b58c3964a5e0e948 is 51.755ms for 907 entries. Dec 13 13:13:33.169046 systemd-journald[1491]: System Journal (/var/log/journal/ec2ff3bf0ce65563b58c3964a5e0e948) is 8.0M, max 195.6M, 187.6M free. Dec 13 13:13:33.248738 systemd-journald[1491]: Received client request to flush runtime journal. Dec 13 13:13:33.248826 kernel: loop0: detected capacity change from 0 to 116784 Dec 13 13:13:33.180411 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 13:13:33.184619 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 13:13:33.194647 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 13:13:33.255357 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 13:13:33.268525 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:13:33.294064 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 13:13:33.295146 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 13:13:33.321692 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:13:33.336612 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 13:13:33.351924 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 13:13:33.359356 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 13:13:33.366577 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:13:33.394592 udevadm[1550]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 13:13:33.409285 kernel: loop1: detected capacity change from 0 to 113552 Dec 13 13:13:33.457423 systemd-tmpfiles[1554]: ACLs are not supported, ignoring. Dec 13 13:13:33.457453 systemd-tmpfiles[1554]: ACLs are not supported, ignoring. Dec 13 13:13:33.469524 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 13:13:33.535282 kernel: loop2: detected capacity change from 0 to 53784 Dec 13 13:13:33.580271 kernel: loop3: detected capacity change from 0 to 194512 Dec 13 13:13:33.639411 kernel: loop4: detected capacity change from 0 to 116784 Dec 13 13:13:33.661270 kernel: loop5: detected capacity change from 0 to 113552 Dec 13 13:13:33.680604 kernel: loop6: detected capacity change from 0 to 53784 Dec 13 13:13:33.702570 kernel: loop7: detected capacity change from 0 to 194512 Dec 13 13:13:33.734461 (sd-merge)[1560]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 13 13:13:33.735422 (sd-merge)[1560]: Merged extensions into '/usr'. Dec 13 13:13:33.749683 systemd[1]: Reloading requested from client PID 1537 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 13:13:33.749709 systemd[1]: Reloading... Dec 13 13:13:33.946303 zram_generator::config[1586]: No configuration found. Dec 13 13:13:34.225782 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:13:34.342520 systemd[1]: Reloading finished in 591 ms. Dec 13 13:13:34.378373 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 13:13:34.381458 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 13:13:34.394536 systemd[1]: Starting ensure-sysext.service... Dec 13 13:13:34.398887 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:13:34.410697 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:13:34.433321 systemd[1]: Reloading requested from client PID 1638 ('systemctl') (unit ensure-sysext.service)... Dec 13 13:13:34.433355 systemd[1]: Reloading... Dec 13 13:13:34.462397 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 13:13:34.462922 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 13:13:34.465744 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 13:13:34.466385 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Dec 13 13:13:34.466524 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Dec 13 13:13:34.485099 systemd-tmpfiles[1639]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:13:34.485129 systemd-tmpfiles[1639]: Skipping /boot Dec 13 13:13:34.536208 systemd-udevd[1640]: Using default interface naming scheme 'v255'. Dec 13 13:13:34.539444 systemd-tmpfiles[1639]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:13:34.539599 systemd-tmpfiles[1639]: Skipping /boot Dec 13 13:13:34.641967 zram_generator::config[1670]: No configuration found. Dec 13 13:13:34.841315 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1677) Dec 13 13:13:34.845272 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1677) Dec 13 13:13:34.855442 (udev-worker)[1678]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:13:34.899465 ldconfig[1532]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Dec 13 13:13:35.035755 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:13:35.156370 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1681) Dec 13 13:13:35.173927 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 13:13:35.174878 systemd[1]: Reloading finished in 740 ms. Dec 13 13:13:35.203595 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:13:35.208339 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 13:13:35.211175 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:13:35.302904 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:13:35.308845 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 13:13:35.311950 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:13:35.317115 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:13:35.331900 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:13:35.337867 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:13:35.340548 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:13:35.356678 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 13:13:35.377329 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:13:35.443784 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:13:35.457737 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 13:13:35.469711 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:13:35.477789 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:13:35.481372 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:13:35.484616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:13:35.496954 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:13:35.501524 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:13:35.502592 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:13:35.544674 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 13:13:35.560109 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 13:13:35.566430 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 13:13:35.597532 augenrules[1871]: No rules Dec 13 13:13:35.599190 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:13:35.600362 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:13:35.613334 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 13:13:35.637519 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
Dec 13 13:13:35.651866 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:13:35.655562 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:13:35.661780 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 13:13:35.676128 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:13:35.687806 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:13:35.692458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:13:35.700782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:13:35.703186 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:13:35.709829 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 13:13:35.712422 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 13:13:35.719831 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 13:13:35.727610 lvm[1881]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:13:35.730833 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 13:13:35.732888 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 13:13:35.740348 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:13:35.757040 systemd[1]: Finished ensure-sysext.service. Dec 13 13:13:35.798326 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:13:35.801267 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:13:35.810088 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 13:13:35.813062 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:13:35.833880 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 13:13:35.838897 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:13:35.839294 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:13:35.843140 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:13:35.846565 augenrules[1879]: /sbin/augenrules: No change Dec 13 13:13:35.872287 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 13:13:35.876118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:13:35.876895 augenrules[1914]: No rules Dec 13 13:13:35.877500 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:13:35.880326 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:13:35.880629 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:13:35.883405 lvm[1901]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:13:35.884170 systemd[1]: audit-rules.service: Deactivated successfully. 
Dec 13 13:13:35.884549 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:13:35.888132 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 13:13:35.898091 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:13:35.926357 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 13:13:35.942068 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 13:13:36.038863 systemd-networkd[1840]: lo: Link UP Dec 13 13:13:36.038886 systemd-networkd[1840]: lo: Gained carrier Dec 13 13:13:36.041620 systemd-networkd[1840]: Enumeration completed Dec 13 13:13:36.041811 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:13:36.046900 systemd-networkd[1840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:13:36.046923 systemd-networkd[1840]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:13:36.048995 systemd-networkd[1840]: eth0: Link UP Dec 13 13:13:36.049418 systemd-networkd[1840]: eth0: Gained carrier Dec 13 13:13:36.049453 systemd-networkd[1840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:13:36.052585 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 13:13:36.058370 systemd-networkd[1840]: eth0: DHCPv4 address 172.31.27.111/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 13:13:36.067541 systemd-resolved[1846]: Positive Trust Anchors: Dec 13 13:13:36.067579 systemd-resolved[1846]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:13:36.067643 systemd-resolved[1846]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:13:36.089158 systemd-resolved[1846]: Defaulting to hostname 'linux'. Dec 13 13:13:36.092284 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:13:36.094610 systemd[1]: Reached target network.target - Network. Dec 13 13:13:36.096438 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:13:36.098561 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:13:36.100594 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 13:13:36.102839 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 13:13:36.105357 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 13:13:36.107449 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 13:13:36.109670 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Dec 13 13:13:36.111931 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:13:36.111973 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:13:36.113604 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:13:36.116541 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:13:36.121170 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:13:36.133678 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 13:13:36.136724 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:13:36.138953 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:13:36.140759 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:13:36.142567 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:13:36.142617 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:13:36.146430 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 13:13:36.154594 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 13:13:36.168461 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:13:36.174439 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 13:13:36.178697 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:13:36.181445 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:13:36.187604 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:13:36.193402 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 13:13:36.198431 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 13:13:36.204468 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 13:13:36.210110 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 13:13:36.218670 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:13:36.226263 jq[1936]: false Dec 13 13:13:36.252766 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 13:13:36.258355 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:13:36.259307 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 13:13:36.261578 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:13:36.268148 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 13:13:36.276864 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:13:36.278418 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 13:13:36.326640 jq[1946]: true Dec 13 13:13:36.327052 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:13:36.351062 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Dec 13 13:13:36.354353 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 13:13:36.408666 dbus-daemon[1935]: [system] SELinux support is enabled Dec 13 13:13:36.409308 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 13:13:36.418131 extend-filesystems[1937]: Found loop4 Dec 13 13:13:36.418131 extend-filesystems[1937]: Found loop5 Dec 13 13:13:36.418131 extend-filesystems[1937]: Found loop6 Dec 13 13:13:36.418131 extend-filesystems[1937]: Found loop7 Dec 13 13:13:36.418131 extend-filesystems[1937]: Found nvme0n1 Dec 13 13:13:36.418131 extend-filesystems[1937]: Found nvme0n1p1 Dec 13 13:13:36.418131 extend-filesystems[1937]: Found nvme0n1p2 Dec 13 13:13:36.418131 extend-filesystems[1937]: Found nvme0n1p3 Dec 13 13:13:36.418131 extend-filesystems[1937]: Found usr Dec 13 13:13:36.418131 extend-filesystems[1937]: Found nvme0n1p4 Dec 13 13:13:36.418131 extend-filesystems[1937]: Found nvme0n1p6 Dec 13 13:13:36.418131 extend-filesystems[1937]: Found nvme0n1p7 Dec 13 13:13:36.418131 extend-filesystems[1937]: Found nvme0n1p9 Dec 13 13:13:36.418131 extend-filesystems[1937]: Checking size of /dev/nvme0n1p9 Dec 13 13:13:36.418954 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 13:13:36.509777 update_engine[1945]: I20241213 13:13:36.467749 1945 main.cc:92] Flatcar Update Engine starting Dec 13 13:13:36.557622 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 13:13:36.453368 dbus-daemon[1935]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1840 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 13:13:36.419022 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: ntpd 4.2.8p17@1.4004-o Fri Dec 13 11:28:25 UTC 2024 (1): Starting Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: ---------------------------------------------------- Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: ntp-4 is maintained by Network Time Foundation, Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: corporation. 
Support and training for ntp-4 are Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: available at https://www.nwtime.org/support Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: ---------------------------------------------------- Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: proto: precision = 0.096 usec (-23) Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: basedate set to 2024-12-01 Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: gps base set to 2024-12-01 (week 2343) Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: Listen normally on 3 eth0 172.31.27.111:123 Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: Listen normally on 4 lo [::1]:123 Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: bind(21) AF_INET6 fe80::41b:aeff:fe01:9c99%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: unable to create socket on eth0 (5) for fe80::41b:aeff:fe01:9c99%2#123 Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: failed to init interface for address fe80::41b:aeff:fe01:9c99%2 Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: Listening on routing socket on fd #21 for interface updates Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 13:13:36.558471 ntpd[1939]: 13 Dec 13:13:36 ntpd[1939]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 13:13:36.586659 extend-filesystems[1937]: Resized partition /dev/nvme0n1p9 Dec 13 13:13:36.599074 tar[1952]: linux-arm64/helm Dec 13 13:13:36.599561 update_engine[1945]: I20241213 13:13:36.516678 1945 update_check_scheduler.cc:74] Next update check in 9m25s Dec 13 13:13:36.487643 dbus-daemon[1935]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 13:13:36.421591 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:13:36.603836 extend-filesystems[1980]: resize2fs 1.47.1 (20-May-2024) Dec 13 13:13:36.621859 jq[1959]: true Dec 13 13:13:36.535166 ntpd[1939]: ntpd 4.2.8p17@1.4004-o Fri Dec 13 11:28:25 UTC 2024 (1): Starting Dec 13 13:13:36.421632 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 13:13:36.535213 ntpd[1939]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 13:13:36.518093 systemd[1]: Started update-engine.service - Update Engine. Dec 13 13:13:36.536737 ntpd[1939]: ---------------------------------------------------- Dec 13 13:13:36.555068 (ntainerd)[1970]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:13:36.653069 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 13:13:36.536784 ntpd[1939]: ntp-4 is maintained by Network Time Foundation, Dec 13 13:13:36.560584 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 13:13:36.536803 ntpd[1939]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Dec 13 13:13:36.568583 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 13:13:36.536821 ntpd[1939]: corporation. Support and training for ntp-4 are Dec 13 13:13:36.536839 ntpd[1939]: available at https://www.nwtime.org/support Dec 13 13:13:36.536857 ntpd[1939]: ---------------------------------------------------- Dec 13 13:13:36.541772 ntpd[1939]: proto: precision = 0.096 usec (-23) Dec 13 13:13:36.542968 ntpd[1939]: basedate set to 2024-12-01 Dec 13 13:13:36.543019 ntpd[1939]: gps base set to 2024-12-01 (week 2343) Dec 13 13:13:36.549426 ntpd[1939]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 13:13:36.549549 ntpd[1939]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 13:13:36.550160 ntpd[1939]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 13:13:36.550353 ntpd[1939]: Listen normally on 3 eth0 172.31.27.111:123 Dec 13 13:13:36.550433 ntpd[1939]: Listen normally on 4 lo [::1]:123 Dec 13 13:13:36.550534 ntpd[1939]: bind(21) AF_INET6 fe80::41b:aeff:fe01:9c99%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 13:13:36.550572 ntpd[1939]: unable to create socket on eth0 (5) for fe80::41b:aeff:fe01:9c99%2#123 Dec 13 13:13:36.550606 ntpd[1939]: failed to init interface for address fe80::41b:aeff:fe01:9c99%2 Dec 13 13:13:36.550671 ntpd[1939]: Listening on routing socket on fd #21 for interface updates Dec 13 13:13:36.556160 ntpd[1939]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 13:13:36.556266 ntpd[1939]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 13:13:36.677655 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:13:36.682455 extend-filesystems[1980]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 13:13:36.682455 extend-filesystems[1980]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 13:13:36.682455 extend-filesystems[1980]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 13:13:36.678504 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:13:36.705595 extend-filesystems[1937]: Resized filesystem in /dev/nvme0n1p9 Dec 13 13:13:36.681292 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:13:36.684946 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 13:13:36.708840 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 13:13:36.749711 systemd-logind[1944]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 13:13:36.749780 systemd-logind[1944]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 13 13:13:36.752654 systemd-logind[1944]: New seat seat0. Dec 13 13:13:36.758950 systemd[1]: Started systemd-logind.service - User Login Management. 
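The EXT4-fs and resize2fs messages above report the online resize in 4 KiB blocks (553472 grown to 1489915). A small Python sketch, using only the numbers from the log, converts those counts into sizes for easier reading:

```python
# Convert the ext4 block counts reported above (4 KiB blocks) into sizes.
BLOCK_SIZE = 4096  # bytes, per the "(4k) blocks" note in the resize2fs output


def gib(blocks: int) -> float:
    """Size in GiB for a given number of 4 KiB blocks."""
    return blocks * BLOCK_SIZE / 2**30


before, after = 553_472, 1_489_915  # from the EXT4-fs resize messages
print(f"before: {gib(before):.2f} GiB, after: {gib(after):.2f} GiB")
# Roughly 2.1 GiB grown to about 5.7 GiB once the root partition is expanded.
```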
Dec 13 13:13:36.794550 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1678) Dec 13 13:13:36.838783 coreos-metadata[1934]: Dec 13 13:13:36.835 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 13:13:36.841734 coreos-metadata[1934]: Dec 13 13:13:36.840 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 13:13:36.845223 coreos-metadata[1934]: Dec 13 13:13:36.844 INFO Fetch successful Dec 13 13:13:36.845223 coreos-metadata[1934]: Dec 13 13:13:36.844 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 13:13:36.848975 coreos-metadata[1934]: Dec 13 13:13:36.848 INFO Fetch successful Dec 13 13:13:36.848975 coreos-metadata[1934]: Dec 13 13:13:36.848 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 13:13:36.849802 coreos-metadata[1934]: Dec 13 13:13:36.849 INFO Fetch successful Dec 13 13:13:36.849802 coreos-metadata[1934]: Dec 13 13:13:36.849 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 13:13:36.855844 coreos-metadata[1934]: Dec 13 13:13:36.852 INFO Fetch successful Dec 13 13:13:36.860098 coreos-metadata[1934]: Dec 13 13:13:36.859 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 13:13:36.867514 coreos-metadata[1934]: Dec 13 13:13:36.860 INFO Fetch failed with 404: resource not found Dec 13 13:13:36.867514 coreos-metadata[1934]: Dec 13 13:13:36.867 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 13:13:36.870741 coreos-metadata[1934]: Dec 13 13:13:36.870 INFO Fetch successful Dec 13 13:13:36.870741 coreos-metadata[1934]: Dec 13 13:13:36.870 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 13:13:36.875388 coreos-metadata[1934]: Dec 13 13:13:36.874 INFO Fetch successful Dec 13 13:13:36.875388 coreos-metadata[1934]: Dec 13 13:13:36.875 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 13:13:36.876107 coreos-metadata[1934]: Dec 13 13:13:36.875 INFO Fetch successful Dec 13 13:13:36.876107 coreos-metadata[1934]: Dec 13 13:13:36.875 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 13:13:36.879311 coreos-metadata[1934]: Dec 13 13:13:36.879 INFO Fetch successful Dec 13 13:13:36.879508 coreos-metadata[1934]: Dec 13 13:13:36.879 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 13:13:36.882386 coreos-metadata[1934]: Dec 13 13:13:36.881 INFO Fetch successful Dec 13 13:13:36.891516 bash[2024]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:13:36.897514 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 13:13:36.964700 systemd[1]: Starting sshkeys.service... Dec 13 13:13:37.042838 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 13:13:37.068175 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 13:13:37.073325 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 13:13:37.078295 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Dec 13 13:13:37.147263 containerd[1970]: time="2024-12-13T13:13:37.147042164Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:13:37.221286 containerd[1970]: time="2024-12-13T13:13:37.220725357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:13:37.233151 containerd[1970]: time="2024-12-13T13:13:37.233084877Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:13:37.233345 containerd[1970]: time="2024-12-13T13:13:37.233314341Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:13:37.233461 containerd[1970]: time="2024-12-13T13:13:37.233432613Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:13:37.234034 containerd[1970]: time="2024-12-13T13:13:37.233995605Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 13:13:37.236265 containerd[1970]: time="2024-12-13T13:13:37.235351473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 13:13:37.236265 containerd[1970]: time="2024-12-13T13:13:37.235555329Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:13:37.236265 containerd[1970]: time="2024-12-13T13:13:37.235585725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:13:37.236265 containerd[1970]: time="2024-12-13T13:13:37.235923045Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:13:37.236265 containerd[1970]: time="2024-12-13T13:13:37.235955313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:13:37.236265 containerd[1970]: time="2024-12-13T13:13:37.235988325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:13:37.236265 containerd[1970]: time="2024-12-13T13:13:37.236013201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:13:37.236265 containerd[1970]: time="2024-12-13T13:13:37.236187165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:13:37.237191 containerd[1970]: time="2024-12-13T13:13:37.237140373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:13:37.237666 containerd[1970]: time="2024-12-13T13:13:37.237629397Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:13:37.237859 containerd[1970]: time="2024-12-13T13:13:37.237800337Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:13:37.238160 containerd[1970]: time="2024-12-13T13:13:37.238130025Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 13:13:37.238541 containerd[1970]: time="2024-12-13T13:13:37.238389441Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:13:37.248219 locksmithd[1983]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:13:37.251613 containerd[1970]: time="2024-12-13T13:13:37.249330477Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:13:37.251613 containerd[1970]: time="2024-12-13T13:13:37.249504597Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:13:37.251613 containerd[1970]: time="2024-12-13T13:13:37.249543345Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 13:13:37.251613 containerd[1970]: time="2024-12-13T13:13:37.249580689Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:13:37.251613 containerd[1970]: time="2024-12-13T13:13:37.249617217Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:13:37.251613 containerd[1970]: time="2024-12-13T13:13:37.249883329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 13:13:37.251613 containerd[1970]: time="2024-12-13T13:13:37.250339197Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:13:37.251613 containerd[1970]: time="2024-12-13T13:13:37.250547661Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:13:37.251613 containerd[1970]: time="2024-12-13T13:13:37.250582965Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:13:37.251613 containerd[1970]: time="2024-12-13T13:13:37.250616013Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 13:13:37.251613 containerd[1970]: time="2024-12-13T13:13:37.250654017Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:13:37.251613 containerd[1970]: time="2024-12-13T13:13:37.250685865Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:13:37.251613 containerd[1970]: time="2024-12-13T13:13:37.250714893Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:13:37.251613 containerd[1970]: time="2024-12-13T13:13:37.250746789Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:13:37.252174 containerd[1970]: time="2024-12-13T13:13:37.250780533Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 13:13:37.252174 containerd[1970]: time="2024-12-13T13:13:37.250810965Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:13:37.252174 containerd[1970]: time="2024-12-13T13:13:37.250840113Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:13:37.252174 containerd[1970]: time="2024-12-13T13:13:37.250867905Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:13:37.252174 containerd[1970]: time="2024-12-13T13:13:37.250907349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.252174 containerd[1970]: time="2024-12-13T13:13:37.250937949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.252174 containerd[1970]: time="2024-12-13T13:13:37.250975257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.252174 containerd[1970]: time="2024-12-13T13:13:37.251005893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.252174 containerd[1970]: time="2024-12-13T13:13:37.251035257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.252174 containerd[1970]: time="2024-12-13T13:13:37.251065341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.252174 containerd[1970]: time="2024-12-13T13:13:37.251092629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.252174 containerd[1970]: time="2024-12-13T13:13:37.251124501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.252174 containerd[1970]: time="2024-12-13T13:13:37.251154189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.252174 containerd[1970]: time="2024-12-13T13:13:37.251189793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.253252 containerd[1970]: time="2024-12-13T13:13:37.251217177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.253252 containerd[1970]: time="2024-12-13T13:13:37.252946089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.253252 containerd[1970]: time="2024-12-13T13:13:37.253009509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.253252 containerd[1970]: time="2024-12-13T13:13:37.253049301Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:13:37.253252 containerd[1970]: time="2024-12-13T13:13:37.253134201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.253252 containerd[1970]: time="2024-12-13T13:13:37.253193925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Dec 13 13:13:37.253916 containerd[1970]: time="2024-12-13T13:13:37.253567137Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:13:37.253916 containerd[1970]: time="2024-12-13T13:13:37.253777965Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:13:37.254240 containerd[1970]: time="2024-12-13T13:13:37.254071005Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:13:37.254240 containerd[1970]: time="2024-12-13T13:13:37.254109261Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:13:37.254240 containerd[1970]: time="2024-12-13T13:13:37.254168385Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:13:37.254240 containerd[1970]: time="2024-12-13T13:13:37.254195493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.254726 containerd[1970]: time="2024-12-13T13:13:37.254489073Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:13:37.254726 containerd[1970]: time="2024-12-13T13:13:37.254527221Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:13:37.254726 containerd[1970]: time="2024-12-13T13:13:37.254578245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 13:13:37.255716 containerd[1970]: time="2024-12-13T13:13:37.255483837Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:13:37.255716 containerd[1970]: time="2024-12-13T13:13:37.255621009Z" level=info msg="Connect containerd service" Dec 13 13:13:37.256396 containerd[1970]: time="2024-12-13T13:13:37.256038177Z" level=info msg="using legacy CRI server" Dec 13 13:13:37.256396 containerd[1970]: time="2024-12-13T13:13:37.256069797Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:13:37.256776 containerd[1970]: time="2024-12-13T13:13:37.256606713Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:13:37.262101 containerd[1970]: time="2024-12-13T13:13:37.260445069Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:13:37.262101 containerd[1970]: time="2024-12-13T13:13:37.260595285Z" level=info msg="Start subscribing containerd event" Dec 13 13:13:37.262101 containerd[1970]: time="2024-12-13T13:13:37.260660901Z" level=info msg="Start recovering state" Dec 13 13:13:37.262101 containerd[1970]: time="2024-12-13T13:13:37.260777373Z" level=info msg="Start event monitor" Dec 13 13:13:37.262101 containerd[1970]: time="2024-12-13T13:13:37.260800233Z" level=info msg="Start snapshots syncer" Dec 13 13:13:37.262101 containerd[1970]: time="2024-12-13T13:13:37.260822493Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:13:37.262101 containerd[1970]: time="2024-12-13T13:13:37.260841549Z" level=info msg="Start streaming server" Dec 13 13:13:37.262101 containerd[1970]: time="2024-12-13T13:13:37.261818805Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:13:37.262101 containerd[1970]: time="2024-12-13T13:13:37.262060497Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:13:37.263473 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:13:37.268973 containerd[1970]: time="2024-12-13T13:13:37.265107501Z" level=info msg="containerd successfully booted in 0.122225s" Dec 13 13:13:37.369698 dbus-daemon[1935]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 13:13:37.371758 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
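Two things are worth noting in the containerd startup above. The "failed to load cni during init" error is expected at this stage: /etc/cni/net.d is only populated once a CNI plugin (typically installed later by the cluster's network add-on) writes its config, so the CRI plugin simply retries. Second, the daemon reports serving on /run/containerd/containerd.sock; a minimal sketch of talking to that socket, assuming the github.com/containerd/containerd Go client module, would be:

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
)

func main() {
	// Dial the same socket the daemon reports serving on above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	v, err := client.Version(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
}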
Dec 13 13:13:37.372094 dbus-daemon[1935]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1979 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 13:13:37.379717 coreos-metadata[2068]: Dec 13 13:13:37.376 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 13:13:37.385609 coreos-metadata[2068]: Dec 13 13:13:37.380 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 13:13:37.388537 coreos-metadata[2068]: Dec 13 13:13:37.388 INFO Fetch successful Dec 13 13:13:37.388537 coreos-metadata[2068]: Dec 13 13:13:37.388 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 13:13:37.390579 coreos-metadata[2068]: Dec 13 13:13:37.389 INFO Fetch successful Dec 13 13:13:37.390371 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 13:13:37.411425 unknown[2068]: wrote ssh authorized keys file for user: core Dec 13 13:13:37.469460 polkitd[2115]: Started polkitd version 121 Dec 13 13:13:37.482163 sshd_keygen[1963]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:13:37.489916 polkitd[2115]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 13:13:37.490071 polkitd[2115]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 13:13:37.500839 polkitd[2115]: Finished loading, compiling and executing 2 rules Dec 13 13:13:37.502736 update-ssh-keys[2117]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:13:37.504454 dbus-daemon[1935]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 13:13:37.512289 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 13:13:37.515892 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 13:13:37.518052 polkitd[2115]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 13:13:37.531881 systemd[1]: Finished sshkeys.service. Dec 13 13:13:37.538939 ntpd[1939]: bind(24) AF_INET6 fe80::41b:aeff:fe01:9c99%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 13:13:37.539652 ntpd[1939]: 13 Dec 13:13:37 ntpd[1939]: bind(24) AF_INET6 fe80::41b:aeff:fe01:9c99%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 13:13:37.539652 ntpd[1939]: 13 Dec 13:13:37 ntpd[1939]: unable to create socket on eth0 (6) for fe80::41b:aeff:fe01:9c99%2#123 Dec 13 13:13:37.539652 ntpd[1939]: 13 Dec 13:13:37 ntpd[1939]: failed to init interface for address fe80::41b:aeff:fe01:9c99%2 Dec 13 13:13:37.539005 ntpd[1939]: unable to create socket on eth0 (6) for fe80::41b:aeff:fe01:9c99%2#123 Dec 13 13:13:37.539034 ntpd[1939]: failed to init interface for address fe80::41b:aeff:fe01:9c99%2 Dec 13 13:13:37.593632 systemd-hostnamed[1979]: Hostname set to (transient) Dec 13 13:13:37.594365 systemd-resolved[1846]: System hostname changed to 'ip-172-31-27-111'. Dec 13 13:13:37.611910 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:13:37.626476 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:13:37.639406 systemd[1]: Started sshd@0-172.31.27.111:22-139.178.89.65:40604.service - OpenSSH per-connection server daemon (139.178.89.65:40604). Dec 13 13:13:37.667613 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:13:37.671358 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Dec 13 13:13:37.685802 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:13:37.725186 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:13:37.735966 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:13:37.746923 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 13:13:37.749395 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 13:13:37.919641 sshd[2148]: Accepted publickey for core from 139.178.89.65 port 40604 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:13:37.923518 sshd-session[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:13:37.941378 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:13:37.950421 systemd-networkd[1840]: eth0: Gained IPv6LL Dec 13 13:13:37.953632 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:13:37.962269 systemd-logind[1944]: New session 1 of user core. Dec 13 13:13:37.967757 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:13:37.971155 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:13:37.984689 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 13:13:38.002935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:13:38.010753 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:13:38.022296 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:13:38.052909 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:13:38.072725 (systemd)[2165]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:13:38.128673 amazon-ssm-agent[2159]: Initializing new seelog logger Dec 13 13:13:38.128673 amazon-ssm-agent[2159]: New Seelog Logger Creation Complete Dec 13 13:13:38.128673 amazon-ssm-agent[2159]: 2024/12/13 13:13:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:13:38.128673 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:13:38.128673 amazon-ssm-agent[2159]: 2024/12/13 13:13:38 processing appconfig overrides Dec 13 13:13:38.131093 amazon-ssm-agent[2159]: 2024/12/13 13:13:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:13:38.132904 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO Proxy environment variables: Dec 13 13:13:38.133851 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:13:38.137638 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:13:38.137638 amazon-ssm-agent[2159]: 2024/12/13 13:13:38 processing appconfig overrides Dec 13 13:13:38.138370 amazon-ssm-agent[2159]: 2024/12/13 13:13:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:13:38.138370 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:13:38.138802 amazon-ssm-agent[2159]: 2024/12/13 13:13:38 processing appconfig overrides Dec 13 13:13:38.149148 amazon-ssm-agent[2159]: 2024/12/13 13:13:38 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 13:13:38.149148 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 13 13:13:38.149148 amazon-ssm-agent[2159]: 2024/12/13 13:13:38 processing appconfig overrides Dec 13 13:13:38.197080 tar[1952]: linux-arm64/LICENSE Dec 13 13:13:38.198365 tar[1952]: linux-arm64/README.md Dec 13 13:13:38.233291 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 13:13:38.236142 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO https_proxy: Dec 13 13:13:38.335360 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO http_proxy: Dec 13 13:13:38.377207 systemd[2165]: Queued start job for default target default.target. Dec 13 13:13:38.386358 systemd[2165]: Created slice app.slice - User Application Slice. Dec 13 13:13:38.386411 systemd[2165]: Reached target paths.target - Paths. Dec 13 13:13:38.386441 systemd[2165]: Reached target timers.target - Timers. Dec 13 13:13:38.397413 systemd[2165]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:13:38.430758 systemd[2165]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:13:38.431487 systemd[2165]: Reached target sockets.target - Sockets. Dec 13 13:13:38.431534 systemd[2165]: Reached target basic.target - Basic System. Dec 13 13:13:38.431625 systemd[2165]: Reached target default.target - Main User Target. Dec 13 13:13:38.439599 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO no_proxy: Dec 13 13:13:38.431690 systemd[2165]: Startup finished in 339ms. Dec 13 13:13:38.431725 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:13:38.440518 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 13:13:38.536401 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO Checking if agent identity type OnPrem can be assumed Dec 13 13:13:38.615735 systemd[1]: Started sshd@1-172.31.27.111:22-139.178.89.65:43240.service - OpenSSH per-connection server daemon (139.178.89.65:43240). Dec 13 13:13:38.635143 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO Checking if agent identity type EC2 can be assumed Dec 13 13:13:38.733537 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO Agent will take identity from EC2 Dec 13 13:13:38.747412 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 13:13:38.747412 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 13:13:38.747412 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 13:13:38.747976 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 13:13:38.747976 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 13 13:13:38.747976 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 13:13:38.747976 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO [amazon-ssm-agent] registrar detected. Attempting registration Dec 13 13:13:38.747976 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO [Registrar] Starting registrar module Dec 13 13:13:38.747976 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 13:13:38.747976 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO [EC2Identity] EC2 registration was successful. 
Dec 13 13:13:38.747976 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO [CredentialRefresher] credentialRefresher has started Dec 13 13:13:38.747976 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 13:13:38.748670 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 13:13:38.833948 amazon-ssm-agent[2159]: 2024-12-13 13:13:38 INFO [CredentialRefresher] Next credential rotation will be in 31.441639240966666 minutes Dec 13 13:13:38.834071 sshd[2192]: Accepted publickey for core from 139.178.89.65 port 43240 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:13:38.836744 sshd-session[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:13:38.846551 systemd-logind[1944]: New session 2 of user core. Dec 13 13:13:38.852537 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:13:38.983125 sshd[2194]: Connection closed by 139.178.89.65 port 43240 Dec 13 13:13:38.983991 sshd-session[2192]: pam_unix(sshd:session): session closed for user core Dec 13 13:13:38.995687 systemd[1]: sshd@1-172.31.27.111:22-139.178.89.65:43240.service: Deactivated successfully. Dec 13 13:13:39.000112 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 13:13:39.001582 systemd-logind[1944]: Session 2 logged out. Waiting for processes to exit. Dec 13 13:13:39.003686 systemd-logind[1944]: Removed session 2. Dec 13 13:13:39.021028 systemd[1]: Started sshd@2-172.31.27.111:22-139.178.89.65:43254.service - OpenSSH per-connection server daemon (139.178.89.65:43254). Dec 13 13:13:39.220762 sshd[2199]: Accepted publickey for core from 139.178.89.65 port 43254 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:13:39.223216 sshd-session[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:13:39.235061 systemd-logind[1944]: New session 3 of user core. Dec 13 13:13:39.242561 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 13:13:39.374285 sshd[2201]: Connection closed by 139.178.89.65 port 43254 Dec 13 13:13:39.374895 sshd-session[2199]: pam_unix(sshd:session): session closed for user core Dec 13 13:13:39.381026 systemd[1]: sshd@2-172.31.27.111:22-139.178.89.65:43254.service: Deactivated successfully. Dec 13 13:13:39.384355 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 13:13:39.386471 systemd-logind[1944]: Session 3 logged out. Waiting for processes to exit. Dec 13 13:13:39.390004 systemd-logind[1944]: Removed session 3. Dec 13 13:13:39.611548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:13:39.615051 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:13:39.617744 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:13:39.621410 systemd[1]: Startup finished in 1.071s (kernel) + 8.292s (initrd) + 8.594s (userspace) = 17.958s. 
Dec 13 13:13:39.650695 agetty[2155]: failed to open credentials directory Dec 13 13:13:39.651976 agetty[2156]: failed to open credentials directory Dec 13 13:13:39.776526 amazon-ssm-agent[2159]: 2024-12-13 13:13:39 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 13:13:39.878025 amazon-ssm-agent[2159]: 2024-12-13 13:13:39 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2216) started Dec 13 13:13:39.978056 amazon-ssm-agent[2159]: 2024-12-13 13:13:39 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 13:13:40.538852 ntpd[1939]: Listen normally on 7 eth0 [fe80::41b:aeff:fe01:9c99%2]:123 Dec 13 13:13:40.539345 ntpd[1939]: 13 Dec 13:13:40 ntpd[1939]: Listen normally on 7 eth0 [fe80::41b:aeff:fe01:9c99%2]:123 Dec 13 13:13:40.661348 kubelet[2210]: E1213 13:13:40.658756 2210 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:13:40.666615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:13:40.666973 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:13:40.667690 systemd[1]: kubelet.service: Consumed 1.323s CPU time. Dec 13 13:13:49.414339 systemd[1]: Started sshd@3-172.31.27.111:22-139.178.89.65:34060.service - OpenSSH per-connection server daemon (139.178.89.65:34060). Dec 13 13:13:49.609169 sshd[2234]: Accepted publickey for core from 139.178.89.65 port 34060 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:13:49.611602 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:13:49.620066 systemd-logind[1944]: New session 4 of user core. Dec 13 13:13:49.627506 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:13:49.753769 sshd[2236]: Connection closed by 139.178.89.65 port 34060 Dec 13 13:13:49.754981 sshd-session[2234]: pam_unix(sshd:session): session closed for user core Dec 13 13:13:49.760160 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 13:13:49.762354 systemd[1]: sshd@3-172.31.27.111:22-139.178.89.65:34060.service: Deactivated successfully. Dec 13 13:13:49.768056 systemd-logind[1944]: Session 4 logged out. Waiting for processes to exit. Dec 13 13:13:49.769720 systemd-logind[1944]: Removed session 4. Dec 13 13:13:49.791762 systemd[1]: Started sshd@4-172.31.27.111:22-139.178.89.65:34074.service - OpenSSH per-connection server daemon (139.178.89.65:34074). Dec 13 13:13:49.975816 sshd[2241]: Accepted publickey for core from 139.178.89.65 port 34074 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:13:49.978867 sshd-session[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:13:49.986289 systemd-logind[1944]: New session 5 of user core. Dec 13 13:13:49.994498 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 13 13:13:50.111110 sshd[2243]: Connection closed by 139.178.89.65 port 34074 Dec 13 13:13:50.112066 sshd-session[2241]: pam_unix(sshd:session): session closed for user core Dec 13 13:13:50.118538 systemd[1]: sshd@4-172.31.27.111:22-139.178.89.65:34074.service: Deactivated successfully. Dec 13 13:13:50.121768 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:13:50.123082 systemd-logind[1944]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:13:50.125608 systemd-logind[1944]: Removed session 5. Dec 13 13:13:50.150775 systemd[1]: Started sshd@5-172.31.27.111:22-139.178.89.65:34084.service - OpenSSH per-connection server daemon (139.178.89.65:34084). Dec 13 13:13:50.342597 sshd[2248]: Accepted publickey for core from 139.178.89.65 port 34084 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:13:50.345110 sshd-session[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:13:50.352436 systemd-logind[1944]: New session 6 of user core. Dec 13 13:13:50.363499 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 13:13:50.492508 sshd[2250]: Connection closed by 139.178.89.65 port 34084 Dec 13 13:13:50.493394 sshd-session[2248]: pam_unix(sshd:session): session closed for user core Dec 13 13:13:50.499535 systemd[1]: sshd@5-172.31.27.111:22-139.178.89.65:34084.service: Deactivated successfully. Dec 13 13:13:50.503213 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:13:50.504683 systemd-logind[1944]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:13:50.506585 systemd-logind[1944]: Removed session 6. Dec 13 13:13:50.529501 systemd[1]: Started sshd@6-172.31.27.111:22-139.178.89.65:34090.service - OpenSSH per-connection server daemon (139.178.89.65:34090). Dec 13 13:13:50.695371 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:13:50.706602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:13:50.729125 sshd[2255]: Accepted publickey for core from 139.178.89.65 port 34090 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:13:50.731630 sshd-session[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:13:50.745003 systemd-logind[1944]: New session 7 of user core. Dec 13 13:13:50.753088 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:13:50.900183 sudo[2261]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:13:50.901510 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:13:51.015182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:13:51.033825 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:13:51.173147 kubelet[2271]: E1213 13:13:51.172495 2271 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:13:51.181451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:13:51.181765 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
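The repeated kubelet exits above are the usual pre-bootstrap state on a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, so until that happens the unit fails and systemd keeps rescheduling it (the restart counters climb later in the log). A trivial, purely illustrative Go sketch of the same pre-flight check, with the path taken from the error message:

package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); os.IsNotExist(err) {
		// Matches the failure mode in the log: the file only appears after
		// `kubeadm init` or `kubeadm join` has written the kubelet config.
		fmt.Printf("%s missing: kubelet will keep exiting until it is provisioned\n", path)
		os.Exit(1)
	}
	fmt.Println("kubelet config present")
}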
Dec 13 13:13:51.711000 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 13:13:51.723771 (dockerd)[2294]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 13:13:52.178298 dockerd[2294]: time="2024-12-13T13:13:52.177970722Z" level=info msg="Starting up" Dec 13 13:13:52.376282 dockerd[2294]: time="2024-12-13T13:13:52.376212931Z" level=info msg="Loading containers: start." Dec 13 13:13:52.626270 kernel: Initializing XFRM netlink socket Dec 13 13:13:52.677309 (udev-worker)[2317]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:13:52.776425 systemd-networkd[1840]: docker0: Link UP Dec 13 13:13:52.815582 dockerd[2294]: time="2024-12-13T13:13:52.815508717Z" level=info msg="Loading containers: done." Dec 13 13:13:52.840131 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4175996997-merged.mount: Deactivated successfully. Dec 13 13:13:52.842669 dockerd[2294]: time="2024-12-13T13:13:52.842589261Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 13:13:52.842816 dockerd[2294]: time="2024-12-13T13:13:52.842742021Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Dec 13 13:13:52.843011 dockerd[2294]: time="2024-12-13T13:13:52.842963385Z" level=info msg="Daemon has completed initialization" Dec 13 13:13:52.893458 dockerd[2294]: time="2024-12-13T13:13:52.893127381Z" level=info msg="API listen on /run/docker.sock" Dec 13 13:13:52.893746 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 13:13:54.083405 containerd[1970]: time="2024-12-13T13:13:54.083045083Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 13:13:54.718275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342167273.mount: Deactivated successfully. 
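The dockerd startup in this stretch ends with "API listen on /run/docker.sock". That socket speaks the Docker Engine HTTP API, and /_ping is its health-check endpoint; a minimal standard-library Go sketch for probing it over the unix socket:

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Route all HTTP requests over the unix socket the daemon reports above.
	tr := &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
		},
	}
	client := &http.Client{Transport: tr}

	// The host part of the URL is ignored once we dial the socket directly.
	resp, err := client.Get("http://localhost/_ping")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // a healthy daemon answers "200 OK" with body "OK"
}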
Dec 13 13:13:57.004299 containerd[1970]: time="2024-12-13T13:13:57.003088522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:13:57.004299 containerd[1970]: time="2024-12-13T13:13:57.004191322Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Dec 13 13:13:57.005950 containerd[1970]: time="2024-12-13T13:13:57.005901634Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:13:57.018036 containerd[1970]: time="2024-12-13T13:13:57.017977690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:13:57.020593 containerd[1970]: time="2024-12-13T13:13:57.020523298Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.937402411s" Dec 13 13:13:57.020725 containerd[1970]: time="2024-12-13T13:13:57.020592334Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 13:13:57.060275 containerd[1970]: time="2024-12-13T13:13:57.060207106Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 13:13:59.312347 containerd[1970]: time="2024-12-13T13:13:59.311478373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:13:59.314437 containerd[1970]: time="2024-12-13T13:13:59.314348029Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Dec 13 13:13:59.316932 containerd[1970]: time="2024-12-13T13:13:59.316863601Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:13:59.322947 containerd[1970]: time="2024-12-13T13:13:59.322849261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:13:59.325268 containerd[1970]: time="2024-12-13T13:13:59.325011781Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 2.264537039s" Dec 13 13:13:59.325268 containerd[1970]: time="2024-12-13T13:13:59.325112713Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 
13:13:59.365593 containerd[1970]: time="2024-12-13T13:13:59.365266909Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 13:14:00.871287 containerd[1970]: time="2024-12-13T13:14:00.870469877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:00.873267 containerd[1970]: time="2024-12-13T13:14:00.872901317Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Dec 13 13:14:00.875413 containerd[1970]: time="2024-12-13T13:14:00.875348345Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:00.883517 containerd[1970]: time="2024-12-13T13:14:00.883437581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:00.886559 containerd[1970]: time="2024-12-13T13:14:00.885869477Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.520541524s" Dec 13 13:14:00.886559 containerd[1970]: time="2024-12-13T13:14:00.885926453Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 13:14:00.926758 containerd[1970]: time="2024-12-13T13:14:00.926451029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 13:14:01.337621 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 13:14:01.346606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:14:01.674623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:14:01.687998 (kubelet)[2571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:14:01.811483 kubelet[2571]: E1213 13:14:01.810260 2571 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:14:01.818461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:14:01.818787 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:14:02.401667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1679350858.mount: Deactivated successfully. 
Dec 13 13:14:03.014088 containerd[1970]: time="2024-12-13T13:14:03.013100668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:03.015301 containerd[1970]: time="2024-12-13T13:14:03.015200956Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Dec 13 13:14:03.017960 containerd[1970]: time="2024-12-13T13:14:03.017914204Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:03.023942 containerd[1970]: time="2024-12-13T13:14:03.023851048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:03.025643 containerd[1970]: time="2024-12-13T13:14:03.025439020Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 2.098927535s" Dec 13 13:14:03.025643 containerd[1970]: time="2024-12-13T13:14:03.025490608Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 13:14:03.064545 containerd[1970]: time="2024-12-13T13:14:03.064457608Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 13:14:03.652099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1559378821.mount: Deactivated successfully. 
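The PullImage lines above come from containerd's CRI plugin, which stores its images in the "k8s.io" namespace. An equivalent pull can be reproduced directly against the daemon; a hedged sketch assuming the github.com/containerd/containerd Go client module, using the same coredns reference the log shows being fetched next:

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same image the CRI plugin pulls in the log above.
	img, err := client.Pull(ctx, "registry.k8s.io/coredns/coredns:v1.11.1", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name())
}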
Dec 13 13:14:04.762961 containerd[1970]: time="2024-12-13T13:14:04.762880400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:04.765080 containerd[1970]: time="2024-12-13T13:14:04.765009536Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 13:14:04.767330 containerd[1970]: time="2024-12-13T13:14:04.767255960Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:04.773593 containerd[1970]: time="2024-12-13T13:14:04.773511416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:04.775949 containerd[1970]: time="2024-12-13T13:14:04.775742408Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.7112239s" Dec 13 13:14:04.775949 containerd[1970]: time="2024-12-13T13:14:04.775803728Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 13:14:04.815084 containerd[1970]: time="2024-12-13T13:14:04.815006325Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 13:14:05.340298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1487945062.mount: Deactivated successfully. 
Dec 13 13:14:05.354038 containerd[1970]: time="2024-12-13T13:14:05.353984371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:05.357460 containerd[1970]: time="2024-12-13T13:14:05.357386983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Dec 13 13:14:05.359946 containerd[1970]: time="2024-12-13T13:14:05.359877163Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:05.371142 containerd[1970]: time="2024-12-13T13:14:05.371069383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:05.373401 containerd[1970]: time="2024-12-13T13:14:05.373141567Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 558.07475ms" Dec 13 13:14:05.373401 containerd[1970]: time="2024-12-13T13:14:05.373204807Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 13:14:05.411737 containerd[1970]: time="2024-12-13T13:14:05.411675535Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 13:14:05.971574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3516597671.mount: Deactivated successfully. Dec 13 13:14:07.629833 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 13 13:14:08.872289 containerd[1970]: time="2024-12-13T13:14:08.871920961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:08.874216 containerd[1970]: time="2024-12-13T13:14:08.874131241Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Dec 13 13:14:08.876826 containerd[1970]: time="2024-12-13T13:14:08.876730129Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:08.883087 containerd[1970]: time="2024-12-13T13:14:08.882987661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:08.885467 containerd[1970]: time="2024-12-13T13:14:08.885260509Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.473313462s" Dec 13 13:14:08.885467 containerd[1970]: time="2024-12-13T13:14:08.885311077Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 13:14:11.837627 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 13:14:11.848421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:14:12.131653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:14:12.142762 (kubelet)[2761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:14:12.234252 kubelet[2761]: E1213 13:14:12.233273 2761 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:14:12.238396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:14:12.238721 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:14:15.402983 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:14:15.410733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:14:15.458423 systemd[1]: Reloading requested from client PID 2777 ('systemctl') (unit session-7.scope)... Dec 13 13:14:15.458636 systemd[1]: Reloading... Dec 13 13:14:15.712265 zram_generator::config[2826]: No configuration found. Dec 13 13:14:15.921973 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:14:16.089601 systemd[1]: Reloading finished in 630 ms. Dec 13 13:14:16.167464 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 13:14:16.167647 systemd[1]: kubelet.service: Failed with result 'signal'. 
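For a rough sense of pull throughput, the etcd image above reports 65,198,393 bytes fetched in about 3.47 s, i.e. roughly 65,198,393 / 3.47 ≈ 18.8 MB/s (about 150 Mbit/s), the slowest of the pulls in this sequence only because it is by far the largest layer set.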
Dec 13 13:14:16.168186 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:14:16.176835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:14:16.482503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:14:16.495755 (kubelet)[2879]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:14:16.583183 kubelet[2879]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:14:16.583183 kubelet[2879]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:14:16.583183 kubelet[2879]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:14:16.583828 kubelet[2879]: I1213 13:14:16.583296 2879 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:14:17.861595 kubelet[2879]: I1213 13:14:17.861532 2879 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:14:17.861595 kubelet[2879]: I1213 13:14:17.861584 2879 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:14:17.862212 kubelet[2879]: I1213 13:14:17.861932 2879 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:14:17.899463 kubelet[2879]: I1213 13:14:17.899407 2879 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:14:17.900011 kubelet[2879]: E1213 13:14:17.899804 2879 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.27.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:17.913570 kubelet[2879]: I1213 13:14:17.912431 2879 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:14:17.913570 kubelet[2879]: I1213 13:14:17.912906 2879 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:14:17.913570 kubelet[2879]: I1213 13:14:17.913197 2879 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:14:17.913570 kubelet[2879]: I1213 13:14:17.913251 2879 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:14:17.913570 kubelet[2879]: I1213 13:14:17.913275 2879 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:14:17.915740 kubelet[2879]: I1213 13:14:17.915709 2879 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:14:17.920454 kubelet[2879]: I1213 13:14:17.920420 2879 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:14:17.920633 kubelet[2879]: I1213 13:14:17.920610 2879 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:14:17.920803 kubelet[2879]: I1213 13:14:17.920783 2879 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:14:17.920915 kubelet[2879]: I1213 13:14:17.920895 2879 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:14:17.923555 kubelet[2879]: W1213 13:14:17.923481 2879 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.27.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-111&limit=500&resourceVersion=0": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:17.923740 kubelet[2879]: E1213 13:14:17.923719 2879 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-111&limit=500&resourceVersion=0": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:17.924465 kubelet[2879]: W1213 13:14:17.924409 2879 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.27.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.27.111:6443: connect: connection refused Dec 13 13:14:17.924675 kubelet[2879]: E1213 13:14:17.924651 2879 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:17.925636 kubelet[2879]: I1213 13:14:17.924860 2879 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:14:17.925636 kubelet[2879]: I1213 13:14:17.925396 2879 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:14:17.927841 kubelet[2879]: W1213 13:14:17.927803 2879 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 13:14:17.929535 kubelet[2879]: I1213 13:14:17.929501 2879 server.go:1256] "Started kubelet" Dec 13 13:14:17.932543 kubelet[2879]: I1213 13:14:17.932500 2879 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:14:17.941406 kubelet[2879]: E1213 13:14:17.941334 2879 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.111:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.111:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-111.1810bed0316ea6c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-111,UID:ip-172-31-27-111,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-111,},FirstTimestamp:2024-12-13 13:14:17.929451202 +0000 UTC m=+1.426861424,LastTimestamp:2024-12-13 13:14:17.929451202 +0000 UTC m=+1.426861424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-111,}" Dec 13 13:14:17.943151 kubelet[2879]: I1213 13:14:17.941749 2879 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:14:17.943434 kubelet[2879]: I1213 13:14:17.943403 2879 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:14:17.944064 kubelet[2879]: I1213 13:14:17.944006 2879 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:14:17.945872 kubelet[2879]: I1213 13:14:17.945811 2879 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:14:17.946123 kubelet[2879]: I1213 13:14:17.946087 2879 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:14:17.947739 kubelet[2879]: I1213 13:14:17.947690 2879 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:14:17.948029 kubelet[2879]: I1213 13:14:17.948009 2879 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:14:17.949324 kubelet[2879]: E1213 13:14:17.949212 2879 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-111?timeout=10s\": dial tcp 172.31.27.111:6443: connect: connection refused" interval="200ms" Dec 13 13:14:17.950361 kubelet[2879]: I1213 13:14:17.950265 2879 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:14:17.956894 kubelet[2879]: W1213 13:14:17.956804 2879 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.27.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:17.957134 kubelet[2879]: E1213 13:14:17.957108 2879 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:17.957645 kubelet[2879]: I1213 13:14:17.957608 2879 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:14:17.959266 kubelet[2879]: I1213 13:14:17.957787 2879 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:14:17.973168 kubelet[2879]: I1213 13:14:17.973103 2879 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:14:17.975548 kubelet[2879]: I1213 13:14:17.975499 2879 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:14:17.975548 kubelet[2879]: I1213 13:14:17.975545 2879 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:14:17.975744 kubelet[2879]: I1213 13:14:17.975585 2879 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:14:17.975744 kubelet[2879]: E1213 13:14:17.975665 2879 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:14:17.995968 kubelet[2879]: W1213 13:14:17.995874 2879 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.27.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:17.995968 kubelet[2879]: E1213 13:14:17.995972 2879 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:18.000204 kubelet[2879]: E1213 13:14:18.000061 2879 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:14:18.008055 kubelet[2879]: I1213 13:14:18.007948 2879 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:14:18.008055 kubelet[2879]: I1213 13:14:18.007987 2879 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:14:18.008055 kubelet[2879]: I1213 13:14:18.008018 2879 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:14:18.010494 kubelet[2879]: I1213 13:14:18.010439 2879 policy_none.go:49] "None policy: Start" Dec 13 13:14:18.011592 kubelet[2879]: I1213 13:14:18.011558 2879 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:14:18.011709 kubelet[2879]: I1213 13:14:18.011626 2879 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:14:18.025661 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 13:14:18.042267 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:14:18.046784 kubelet[2879]: I1213 13:14:18.046157 2879 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-111" Dec 13 13:14:18.046784 kubelet[2879]: E1213 13:14:18.046729 2879 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.111:6443/api/v1/nodes\": dial tcp 172.31.27.111:6443: connect: connection refused" node="ip-172-31-27-111" Dec 13 13:14:18.049666 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 13:14:18.062149 kubelet[2879]: I1213 13:14:18.061449 2879 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:14:18.062149 kubelet[2879]: I1213 13:14:18.061839 2879 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:14:18.064117 kubelet[2879]: E1213 13:14:18.064086 2879 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-111\" not found" Dec 13 13:14:18.076566 kubelet[2879]: I1213 13:14:18.076510 2879 topology_manager.go:215] "Topology Admit Handler" podUID="c1fdcb5f3aef6025a61a0585462cfd01" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-111" Dec 13 13:14:18.078426 kubelet[2879]: I1213 13:14:18.078384 2879 topology_manager.go:215] "Topology Admit Handler" podUID="6debe0ff2dfed4981ea8ec0ba8b35b7d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-111" Dec 13 13:14:18.080751 kubelet[2879]: I1213 13:14:18.080637 2879 topology_manager.go:215] "Topology Admit Handler" podUID="47ecd35f4ac178f653db0fa42a172b03" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-111" Dec 13 13:14:18.095527 systemd[1]: Created slice kubepods-burstable-podc1fdcb5f3aef6025a61a0585462cfd01.slice - libcontainer container kubepods-burstable-podc1fdcb5f3aef6025a61a0585462cfd01.slice. Dec 13 13:14:18.115393 systemd[1]: Created slice kubepods-burstable-pod6debe0ff2dfed4981ea8ec0ba8b35b7d.slice - libcontainer container kubepods-burstable-pod6debe0ff2dfed4981ea8ec0ba8b35b7d.slice. Dec 13 13:14:18.124783 systemd[1]: Created slice kubepods-burstable-pod47ecd35f4ac178f653db0fa42a172b03.slice - libcontainer container kubepods-burstable-pod47ecd35f4ac178f653db0fa42a172b03.slice. 
Dec 13 13:14:18.149982 kubelet[2879]: I1213 13:14:18.149428 2879 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6debe0ff2dfed4981ea8ec0ba8b35b7d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-111\" (UID: \"6debe0ff2dfed4981ea8ec0ba8b35b7d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-111" Dec 13 13:14:18.149982 kubelet[2879]: I1213 13:14:18.149501 2879 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47ecd35f4ac178f653db0fa42a172b03-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-111\" (UID: \"47ecd35f4ac178f653db0fa42a172b03\") " pod="kube-system/kube-scheduler-ip-172-31-27-111" Dec 13 13:14:18.149982 kubelet[2879]: I1213 13:14:18.149550 2879 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1fdcb5f3aef6025a61a0585462cfd01-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-111\" (UID: \"c1fdcb5f3aef6025a61a0585462cfd01\") " pod="kube-system/kube-apiserver-ip-172-31-27-111" Dec 13 13:14:18.149982 kubelet[2879]: I1213 13:14:18.149596 2879 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6debe0ff2dfed4981ea8ec0ba8b35b7d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-111\" (UID: \"6debe0ff2dfed4981ea8ec0ba8b35b7d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-111" Dec 13 13:14:18.149982 kubelet[2879]: I1213 13:14:18.149643 2879 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6debe0ff2dfed4981ea8ec0ba8b35b7d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-111\" (UID: \"6debe0ff2dfed4981ea8ec0ba8b35b7d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-111" Dec 13 13:14:18.150366 kubelet[2879]: I1213 13:14:18.149686 2879 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6debe0ff2dfed4981ea8ec0ba8b35b7d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-111\" (UID: \"6debe0ff2dfed4981ea8ec0ba8b35b7d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-111" Dec 13 13:14:18.150366 kubelet[2879]: I1213 13:14:18.149728 2879 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1fdcb5f3aef6025a61a0585462cfd01-ca-certs\") pod \"kube-apiserver-ip-172-31-27-111\" (UID: \"c1fdcb5f3aef6025a61a0585462cfd01\") " pod="kube-system/kube-apiserver-ip-172-31-27-111" Dec 13 13:14:18.150366 kubelet[2879]: I1213 13:14:18.149789 2879 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1fdcb5f3aef6025a61a0585462cfd01-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-111\" (UID: \"c1fdcb5f3aef6025a61a0585462cfd01\") " pod="kube-system/kube-apiserver-ip-172-31-27-111" Dec 13 13:14:18.150366 kubelet[2879]: I1213 13:14:18.149865 2879 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/6debe0ff2dfed4981ea8ec0ba8b35b7d-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-111\" (UID: \"6debe0ff2dfed4981ea8ec0ba8b35b7d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-111" Dec 13 13:14:18.150366 kubelet[2879]: E1213 13:14:18.149947 2879 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-111?timeout=10s\": dial tcp 172.31.27.111:6443: connect: connection refused" interval="400ms" Dec 13 13:14:18.249312 kubelet[2879]: I1213 13:14:18.249270 2879 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-111" Dec 13 13:14:18.249798 kubelet[2879]: E1213 13:14:18.249759 2879 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.111:6443/api/v1/nodes\": dial tcp 172.31.27.111:6443: connect: connection refused" node="ip-172-31-27-111" Dec 13 13:14:18.412268 containerd[1970]: time="2024-12-13T13:14:18.411999236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-111,Uid:c1fdcb5f3aef6025a61a0585462cfd01,Namespace:kube-system,Attempt:0,}" Dec 13 13:14:18.426088 containerd[1970]: time="2024-12-13T13:14:18.426019748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-111,Uid:6debe0ff2dfed4981ea8ec0ba8b35b7d,Namespace:kube-system,Attempt:0,}" Dec 13 13:14:18.429727 containerd[1970]: time="2024-12-13T13:14:18.429359576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-111,Uid:47ecd35f4ac178f653db0fa42a172b03,Namespace:kube-system,Attempt:0,}" Dec 13 13:14:18.551375 kubelet[2879]: E1213 13:14:18.551318 2879 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-111?timeout=10s\": dial tcp 172.31.27.111:6443: connect: connection refused" interval="800ms" Dec 13 13:14:18.651953 kubelet[2879]: I1213 13:14:18.651898 2879 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-111" Dec 13 13:14:18.652428 kubelet[2879]: E1213 13:14:18.652382 2879 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.111:6443/api/v1/nodes\": dial tcp 172.31.27.111:6443: connect: connection refused" node="ip-172-31-27-111" Dec 13 13:14:18.858259 kubelet[2879]: W1213 13:14:18.858162 2879 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.27.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-111&limit=500&resourceVersion=0": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:18.858440 kubelet[2879]: E1213 13:14:18.858282 2879 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-111&limit=500&resourceVersion=0": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:18.911564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1067816245.mount: Deactivated successfully. 
Dec 13 13:14:18.919647 containerd[1970]: time="2024-12-13T13:14:18.919201991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:14:18.921532 containerd[1970]: time="2024-12-13T13:14:18.921473915Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:14:18.923439 containerd[1970]: time="2024-12-13T13:14:18.923354759Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 13:14:18.924343 containerd[1970]: time="2024-12-13T13:14:18.924261887Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:14:18.927802 containerd[1970]: time="2024-12-13T13:14:18.927430763Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:14:18.929123 containerd[1970]: time="2024-12-13T13:14:18.928971071Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:14:18.933027 containerd[1970]: time="2024-12-13T13:14:18.932939279Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:14:18.937817 containerd[1970]: time="2024-12-13T13:14:18.937748099Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 511.611723ms" Dec 13 13:14:18.940210 containerd[1970]: time="2024-12-13T13:14:18.939825647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:14:18.944796 containerd[1970]: time="2024-12-13T13:14:18.944733479Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 532.627119ms" Dec 13 13:14:18.945856 kubelet[2879]: W1213 13:14:18.945792 2879 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.27.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:18.945856 kubelet[2879]: E1213 13:14:18.945859 2879 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:18.948334 containerd[1970]: time="2024-12-13T13:14:18.948098675Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 518.631243ms" Dec 13 13:14:19.013069 kubelet[2879]: W1213 13:14:19.012537 2879 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.27.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:19.013069 kubelet[2879]: E1213 13:14:19.012614 2879 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:19.110644 kubelet[2879]: W1213 13:14:19.110471 2879 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.27.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:19.110827 kubelet[2879]: E1213 13:14:19.110804 2879 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.111:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.111:6443: connect: connection refused Dec 13 13:14:19.186659 containerd[1970]: time="2024-12-13T13:14:19.186473072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:14:19.186659 containerd[1970]: time="2024-12-13T13:14:19.186585224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:14:19.186918 containerd[1970]: time="2024-12-13T13:14:19.186611660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:14:19.186918 containerd[1970]: time="2024-12-13T13:14:19.186763868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:14:19.199201 containerd[1970]: time="2024-12-13T13:14:19.199051016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:14:19.199535 containerd[1970]: time="2024-12-13T13:14:19.199489844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:14:19.199677 containerd[1970]: time="2024-12-13T13:14:19.199636040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:14:19.200041 containerd[1970]: time="2024-12-13T13:14:19.199997912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:14:19.203892 containerd[1970]: time="2024-12-13T13:14:19.203321300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:14:19.203892 containerd[1970]: time="2024-12-13T13:14:19.203451164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:14:19.203892 containerd[1970]: time="2024-12-13T13:14:19.203481536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:14:19.203892 containerd[1970]: time="2024-12-13T13:14:19.203628704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:14:19.249121 systemd[1]: Started cri-containerd-d0528f45cf71a7be8a5b1c6d6132ef45c8cc4fe40376ccbba99e15ec47a2f5f7.scope - libcontainer container d0528f45cf71a7be8a5b1c6d6132ef45c8cc4fe40376ccbba99e15ec47a2f5f7. Dec 13 13:14:19.265103 systemd[1]: Started cri-containerd-be6e094ef8ccf157bff33672e42a6ccabad436d3dd332f478d3166ecb9c689e8.scope - libcontainer container be6e094ef8ccf157bff33672e42a6ccabad436d3dd332f478d3166ecb9c689e8. Dec 13 13:14:19.272152 systemd[1]: Started cri-containerd-d73786682600e972ff6d21e198b5784fba93c023b3dabb5d6554b3293d8a4116.scope - libcontainer container d73786682600e972ff6d21e198b5784fba93c023b3dabb5d6554b3293d8a4116. Dec 13 13:14:19.353513 kubelet[2879]: E1213 13:14:19.353442 2879 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-111?timeout=10s\": dial tcp 172.31.27.111:6443: connect: connection refused" interval="1.6s" Dec 13 13:14:19.373200 containerd[1970]: time="2024-12-13T13:14:19.372919857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-111,Uid:c1fdcb5f3aef6025a61a0585462cfd01,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0528f45cf71a7be8a5b1c6d6132ef45c8cc4fe40376ccbba99e15ec47a2f5f7\"" Dec 13 13:14:19.388943 containerd[1970]: time="2024-12-13T13:14:19.388185753Z" level=info msg="CreateContainer within sandbox \"d0528f45cf71a7be8a5b1c6d6132ef45c8cc4fe40376ccbba99e15ec47a2f5f7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:14:19.394678 containerd[1970]: time="2024-12-13T13:14:19.394413549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-111,Uid:6debe0ff2dfed4981ea8ec0ba8b35b7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"be6e094ef8ccf157bff33672e42a6ccabad436d3dd332f478d3166ecb9c689e8\"" Dec 13 13:14:19.406094 containerd[1970]: time="2024-12-13T13:14:19.403782585Z" level=info msg="CreateContainer within sandbox \"be6e094ef8ccf157bff33672e42a6ccabad436d3dd332f478d3166ecb9c689e8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:14:19.430818 containerd[1970]: time="2024-12-13T13:14:19.430180497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-111,Uid:47ecd35f4ac178f653db0fa42a172b03,Namespace:kube-system,Attempt:0,} returns sandbox id \"d73786682600e972ff6d21e198b5784fba93c023b3dabb5d6554b3293d8a4116\"" Dec 13 13:14:19.437502 containerd[1970]: time="2024-12-13T13:14:19.437122461Z" level=info msg="CreateContainer within sandbox \"d73786682600e972ff6d21e198b5784fba93c023b3dabb5d6554b3293d8a4116\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:14:19.444182 containerd[1970]: time="2024-12-13T13:14:19.444124401Z" 
level=info msg="CreateContainer within sandbox \"d0528f45cf71a7be8a5b1c6d6132ef45c8cc4fe40376ccbba99e15ec47a2f5f7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8d7114abc8cce67c11acb0e06cfabe2e9b43576b8987e91012547f6582dd2bc1\"" Dec 13 13:14:19.447246 containerd[1970]: time="2024-12-13T13:14:19.445371477Z" level=info msg="StartContainer for \"8d7114abc8cce67c11acb0e06cfabe2e9b43576b8987e91012547f6582dd2bc1\"" Dec 13 13:14:19.456485 kubelet[2879]: I1213 13:14:19.456447 2879 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-111" Dec 13 13:14:19.457138 kubelet[2879]: E1213 13:14:19.457098 2879 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.111:6443/api/v1/nodes\": dial tcp 172.31.27.111:6443: connect: connection refused" node="ip-172-31-27-111" Dec 13 13:14:19.465316 containerd[1970]: time="2024-12-13T13:14:19.465224613Z" level=info msg="CreateContainer within sandbox \"be6e094ef8ccf157bff33672e42a6ccabad436d3dd332f478d3166ecb9c689e8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bf51da624258d8e97a94a8c2238579f049d57a52e98f4843ca0fc95aa1de366a\"" Dec 13 13:14:19.466318 containerd[1970]: time="2024-12-13T13:14:19.466272357Z" level=info msg="StartContainer for \"bf51da624258d8e97a94a8c2238579f049d57a52e98f4843ca0fc95aa1de366a\"" Dec 13 13:14:19.489525 containerd[1970]: time="2024-12-13T13:14:19.489470913Z" level=info msg="CreateContainer within sandbox \"d73786682600e972ff6d21e198b5784fba93c023b3dabb5d6554b3293d8a4116\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"472fc8d1f1ac3ee868edc415e544f2446723b5f87f9c205626d4ab6d5eb784e8\"" Dec 13 13:14:19.493713 containerd[1970]: time="2024-12-13T13:14:19.492641637Z" level=info msg="StartContainer for \"472fc8d1f1ac3ee868edc415e544f2446723b5f87f9c205626d4ab6d5eb784e8\"" Dec 13 13:14:19.519414 systemd[1]: Started cri-containerd-8d7114abc8cce67c11acb0e06cfabe2e9b43576b8987e91012547f6582dd2bc1.scope - libcontainer container 8d7114abc8cce67c11acb0e06cfabe2e9b43576b8987e91012547f6582dd2bc1. Dec 13 13:14:19.538581 systemd[1]: Started cri-containerd-bf51da624258d8e97a94a8c2238579f049d57a52e98f4843ca0fc95aa1de366a.scope - libcontainer container bf51da624258d8e97a94a8c2238579f049d57a52e98f4843ca0fc95aa1de366a. Dec 13 13:14:19.584524 systemd[1]: Started cri-containerd-472fc8d1f1ac3ee868edc415e544f2446723b5f87f9c205626d4ab6d5eb784e8.scope - libcontainer container 472fc8d1f1ac3ee868edc415e544f2446723b5f87f9c205626d4ab6d5eb784e8. Dec 13 13:14:19.669023 containerd[1970]: time="2024-12-13T13:14:19.668611486Z" level=info msg="StartContainer for \"8d7114abc8cce67c11acb0e06cfabe2e9b43576b8987e91012547f6582dd2bc1\" returns successfully" Dec 13 13:14:19.682689 containerd[1970]: time="2024-12-13T13:14:19.682622338Z" level=info msg="StartContainer for \"bf51da624258d8e97a94a8c2238579f049d57a52e98f4843ca0fc95aa1de366a\" returns successfully" Dec 13 13:14:19.729522 containerd[1970]: time="2024-12-13T13:14:19.729450659Z" level=info msg="StartContainer for \"472fc8d1f1ac3ee868edc415e544f2446723b5f87f9c205626d4ab6d5eb784e8\" returns successfully" Dec 13 13:14:21.059926 kubelet[2879]: I1213 13:14:21.059868 2879 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-111" Dec 13 13:14:21.666303 update_engine[1945]: I20241213 13:14:21.665272 1945 update_attempter.cc:509] Updating boot flags... 
Dec 13 13:14:21.782304 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3166) Dec 13 13:14:24.926638 kubelet[2879]: I1213 13:14:24.926592 2879 apiserver.go:52] "Watching apiserver" Dec 13 13:14:25.048153 kubelet[2879]: I1213 13:14:25.048089 2879 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:14:25.052500 kubelet[2879]: E1213 13:14:25.052445 2879 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-111\" not found" node="ip-172-31-27-111" Dec 13 13:14:25.129589 kubelet[2879]: I1213 13:14:25.129533 2879 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-111" Dec 13 13:14:25.220418 kubelet[2879]: E1213 13:14:25.219924 2879 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-27-111.1810bed0316ea6c2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-111,UID:ip-172-31-27-111,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-111,},FirstTimestamp:2024-12-13 13:14:17.929451202 +0000 UTC m=+1.426861424,LastTimestamp:2024-12-13 13:14:17.929451202 +0000 UTC m=+1.426861424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-111,}" Dec 13 13:14:27.693679 systemd[1]: Reloading requested from client PID 3250 ('systemctl') (unit session-7.scope)... Dec 13 13:14:27.694135 systemd[1]: Reloading... Dec 13 13:14:27.866391 zram_generator::config[3293]: No configuration found. Dec 13 13:14:28.105591 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:14:28.303976 systemd[1]: Reloading finished in 608 ms. Dec 13 13:14:28.378799 kubelet[2879]: I1213 13:14:28.378038 2879 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:14:28.380502 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:14:28.397527 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:14:28.397987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:14:28.398082 systemd[1]: kubelet.service: Consumed 2.161s CPU time, 112.6M memory peak, 0B memory swap peak. Dec 13 13:14:28.405844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:14:28.746600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:14:28.759807 (kubelet)[3352]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:14:28.881778 kubelet[3352]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:14:28.881778 kubelet[3352]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 13:14:28.881778 kubelet[3352]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:14:28.881778 kubelet[3352]: I1213 13:14:28.881481 3352 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:14:28.891031 kubelet[3352]: I1213 13:14:28.890961 3352 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:14:28.891031 kubelet[3352]: I1213 13:14:28.891018 3352 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:14:28.892482 kubelet[3352]: I1213 13:14:28.891393 3352 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:14:28.895832 kubelet[3352]: I1213 13:14:28.895493 3352 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 13:14:28.900375 kubelet[3352]: I1213 13:14:28.899423 3352 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:14:28.924291 kubelet[3352]: I1213 13:14:28.922262 3352 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:14:28.924291 kubelet[3352]: I1213 13:14:28.922777 3352 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:14:28.924291 kubelet[3352]: I1213 13:14:28.923054 3352 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:14:28.924291 kubelet[3352]: I1213 13:14:28.923148 3352 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:14:28.924291 kubelet[3352]: I1213 13:14:28.923173 3352 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:14:28.924291 kubelet[3352]: I1213 13:14:28.923312 3352 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:14:28.924778 kubelet[3352]: I1213 13:14:28.923598 3352 kubelet.go:396] "Attempting to sync node with 
API server" Dec 13 13:14:28.924778 kubelet[3352]: I1213 13:14:28.924576 3352 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:14:28.924778 kubelet[3352]: I1213 13:14:28.924672 3352 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:14:28.926083 kubelet[3352]: I1213 13:14:28.926016 3352 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:14:28.930752 kubelet[3352]: I1213 13:14:28.930705 3352 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:14:28.931097 kubelet[3352]: I1213 13:14:28.931061 3352 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:14:28.931875 kubelet[3352]: I1213 13:14:28.931827 3352 server.go:1256] "Started kubelet" Dec 13 13:14:28.940938 kubelet[3352]: I1213 13:14:28.940885 3352 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:14:28.955978 kubelet[3352]: I1213 13:14:28.954863 3352 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:14:28.958891 kubelet[3352]: I1213 13:14:28.958849 3352 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:14:28.962952 kubelet[3352]: I1213 13:14:28.962911 3352 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:14:28.982049 kubelet[3352]: I1213 13:14:28.967333 3352 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:14:28.982199 kubelet[3352]: I1213 13:14:28.967392 3352 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:14:28.984781 kubelet[3352]: I1213 13:14:28.984274 3352 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:14:28.998054 kubelet[3352]: I1213 13:14:28.996427 3352 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:14:29.006553 kubelet[3352]: I1213 13:14:29.006464 3352 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:14:29.020198 kubelet[3352]: I1213 13:14:29.016866 3352 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:14:29.020198 kubelet[3352]: I1213 13:14:29.016910 3352 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:14:29.043098 kubelet[3352]: E1213 13:14:29.042904 3352 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:14:29.044695 kubelet[3352]: I1213 13:14:29.043537 3352 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:14:29.054740 kubelet[3352]: I1213 13:14:29.053696 3352 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:14:29.054740 kubelet[3352]: I1213 13:14:29.053771 3352 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:14:29.054740 kubelet[3352]: I1213 13:14:29.053833 3352 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:14:29.054740 kubelet[3352]: E1213 13:14:29.053981 3352 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:14:29.086902 kubelet[3352]: E1213 13:14:29.086207 3352 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Dec 13 13:14:29.091741 kubelet[3352]: I1213 13:14:29.091505 3352 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-111" Dec 13 13:14:29.109418 kubelet[3352]: I1213 13:14:29.109350 3352 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-27-111" Dec 13 13:14:29.110894 kubelet[3352]: I1213 13:14:29.109794 3352 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-111" Dec 13 13:14:29.154108 kubelet[3352]: E1213 13:14:29.154062 3352 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:14:29.190567 kubelet[3352]: I1213 13:14:29.190530 3352 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:14:29.190776 kubelet[3352]: I1213 13:14:29.190753 3352 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:14:29.190893 kubelet[3352]: I1213 13:14:29.190873 3352 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:14:29.191330 kubelet[3352]: I1213 13:14:29.191221 3352 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:14:29.191993 kubelet[3352]: I1213 13:14:29.191476 3352 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:14:29.191993 kubelet[3352]: I1213 13:14:29.191501 3352 policy_none.go:49] "None policy: Start" Dec 13 13:14:29.194409 kubelet[3352]: I1213 13:14:29.193971 3352 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:14:29.194409 kubelet[3352]: I1213 13:14:29.194027 3352 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:14:29.194870 kubelet[3352]: I1213 13:14:29.194845 3352 state_mem.go:75] "Updated machine memory state" Dec 13 13:14:29.207461 kubelet[3352]: I1213 13:14:29.206480 3352 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:14:29.208981 kubelet[3352]: I1213 13:14:29.208948 3352 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:14:29.355338 kubelet[3352]: I1213 13:14:29.355293 3352 topology_manager.go:215] "Topology Admit Handler" podUID="c1fdcb5f3aef6025a61a0585462cfd01" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-111" Dec 13 13:14:29.356313 kubelet[3352]: I1213 13:14:29.355615 3352 topology_manager.go:215] "Topology Admit Handler" podUID="6debe0ff2dfed4981ea8ec0ba8b35b7d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-111" Dec 13 13:14:29.356313 kubelet[3352]: I1213 13:14:29.355733 3352 topology_manager.go:215] "Topology Admit Handler" podUID="47ecd35f4ac178f653db0fa42a172b03" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-111" Dec 13 13:14:29.403388 kubelet[3352]: I1213 13:14:29.402318 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/6debe0ff2dfed4981ea8ec0ba8b35b7d-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-111\" (UID: \"6debe0ff2dfed4981ea8ec0ba8b35b7d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-111" Dec 13 13:14:29.403388 kubelet[3352]: I1213 13:14:29.402392 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6debe0ff2dfed4981ea8ec0ba8b35b7d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-111\" (UID: \"6debe0ff2dfed4981ea8ec0ba8b35b7d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-111" Dec 13 13:14:29.403388 kubelet[3352]: I1213 13:14:29.402453 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6debe0ff2dfed4981ea8ec0ba8b35b7d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-111\" (UID: \"6debe0ff2dfed4981ea8ec0ba8b35b7d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-111" Dec 13 13:14:29.403388 kubelet[3352]: I1213 13:14:29.402511 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47ecd35f4ac178f653db0fa42a172b03-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-111\" (UID: \"47ecd35f4ac178f653db0fa42a172b03\") " pod="kube-system/kube-scheduler-ip-172-31-27-111" Dec 13 13:14:29.403388 kubelet[3352]: I1213 13:14:29.402575 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1fdcb5f3aef6025a61a0585462cfd01-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-111\" (UID: \"c1fdcb5f3aef6025a61a0585462cfd01\") " pod="kube-system/kube-apiserver-ip-172-31-27-111" Dec 13 13:14:29.403745 kubelet[3352]: I1213 13:14:29.402623 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1fdcb5f3aef6025a61a0585462cfd01-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-111\" (UID: \"c1fdcb5f3aef6025a61a0585462cfd01\") " pod="kube-system/kube-apiserver-ip-172-31-27-111" Dec 13 13:14:29.403745 kubelet[3352]: I1213 13:14:29.402689 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6debe0ff2dfed4981ea8ec0ba8b35b7d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-111\" (UID: \"6debe0ff2dfed4981ea8ec0ba8b35b7d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-111" Dec 13 13:14:29.403745 kubelet[3352]: I1213 13:14:29.403074 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6debe0ff2dfed4981ea8ec0ba8b35b7d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-111\" (UID: \"6debe0ff2dfed4981ea8ec0ba8b35b7d\") " pod="kube-system/kube-controller-manager-ip-172-31-27-111" Dec 13 13:14:29.403745 kubelet[3352]: I1213 13:14:29.403133 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1fdcb5f3aef6025a61a0585462cfd01-ca-certs\") pod \"kube-apiserver-ip-172-31-27-111\" (UID: \"c1fdcb5f3aef6025a61a0585462cfd01\") " pod="kube-system/kube-apiserver-ip-172-31-27-111" Dec 13 
13:14:29.928556 kubelet[3352]: I1213 13:14:29.927873 3352 apiserver.go:52] "Watching apiserver" Dec 13 13:14:29.982453 kubelet[3352]: I1213 13:14:29.982393 3352 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:14:30.204720 kubelet[3352]: I1213 13:14:30.204547 3352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-111" podStartSLOduration=1.204479671 podStartE2EDuration="1.204479671s" podCreationTimestamp="2024-12-13 13:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:14:30.187959427 +0000 UTC m=+1.417621700" watchObservedRunningTime="2024-12-13 13:14:30.204479671 +0000 UTC m=+1.434141920" Dec 13 13:14:30.231269 kubelet[3352]: I1213 13:14:30.229459 3352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-111" podStartSLOduration=1.2293839069999999 podStartE2EDuration="1.229383907s" podCreationTimestamp="2024-12-13 13:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:14:30.205491139 +0000 UTC m=+1.435153412" watchObservedRunningTime="2024-12-13 13:14:30.229383907 +0000 UTC m=+1.459046192" Dec 13 13:14:30.562944 sudo[2261]: pam_unix(sudo:session): session closed for user root Dec 13 13:14:30.588349 sshd[2260]: Connection closed by 139.178.89.65 port 34090 Dec 13 13:14:30.588835 sshd-session[2255]: pam_unix(sshd:session): session closed for user core Dec 13 13:14:30.598214 systemd[1]: sshd@6-172.31.27.111:22-139.178.89.65:34090.service: Deactivated successfully. Dec 13 13:14:30.602946 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:14:30.603455 systemd[1]: session-7.scope: Consumed 8.730s CPU time, 188.3M memory peak, 0B memory swap peak. Dec 13 13:14:30.604974 systemd-logind[1944]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:14:30.607604 systemd-logind[1944]: Removed session 7. Dec 13 13:14:33.820376 kubelet[3352]: I1213 13:14:33.820300 3352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-111" podStartSLOduration=4.820208833 podStartE2EDuration="4.820208833s" podCreationTimestamp="2024-12-13 13:14:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:14:30.229907731 +0000 UTC m=+1.459569992" watchObservedRunningTime="2024-12-13 13:14:33.820208833 +0000 UTC m=+5.049871094" Dec 13 13:14:42.418191 kubelet[3352]: I1213 13:14:42.418123 3352 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:14:42.420852 containerd[1970]: time="2024-12-13T13:14:42.420592579Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
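The pod_startup_latency_tracker entries above give enough to check the arithmetic: with no image pull (both pulling timestamps are the zero value), podStartSLOduration is simply observedRunningTime minus podCreationTimestamp, e.g. 13:14:30.204479671 − 13:14:29 = 1.204479671s for the scheduler pod. A short Go check that reproduces only the numbers printed here (the kubelet's full SLO definition handles more cases):

```go
// slocheck.go: reproduce the podStartSLOduration printed for
// kube-scheduler-ip-172-31-27-111 from the two timestamps in the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching "2024-12-13 13:14:30.204479671 +0000 UTC"; the
	// monotonic "m=+..." suffix is not part of the parsed value.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2024-12-13 13:14:29 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2024-12-13 13:14:30.204479671 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println(running.Sub(created)) // 1.204479671s, matching podStartSLOduration
}
```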
Dec 13 13:14:42.425531 kubelet[3352]: I1213 13:14:42.422315 3352 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 13:14:42.546994 kubelet[3352]: I1213 13:14:42.546929 3352 topology_manager.go:215] "Topology Admit Handler" podUID="a4ef25c9-d563-49f7-bafa-f3b215f6fc4c" podNamespace="kube-system" podName="kube-proxy-8hfpv" Dec 13 13:14:42.559757 kubelet[3352]: W1213 13:14:42.558007 3352 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-27-111" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-111' and this object Dec 13 13:14:42.559757 kubelet[3352]: E1213 13:14:42.558062 3352 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-27-111" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-111' and this object Dec 13 13:14:42.559757 kubelet[3352]: W1213 13:14:42.558481 3352 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-27-111" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-111' and this object Dec 13 13:14:42.559757 kubelet[3352]: E1213 13:14:42.558519 3352 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-27-111" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-111' and this object Dec 13 13:14:42.565371 systemd[1]: Created slice kubepods-besteffort-poda4ef25c9_d563_49f7_bafa_f3b215f6fc4c.slice - libcontainer container kubepods-besteffort-poda4ef25c9_d563_49f7_bafa_f3b215f6fc4c.slice. 
Dec 13 13:14:42.581608 kubelet[3352]: I1213 13:14:42.581539 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4ef25c9-d563-49f7-bafa-f3b215f6fc4c-lib-modules\") pod \"kube-proxy-8hfpv\" (UID: \"a4ef25c9-d563-49f7-bafa-f3b215f6fc4c\") " pod="kube-system/kube-proxy-8hfpv" Dec 13 13:14:42.581770 kubelet[3352]: I1213 13:14:42.581619 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a4ef25c9-d563-49f7-bafa-f3b215f6fc4c-kube-proxy\") pod \"kube-proxy-8hfpv\" (UID: \"a4ef25c9-d563-49f7-bafa-f3b215f6fc4c\") " pod="kube-system/kube-proxy-8hfpv" Dec 13 13:14:42.581770 kubelet[3352]: I1213 13:14:42.581673 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4ef25c9-d563-49f7-bafa-f3b215f6fc4c-xtables-lock\") pod \"kube-proxy-8hfpv\" (UID: \"a4ef25c9-d563-49f7-bafa-f3b215f6fc4c\") " pod="kube-system/kube-proxy-8hfpv" Dec 13 13:14:42.581770 kubelet[3352]: I1213 13:14:42.581722 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdgcf\" (UniqueName: \"kubernetes.io/projected/a4ef25c9-d563-49f7-bafa-f3b215f6fc4c-kube-api-access-gdgcf\") pod \"kube-proxy-8hfpv\" (UID: \"a4ef25c9-d563-49f7-bafa-f3b215f6fc4c\") " pod="kube-system/kube-proxy-8hfpv" Dec 13 13:14:42.592747 kubelet[3352]: I1213 13:14:42.592681 3352 topology_manager.go:215] "Topology Admit Handler" podUID="a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a" podNamespace="kube-flannel" podName="kube-flannel-ds-tdxvx" Dec 13 13:14:42.610734 systemd[1]: Created slice kubepods-burstable-poda9c5cdbd_d6ae_4ad3_8fa4_373b70a20f7a.slice - libcontainer container kubepods-burstable-poda9c5cdbd_d6ae_4ad3_8fa4_373b70a20f7a.slice. 
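The "Created slice" lines make the kubelet's systemd cgroup naming visible: a parent kubepods.slice, one slice per QoS class (kubepods-burstable.slice, kubepods-besteffort.slice), and one slice per pod in which the UID's dashes are replaced with underscores, since "-" is systemd's slice hierarchy separator (a4ef25c9-d563-49f7-bafa-f3b215f6fc4c becomes ...poda4ef25c9_d563_49f7_bafa_f3b215f6fc4c.slice). A small Go helper that rebuilds exactly the names seen in this log; only the burstable and besteffort cases appear here, so guaranteed pods are not covered:

```go
// slicename.go: rebuild the pod slice names that appear in the
// "Created slice kubepods-..." lines above.
package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the naming visible in the log: the QoS class slice is
// a child of kubepods.slice, and the per-pod slice embeds the pod UID with
// its dashes turned into underscores.
func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	fmt.Println(podSliceName("besteffort", "a4ef25c9-d563-49f7-bafa-f3b215f6fc4c"))
	// kubepods-besteffort-poda4ef25c9_d563_49f7_bafa_f3b215f6fc4c.slice
	fmt.Println(podSliceName("burstable", "a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a"))
	// kubepods-burstable-poda9c5cdbd_d6ae_4ad3_8fa4_373b70a20f7a.slice
}
```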
Dec 13 13:14:42.684050 kubelet[3352]: I1213 13:14:42.682675 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a-xtables-lock\") pod \"kube-flannel-ds-tdxvx\" (UID: \"a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a\") " pod="kube-flannel/kube-flannel-ds-tdxvx" Dec 13 13:14:42.684050 kubelet[3352]: I1213 13:14:42.682819 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a-run\") pod \"kube-flannel-ds-tdxvx\" (UID: \"a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a\") " pod="kube-flannel/kube-flannel-ds-tdxvx" Dec 13 13:14:42.684050 kubelet[3352]: I1213 13:14:42.682917 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkkjc\" (UniqueName: \"kubernetes.io/projected/a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a-kube-api-access-wkkjc\") pod \"kube-flannel-ds-tdxvx\" (UID: \"a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a\") " pod="kube-flannel/kube-flannel-ds-tdxvx" Dec 13 13:14:42.684050 kubelet[3352]: I1213 13:14:42.682974 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a-cni-plugin\") pod \"kube-flannel-ds-tdxvx\" (UID: \"a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a\") " pod="kube-flannel/kube-flannel-ds-tdxvx" Dec 13 13:14:42.684050 kubelet[3352]: I1213 13:14:42.683021 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a-cni\") pod \"kube-flannel-ds-tdxvx\" (UID: \"a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a\") " pod="kube-flannel/kube-flannel-ds-tdxvx" Dec 13 13:14:42.684427 kubelet[3352]: I1213 13:14:42.683069 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a-flannel-cfg\") pod \"kube-flannel-ds-tdxvx\" (UID: \"a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a\") " pod="kube-flannel/kube-flannel-ds-tdxvx" Dec 13 13:14:42.917019 containerd[1970]: time="2024-12-13T13:14:42.916732702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-tdxvx,Uid:a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a,Namespace:kube-flannel,Attempt:0,}" Dec 13 13:14:42.971556 containerd[1970]: time="2024-12-13T13:14:42.970742770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:14:42.971556 containerd[1970]: time="2024-12-13T13:14:42.971031274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:14:42.971556 containerd[1970]: time="2024-12-13T13:14:42.971080570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:14:42.973552 containerd[1970]: time="2024-12-13T13:14:42.973354378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:14:43.020577 systemd[1]: Started cri-containerd-474587b09eb7b984315b1d34b8de8dcd2029b5d1c0cf61d525f26f84d3c97d8d.scope - libcontainer container 474587b09eb7b984315b1d34b8de8dcd2029b5d1c0cf61d525f26f84d3c97d8d. Dec 13 13:14:43.083816 containerd[1970]: time="2024-12-13T13:14:43.083748463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-tdxvx,Uid:a9c5cdbd-d6ae-4ad3-8fa4-373b70a20f7a,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"474587b09eb7b984315b1d34b8de8dcd2029b5d1c0cf61d525f26f84d3c97d8d\"" Dec 13 13:14:43.088822 containerd[1970]: time="2024-12-13T13:14:43.088506247Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 13:14:43.710832 kubelet[3352]: E1213 13:14:43.710678 3352 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 13:14:43.710832 kubelet[3352]: E1213 13:14:43.710728 3352 projected.go:200] Error preparing data for projected volume kube-api-access-gdgcf for pod kube-system/kube-proxy-8hfpv: failed to sync configmap cache: timed out waiting for the condition Dec 13 13:14:43.710832 kubelet[3352]: E1213 13:14:43.710840 3352 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a4ef25c9-d563-49f7-bafa-f3b215f6fc4c-kube-api-access-gdgcf podName:a4ef25c9-d563-49f7-bafa-f3b215f6fc4c nodeName:}" failed. No retries permitted until 2024-12-13 13:14:44.21080453 +0000 UTC m=+15.440466803 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gdgcf" (UniqueName: "kubernetes.io/projected/a4ef25c9-d563-49f7-bafa-f3b215f6fc4c-kube-api-access-gdgcf") pod "kube-proxy-8hfpv" (UID: "a4ef25c9-d563-49f7-bafa-f3b215f6fc4c") : failed to sync configmap cache: timed out waiting for the condition Dec 13 13:14:44.382193 containerd[1970]: time="2024-12-13T13:14:44.382119357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8hfpv,Uid:a4ef25c9-d563-49f7-bafa-f3b215f6fc4c,Namespace:kube-system,Attempt:0,}" Dec 13 13:14:44.434559 containerd[1970]: time="2024-12-13T13:14:44.434012205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:14:44.434559 containerd[1970]: time="2024-12-13T13:14:44.434141925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:14:44.434559 containerd[1970]: time="2024-12-13T13:14:44.434172321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:14:44.435657 containerd[1970]: time="2024-12-13T13:14:44.435545889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:14:44.481569 systemd[1]: Started cri-containerd-01c4226c8124619821292000fab3ce30552039429821fb2ff9746303c5b4962a.scope - libcontainer container 01c4226c8124619821292000fab3ce30552039429821fb2ff9746303c5b4962a. 
Dec 13 13:14:44.528656 containerd[1970]: time="2024-12-13T13:14:44.528587458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8hfpv,Uid:a4ef25c9-d563-49f7-bafa-f3b215f6fc4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"01c4226c8124619821292000fab3ce30552039429821fb2ff9746303c5b4962a\"" Dec 13 13:14:44.537376 containerd[1970]: time="2024-12-13T13:14:44.536943610Z" level=info msg="CreateContainer within sandbox \"01c4226c8124619821292000fab3ce30552039429821fb2ff9746303c5b4962a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:14:44.571855 containerd[1970]: time="2024-12-13T13:14:44.571712470Z" level=info msg="CreateContainer within sandbox \"01c4226c8124619821292000fab3ce30552039429821fb2ff9746303c5b4962a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b7e34ba3561e46881db3a62480625189b51b37f056a16ab241038ee719087b1b\"" Dec 13 13:14:44.574303 containerd[1970]: time="2024-12-13T13:14:44.572596486Z" level=info msg="StartContainer for \"b7e34ba3561e46881db3a62480625189b51b37f056a16ab241038ee719087b1b\"" Dec 13 13:14:44.622568 systemd[1]: Started cri-containerd-b7e34ba3561e46881db3a62480625189b51b37f056a16ab241038ee719087b1b.scope - libcontainer container b7e34ba3561e46881db3a62480625189b51b37f056a16ab241038ee719087b1b. Dec 13 13:14:44.689045 containerd[1970]: time="2024-12-13T13:14:44.688723835Z" level=info msg="StartContainer for \"b7e34ba3561e46881db3a62480625189b51b37f056a16ab241038ee719087b1b\" returns successfully" Dec 13 13:14:45.155491 containerd[1970]: time="2024-12-13T13:14:45.155425485Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:45.157884 containerd[1970]: time="2024-12-13T13:14:45.157792689Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Dec 13 13:14:45.160814 containerd[1970]: time="2024-12-13T13:14:45.160741101Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:45.174779 containerd[1970]: time="2024-12-13T13:14:45.174424089Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:45.179507 containerd[1970]: time="2024-12-13T13:14:45.179170185Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.090599222s" Dec 13 13:14:45.179507 containerd[1970]: time="2024-12-13T13:14:45.179269089Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Dec 13 13:14:45.184837 containerd[1970]: time="2024-12-13T13:14:45.184573893Z" level=info msg="CreateContainer within sandbox \"474587b09eb7b984315b1d34b8de8dcd2029b5d1c0cf61d525f26f84d3c97d8d\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 13:14:45.224687 containerd[1970]: time="2024-12-13T13:14:45.224616297Z" 
level=info msg="CreateContainer within sandbox \"474587b09eb7b984315b1d34b8de8dcd2029b5d1c0cf61d525f26f84d3c97d8d\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"573e81639980789c6e401e72fc810190143ab415b4d1ff32058aabd0bf499ee5\"" Dec 13 13:14:45.225871 containerd[1970]: time="2024-12-13T13:14:45.225816813Z" level=info msg="StartContainer for \"573e81639980789c6e401e72fc810190143ab415b4d1ff32058aabd0bf499ee5\"" Dec 13 13:14:45.278538 systemd[1]: Started cri-containerd-573e81639980789c6e401e72fc810190143ab415b4d1ff32058aabd0bf499ee5.scope - libcontainer container 573e81639980789c6e401e72fc810190143ab415b4d1ff32058aabd0bf499ee5. Dec 13 13:14:45.349772 containerd[1970]: time="2024-12-13T13:14:45.348465406Z" level=info msg="StartContainer for \"573e81639980789c6e401e72fc810190143ab415b4d1ff32058aabd0bf499ee5\" returns successfully" Dec 13 13:14:45.352586 systemd[1]: cri-containerd-573e81639980789c6e401e72fc810190143ab415b4d1ff32058aabd0bf499ee5.scope: Deactivated successfully. Dec 13 13:14:45.390825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-573e81639980789c6e401e72fc810190143ab415b4d1ff32058aabd0bf499ee5-rootfs.mount: Deactivated successfully. Dec 13 13:14:45.432472 containerd[1970]: time="2024-12-13T13:14:45.432148978Z" level=info msg="shim disconnected" id=573e81639980789c6e401e72fc810190143ab415b4d1ff32058aabd0bf499ee5 namespace=k8s.io Dec 13 13:14:45.432472 containerd[1970]: time="2024-12-13T13:14:45.432306094Z" level=warning msg="cleaning up after shim disconnected" id=573e81639980789c6e401e72fc810190143ab415b4d1ff32058aabd0bf499ee5 namespace=k8s.io Dec 13 13:14:45.432472 containerd[1970]: time="2024-12-13T13:14:45.432326446Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:14:46.179551 containerd[1970]: time="2024-12-13T13:14:46.179434810Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 13:14:46.199151 kubelet[3352]: I1213 13:14:46.199094 3352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8hfpv" podStartSLOduration=4.19903555 podStartE2EDuration="4.19903555s" podCreationTimestamp="2024-12-13 13:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:14:45.195402501 +0000 UTC m=+16.425064786" watchObservedRunningTime="2024-12-13 13:14:46.19903555 +0000 UTC m=+17.428697823" Dec 13 13:14:48.379074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3197875809.mount: Deactivated successfully. 
Dec 13 13:14:49.645800 containerd[1970]: time="2024-12-13T13:14:49.645191919Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:49.647667 containerd[1970]: time="2024-12-13T13:14:49.647579031Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Dec 13 13:14:49.650358 containerd[1970]: time="2024-12-13T13:14:49.650278551Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:49.656550 containerd[1970]: time="2024-12-13T13:14:49.656449335Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:14:49.660562 containerd[1970]: time="2024-12-13T13:14:49.658689723Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.479195897s" Dec 13 13:14:49.660562 containerd[1970]: time="2024-12-13T13:14:49.658752195Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Dec 13 13:14:49.664701 containerd[1970]: time="2024-12-13T13:14:49.664147755Z" level=info msg="CreateContainer within sandbox \"474587b09eb7b984315b1d34b8de8dcd2029b5d1c0cf61d525f26f84d3c97d8d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 13:14:49.692514 containerd[1970]: time="2024-12-13T13:14:49.692436627Z" level=info msg="CreateContainer within sandbox \"474587b09eb7b984315b1d34b8de8dcd2029b5d1c0cf61d525f26f84d3c97d8d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1bbf133e708e684306f6e67a640d37e5d26896af0b0e7d569a052a41edd3a6de\"" Dec 13 13:14:49.693583 containerd[1970]: time="2024-12-13T13:14:49.693209715Z" level=info msg="StartContainer for \"1bbf133e708e684306f6e67a640d37e5d26896af0b0e7d569a052a41edd3a6de\"" Dec 13 13:14:49.750544 systemd[1]: Started cri-containerd-1bbf133e708e684306f6e67a640d37e5d26896af0b0e7d569a052a41edd3a6de.scope - libcontainer container 1bbf133e708e684306f6e67a640d37e5d26896af0b0e7d569a052a41edd3a6de. Dec 13 13:14:49.797176 systemd[1]: cri-containerd-1bbf133e708e684306f6e67a640d37e5d26896af0b0e7d569a052a41edd3a6de.scope: Deactivated successfully. 
Dec 13 13:14:49.802893 containerd[1970]: time="2024-12-13T13:14:49.802774900Z" level=info msg="StartContainer for \"1bbf133e708e684306f6e67a640d37e5d26896af0b0e7d569a052a41edd3a6de\" returns successfully" Dec 13 13:14:49.881113 kubelet[3352]: I1213 13:14:49.881000 3352 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:14:49.932686 kubelet[3352]: I1213 13:14:49.930176 3352 topology_manager.go:215] "Topology Admit Handler" podUID="1b1eb772-5864-4726-866c-757f4dcf8eee" podNamespace="kube-system" podName="coredns-76f75df574-pgv48" Dec 13 13:14:49.942426 kubelet[3352]: I1213 13:14:49.939067 3352 topology_manager.go:215] "Topology Admit Handler" podUID="c90bffea-5474-4816-ab6d-8abd101915ff" podNamespace="kube-system" podName="coredns-76f75df574-smtzm" Dec 13 13:14:49.961829 systemd[1]: Created slice kubepods-burstable-pod1b1eb772_5864_4726_866c_757f4dcf8eee.slice - libcontainer container kubepods-burstable-pod1b1eb772_5864_4726_866c_757f4dcf8eee.slice. Dec 13 13:14:49.979048 containerd[1970]: time="2024-12-13T13:14:49.978367661Z" level=info msg="shim disconnected" id=1bbf133e708e684306f6e67a640d37e5d26896af0b0e7d569a052a41edd3a6de namespace=k8s.io Dec 13 13:14:49.979048 containerd[1970]: time="2024-12-13T13:14:49.978602081Z" level=warning msg="cleaning up after shim disconnected" id=1bbf133e708e684306f6e67a640d37e5d26896af0b0e7d569a052a41edd3a6de namespace=k8s.io Dec 13 13:14:49.979048 containerd[1970]: time="2024-12-13T13:14:49.978626825Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:14:49.988039 systemd[1]: Created slice kubepods-burstable-podc90bffea_5474_4816_ab6d_8abd101915ff.slice - libcontainer container kubepods-burstable-podc90bffea_5474_4816_ab6d_8abd101915ff.slice. Dec 13 13:14:50.033543 kubelet[3352]: I1213 13:14:50.033450 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b1eb772-5864-4726-866c-757f4dcf8eee-config-volume\") pod \"coredns-76f75df574-pgv48\" (UID: \"1b1eb772-5864-4726-866c-757f4dcf8eee\") " pod="kube-system/coredns-76f75df574-pgv48" Dec 13 13:14:50.033723 kubelet[3352]: I1213 13:14:50.033566 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45889\" (UniqueName: \"kubernetes.io/projected/c90bffea-5474-4816-ab6d-8abd101915ff-kube-api-access-45889\") pod \"coredns-76f75df574-smtzm\" (UID: \"c90bffea-5474-4816-ab6d-8abd101915ff\") " pod="kube-system/coredns-76f75df574-smtzm" Dec 13 13:14:50.033723 kubelet[3352]: I1213 13:14:50.033656 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlhcj\" (UniqueName: \"kubernetes.io/projected/1b1eb772-5864-4726-866c-757f4dcf8eee-kube-api-access-zlhcj\") pod \"coredns-76f75df574-pgv48\" (UID: \"1b1eb772-5864-4726-866c-757f4dcf8eee\") " pod="kube-system/coredns-76f75df574-pgv48" Dec 13 13:14:50.033723 kubelet[3352]: I1213 13:14:50.033705 3352 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c90bffea-5474-4816-ab6d-8abd101915ff-config-volume\") pod \"coredns-76f75df574-smtzm\" (UID: \"c90bffea-5474-4816-ab6d-8abd101915ff\") " pod="kube-system/coredns-76f75df574-smtzm" Dec 13 13:14:50.195706 containerd[1970]: time="2024-12-13T13:14:50.195180614Z" level=info msg="CreateContainer within sandbox 
\"474587b09eb7b984315b1d34b8de8dcd2029b5d1c0cf61d525f26f84d3c97d8d\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 13:14:50.219507 containerd[1970]: time="2024-12-13T13:14:50.218990762Z" level=info msg="CreateContainer within sandbox \"474587b09eb7b984315b1d34b8de8dcd2029b5d1c0cf61d525f26f84d3c97d8d\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"911f18a55d9bcc63bd82640993fd1585b59d33b6065d93131adbaf40039ae754\"" Dec 13 13:14:50.222800 containerd[1970]: time="2024-12-13T13:14:50.221551730Z" level=info msg="StartContainer for \"911f18a55d9bcc63bd82640993fd1585b59d33b6065d93131adbaf40039ae754\"" Dec 13 13:14:50.264779 systemd[1]: Started cri-containerd-911f18a55d9bcc63bd82640993fd1585b59d33b6065d93131adbaf40039ae754.scope - libcontainer container 911f18a55d9bcc63bd82640993fd1585b59d33b6065d93131adbaf40039ae754. Dec 13 13:14:50.274616 containerd[1970]: time="2024-12-13T13:14:50.274543730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pgv48,Uid:1b1eb772-5864-4726-866c-757f4dcf8eee,Namespace:kube-system,Attempt:0,}" Dec 13 13:14:50.301753 containerd[1970]: time="2024-12-13T13:14:50.301584074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-smtzm,Uid:c90bffea-5474-4816-ab6d-8abd101915ff,Namespace:kube-system,Attempt:0,}" Dec 13 13:14:50.338272 containerd[1970]: time="2024-12-13T13:14:50.335053179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pgv48,Uid:1b1eb772-5864-4726-866c-757f4dcf8eee,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8a105dda6daa79903c0bd04e46453d9a30a974bb6b83a8508fb9986395ae639\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 13:14:50.338439 kubelet[3352]: E1213 13:14:50.336006 3352 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8a105dda6daa79903c0bd04e46453d9a30a974bb6b83a8508fb9986395ae639\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 13:14:50.338439 kubelet[3352]: E1213 13:14:50.336088 3352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8a105dda6daa79903c0bd04e46453d9a30a974bb6b83a8508fb9986395ae639\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-pgv48" Dec 13 13:14:50.338439 kubelet[3352]: E1213 13:14:50.336125 3352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8a105dda6daa79903c0bd04e46453d9a30a974bb6b83a8508fb9986395ae639\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-pgv48" Dec 13 13:14:50.338439 kubelet[3352]: E1213 13:14:50.336263 3352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-pgv48_kube-system(1b1eb772-5864-4726-866c-757f4dcf8eee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-pgv48_kube-system(1b1eb772-5864-4726-866c-757f4dcf8eee)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"e8a105dda6daa79903c0bd04e46453d9a30a974bb6b83a8508fb9986395ae639\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-pgv48" podUID="1b1eb772-5864-4726-866c-757f4dcf8eee" Dec 13 13:14:50.344608 containerd[1970]: time="2024-12-13T13:14:50.343876503Z" level=info msg="StartContainer for \"911f18a55d9bcc63bd82640993fd1585b59d33b6065d93131adbaf40039ae754\" returns successfully" Dec 13 13:14:50.362470 containerd[1970]: time="2024-12-13T13:14:50.362402259Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-smtzm,Uid:c90bffea-5474-4816-ab6d-8abd101915ff,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9757aa7076c8d899a027df02ce0ca631b7184ccfc6fe5f1d4dfd513b734a561f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 13:14:50.362793 kubelet[3352]: E1213 13:14:50.362749 3352 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9757aa7076c8d899a027df02ce0ca631b7184ccfc6fe5f1d4dfd513b734a561f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 13:14:50.362884 kubelet[3352]: E1213 13:14:50.362820 3352 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9757aa7076c8d899a027df02ce0ca631b7184ccfc6fe5f1d4dfd513b734a561f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-smtzm" Dec 13 13:14:50.362884 kubelet[3352]: E1213 13:14:50.362856 3352 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9757aa7076c8d899a027df02ce0ca631b7184ccfc6fe5f1d4dfd513b734a561f\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-smtzm" Dec 13 13:14:50.362993 kubelet[3352]: E1213 13:14:50.362930 3352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-smtzm_kube-system(c90bffea-5474-4816-ab6d-8abd101915ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-smtzm_kube-system(c90bffea-5474-4816-ab6d-8abd101915ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9757aa7076c8d899a027df02ce0ca631b7184ccfc6fe5f1d4dfd513b734a561f\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-smtzm" podUID="c90bffea-5474-4816-ab6d-8abd101915ff" Dec 13 13:14:50.688663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bbf133e708e684306f6e67a640d37e5d26896af0b0e7d569a052a41edd3a6de-rootfs.mount: Deactivated successfully. Dec 13 13:14:51.424711 (udev-worker)[3901]: Network interface NamePolicy= disabled on kernel command line. 
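The CreatePodSandbox failures above repeat until the kube-flannel container writes /run/flannel/subnet.env; without that file the flannel CNI plugin cannot delegate to the bridge plugin, which is exactly the loadFlannelSubnetEnv error logged for both coredns pods. Below is a minimal Go sketch of reading such a KEY=VALUE file. It is not the actual flannel source, and the example values in the comments (FLANNEL_NETWORK, FLANNEL_SUBNET, FLANNEL_MTU, FLANNEL_IPMASQ) are assumptions inferred from the delegate config that appears later in the log.

```go
// Minimal sketch, assuming a subnet.env-style file of KEY=VALUE lines such as:
//   FLANNEL_NETWORK=192.168.0.0/17   (assumed, from the delegate route below)
//   FLANNEL_SUBNET=192.168.0.1/24    (assumed, from the node's podCIDR)
//   FLANNEL_MTU=8951                 (assumed, from the delegate config below)
//   FLANNEL_IPMASQ=false
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func loadSubnetEnv(path string) (map[string]string, error) {
	f, err := os.Open(path) // fails with ENOENT until flannel has written the file
	if err != nil {
		return nil, err
	}
	defer f.Close()

	env := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			env[k] = v
		}
	}
	return env, sc.Err()
}

func main() {
	env, err := loadSubnetEnv("/run/flannel/subnet.env")
	if err != nil {
		// mirrors the failure mode seen in the sandbox errors above
		fmt.Fprintln(os.Stderr, "loadFlannelSubnetEnv failed:", err)
		os.Exit(1)
	}
	fmt.Println("pod subnet:", env["FLANNEL_SUBNET"])
}
```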
Dec 13 13:14:51.443875 systemd-networkd[1840]: flannel.1: Link UP Dec 13 13:14:51.443890 systemd-networkd[1840]: flannel.1: Gained carrier Dec 13 13:14:53.470584 systemd-networkd[1840]: flannel.1: Gained IPv6LL Dec 13 13:14:55.538791 ntpd[1939]: Listen normally on 8 flannel.1 192.168.0.0:123 Dec 13 13:14:55.538925 ntpd[1939]: Listen normally on 9 flannel.1 [fe80::8c48:c0ff:fe38:cdc4%4]:123 Dec 13 13:14:55.539791 ntpd[1939]: 13 Dec 13:14:55 ntpd[1939]: Listen normally on 8 flannel.1 192.168.0.0:123 Dec 13 13:14:55.539791 ntpd[1939]: 13 Dec 13:14:55 ntpd[1939]: Listen normally on 9 flannel.1 [fe80::8c48:c0ff:fe38:cdc4%4]:123 Dec 13 13:15:02.056005 containerd[1970]: time="2024-12-13T13:15:02.055886749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-smtzm,Uid:c90bffea-5474-4816-ab6d-8abd101915ff,Namespace:kube-system,Attempt:0,}" Dec 13 13:15:02.090505 systemd-networkd[1840]: cni0: Link UP Dec 13 13:15:02.090520 systemd-networkd[1840]: cni0: Gained carrier Dec 13 13:15:02.101608 systemd-networkd[1840]: vethbc756d69: Link UP Dec 13 13:15:02.102630 kernel: cni0: port 1(vethbc756d69) entered blocking state Dec 13 13:15:02.102723 kernel: cni0: port 1(vethbc756d69) entered disabled state Dec 13 13:15:02.105604 kernel: vethbc756d69: entered allmulticast mode Dec 13 13:15:02.105692 kernel: vethbc756d69: entered promiscuous mode Dec 13 13:15:02.107052 systemd-networkd[1840]: cni0: Lost carrier Dec 13 13:15:02.108715 (udev-worker)[4043]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:15:02.109630 (udev-worker)[4039]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:15:02.116728 kernel: cni0: port 1(vethbc756d69) entered blocking state Dec 13 13:15:02.117844 kernel: cni0: port 1(vethbc756d69) entered forwarding state Dec 13 13:15:02.117026 systemd-networkd[1840]: vethbc756d69: Gained carrier Dec 13 13:15:02.120585 systemd-networkd[1840]: cni0: Gained carrier Dec 13 13:15:02.125593 containerd[1970]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Dec 13 13:15:02.125593 containerd[1970]: delegateAdd: netconf sent to delegate plugin: Dec 13 13:15:02.167494 containerd[1970]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2024-12-13T13:15:02.167301505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:15:02.167494 containerd[1970]: time="2024-12-13T13:15:02.167394337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:15:02.167494 containerd[1970]: time="2024-12-13T13:15:02.167420485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:02.169530 containerd[1970]: time="2024-12-13T13:15:02.167566633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:02.206536 systemd[1]: Started cri-containerd-5345f5e42ee1ef9c61097b61b98d12034f0cdb0674380c6f2f0ab6dfcdbc64b9.scope - libcontainer container 5345f5e42ee1ef9c61097b61b98d12034f0cdb0674380c6f2f0ab6dfcdbc64b9. Dec 13 13:15:02.281415 containerd[1970]: time="2024-12-13T13:15:02.281179310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-smtzm,Uid:c90bffea-5474-4816-ab6d-8abd101915ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"5345f5e42ee1ef9c61097b61b98d12034f0cdb0674380c6f2f0ab6dfcdbc64b9\"" Dec 13 13:15:02.287142 containerd[1970]: time="2024-12-13T13:15:02.287055794Z" level=info msg="CreateContainer within sandbox \"5345f5e42ee1ef9c61097b61b98d12034f0cdb0674380c6f2f0ab6dfcdbc64b9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:15:02.308753 containerd[1970]: time="2024-12-13T13:15:02.308095142Z" level=info msg="CreateContainer within sandbox \"5345f5e42ee1ef9c61097b61b98d12034f0cdb0674380c6f2f0ab6dfcdbc64b9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf26c83dd0de07b7aaec26eb6cf858e00dfe863143c4b1936873d01f17e141b4\"" Dec 13 13:15:02.310196 containerd[1970]: time="2024-12-13T13:15:02.310152710Z" level=info msg="StartContainer for \"bf26c83dd0de07b7aaec26eb6cf858e00dfe863143c4b1936873d01f17e141b4\"" Dec 13 13:15:02.358554 systemd[1]: Started cri-containerd-bf26c83dd0de07b7aaec26eb6cf858e00dfe863143c4b1936873d01f17e141b4.scope - libcontainer container bf26c83dd0de07b7aaec26eb6cf858e00dfe863143c4b1936873d01f17e141b4. Dec 13 13:15:02.404164 containerd[1970]: time="2024-12-13T13:15:02.403878267Z" level=info msg="StartContainer for \"bf26c83dd0de07b7aaec26eb6cf858e00dfe863143c4b1936873d01f17e141b4\" returns successfully" Dec 13 13:15:02.583782 systemd[1]: Started sshd@7-172.31.27.111:22-139.178.89.65:54082.service - OpenSSH per-connection server daemon (139.178.89.65:54082). Dec 13 13:15:02.772410 sshd[4132]: Accepted publickey for core from 139.178.89.65 port 54082 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:02.774928 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:02.782944 systemd-logind[1944]: New session 8 of user core. Dec 13 13:15:02.791546 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 13:15:03.041538 sshd[4134]: Connection closed by 139.178.89.65 port 54082 Dec 13 13:15:03.042044 sshd-session[4132]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:03.048101 systemd[1]: sshd@7-172.31.27.111:22-139.178.89.65:54082.service: Deactivated successfully. Dec 13 13:15:03.052396 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 13:15:03.058710 systemd-logind[1944]: Session 8 logged out. Waiting for processes to exit. Dec 13 13:15:03.062058 systemd-logind[1944]: Removed session 8. 
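Once subnet.env exists, the flannel plugin hands pod networking to the bridge plugin with the netconf printed in the "delegateAdd" entries above (bridge cbr0, MTU 8951, host-local IPAM over 192.168.0.0/24 with a route to 192.168.0.0/17). The sketch below only rebuilds that same JSON with typed Go structs so the fields are easier to read; the struct and field names are hypothetical, not flannel's own types.

```go
// Minimal sketch: reproduce the delegate netconf logged above. All field
// values come from the log; the types themselves are assumptions.
package main

import (
	"encoding/json"
	"fmt"
)

type ipamConfig struct {
	Type   string                `json:"type"`
	Ranges [][]map[string]string `json:"ranges"`
	Routes []map[string]string   `json:"routes"`
}

type bridgeNetConf struct {
	CNIVersion       string     `json:"cniVersion"`
	Name             string     `json:"name"`
	Type             string     `json:"type"`
	MTU              int        `json:"mtu"`
	HairpinMode      bool       `json:"hairpinMode"`
	IPMasq           bool       `json:"ipMasq"`
	IsGateway        bool       `json:"isGateway"`
	IsDefaultGateway bool       `json:"isDefaultGateway"`
	IPAM             ipamConfig `json:"ipam"`
}

func main() {
	conf := bridgeNetConf{
		CNIVersion:       "0.3.1",
		Name:             "cbr0",
		Type:             "bridge",
		MTU:              8951, // 9001-byte ENA MTU minus 50 bytes of VXLAN overhead
		HairpinMode:      true,
		IPMasq:           false,
		IsGateway:        true,
		IsDefaultGateway: true,
		IPAM: ipamConfig{
			Type:   "host-local",
			Ranges: [][]map[string]string{{{"subnet": "192.168.0.0/24"}}},
			Routes: []map[string]string{{"dst": "192.168.0.0/17"}},
		},
	}
	out, _ := json.Marshal(conf)
	fmt.Println(string(out))
}
```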
Dec 13 13:15:03.255193 kubelet[3352]: I1213 13:15:03.254536 3352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-tdxvx" podStartSLOduration=14.680851115 podStartE2EDuration="21.254480439s" podCreationTimestamp="2024-12-13 13:14:42 +0000 UTC" firstStartedPulling="2024-12-13 13:14:43.086691031 +0000 UTC m=+14.316353280" lastFinishedPulling="2024-12-13 13:14:49.660320355 +0000 UTC m=+20.889982604" observedRunningTime="2024-12-13 13:14:51.219394119 +0000 UTC m=+22.449056392" watchObservedRunningTime="2024-12-13 13:15:03.254480439 +0000 UTC m=+34.484142712" Dec 13 13:15:03.255829 kubelet[3352]: I1213 13:15:03.255502 3352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-smtzm" podStartSLOduration=21.255442191 podStartE2EDuration="21.255442191s" podCreationTimestamp="2024-12-13 13:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:15:03.254368059 +0000 UTC m=+34.484030320" watchObservedRunningTime="2024-12-13 13:15:03.255442191 +0000 UTC m=+34.485104452" Dec 13 13:15:03.262531 systemd-networkd[1840]: vethbc756d69: Gained IPv6LL Dec 13 13:15:03.710603 systemd-networkd[1840]: cni0: Gained IPv6LL Dec 13 13:15:04.055843 containerd[1970]: time="2024-12-13T13:15:04.055678311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pgv48,Uid:1b1eb772-5864-4726-866c-757f4dcf8eee,Namespace:kube-system,Attempt:0,}" Dec 13 13:15:04.097335 (udev-worker)[4041]: Network interface NamePolicy= disabled on kernel command line. Dec 13 13:15:04.098401 systemd-networkd[1840]: veth8ce55756: Link UP Dec 13 13:15:04.102107 kernel: cni0: port 2(veth8ce55756) entered blocking state Dec 13 13:15:04.102195 kernel: cni0: port 2(veth8ce55756) entered disabled state Dec 13 13:15:04.103160 kernel: veth8ce55756: entered allmulticast mode Dec 13 13:15:04.103221 kernel: veth8ce55756: entered promiscuous mode Dec 13 13:15:04.105655 kernel: cni0: port 2(veth8ce55756) entered blocking state Dec 13 13:15:04.105751 kernel: cni0: port 2(veth8ce55756) entered forwarding state Dec 13 13:15:04.114619 systemd-networkd[1840]: veth8ce55756: Gained carrier Dec 13 13:15:04.123363 containerd[1970]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000016938), "name":"cbr0", "type":"bridge"} Dec 13 13:15:04.123363 containerd[1970]: delegateAdd: netconf sent to delegate plugin: Dec 13 13:15:04.158920 containerd[1970]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2024-12-13T13:15:04.158461743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:15:04.158920 containerd[1970]: time="2024-12-13T13:15:04.158574567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:15:04.158920 containerd[1970]: time="2024-12-13T13:15:04.158610231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:04.158920 containerd[1970]: time="2024-12-13T13:15:04.158766795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:15:04.199567 systemd[1]: Started cri-containerd-a2760a6567d6e720c3a4909751c94d76310cc900f99b78ec354c3db60d530d7d.scope - libcontainer container a2760a6567d6e720c3a4909751c94d76310cc900f99b78ec354c3db60d530d7d. Dec 13 13:15:04.279808 containerd[1970]: time="2024-12-13T13:15:04.279691876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-pgv48,Uid:1b1eb772-5864-4726-866c-757f4dcf8eee,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2760a6567d6e720c3a4909751c94d76310cc900f99b78ec354c3db60d530d7d\"" Dec 13 13:15:04.287116 containerd[1970]: time="2024-12-13T13:15:04.287055196Z" level=info msg="CreateContainer within sandbox \"a2760a6567d6e720c3a4909751c94d76310cc900f99b78ec354c3db60d530d7d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 13:15:04.317299 containerd[1970]: time="2024-12-13T13:15:04.317120836Z" level=info msg="CreateContainer within sandbox \"a2760a6567d6e720c3a4909751c94d76310cc900f99b78ec354c3db60d530d7d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"634510ce979dc30e233947e98bce33db9df3b065d63cd2e57071f698bd3c233a\"" Dec 13 13:15:04.319484 containerd[1970]: time="2024-12-13T13:15:04.319054948Z" level=info msg="StartContainer for \"634510ce979dc30e233947e98bce33db9df3b065d63cd2e57071f698bd3c233a\"" Dec 13 13:15:04.362555 systemd[1]: Started cri-containerd-634510ce979dc30e233947e98bce33db9df3b065d63cd2e57071f698bd3c233a.scope - libcontainer container 634510ce979dc30e233947e98bce33db9df3b065d63cd2e57071f698bd3c233a. Dec 13 13:15:04.409945 containerd[1970]: time="2024-12-13T13:15:04.409873925Z" level=info msg="StartContainer for \"634510ce979dc30e233947e98bce33db9df3b065d63cd2e57071f698bd3c233a\" returns successfully" Dec 13 13:15:05.078141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1087482193.mount: Deactivated successfully. Dec 13 13:15:05.263340 kubelet[3352]: I1213 13:15:05.262634 3352 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-pgv48" podStartSLOduration=23.262442729 podStartE2EDuration="23.262442729s" podCreationTimestamp="2024-12-13 13:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:15:05.261917885 +0000 UTC m=+36.491580146" watchObservedRunningTime="2024-12-13 13:15:05.262442729 +0000 UTC m=+36.492104990" Dec 13 13:15:05.694582 systemd-networkd[1840]: veth8ce55756: Gained IPv6LL Dec 13 13:15:08.087945 systemd[1]: Started sshd@8-172.31.27.111:22-139.178.89.65:35214.service - OpenSSH per-connection server daemon (139.178.89.65:35214). Dec 13 13:15:08.266899 sshd[4283]: Accepted publickey for core from 139.178.89.65 port 35214 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:08.269727 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:08.278018 systemd-logind[1944]: New session 9 of user core. 
Dec 13 13:15:08.287508 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:15:08.538830 ntpd[1939]: Listen normally on 10 cni0 192.168.0.1:123 Dec 13 13:15:08.539827 ntpd[1939]: 13 Dec 13:15:08 ntpd[1939]: Listen normally on 10 cni0 192.168.0.1:123 Dec 13 13:15:08.539827 ntpd[1939]: 13 Dec 13:15:08 ntpd[1939]: Listen normally on 11 cni0 [fe80::a4a0:f7ff:fe2d:bcb7%5]:123 Dec 13 13:15:08.539827 ntpd[1939]: 13 Dec 13:15:08 ntpd[1939]: Listen normally on 12 vethbc756d69 [fe80::a86a:e4ff:fef4:a29c%6]:123 Dec 13 13:15:08.539827 ntpd[1939]: 13 Dec 13:15:08 ntpd[1939]: Listen normally on 13 veth8ce55756 [fe80::b854:4fff:fe49:2479%7]:123 Dec 13 13:15:08.538966 ntpd[1939]: Listen normally on 11 cni0 [fe80::a4a0:f7ff:fe2d:bcb7%5]:123 Dec 13 13:15:08.539082 ntpd[1939]: Listen normally on 12 vethbc756d69 [fe80::a86a:e4ff:fef4:a29c%6]:123 Dec 13 13:15:08.539162 ntpd[1939]: Listen normally on 13 veth8ce55756 [fe80::b854:4fff:fe49:2479%7]:123 Dec 13 13:15:08.548765 sshd[4285]: Connection closed by 139.178.89.65 port 35214 Dec 13 13:15:08.547570 sshd-session[4283]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:08.553883 systemd[1]: sshd@8-172.31.27.111:22-139.178.89.65:35214.service: Deactivated successfully. Dec 13 13:15:08.558783 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:15:08.560429 systemd-logind[1944]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:15:08.562526 systemd-logind[1944]: Removed session 9. Dec 13 13:15:13.586705 systemd[1]: Started sshd@9-172.31.27.111:22-139.178.89.65:35230.service - OpenSSH per-connection server daemon (139.178.89.65:35230). Dec 13 13:15:13.782050 sshd[4319]: Accepted publickey for core from 139.178.89.65 port 35230 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:13.784561 sshd-session[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:13.792667 systemd-logind[1944]: New session 10 of user core. Dec 13 13:15:13.798527 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 13:15:14.040806 sshd[4321]: Connection closed by 139.178.89.65 port 35230 Dec 13 13:15:14.042053 sshd-session[4319]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:14.048742 systemd[1]: sshd@9-172.31.27.111:22-139.178.89.65:35230.service: Deactivated successfully. Dec 13 13:15:14.053376 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 13:15:14.055120 systemd-logind[1944]: Session 10 logged out. Waiting for processes to exit. Dec 13 13:15:14.056948 systemd-logind[1944]: Removed session 10. Dec 13 13:15:14.077804 systemd[1]: Started sshd@10-172.31.27.111:22-139.178.89.65:35236.service - OpenSSH per-connection server daemon (139.178.89.65:35236). Dec 13 13:15:14.262763 sshd[4333]: Accepted publickey for core from 139.178.89.65 port 35236 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:14.265438 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:14.274902 systemd-logind[1944]: New session 11 of user core. Dec 13 13:15:14.281491 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 13:15:14.609269 sshd[4335]: Connection closed by 139.178.89.65 port 35236 Dec 13 13:15:14.607416 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:14.615078 systemd[1]: sshd@10-172.31.27.111:22-139.178.89.65:35236.service: Deactivated successfully. 
Dec 13 13:15:14.623675 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 13:15:14.628643 systemd-logind[1944]: Session 11 logged out. Waiting for processes to exit. Dec 13 13:15:14.652907 systemd[1]: Started sshd@11-172.31.27.111:22-139.178.89.65:35246.service - OpenSSH per-connection server daemon (139.178.89.65:35246). Dec 13 13:15:14.656295 systemd-logind[1944]: Removed session 11. Dec 13 13:15:14.839279 sshd[4344]: Accepted publickey for core from 139.178.89.65 port 35246 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:14.841744 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:14.850207 systemd-logind[1944]: New session 12 of user core. Dec 13 13:15:14.856553 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 13:15:15.107106 sshd[4348]: Connection closed by 139.178.89.65 port 35246 Dec 13 13:15:15.108027 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:15.114138 systemd[1]: sshd@11-172.31.27.111:22-139.178.89.65:35246.service: Deactivated successfully. Dec 13 13:15:15.119079 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 13:15:15.120591 systemd-logind[1944]: Session 12 logged out. Waiting for processes to exit. Dec 13 13:15:15.122880 systemd-logind[1944]: Removed session 12. Dec 13 13:15:20.147767 systemd[1]: Started sshd@12-172.31.27.111:22-139.178.89.65:45238.service - OpenSSH per-connection server daemon (139.178.89.65:45238). Dec 13 13:15:20.336071 sshd[4380]: Accepted publickey for core from 139.178.89.65 port 45238 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:20.339120 sshd-session[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:20.346827 systemd-logind[1944]: New session 13 of user core. Dec 13 13:15:20.358511 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 13:15:20.620945 sshd[4382]: Connection closed by 139.178.89.65 port 45238 Dec 13 13:15:20.621875 sshd-session[4380]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:20.628583 systemd[1]: sshd@12-172.31.27.111:22-139.178.89.65:45238.service: Deactivated successfully. Dec 13 13:15:20.633725 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 13:15:20.635383 systemd-logind[1944]: Session 13 logged out. Waiting for processes to exit. Dec 13 13:15:20.636969 systemd-logind[1944]: Removed session 13. Dec 13 13:15:20.659775 systemd[1]: Started sshd@13-172.31.27.111:22-139.178.89.65:45242.service - OpenSSH per-connection server daemon (139.178.89.65:45242). Dec 13 13:15:20.853988 sshd[4393]: Accepted publickey for core from 139.178.89.65 port 45242 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:20.856988 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:20.864544 systemd-logind[1944]: New session 14 of user core. Dec 13 13:15:20.874523 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 13:15:21.169573 sshd[4395]: Connection closed by 139.178.89.65 port 45242 Dec 13 13:15:21.171564 sshd-session[4393]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:21.176604 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 13:15:21.178537 systemd[1]: sshd@13-172.31.27.111:22-139.178.89.65:45242.service: Deactivated successfully. Dec 13 13:15:21.184396 systemd-logind[1944]: Session 14 logged out. 
Waiting for processes to exit. Dec 13 13:15:21.185984 systemd-logind[1944]: Removed session 14. Dec 13 13:15:21.206959 systemd[1]: Started sshd@14-172.31.27.111:22-139.178.89.65:45252.service - OpenSSH per-connection server daemon (139.178.89.65:45252). Dec 13 13:15:21.393540 sshd[4403]: Accepted publickey for core from 139.178.89.65 port 45252 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:21.395959 sshd-session[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:21.404468 systemd-logind[1944]: New session 15 of user core. Dec 13 13:15:21.412498 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 13:15:23.919719 sshd[4405]: Connection closed by 139.178.89.65 port 45252 Dec 13 13:15:23.919591 sshd-session[4403]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:23.933594 systemd[1]: sshd@14-172.31.27.111:22-139.178.89.65:45252.service: Deactivated successfully. Dec 13 13:15:23.940659 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 13:15:23.942549 systemd-logind[1944]: Session 15 logged out. Waiting for processes to exit. Dec 13 13:15:23.969591 systemd[1]: Started sshd@15-172.31.27.111:22-139.178.89.65:45268.service - OpenSSH per-connection server daemon (139.178.89.65:45268). Dec 13 13:15:23.972193 systemd-logind[1944]: Removed session 15. Dec 13 13:15:24.151112 sshd[4443]: Accepted publickey for core from 139.178.89.65 port 45268 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:24.154248 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:24.162261 systemd-logind[1944]: New session 16 of user core. Dec 13 13:15:24.169549 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 13:15:24.651303 sshd[4445]: Connection closed by 139.178.89.65 port 45268 Dec 13 13:15:24.652152 sshd-session[4443]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:24.659334 systemd[1]: sshd@15-172.31.27.111:22-139.178.89.65:45268.service: Deactivated successfully. Dec 13 13:15:24.663800 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 13:15:24.666364 systemd-logind[1944]: Session 16 logged out. Waiting for processes to exit. Dec 13 13:15:24.668985 systemd-logind[1944]: Removed session 16. Dec 13 13:15:24.690761 systemd[1]: Started sshd@16-172.31.27.111:22-139.178.89.65:45274.service - OpenSSH per-connection server daemon (139.178.89.65:45274). Dec 13 13:15:24.876953 sshd[4454]: Accepted publickey for core from 139.178.89.65 port 45274 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:24.879726 sshd-session[4454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:24.891306 systemd-logind[1944]: New session 17 of user core. Dec 13 13:15:24.896564 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 13:15:25.139288 sshd[4456]: Connection closed by 139.178.89.65 port 45274 Dec 13 13:15:25.140440 sshd-session[4454]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:25.146430 systemd[1]: sshd@16-172.31.27.111:22-139.178.89.65:45274.service: Deactivated successfully. Dec 13 13:15:25.150997 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 13:15:25.153010 systemd-logind[1944]: Session 17 logged out. Waiting for processes to exit. Dec 13 13:15:25.155399 systemd-logind[1944]: Removed session 17. 
Dec 13 13:15:30.179774 systemd[1]: Started sshd@17-172.31.27.111:22-139.178.89.65:41500.service - OpenSSH per-connection server daemon (139.178.89.65:41500). Dec 13 13:15:30.379950 sshd[4489]: Accepted publickey for core from 139.178.89.65 port 41500 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:30.382460 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:30.389978 systemd-logind[1944]: New session 18 of user core. Dec 13 13:15:30.396482 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 13:15:30.637422 sshd[4491]: Connection closed by 139.178.89.65 port 41500 Dec 13 13:15:30.637910 sshd-session[4489]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:30.644639 systemd[1]: sshd@17-172.31.27.111:22-139.178.89.65:41500.service: Deactivated successfully. Dec 13 13:15:30.648907 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 13:15:30.651348 systemd-logind[1944]: Session 18 logged out. Waiting for processes to exit. Dec 13 13:15:30.653160 systemd-logind[1944]: Removed session 18. Dec 13 13:15:35.677806 systemd[1]: Started sshd@18-172.31.27.111:22-139.178.89.65:41508.service - OpenSSH per-connection server daemon (139.178.89.65:41508). Dec 13 13:15:35.871277 sshd[4526]: Accepted publickey for core from 139.178.89.65 port 41508 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:35.873714 sshd-session[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:35.881953 systemd-logind[1944]: New session 19 of user core. Dec 13 13:15:35.888509 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 13:15:36.132097 sshd[4528]: Connection closed by 139.178.89.65 port 41508 Dec 13 13:15:36.132934 sshd-session[4526]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:36.139342 systemd[1]: sshd@18-172.31.27.111:22-139.178.89.65:41508.service: Deactivated successfully. Dec 13 13:15:36.143500 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 13:15:36.145536 systemd-logind[1944]: Session 19 logged out. Waiting for processes to exit. Dec 13 13:15:36.147383 systemd-logind[1944]: Removed session 19. Dec 13 13:15:41.171781 systemd[1]: Started sshd@19-172.31.27.111:22-139.178.89.65:37714.service - OpenSSH per-connection server daemon (139.178.89.65:37714). Dec 13 13:15:41.366460 sshd[4561]: Accepted publickey for core from 139.178.89.65 port 37714 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:41.368979 sshd-session[4561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:41.376205 systemd-logind[1944]: New session 20 of user core. Dec 13 13:15:41.385563 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 13:15:41.630503 sshd[4563]: Connection closed by 139.178.89.65 port 37714 Dec 13 13:15:41.631537 sshd-session[4561]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:41.638048 systemd[1]: sshd@19-172.31.27.111:22-139.178.89.65:37714.service: Deactivated successfully. Dec 13 13:15:41.641968 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 13:15:41.643896 systemd-logind[1944]: Session 20 logged out. Waiting for processes to exit. Dec 13 13:15:41.645888 systemd-logind[1944]: Removed session 20. Dec 13 13:15:46.672759 systemd[1]: Started sshd@20-172.31.27.111:22-139.178.89.65:37716.service - OpenSSH per-connection server daemon (139.178.89.65:37716). 
Dec 13 13:15:46.861559 sshd[4597]: Accepted publickey for core from 139.178.89.65 port 37716 ssh2: RSA SHA256:5Kg9OcrZzPx9+IQT5C5GfxT/ghwdzAdT4IUYKbDF5Cw Dec 13 13:15:46.864013 sshd-session[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:15:46.872386 systemd-logind[1944]: New session 21 of user core. Dec 13 13:15:46.877875 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 13:15:47.127819 sshd[4605]: Connection closed by 139.178.89.65 port 37716 Dec 13 13:15:47.128340 sshd-session[4597]: pam_unix(sshd:session): session closed for user core Dec 13 13:15:47.135718 systemd[1]: sshd@20-172.31.27.111:22-139.178.89.65:37716.service: Deactivated successfully. Dec 13 13:15:47.139127 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 13:15:47.140796 systemd-logind[1944]: Session 21 logged out. Waiting for processes to exit. Dec 13 13:15:47.143303 systemd-logind[1944]: Removed session 21. Dec 13 13:16:00.930591 systemd[1]: cri-containerd-bf51da624258d8e97a94a8c2238579f049d57a52e98f4843ca0fc95aa1de366a.scope: Deactivated successfully. Dec 13 13:16:00.931313 systemd[1]: cri-containerd-bf51da624258d8e97a94a8c2238579f049d57a52e98f4843ca0fc95aa1de366a.scope: Consumed 3.810s CPU time, 22.2M memory peak, 0B memory swap peak. Dec 13 13:16:00.973083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf51da624258d8e97a94a8c2238579f049d57a52e98f4843ca0fc95aa1de366a-rootfs.mount: Deactivated successfully. Dec 13 13:16:00.986772 kubelet[3352]: E1213 13:16:00.986417 3352 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-111?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 13 13:16:00.990706 containerd[1970]: time="2024-12-13T13:16:00.989740394Z" level=info msg="shim disconnected" id=bf51da624258d8e97a94a8c2238579f049d57a52e98f4843ca0fc95aa1de366a namespace=k8s.io Dec 13 13:16:00.990706 containerd[1970]: time="2024-12-13T13:16:00.989821634Z" level=warning msg="cleaning up after shim disconnected" id=bf51da624258d8e97a94a8c2238579f049d57a52e98f4843ca0fc95aa1de366a namespace=k8s.io Dec 13 13:16:00.990706 containerd[1970]: time="2024-12-13T13:16:00.989840570Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:16:01.379722 kubelet[3352]: I1213 13:16:01.379669 3352 scope.go:117] "RemoveContainer" containerID="bf51da624258d8e97a94a8c2238579f049d57a52e98f4843ca0fc95aa1de366a" Dec 13 13:16:01.383591 containerd[1970]: time="2024-12-13T13:16:01.383336208Z" level=info msg="CreateContainer within sandbox \"be6e094ef8ccf157bff33672e42a6ccabad436d3dd332f478d3166ecb9c689e8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 13 13:16:01.414216 containerd[1970]: time="2024-12-13T13:16:01.414004200Z" level=info msg="CreateContainer within sandbox \"be6e094ef8ccf157bff33672e42a6ccabad436d3dd332f478d3166ecb9c689e8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"5ab978827517b4943759e59b8156a91784cac4df88c7758062bb8ce2f3cb1816\"" Dec 13 13:16:01.414839 containerd[1970]: time="2024-12-13T13:16:01.414779088Z" level=info msg="StartContainer for \"5ab978827517b4943759e59b8156a91784cac4df88c7758062bb8ce2f3cb1816\"" Dec 13 13:16:01.472551 systemd[1]: Started cri-containerd-5ab978827517b4943759e59b8156a91784cac4df88c7758062bb8ce2f3cb1816.scope - libcontainer container 
5ab978827517b4943759e59b8156a91784cac4df88c7758062bb8ce2f3cb1816. Dec 13 13:16:01.539775 containerd[1970]: time="2024-12-13T13:16:01.539632740Z" level=info msg="StartContainer for \"5ab978827517b4943759e59b8156a91784cac4df88c7758062bb8ce2f3cb1816\" returns successfully" Dec 13 13:16:06.805696 systemd[1]: cri-containerd-472fc8d1f1ac3ee868edc415e544f2446723b5f87f9c205626d4ab6d5eb784e8.scope: Deactivated successfully. Dec 13 13:16:06.806704 systemd[1]: cri-containerd-472fc8d1f1ac3ee868edc415e544f2446723b5f87f9c205626d4ab6d5eb784e8.scope: Consumed 3.671s CPU time, 16.1M memory peak, 0B memory swap peak. Dec 13 13:16:06.849266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-472fc8d1f1ac3ee868edc415e544f2446723b5f87f9c205626d4ab6d5eb784e8-rootfs.mount: Deactivated successfully. Dec 13 13:16:06.865662 containerd[1970]: time="2024-12-13T13:16:06.865575835Z" level=info msg="shim disconnected" id=472fc8d1f1ac3ee868edc415e544f2446723b5f87f9c205626d4ab6d5eb784e8 namespace=k8s.io Dec 13 13:16:06.865662 containerd[1970]: time="2024-12-13T13:16:06.865655311Z" level=warning msg="cleaning up after shim disconnected" id=472fc8d1f1ac3ee868edc415e544f2446723b5f87f9c205626d4ab6d5eb784e8 namespace=k8s.io Dec 13 13:16:06.865662 containerd[1970]: time="2024-12-13T13:16:06.865676071Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:16:07.400188 kubelet[3352]: I1213 13:16:07.400049 3352 scope.go:117] "RemoveContainer" containerID="472fc8d1f1ac3ee868edc415e544f2446723b5f87f9c205626d4ab6d5eb784e8" Dec 13 13:16:07.403779 containerd[1970]: time="2024-12-13T13:16:07.403642181Z" level=info msg="CreateContainer within sandbox \"d73786682600e972ff6d21e198b5784fba93c023b3dabb5d6554b3293d8a4116\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 13 13:16:07.433119 containerd[1970]: time="2024-12-13T13:16:07.432972186Z" level=info msg="CreateContainer within sandbox \"d73786682600e972ff6d21e198b5784fba93c023b3dabb5d6554b3293d8a4116\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8856fe8187661b92cb3e28a3a57d2cd9ca06e9410819ddbcfba91b40cbfb05e8\"" Dec 13 13:16:07.433850 containerd[1970]: time="2024-12-13T13:16:07.433694514Z" level=info msg="StartContainer for \"8856fe8187661b92cb3e28a3a57d2cd9ca06e9410819ddbcfba91b40cbfb05e8\"" Dec 13 13:16:07.489560 systemd[1]: Started cri-containerd-8856fe8187661b92cb3e28a3a57d2cd9ca06e9410819ddbcfba91b40cbfb05e8.scope - libcontainer container 8856fe8187661b92cb3e28a3a57d2cd9ca06e9410819ddbcfba91b40cbfb05e8. Dec 13 13:16:07.558571 containerd[1970]: time="2024-12-13T13:16:07.558358014Z" level=info msg="StartContainer for \"8856fe8187661b92cb3e28a3a57d2cd9ca06e9410819ddbcfba91b40cbfb05e8\" returns successfully" Dec 13 13:16:10.988503 kubelet[3352]: E1213 13:16:10.988093 3352 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-111?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 13 13:16:20.989836 kubelet[3352]: E1213 13:16:20.989523 3352 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-27-111)"
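The repeated lease-update timeouts at the end coincide with the kube-controller-manager and kube-scheduler containers being killed and recreated (Attempt:1), which points at a briefly unresponsive apiserver rather than a kubelet fault. A minimal client-go sketch for inspecting the node lease that kubelet failed to renew is below; the kubeconfig path is an assumption, while the namespace and lease name are taken from the log.

```go
// Minimal sketch, assuming an admin kubeconfig is available on the node:
// fetch the kube-node-lease Lease that kubelet could not update.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").
		Get(context.Background(), "ip-172-31-27-111", metav1.GetOptions{})
	if err != nil {
		// an overloaded apiserver surfaces here as the same timeouts seen in the log
		log.Fatal(err)
	}
	fmt.Println("renewTime:", lease.Spec.RenewTime)
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("holder:", *lease.Spec.HolderIdentity)
	}
}
```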