Jan 13 21:10:41.253121 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 13 21:10:41.253232 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025
Jan 13 21:10:41.253301 kernel: KASLR disabled due to lack of seed
Jan 13 21:10:41.253349 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:10:41.253398 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Jan 13 21:10:41.253437 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:10:41.253471 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 13 21:10:41.253496 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 13 21:10:41.253530 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 21:10:41.253562 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 13 21:10:41.253609 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 21:10:41.253642 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 13 21:10:41.253675 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 13 21:10:41.253709 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 13 21:10:41.253751 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 21:10:41.253780 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 13 21:10:41.253820 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 13 21:10:41.253870 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 13 21:10:41.253912 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 13 21:10:41.253948 kernel: printk: bootconsole [uart0] enabled
Jan 13 21:10:41.253969 kernel: NUMA: Failed to initialise from firmware
Jan 13 21:10:41.253988 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 21:10:41.254008 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 13 21:10:41.254026 kernel: Zone ranges:
Jan 13 21:10:41.254081 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 21:10:41.254101 kernel:   DMA32    empty
Jan 13 21:10:41.254128 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 13 21:10:41.254146 kernel: Movable zone start for each node
Jan 13 21:10:41.254165 kernel: Early memory node ranges
Jan 13 21:10:41.254183 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 13 21:10:41.254200 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 13 21:10:41.254218 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Jan 13 21:10:41.254235 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 13 21:10:41.254252 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 13 21:10:41.254270 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 13 21:10:41.254287 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 13 21:10:41.254305 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 13 21:10:41.254324 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 13 21:10:41.254348 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 13 21:10:41.254367 kernel: psci: probing for conduit method from ACPI.
Jan 13 21:10:41.254394 kernel: psci: PSCIv1.0 detected in firmware.
Jan 13 21:10:41.254413 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 21:10:41.254432 kernel: psci: Trusted OS migration not required
Jan 13 21:10:41.254455 kernel: psci: SMC Calling Convention v1.1
Jan 13 21:10:41.254481 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 21:10:41.254499 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 21:10:41.254518 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 13 21:10:41.254538 kernel: Detected PIPT I-cache on CPU0
Jan 13 21:10:41.254556 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 21:10:41.254574 kernel: CPU features: detected: Spectre-v2
Jan 13 21:10:41.254592 kernel: CPU features: detected: Spectre-v3a
Jan 13 21:10:41.254610 kernel: CPU features: detected: Spectre-BHB
Jan 13 21:10:41.254628 kernel: CPU features: detected: ARM erratum 1742098
Jan 13 21:10:41.254646 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 13 21:10:41.254669 kernel: alternatives: applying boot alternatives
Jan 13 21:10:41.254691 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:10:41.254711 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:10:41.254730 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:10:41.254748 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:10:41.254768 kernel: Fallback order for Node 0: 0
Jan 13 21:10:41.254787 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 13 21:10:41.254805 kernel: Policy zone: Normal
Jan 13 21:10:41.254823 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:10:41.254841 kernel: software IO TLB: area num 2.
Jan 13 21:10:41.254859 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 13 21:10:41.254885 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Jan 13 21:10:41.254904 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:10:41.254923 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:10:41.254943 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:10:41.254962 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:10:41.254981 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:10:41.255000 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:10:41.255018 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:10:41.257101 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:10:41.257145 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 21:10:41.257164 kernel: GICv3: 96 SPIs implemented
Jan 13 21:10:41.257193 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 21:10:41.257213 kernel: Root IRQ handler: gic_handle_irq
Jan 13 21:10:41.257230 kernel: GICv3: GICv3 features: 16 PPIs
Jan 13 21:10:41.257249 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 13 21:10:41.257267 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 13 21:10:41.257285 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 21:10:41.257303 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 21:10:41.257321 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 13 21:10:41.257339 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 13 21:10:41.257357 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 13 21:10:41.257375 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:10:41.257393 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 13 21:10:41.257417 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 13 21:10:41.257435 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 13 21:10:41.257453 kernel: Console: colour dummy device 80x25
Jan 13 21:10:41.257472 kernel: printk: console [tty1] enabled
Jan 13 21:10:41.257490 kernel: ACPI: Core revision 20230628
Jan 13 21:10:41.257509 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 13 21:10:41.257527 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:10:41.257545 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:10:41.257563 kernel: landlock: Up and running.
Jan 13 21:10:41.257587 kernel: SELinux: Initializing.
Jan 13 21:10:41.257605 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:10:41.257623 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:10:41.257642 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:10:41.257660 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:10:41.257678 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:10:41.257698 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:10:41.257716 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 13 21:10:41.257734 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 13 21:10:41.257758 kernel: Remapping and enabling EFI services.
Jan 13 21:10:41.257777 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:10:41.257795 kernel: Detected PIPT I-cache on CPU1
Jan 13 21:10:41.257813 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 13 21:10:41.257832 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 13 21:10:41.257850 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 13 21:10:41.257868 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:10:41.257886 kernel: SMP: Total of 2 processors activated.
Jan 13 21:10:41.257904 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 21:10:41.257926 kernel: CPU features: detected: 32-bit EL1 Support
Jan 13 21:10:41.257945 kernel: CPU features: detected: CRC32 instructions
Jan 13 21:10:41.257963 kernel: CPU: All CPU(s) started at EL1
Jan 13 21:10:41.257994 kernel: alternatives: applying system-wide alternatives
Jan 13 21:10:41.258018 kernel: devtmpfs: initialized
Jan 13 21:10:41.258071 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:10:41.260190 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:10:41.260242 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:10:41.260282 kernel: SMBIOS 3.0.0 present.
Jan 13 21:10:41.260328 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 13 21:10:41.260387 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:10:41.260420 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 21:10:41.260467 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 21:10:41.260519 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 21:10:41.260560 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:10:41.260590 kernel: audit: type=2000 audit(0.294:1): state=initialized audit_enabled=0 res=1
Jan 13 21:10:41.260612 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:10:41.260639 kernel: cpuidle: using governor menu
Jan 13 21:10:41.260659 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 21:10:41.260678 kernel: ASID allocator initialised with 65536 entries
Jan 13 21:10:41.260698 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:10:41.260717 kernel: Serial: AMBA PL011 UART driver
Jan 13 21:10:41.260736 kernel: Modules: 17520 pages in range for non-PLT usage
Jan 13 21:10:41.260755 kernel: Modules: 509040 pages in range for PLT usage
Jan 13 21:10:41.260774 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:10:41.260793 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:10:41.260819 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 21:10:41.260838 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 21:10:41.260858 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:10:41.260877 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:10:41.260896 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 21:10:41.260914 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 21:10:41.260933 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:10:41.260952 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:10:41.260970 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:10:41.260994 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:10:41.261014 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:10:41.261056 kernel: ACPI: Interpreter enabled
Jan 13 21:10:41.261104 kernel: ACPI: Using GIC for interrupt routing
Jan 13 21:10:41.261128 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 21:10:41.261148 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 13 21:10:41.261459 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:10:41.261683 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 21:10:41.261904 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 21:10:41.264377 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 13 21:10:41.264680 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 13 21:10:41.264717 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 13 21:10:41.264738 kernel: acpiphp: Slot [1] registered
Jan 13 21:10:41.264758 kernel: acpiphp: Slot [2] registered
Jan 13 21:10:41.264778 kernel: acpiphp: Slot [3] registered
Jan 13 21:10:41.264797 kernel: acpiphp: Slot [4] registered
Jan 13 21:10:41.264830 kernel: acpiphp: Slot [5] registered
Jan 13 21:10:41.264850 kernel: acpiphp: Slot [6] registered
Jan 13 21:10:41.264870 kernel: acpiphp: Slot [7] registered
Jan 13 21:10:41.264891 kernel: acpiphp: Slot [8] registered
Jan 13 21:10:41.264910 kernel: acpiphp: Slot [9] registered
Jan 13 21:10:41.264928 kernel: acpiphp: Slot [10] registered
Jan 13 21:10:41.264947 kernel: acpiphp: Slot [11] registered
Jan 13 21:10:41.264966 kernel: acpiphp: Slot [12] registered
Jan 13 21:10:41.264985 kernel: acpiphp: Slot [13] registered
Jan 13 21:10:41.265003 kernel: acpiphp: Slot [14] registered
Jan 13 21:10:41.265028 kernel: acpiphp: Slot [15] registered
Jan 13 21:10:41.265079 kernel: acpiphp: Slot [16] registered
Jan 13 21:10:41.265098 kernel: acpiphp: Slot [17] registered
Jan 13 21:10:41.265117 kernel: acpiphp: Slot [18] registered
Jan 13 21:10:41.265136 kernel: acpiphp: Slot [19] registered
Jan 13 21:10:41.265155 kernel: acpiphp: Slot [20] registered
Jan 13 21:10:41.265174 kernel: acpiphp: Slot [21] registered
Jan 13 21:10:41.265193 kernel: acpiphp: Slot [22] registered
Jan 13 21:10:41.265211 kernel: acpiphp: Slot [23] registered
Jan 13 21:10:41.265236 kernel: acpiphp: Slot [24] registered
Jan 13 21:10:41.265256 kernel: acpiphp: Slot [25] registered
Jan 13 21:10:41.265274 kernel: acpiphp: Slot [26] registered
Jan 13 21:10:41.265293 kernel: acpiphp: Slot [27] registered
Jan 13 21:10:41.265311 kernel: acpiphp: Slot [28] registered
Jan 13 21:10:41.265331 kernel: acpiphp: Slot [29] registered
Jan 13 21:10:41.265350 kernel: acpiphp: Slot [30] registered
Jan 13 21:10:41.265368 kernel: acpiphp: Slot [31] registered
Jan 13 21:10:41.265387 kernel: PCI host bridge to bus 0000:00
Jan 13 21:10:41.265623 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 13 21:10:41.265823 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 21:10:41.266013 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 13 21:10:41.266223 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 13 21:10:41.266485 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 13 21:10:41.266760 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 13 21:10:41.281504 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 13 21:10:41.281982 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 21:10:41.282290 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 13 21:10:41.282521 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 21:10:41.282763 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 21:10:41.282974 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 13 21:10:41.285579 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 13 21:10:41.286103 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 13 21:10:41.286328 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 13 21:10:41.286534 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 13 21:10:41.286746 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 13 21:10:41.286958 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 13 21:10:41.289443 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 13 21:10:41.289688 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 13 21:10:41.289923 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 13 21:10:41.290158 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 21:10:41.290361 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 13 21:10:41.290392 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 21:10:41.290413 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 21:10:41.290433 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 21:10:41.290453 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 21:10:41.290472 kernel: iommu: Default domain type: Translated
Jan 13 21:10:41.290492 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 21:10:41.290522 kernel: efivars: Registered efivars operations
Jan 13 21:10:41.290542 kernel: vgaarb: loaded
Jan 13 21:10:41.290561 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 21:10:41.290580 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:10:41.290599 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:10:41.290618 kernel: pnp: PnP ACPI init
Jan 13 21:10:41.295451 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 13 21:10:41.295503 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 21:10:41.295536 kernel: NET: Registered PF_INET protocol family
Jan 13 21:10:41.295557 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:10:41.295577 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:10:41.295597 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:10:41.295616 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:10:41.295636 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:10:41.295655 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:10:41.295675 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:10:41.295694 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:10:41.295720 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:10:41.295739 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:10:41.295758 kernel: kvm [1]: HYP mode not available
Jan 13 21:10:41.295777 kernel: Initialise system trusted keyrings
Jan 13 21:10:41.295797 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:10:41.295817 kernel: Key type asymmetric registered
Jan 13 21:10:41.295836 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:10:41.295855 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 21:10:41.295874 kernel: io scheduler mq-deadline registered
Jan 13 21:10:41.295900 kernel: io scheduler kyber registered
Jan 13 21:10:41.295919 kernel: io scheduler bfq registered
Jan 13 21:10:41.298346 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 13 21:10:41.298397 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 21:10:41.298418 kernel: ACPI: button: Power Button [PWRB]
Jan 13 21:10:41.298438 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 13 21:10:41.298458 kernel: ACPI: button: Sleep Button [SLPB]
Jan 13 21:10:41.298482 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:10:41.298514 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 21:10:41.298750 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 13 21:10:41.298780 kernel: printk: console [ttyS0] disabled
Jan 13 21:10:41.298800 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 13 21:10:41.298819 kernel: printk: console [ttyS0] enabled
Jan 13 21:10:41.298838 kernel: printk: bootconsole [uart0] disabled
Jan 13 21:10:41.298857 kernel: thunder_xcv, ver 1.0
Jan 13 21:10:41.298875 kernel: thunder_bgx, ver 1.0
Jan 13 21:10:41.298894 kernel: nicpf, ver 1.0
Jan 13 21:10:41.298919 kernel: nicvf, ver 1.0
Jan 13 21:10:41.299189 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 21:10:41.299439 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T21:10:40 UTC (1736802640)
Jan 13 21:10:41.299468 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 21:10:41.299488 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 13 21:10:41.299508 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 21:10:41.299527 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 21:10:41.299546 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:10:41.299576 kernel: Segment Routing with IPv6
Jan 13 21:10:41.299596 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:10:41.299615 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:10:41.299634 kernel: Key type dns_resolver registered
Jan 13 21:10:41.299655 kernel: registered taskstats version 1
Jan 13 21:10:41.299727 kernel: Loading compiled-in X.509 certificates
Jan 13 21:10:41.299750 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638'
Jan 13 21:10:41.299770 kernel: Key type .fscrypt registered
Jan 13 21:10:41.299788 kernel: Key type fscrypt-provisioning registered
Jan 13 21:10:41.299813 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:10:41.299833 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:10:41.299852 kernel: ima: No architecture policies found
Jan 13 21:10:41.299872 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 21:10:41.299891 kernel: clk: Disabling unused clocks
Jan 13 21:10:41.299909 kernel: Freeing unused kernel memory: 39360K
Jan 13 21:10:41.299928 kernel: Run /init as init process
Jan 13 21:10:41.299946 kernel:   with arguments:
Jan 13 21:10:41.299964 kernel:     /init
Jan 13 21:10:41.299982 kernel:   with environment:
Jan 13 21:10:41.300006 kernel:     HOME=/
Jan 13 21:10:41.300024 kernel:     TERM=linux
Jan 13 21:10:41.300717 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:10:41.300745 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:10:41.300770 systemd[1]: Detected virtualization amazon.
Jan 13 21:10:41.300791 systemd[1]: Detected architecture arm64.
Jan 13 21:10:41.300811 systemd[1]: Running in initrd.
Jan 13 21:10:41.305113 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:10:41.305184 systemd[1]: Hostname set to .
Jan 13 21:10:41.305244 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:10:41.305299 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:10:41.305347 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:10:41.305390 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:10:41.305449 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:10:41.305489 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:10:41.305553 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:10:41.305603 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:10:41.305650 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:10:41.305714 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:10:41.305762 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:10:41.305806 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:10:41.305830 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:10:41.305894 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:10:41.305937 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:10:41.305991 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:10:41.306073 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:10:41.306118 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:10:41.306157 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:10:41.306209 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:10:41.306271 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:10:41.306324 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:10:41.306389 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:10:41.306431 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:10:41.306471 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:10:41.306512 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:10:41.306567 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:10:41.306594 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:10:41.306616 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:10:41.306638 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:10:41.306668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:10:41.306689 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:10:41.306711 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:10:41.306733 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:10:41.306755 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:10:41.306836 systemd-journald[250]: Collecting audit messages is disabled.
Jan 13 21:10:41.306887 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:10:41.306909 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:10:41.306937 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:10:41.306959 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:10:41.306979 kernel: Bridge firewalling registered
Jan 13 21:10:41.307000 systemd-journald[250]: Journal started
Jan 13 21:10:41.309101 systemd-journald[250]: Runtime Journal (/run/log/journal/ec27f548e27df44114aa3647a930d704) is 8.0M, max 75.3M, 67.3M free.
Jan 13 21:10:41.219110 systemd-modules-load[251]: Inserted module 'overlay'
Jan 13 21:10:41.325185 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:10:41.325271 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:10:41.301789 systemd-modules-load[251]: Inserted module 'br_netfilter'
Jan 13 21:10:41.326342 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:10:41.333343 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:10:41.338742 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:10:41.346389 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:10:41.401535 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:10:41.421598 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:10:41.427754 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:10:41.430616 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:10:41.446369 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:10:41.483121 dracut-cmdline[283]: dracut-dracut-053
Jan 13 21:10:41.492795 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:10:41.529656 systemd-resolved[289]: Positive Trust Anchors:
Jan 13 21:10:41.531586 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:10:41.533283 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:10:41.681067 kernel: SCSI subsystem initialized
Jan 13 21:10:41.687083 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:10:41.700079 kernel: iscsi: registered transport (tcp)
Jan 13 21:10:41.722429 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:10:41.722505 kernel: QLogic iSCSI HBA Driver
Jan 13 21:10:41.750068 kernel: random: crng init done
Jan 13 21:10:41.750301 systemd-resolved[289]: Defaulting to hostname 'linux'.
Jan 13 21:10:41.754405 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:10:41.757598 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:10:41.822692 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:10:41.842560 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:10:41.878535 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 21:10:41.878627 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:10:41.878656 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:10:41.946119 kernel: raid6: neonx8 gen() 6578 MB/s Jan 13 21:10:41.963096 kernel: raid6: neonx4 gen() 6420 MB/s Jan 13 21:10:41.980091 kernel: raid6: neonx2 gen() 5370 MB/s Jan 13 21:10:41.997091 kernel: raid6: neonx1 gen() 3902 MB/s Jan 13 21:10:42.014097 kernel: raid6: int64x8 gen() 3776 MB/s Jan 13 21:10:42.031097 kernel: raid6: int64x4 gen() 3694 MB/s Jan 13 21:10:42.048092 kernel: raid6: int64x2 gen() 3577 MB/s Jan 13 21:10:42.065900 kernel: raid6: int64x1 gen() 2749 MB/s Jan 13 21:10:42.065978 kernel: raid6: using algorithm neonx8 gen() 6578 MB/s Jan 13 21:10:42.083869 kernel: raid6: .... xor() 4909 MB/s, rmw enabled Jan 13 21:10:42.083952 kernel: raid6: using neon recovery algorithm Jan 13 21:10:42.092501 kernel: xor: measuring software checksum speed Jan 13 21:10:42.092582 kernel: 8regs : 10970 MB/sec Jan 13 21:10:42.093619 kernel: 32regs : 11949 MB/sec Jan 13 21:10:42.094822 kernel: arm64_neon : 9557 MB/sec Jan 13 21:10:42.094857 kernel: xor: using function: 32regs (11949 MB/sec) Jan 13 21:10:42.179090 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:10:42.197877 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:10:42.211461 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:10:42.244443 systemd-udevd[469]: Using default interface naming scheme 'v255'. 
Jan 13 21:10:42.252413 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:10:42.276315 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:10:42.301970 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation Jan 13 21:10:42.366122 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:10:42.388329 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:10:42.507873 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:10:42.526588 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:10:42.580224 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:10:42.590983 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:10:42.606467 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:10:42.616016 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:10:42.633772 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:10:42.681289 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:10:42.742650 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 21:10:42.742716 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 13 21:10:42.762528 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 13 21:10:42.762881 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 13 21:10:42.763697 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:60:67:85:7e:67 Jan 13 21:10:42.767859 (udev-worker)[536]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:10:42.777778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 13 21:10:42.778090 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:10:42.783977 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:10:42.788956 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:10:42.821953 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 13 21:10:42.821995 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 13 21:10:42.789311 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:10:42.794250 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:10:42.829688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:10:42.850112 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 13 21:10:42.862366 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:10:42.862444 kernel: GPT:9289727 != 16777215 Jan 13 21:10:42.862472 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:10:42.862499 kernel: GPT:9289727 != 16777215 Jan 13 21:10:42.863527 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:10:42.864403 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:10:42.868249 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:10:42.888166 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:10:42.937029 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 21:10:42.978422 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (540) Jan 13 21:10:42.999084 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (520) Jan 13 21:10:43.069348 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 13 21:10:43.135026 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 13 21:10:43.139098 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 13 21:10:43.164663 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 13 21:10:43.183850 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 13 21:10:43.202388 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:10:43.216456 disk-uuid[664]: Primary Header is updated. Jan 13 21:10:43.216456 disk-uuid[664]: Secondary Entries is updated. Jan 13 21:10:43.216456 disk-uuid[664]: Secondary Header is updated. Jan 13 21:10:43.229322 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:10:43.241096 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:10:43.250114 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:10:44.246126 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 13 21:10:44.249084 disk-uuid[665]: The operation has completed successfully. Jan 13 21:10:44.428651 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:10:44.428914 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:10:44.507375 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 13 21:10:44.516094 sh[1006]: Success Jan 13 21:10:44.544130 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 21:10:44.678643 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:10:44.697277 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:10:44.714951 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 21:10:44.739548 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234 Jan 13 21:10:44.739616 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:10:44.739644 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:10:44.742508 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:10:44.742550 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:10:44.867080 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 21:10:44.886862 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:10:44.887421 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:10:44.901439 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:10:44.910405 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 13 21:10:44.939007 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:10:44.939113 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:10:44.939150 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 21:10:44.946111 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 21:10:44.965181 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:10:44.965965 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:10:44.988881 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:10:45.002398 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 21:10:45.122016 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:10:45.145440 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:10:45.199080 systemd-networkd[1198]: lo: Link UP Jan 13 21:10:45.199594 systemd-networkd[1198]: lo: Gained carrier Jan 13 21:10:45.202771 systemd-networkd[1198]: Enumeration completed Jan 13 21:10:45.203764 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:10:45.203772 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:10:45.205680 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:10:45.211101 systemd[1]: Reached target network.target - Network. Jan 13 21:10:45.217741 systemd-networkd[1198]: eth0: Link UP Jan 13 21:10:45.217749 systemd-networkd[1198]: eth0: Gained carrier Jan 13 21:10:45.217767 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 13 21:10:45.247174 systemd-networkd[1198]: eth0: DHCPv4 address 172.31.24.5/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 13 21:10:45.493125 ignition[1123]: Ignition 2.19.0 Jan 13 21:10:45.493157 ignition[1123]: Stage: fetch-offline Jan 13 21:10:45.495057 ignition[1123]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:10:45.495087 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:10:45.499218 ignition[1123]: Ignition finished successfully Jan 13 21:10:45.505918 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:10:45.525319 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 21:10:45.553566 ignition[1208]: Ignition 2.19.0 Jan 13 21:10:45.556656 ignition[1208]: Stage: fetch Jan 13 21:10:45.558794 ignition[1208]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:10:45.558838 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:10:45.561120 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:10:45.574391 ignition[1208]: PUT result: OK Jan 13 21:10:45.577991 ignition[1208]: parsed url from cmdline: "" Jan 13 21:10:45.578013 ignition[1208]: no config URL provided Jan 13 21:10:45.578030 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:10:45.578079 ignition[1208]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:10:45.578121 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:10:45.583129 ignition[1208]: PUT result: OK Jan 13 21:10:45.585288 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 13 21:10:45.593333 ignition[1208]: GET result: OK Jan 13 21:10:45.594790 ignition[1208]: parsing config with SHA512: 96e9dcbce5fc887388e2774b7603a877d71dc1fb46155b096e5bfe18ab68c974d732dd7fc3960a787c216546fce5589dbf27c12171cd8bd4c285fb77f32ab84d Jan 13 21:10:45.602266 unknown[1208]: fetched base config from "system"
Jan 13 21:10:45.602975 ignition[1208]: fetch: fetch complete Jan 13 21:10:45.602289 unknown[1208]: fetched base config from "system" Jan 13 21:10:45.602986 ignition[1208]: fetch: fetch passed Jan 13 21:10:45.602303 unknown[1208]: fetched user config from "aws" Jan 13 21:10:45.603090 ignition[1208]: Ignition finished successfully Jan 13 21:10:45.608607 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 21:10:45.629346 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:10:45.670654 ignition[1214]: Ignition 2.19.0 Jan 13 21:10:45.671489 ignition[1214]: Stage: kargs Jan 13 21:10:45.672264 ignition[1214]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:10:45.672292 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:10:45.672457 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:10:45.675758 ignition[1214]: PUT result: OK Jan 13 21:10:45.688617 ignition[1214]: kargs: kargs passed Jan 13 21:10:45.688964 ignition[1214]: Ignition finished successfully Jan 13 21:10:45.695445 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:10:45.707549 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:10:45.734923 ignition[1220]: Ignition 2.19.0 Jan 13 21:10:45.735518 ignition[1220]: Stage: disks Jan 13 21:10:45.736212 ignition[1220]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:10:45.736238 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:10:45.736394 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:10:45.753250 ignition[1220]: PUT result: OK Jan 13 21:10:45.758528 ignition[1220]: disks: disks passed Jan 13 21:10:45.758640 ignition[1220]: Ignition finished successfully Jan 13 21:10:45.764289 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:10:45.769261 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:10:45.774985 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:10:45.778119 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:10:45.780617 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:10:45.783254 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:10:45.804380 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:10:45.861638 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 21:10:45.870400 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:10:45.890371 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:10:45.981095 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none. Jan 13 21:10:45.982648 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:10:45.986060 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:10:46.012272 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:10:46.023271 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:10:46.036449 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1247) Jan 13 21:10:46.036489 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:10:46.036516 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:10:46.036544 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 21:10:46.040726 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jan 13 21:10:46.047991 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 21:10:46.040831 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:10:46.040885 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:10:46.069347 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:10:46.078964 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:10:46.090392 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 21:10:46.250183 systemd-networkd[1198]: eth0: Gained IPv6LL Jan 13 21:10:46.532878 initrd-setup-root[1271]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:10:46.542391 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:10:46.551764 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:10:46.575057 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:10:46.894227 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:10:46.911750 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:10:46.917330 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:10:46.945410 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:10:46.947709 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:10:46.973009 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 13 21:10:46.997778 ignition[1361]: INFO : Ignition 2.19.0 Jan 13 21:10:47.000627 ignition[1361]: INFO : Stage: mount Jan 13 21:10:47.000627 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:10:47.000627 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:10:47.000627 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:10:47.011638 ignition[1361]: INFO : PUT result: OK Jan 13 21:10:47.021758 ignition[1361]: INFO : mount: mount passed Jan 13 21:10:47.023383 ignition[1361]: INFO : Ignition finished successfully Jan 13 21:10:47.031741 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:10:47.045296 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:10:47.066416 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:10:47.092692 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1371) Jan 13 21:10:47.092756 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd Jan 13 21:10:47.094383 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 13 21:10:47.095535 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 13 21:10:47.100067 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 13 21:10:47.103993 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 21:10:47.143385 ignition[1388]: INFO : Ignition 2.19.0 Jan 13 21:10:47.143385 ignition[1388]: INFO : Stage: files Jan 13 21:10:47.149014 ignition[1388]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:10:47.149014 ignition[1388]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:10:47.149014 ignition[1388]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:10:47.160711 ignition[1388]: INFO : PUT result: OK Jan 13 21:10:47.165535 ignition[1388]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:10:47.170523 ignition[1388]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:10:47.170523 ignition[1388]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:10:47.181589 ignition[1388]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:10:47.185592 ignition[1388]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:10:47.190529 unknown[1388]: wrote ssh authorized keys file for user: core Jan 13 21:10:47.193325 ignition[1388]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:10:47.206994 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 21:10:47.213359 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 21:10:47.213359 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 21:10:47.213359 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 13 21:10:47.307133 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 21:10:47.489179 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 21:10:47.493894 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:10:47.497973 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:10:47.501943 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:10:47.505602 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:10:47.505602 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:10:47.513685 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:10:47.518285 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:10:47.518285 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:10:47.528194 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:10:47.528194 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 21:10:47.528194 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 21:10:47.528194 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 21:10:47.528194 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 21:10:47.528194 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jan 13 21:10:47.871860 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 13 21:10:48.215170 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 21:10:48.223694 ignition[1388]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 13 21:10:48.223694 ignition[1388]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 21:10:48.223694 ignition[1388]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 21:10:48.223694 ignition[1388]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 13 21:10:48.223694 ignition[1388]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 13 21:10:48.223694 ignition[1388]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 21:10:48.223694 ignition[1388]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:10:48.223694 ignition[1388]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 13 21:10:48.223694 ignition[1388]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 13 21:10:48.223694 ignition[1388]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 21:10:48.223694 ignition[1388]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:10:48.223694 ignition[1388]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 21:10:48.223694 ignition[1388]: INFO : files: files passed Jan 13 21:10:48.223694 ignition[1388]: INFO : Ignition finished successfully Jan 13 21:10:48.275750 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 21:10:48.299882 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 21:10:48.307648 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 21:10:48.331157 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 21:10:48.331454 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 21:10:48.349880 initrd-setup-root-after-ignition[1416]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:10:48.349880 initrd-setup-root-after-ignition[1416]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:10:48.357265 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 21:10:48.363511 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:10:48.368902 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 21:10:48.391520 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:10:48.445930 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 21:10:48.446270 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 21:10:48.468011 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 21:10:48.470835 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 21:10:48.473628 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 21:10:48.485545 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 21:10:48.527153 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:10:48.542509 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 21:10:48.570200 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:10:48.574791 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:10:48.583174 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 21:10:48.592556 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 21:10:48.592834 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 21:10:48.596748 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 21:10:48.605664 systemd[1]: Stopped target basic.target - Basic System. Jan 13 21:10:48.608726 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 21:10:48.617311 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:10:48.620854 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 21:10:48.629573 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 21:10:48.632471 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 13 21:10:48.635783 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:10:48.638773 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:10:48.649257 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:10:48.651839 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:10:48.652142 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:10:48.664493 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:10:48.667393 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:10:48.670414 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:10:48.675585 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:10:48.686439 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:10:48.686697 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:10:48.690352 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:10:48.691191 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:10:48.704876 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:10:48.705826 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 21:10:48.721536 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:10:48.724682 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:10:48.724966 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:10:48.735672 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:10:48.744646 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:10:48.745282 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jan 13 21:10:48.758212 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:10:48.758480 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:10:48.779355 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:10:48.782332 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:10:48.812171 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:10:48.816967 ignition[1440]: INFO : Ignition 2.19.0 Jan 13 21:10:48.816967 ignition[1440]: INFO : Stage: umount Jan 13 21:10:48.823103 ignition[1440]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:10:48.823103 ignition[1440]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 13 21:10:48.823103 ignition[1440]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 13 21:10:48.832270 ignition[1440]: INFO : PUT result: OK Jan 13 21:10:48.844239 ignition[1440]: INFO : umount: umount passed Jan 13 21:10:48.844239 ignition[1440]: INFO : Ignition finished successfully Jan 13 21:10:48.840502 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:10:48.840717 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:10:48.847225 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:10:48.847456 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:10:48.850833 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:10:48.851005 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:10:48.853983 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:10:48.854112 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:10:48.854337 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:10:48.854419 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Jan 13 21:10:48.854662 systemd[1]: Stopped target network.target - Network.
Jan 13 21:10:48.854928 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:10:48.855016 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:10:48.855661 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:10:48.855920 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:10:48.872255 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:10:48.872439 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:10:48.879485 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:10:48.881902 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:10:48.881998 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:10:48.884477 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:10:48.884565 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:10:48.886957 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:10:48.887088 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:10:48.889815 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:10:48.889918 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:10:48.892944 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:10:48.893057 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:10:48.896579 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:10:48.911422 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:10:48.977164 systemd-networkd[1198]: eth0: DHCPv6 lease lost
Jan 13 21:10:48.980129 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:10:48.980445 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:10:49.000305 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:10:49.000392 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:10:49.020507 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:10:49.020747 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:10:49.020902 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:10:49.021604 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:10:49.024225 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:10:49.024510 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:10:49.032298 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:10:49.032528 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:10:49.032800 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:10:49.032919 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:10:49.036643 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:10:49.039226 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:10:49.076843 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:10:49.079400 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:10:49.109836 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:10:49.109936 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:10:49.112868 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:10:49.112957 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:10:49.116295 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:10:49.116407 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:10:49.127847 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:10:49.127954 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:10:49.133085 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:10:49.133173 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:10:49.160323 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:10:49.164108 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:10:49.164237 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:10:49.168031 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:10:49.168162 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:10:49.172330 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:10:49.172548 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:10:49.214811 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:10:49.215272 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:10:49.221874 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:10:49.239636 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:10:49.285762 systemd[1]: Switching root.
Jan 13 21:10:49.313002 systemd-journald[250]: Journal stopped
Jan 13 21:10:52.365882 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:10:52.366186 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:10:52.366258 kernel: SELinux: policy capability open_perms=1
Jan 13 21:10:52.366296 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:10:52.366329 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:10:52.366362 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:10:52.366406 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:10:52.366440 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:10:52.366471 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:10:52.366504 kernel: audit: type=1403 audit(1736802650.642:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:10:52.366545 systemd[1]: Successfully loaded SELinux policy in 61.089ms.
Jan 13 21:10:52.366596 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.653ms.
Jan 13 21:10:52.366632 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:10:52.366668 systemd[1]: Detected virtualization amazon.
Jan 13 21:10:52.366700 systemd[1]: Detected architecture arm64.
Jan 13 21:10:52.366731 systemd[1]: Detected first boot.
Jan 13 21:10:52.366762 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:10:52.366797 zram_generator::config[1500]: No configuration found.
Jan 13 21:10:52.366841 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:10:52.366876 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:10:52.366913 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 13 21:10:52.366952 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:10:52.366988 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:10:52.367024 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:10:52.367703 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:10:52.367758 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:10:52.367804 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:10:52.367841 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:10:52.367876 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:10:52.367923 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:10:52.367958 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:10:52.367993 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:10:52.368031 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:10:52.368141 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:10:52.368184 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:10:52.368227 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:10:52.368264 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:10:52.373119 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:10:52.373189 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:10:52.373226 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:10:52.373259 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:10:52.373295 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:10:52.373327 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:10:52.373372 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:10:52.373406 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:10:52.373438 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:10:52.373472 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:10:52.373506 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:10:52.373549 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:10:52.373586 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:10:52.373628 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:10:52.373668 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:10:52.373707 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:10:52.373755 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:10:52.373794 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:10:52.373826 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:10:52.373858 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:10:52.373889 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:10:52.373920 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:10:52.373951 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:10:52.373983 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:10:52.374022 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:10:52.374105 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:10:52.374166 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:10:52.374217 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:10:52.374283 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:10:52.374358 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 13 21:10:52.374437 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 13 21:10:52.374500 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:10:52.374553 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:10:52.374624 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:10:52.374691 kernel: loop: module loaded
Jan 13 21:10:52.374778 kernel: fuse: init (API version 7.39)
Jan 13 21:10:52.374846 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:10:52.374895 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:10:52.374967 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:10:52.379324 systemd-journald[1611]: Collecting audit messages is disabled.
Jan 13 21:10:52.379433 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:10:52.379469 kernel: ACPI: bus type drm_connector registered
Jan 13 21:10:52.379501 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:10:52.379538 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:10:52.379572 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:10:52.379604 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:10:52.379639 systemd-journald[1611]: Journal started
Jan 13 21:10:52.379696 systemd-journald[1611]: Runtime Journal (/run/log/journal/ec27f548e27df44114aa3647a930d704) is 8.0M, max 75.3M, 67.3M free.
Jan 13 21:10:52.388663 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:10:52.395591 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:10:52.400424 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:10:52.406509 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:10:52.406921 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:10:52.412917 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:10:52.413368 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:10:52.418954 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:10:52.420009 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:10:52.423565 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:10:52.423951 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:10:52.429738 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:10:52.430188 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:10:52.435746 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:10:52.436360 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:10:52.443006 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:10:52.449137 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:10:52.456195 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:10:52.488826 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:10:52.502381 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:10:52.518271 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:10:52.530547 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:10:52.547613 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:10:52.573646 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:10:52.579022 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:10:52.592358 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:10:52.598350 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:10:52.601661 systemd-journald[1611]: Time spent on flushing to /var/log/journal/ec27f548e27df44114aa3647a930d704 is 116.148ms for 890 entries.
Jan 13 21:10:52.601661 systemd-journald[1611]: System Journal (/var/log/journal/ec27f548e27df44114aa3647a930d704) is 8.0M, max 195.6M, 187.6M free.
Jan 13 21:10:52.730687 systemd-journald[1611]: Received client request to flush runtime journal.
Jan 13 21:10:52.612342 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:10:52.630352 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:10:52.651527 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:10:52.657387 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:10:52.663083 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:10:52.684459 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:10:52.690410 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:10:52.698631 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:10:52.729859 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:10:52.744238 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:10:52.763577 udevadm[1659]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 21:10:52.769148 systemd-tmpfiles[1653]: ACLs are not supported, ignoring.
Jan 13 21:10:52.769187 systemd-tmpfiles[1653]: ACLs are not supported, ignoring.
Jan 13 21:10:52.781504 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:10:52.797489 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:10:52.893024 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:10:52.909344 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:10:52.956562 systemd-tmpfiles[1674]: ACLs are not supported, ignoring.
Jan 13 21:10:52.956610 systemd-tmpfiles[1674]: ACLs are not supported, ignoring.
Jan 13 21:10:52.968638 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:10:53.742892 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:10:53.755426 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:10:53.821270 systemd-udevd[1680]: Using default interface naming scheme 'v255'.
Jan 13 21:10:53.855123 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:10:53.890328 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:10:53.921803 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:10:54.028966 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 13 21:10:54.069778 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:10:54.096390 (udev-worker)[1685]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 21:10:54.251731 systemd-networkd[1690]: lo: Link UP
Jan 13 21:10:54.251754 systemd-networkd[1690]: lo: Gained carrier
Jan 13 21:10:54.255963 systemd-networkd[1690]: Enumeration completed
Jan 13 21:10:54.256265 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:10:54.265783 systemd-networkd[1690]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:10:54.265809 systemd-networkd[1690]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:10:54.267982 systemd-networkd[1690]: eth0: Link UP
Jan 13 21:10:54.268405 systemd-networkd[1690]: eth0: Gained carrier
Jan 13 21:10:54.268438 systemd-networkd[1690]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:10:54.312812 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:10:54.344138 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1685)
Jan 13 21:10:54.344340 systemd-networkd[1690]: eth0: DHCPv4 address 172.31.24.5/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 21:10:54.416726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:10:54.609027 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:10:54.631505 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:10:54.664625 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 21:10:54.682393 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:10:54.723463 lvm[1809]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:10:54.765947 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:10:54.772661 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:10:54.782391 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:10:54.796922 lvm[1812]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:10:54.837938 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:10:54.843914 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:10:54.847729 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:10:54.847795 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:10:54.850306 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:10:54.854412 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:10:54.873608 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:10:54.882461 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:10:54.887629 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:10:54.894568 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:10:54.904427 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:10:54.917366 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:10:54.927357 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:10:54.940460 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:10:55.003112 kernel: loop0: detected capacity change from 0 to 114328
Jan 13 21:10:55.015004 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:10:55.018583 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:10:55.069146 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:10:55.108273 kernel: loop1: detected capacity change from 0 to 114432
Jan 13 21:10:55.232090 kernel: loop2: detected capacity change from 0 to 52536
Jan 13 21:10:55.331122 kernel: loop3: detected capacity change from 0 to 194512
Jan 13 21:10:55.375083 kernel: loop4: detected capacity change from 0 to 114328
Jan 13 21:10:55.395148 kernel: loop5: detected capacity change from 0 to 114432
Jan 13 21:10:55.409125 kernel: loop6: detected capacity change from 0 to 52536
Jan 13 21:10:55.423530 kernel: loop7: detected capacity change from 0 to 194512
Jan 13 21:10:55.440579 (sd-merge)[1834]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 13 21:10:55.441775 (sd-merge)[1834]: Merged extensions into '/usr'.
Jan 13 21:10:55.452294 systemd[1]: Reloading requested from client PID 1820 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:10:55.452348 systemd[1]: Reloading...
Jan 13 21:10:55.606090 zram_generator::config[1865]: No configuration found.
Jan 13 21:10:55.912583 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:10:56.002098 ldconfig[1817]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:10:56.042261 systemd-networkd[1690]: eth0: Gained IPv6LL
Jan 13 21:10:56.070742 systemd[1]: Reloading finished in 617 ms.
Jan 13 21:10:56.102566 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 21:10:56.107457 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:10:56.111978 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:10:56.132412 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:10:56.143474 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:10:56.164517 systemd[1]: Reloading requested from client PID 1923 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:10:56.164763 systemd[1]: Reloading...
Jan 13 21:10:56.186020 systemd-tmpfiles[1924]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:10:56.187852 systemd-tmpfiles[1924]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:10:56.191379 systemd-tmpfiles[1924]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:10:56.192512 systemd-tmpfiles[1924]: ACLs are not supported, ignoring.
Jan 13 21:10:56.192672 systemd-tmpfiles[1924]: ACLs are not supported, ignoring.
Jan 13 21:10:56.204598 systemd-tmpfiles[1924]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:10:56.204856 systemd-tmpfiles[1924]: Skipping /boot
Jan 13 21:10:56.235359 systemd-tmpfiles[1924]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:10:56.235600 systemd-tmpfiles[1924]: Skipping /boot
Jan 13 21:10:56.351098 zram_generator::config[1956]: No configuration found.
Jan 13 21:10:56.630349 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:10:56.786544 systemd[1]: Reloading finished in 621 ms.
Jan 13 21:10:56.821338 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:10:56.839386 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:10:56.850386 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:10:56.867385 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:10:56.890899 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:10:56.903834 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:10:56.930443 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:10:56.942619 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:10:56.955620 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:10:56.971608 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:10:56.979846 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:10:56.990746 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:10:57.003305 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:10:57.004023 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:10:57.019757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:10:57.032584 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:10:57.037488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:10:57.054568 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:10:57.060755 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:10:57.061340 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:10:57.070761 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:10:57.071347 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:10:57.082118 augenrules[2044]: No rules
Jan 13 21:10:57.097483 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:10:57.103997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:10:57.104515 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:10:57.119987 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:10:57.138737 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:10:57.150425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:10:57.185922 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:10:57.190251 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:10:57.192943 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:10:57.200255 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:10:57.204970 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:10:57.211427 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:10:57.219116 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:10:57.219552 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:10:57.254503 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:10:57.260754 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:10:57.262137 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:10:57.268900 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:10:57.269389 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:10:57.274646 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:10:57.274779 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:10:57.274828 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:10:57.316722 systemd-resolved[2022]: Positive Trust Anchors:
Jan 13 21:10:57.316758 systemd-resolved[2022]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:10:57.316823 systemd-resolved[2022]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:10:57.325603 systemd-resolved[2022]: Defaulting to hostname 'linux'.
Jan 13 21:10:57.329775 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:10:57.332949 systemd[1]: Reached target network.target - Network.
Jan 13 21:10:57.347656 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 21:10:57.350614 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:10:57.353807 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:10:57.356799 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:10:57.359799 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:10:57.363805 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:10:57.366978 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:10:57.369916 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:10:57.373351 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:10:57.373620 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:10:57.376408 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:10:57.380119 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:10:57.386488 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:10:57.392559 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 21:10:57.399144 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:10:57.404259 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:10:57.408569 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:10:57.412012 systemd[1]: System is tainted: cgroupsv1
Jan 13 21:10:57.412507 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:10:57.412695 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:10:57.423229 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:10:57.432400 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 21:10:57.445541 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:10:57.455245 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:10:57.474397 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:10:57.478809 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:10:57.518298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:10:57.522612 jq[2080]: false
Jan 13 21:10:57.543829 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:10:57.560359 systemd[1]: Started ntpd.service - Network Time Service.
Jan 13 21:10:57.585029 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 21:10:57.597089 extend-filesystems[2081]: Found loop4
Jan 13 21:10:57.597089 extend-filesystems[2081]: Found loop5
Jan 13 21:10:57.602640 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 21:10:57.612467 extend-filesystems[2081]: Found loop6
Jan 13 21:10:57.612467 extend-filesystems[2081]: Found loop7
Jan 13 21:10:57.612467 extend-filesystems[2081]: Found nvme0n1
Jan 13 21:10:57.612467 extend-filesystems[2081]: Found nvme0n1p1
Jan 13 21:10:57.612467 extend-filesystems[2081]: Found nvme0n1p2
Jan 13 21:10:57.612467 extend-filesystems[2081]: Found nvme0n1p3
Jan 13 21:10:57.612467 extend-filesystems[2081]: Found usr
Jan 13 21:10:57.612467 extend-filesystems[2081]: Found nvme0n1p4
Jan 13 21:10:57.612467 extend-filesystems[2081]: Found nvme0n1p6
Jan 13 21:10:57.612467 extend-filesystems[2081]: Found nvme0n1p7
Jan 13 21:10:57.612467 extend-filesystems[2081]: Found nvme0n1p9
Jan 13 21:10:57.612467 extend-filesystems[2081]: Checking size of /dev/nvme0n1p9
Jan 13 21:10:57.627926 dbus-daemon[2079]: [system] SELinux support is enabled
Jan 13 21:10:57.639267 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 13 21:10:57.646938 dbus-daemon[2079]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1690 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 13 21:10:57.671804 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:10:57.697616 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:10:57.726112 extend-filesystems[2081]: Resized partition /dev/nvme0n1p9
Jan 13 21:10:57.731982 extend-filesystems[2112]: resize2fs 1.47.1 (20-May-2024)
Jan 13 21:10:57.771917 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jan 13 21:10:57.756354 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:10:57.762516 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 21:10:57.795471 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:10:57.806243 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:10:57.819935 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:10:57.848827 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:10:57.849400 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:10:57.853254 ntpd[2089]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:33 UTC 2025 (1): Starting
Jan 13 21:10:57.855004 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:33 UTC 2025 (1): Starting
Jan 13 21:10:57.855004 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 21:10:57.855004 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: ----------------------------------------------------
Jan 13 21:10:57.855004 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: ntp-4 is maintained by Network Time Foundation,
Jan 13 21:10:57.855004 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 21:10:57.855004 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: corporation. Support and training for ntp-4 are
Jan 13 21:10:57.855004 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: available at https://www.nwtime.org/support
Jan 13 21:10:57.855004 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: ----------------------------------------------------
Jan 13 21:10:57.853318 ntpd[2089]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 21:10:57.853342 ntpd[2089]: ----------------------------------------------------
Jan 13 21:10:57.857274 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:10:57.891023 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: proto: precision = 0.096 usec (-23)
Jan 13 21:10:57.891023 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: basedate set to 2025-01-01
Jan 13 21:10:57.891023 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: gps base set to 2025-01-05 (week 2348)
Jan 13 21:10:57.853363 ntpd[2089]: ntp-4 is maintained by Network Time Foundation,
Jan 13 21:10:57.900535 update_engine[2117]: I20250113 21:10:57.885111 2117 main.cc:92] Flatcar Update Engine starting
Jan 13 21:10:57.857817 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:10:57.853385 ntpd[2089]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 21:10:57.908723 jq[2119]: true
Jan 13 21:10:57.873076 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 21:10:57.853406 ntpd[2089]: corporation. Support and training for ntp-4 are
Jan 13 21:10:57.920996 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: Listen and drop on 0 v6wildcard [::]:123
Jan 13 21:10:57.920996 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 13 21:10:57.884985 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:10:57.853426 ntpd[2089]: available at https://www.nwtime.org/support
Jan 13 21:10:57.930468 update_engine[2117]: I20250113 21:10:57.921072 2117 update_check_scheduler.cc:74] Next update check in 9m5s
Jan 13 21:10:57.930548 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: Listen normally on 2 lo 127.0.0.1:123
Jan 13 21:10:57.930548 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: Listen normally on 3 eth0 172.31.24.5:123
Jan 13 21:10:57.930548 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: Listen normally on 4 lo [::1]:123
Jan 13 21:10:57.930548 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: Listen normally on 5 eth0 [fe80::460:67ff:fe85:7e67%2]:123
Jan 13 21:10:57.930548 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: Listening on routing socket on fd #22 for interface updates
Jan 13 21:10:57.885551 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:10:57.853445 ntpd[2089]: ----------------------------------------------------
Jan 13 21:10:57.870533 ntpd[2089]: proto: precision = 0.096 usec (-23)
Jan 13 21:10:57.879880 ntpd[2089]: basedate set to 2025-01-01
Jan 13 21:10:57.879919 ntpd[2089]: gps base set to 2025-01-05 (week 2348)
Jan 13 21:10:57.910853 ntpd[2089]: Listen and drop on 0 v6wildcard [::]:123
Jan 13 21:10:57.910944 ntpd[2089]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 13 21:10:57.923250 ntpd[2089]: Listen normally on 2 lo 127.0.0.1:123
Jan 13 21:10:57.923382 ntpd[2089]: Listen normally on 3 eth0 172.31.24.5:123
Jan 13 21:10:57.923454 ntpd[2089]: Listen normally on 4 lo [::1]:123
Jan 13 21:10:57.923533 ntpd[2089]: Listen normally on 5 eth0 [fe80::460:67ff:fe85:7e67%2]:123
Jan 13 21:10:57.923608 ntpd[2089]: Listening on routing socket on fd #22 for interface updates
Jan 13 21:10:57.960277 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jan 13 21:10:57.954550 (ntainerd)[2130]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:10:58.044203 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 21:10:58.044203 ntpd[2089]: 13 Jan 21:10:57 ntpd[2089]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 21:10:57.951682 ntpd[2089]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 21:10:58.044452 extend-filesystems[2112]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 13 21:10:58.044452 extend-filesystems[2112]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 21:10:58.044452 extend-filesystems[2112]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jan 13 21:10:57.970705 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.004 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.007 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.014 INFO Fetch successful
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.014 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.019 INFO Fetch successful
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.019 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.020 INFO Fetch successful
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.020 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.023 INFO Fetch successful
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.023 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.023 INFO Fetch failed with 404: resource not found
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.023 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.027 INFO Fetch successful
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.027 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.030 INFO Fetch successful
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.030 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.031 INFO Fetch successful
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.031 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.041 INFO Fetch successful
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.041 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 13 21:10:58.087816 coreos-metadata[2077]: Jan 13 21:10:58.062 INFO Fetch successful
Jan 13 21:10:57.951752 ntpd[2089]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 13 21:10:58.097637 extend-filesystems[2081]: Resized filesystem in /dev/nvme0n1p9
Jan 13 21:10:57.970774 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 21:10:58.111267 jq[2131]: true
Jan 13 21:10:57.975567 dbus-daemon[2079]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 13 21:10:57.976489 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 21:10:57.976534 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 21:10:57.984268 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:10:58.042471 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 13 21:10:58.057949 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:10:58.085553 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 21:10:58.096065 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 21:10:58.096667 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 21:10:58.142247 tar[2125]: linux-arm64/helm
Jan 13 21:10:58.246825 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 21:10:58.253970 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 13 21:10:58.274527 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jan 13 21:10:58.279471 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 21:10:58.302172 systemd-logind[2111]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 13 21:10:58.302227 systemd-logind[2111]: Watching system buttons on /dev/input/event1 (Sleep Button)
Jan 13 21:10:58.303917 systemd-logind[2111]: New seat seat0.
Jan 13 21:10:58.308641 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:10:58.313630 bash[2186]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:10:58.327138 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 21:10:58.348863 systemd[1]: Starting sshkeys.service...
Jan 13 21:10:58.438077 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (2198)
Jan 13 21:10:58.453261 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 21:10:58.523954 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 21:10:58.625630 amazon-ssm-agent[2191]: Initializing new seelog logger
Jan 13 21:10:58.639148 amazon-ssm-agent[2191]: New Seelog Logger Creation Complete
Jan 13 21:10:58.639311 amazon-ssm-agent[2191]: 2025/01/13 21:10:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:10:58.639311 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:10:58.654480 amazon-ssm-agent[2191]: 2025/01/13 21:10:58 processing appconfig overrides
Jan 13 21:10:58.659410 amazon-ssm-agent[2191]: 2025/01/13 21:10:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:10:58.659410 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:10:58.659410 amazon-ssm-agent[2191]: 2025/01/13 21:10:58 processing appconfig overrides
Jan 13 21:10:58.659410 amazon-ssm-agent[2191]: 2025/01/13 21:10:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:10:58.659410 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:10:58.659410 amazon-ssm-agent[2191]: 2025/01/13 21:10:58 processing appconfig overrides
Jan 13 21:10:58.659782 amazon-ssm-agent[2191]: 2025-01-13 21:10:58 INFO Proxy environment variables:
Jan 13 21:10:58.671465 amazon-ssm-agent[2191]: 2025/01/13 21:10:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:10:58.671465 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 13 21:10:58.671465 amazon-ssm-agent[2191]: 2025/01/13 21:10:58 processing appconfig overrides
Jan 13 21:10:58.759274 amazon-ssm-agent[2191]: 2025-01-13 21:10:58 INFO https_proxy:
Jan 13 21:10:58.859263 amazon-ssm-agent[2191]: 2025-01-13 21:10:58 INFO http_proxy:
Jan 13 21:10:58.961164 amazon-ssm-agent[2191]: 2025-01-13 21:10:58 INFO no_proxy:
Jan 13 21:10:59.032858 containerd[2130]: time="2025-01-13T21:10:59.032679168Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 13 21:10:59.034482 coreos-metadata[2216]: Jan 13 21:10:59.034 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 13 21:10:59.043785 locksmithd[2152]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 21:10:59.045998 coreos-metadata[2216]: Jan 13 21:10:59.045 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jan 13 21:10:59.048554 coreos-metadata[2216]: Jan 13 21:10:59.047 INFO Fetch successful
Jan 13 21:10:59.048554 coreos-metadata[2216]: Jan 13 21:10:59.047 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 13 21:10:59.048554 coreos-metadata[2216]: Jan 13 21:10:59.048 INFO Fetch successful
Jan 13 21:10:59.062780 unknown[2216]: wrote ssh authorized keys file for user: core
Jan 13 21:10:59.063392 amazon-ssm-agent[2191]: 2025-01-13 21:10:58 INFO Checking if agent identity type OnPrem can be assumed
Jan 13 21:10:59.120431 dbus-daemon[2079]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 13 21:10:59.120703 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 13 21:10:59.130807 dbus-daemon[2079]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2148 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 13 21:10:59.153487 update-ssh-keys[2295]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:10:59.157760 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 13 21:10:59.164237 amazon-ssm-agent[2191]: 2025-01-13 21:10:58 INFO Checking if agent identity type EC2 can be assumed
Jan 13 21:10:59.187125 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 21:10:59.211688 systemd[1]: Finished sshkeys.service.
Jan 13 21:10:59.271080 amazon-ssm-agent[2191]: 2025-01-13 21:10:59 INFO Agent will take identity from EC2
Jan 13 21:10:59.296116 polkitd[2297]: Started polkitd version 121
Jan 13 21:10:59.344520 containerd[2130]: time="2025-01-13T21:10:59.343961293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:10:59.353851 polkitd[2297]: Loading rules from directory /etc/polkit-1/rules.d
Jan 13 21:10:59.353994 polkitd[2297]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 13 21:10:59.362577 polkitd[2297]: Finished loading, compiling and executing 2 rules
Jan 13 21:10:59.365937 containerd[2130]: time="2025-01-13T21:10:59.363226105Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:10:59.365937 containerd[2130]: time="2025-01-13T21:10:59.363304417Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 21:10:59.365937 containerd[2130]: time="2025-01-13T21:10:59.363340741Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 21:10:59.365937 containerd[2130]: time="2025-01-13T21:10:59.363718345Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 21:10:59.365937 containerd[2130]: time="2025-01-13T21:10:59.363769225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 21:10:59.365937 containerd[2130]: time="2025-01-13T21:10:59.363909361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:10:59.365937 containerd[2130]: time="2025-01-13T21:10:59.363942013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:10:59.365937 containerd[2130]: time="2025-01-13T21:10:59.364399885Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:10:59.365937 containerd[2130]: time="2025-01-13T21:10:59.364445881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 21:10:59.365937 containerd[2130]: time="2025-01-13T21:10:59.364531093Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:10:59.365937 containerd[2130]: time="2025-01-13T21:10:59.364557133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 21:10:59.366552 containerd[2130]: time="2025-01-13T21:10:59.364763701Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:10:59.368169 amazon-ssm-agent[2191]: 2025-01-13 21:10:59 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jan 13 21:10:59.375116 systemd[1]: Started polkit.service - Authorization Manager.
Jan 13 21:10:59.378500 containerd[2130]: time="2025-01-13T21:10:59.375089437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:10:59.378500 containerd[2130]: time="2025-01-13T21:10:59.375483589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:10:59.378500 containerd[2130]: time="2025-01-13T21:10:59.375528313Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 21:10:59.378500 containerd[2130]: time="2025-01-13T21:10:59.375765457Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 21:10:59.378500 containerd[2130]: time="2025-01-13T21:10:59.375884233Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 21:10:59.374599 dbus-daemon[2079]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 13 21:10:59.379757 polkitd[2297]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 13 21:10:59.398707 containerd[2130]: time="2025-01-13T21:10:59.397796629Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 21:10:59.398707 containerd[2130]: time="2025-01-13T21:10:59.397911637Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 21:10:59.398707 containerd[2130]: time="2025-01-13T21:10:59.398067361Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 21:10:59.398707 containerd[2130]: time="2025-01-13T21:10:59.398118253Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 21:10:59.398707 containerd[2130]: time="2025-01-13T21:10:59.398169721Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 21:10:59.398707 containerd[2130]: time="2025-01-13T21:10:59.398469205Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 21:10:59.402725 containerd[2130]: time="2025-01-13T21:10:59.399542605Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 21:10:59.402725 containerd[2130]: time="2025-01-13T21:10:59.399823621Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 21:10:59.402725 containerd[2130]: time="2025-01-13T21:10:59.399865021Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 21:10:59.402725 containerd[2130]: time="2025-01-13T21:10:59.399897901Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 21:10:59.402725 containerd[2130]: time="2025-01-13T21:10:59.399945577Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 21:10:59.402725 containerd[2130]: time="2025-01-13T21:10:59.399994741Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 21:10:59.410816 containerd[2130]: time="2025-01-13T21:10:59.400025509Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 21:10:59.410816 containerd[2130]: time="2025-01-13T21:10:59.408492121Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 21:10:59.410816 containerd[2130]: time="2025-01-13T21:10:59.408539233Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 21:10:59.410816 containerd[2130]: time="2025-01-13T21:10:59.408571753Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 21:10:59.410816 containerd[2130]: time="2025-01-13T21:10:59.408602989Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 21:10:59.410816 containerd[2130]: time="2025-01-13T21:10:59.408643633Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 21:10:59.410816 containerd[2130]: time="2025-01-13T21:10:59.408687493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.410816 containerd[2130]: time="2025-01-13T21:10:59.408723121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.410816 containerd[2130]: time="2025-01-13T21:10:59.408753349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.410816 containerd[2130]: time="2025-01-13T21:10:59.408804109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.410816 containerd[2130]: time="2025-01-13T21:10:59.408837673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.410816 containerd[2130]: time="2025-01-13T21:10:59.408870385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.410816 containerd[2130]: time="2025-01-13T21:10:59.408901621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.410816 containerd[2130]: time="2025-01-13T21:10:59.408940609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.411716 containerd[2130]: time="2025-01-13T21:10:59.408973201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.411716 containerd[2130]: time="2025-01-13T21:10:59.409021981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.411716 containerd[2130]: time="2025-01-13T21:10:59.409090009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.411716 containerd[2130]: time="2025-01-13T21:10:59.409125253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.411716 containerd[2130]: time="2025-01-13T21:10:59.409162585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.411716 containerd[2130]: time="2025-01-13T21:10:59.409201393Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 21:10:59.411716 containerd[2130]: time="2025-01-13T21:10:59.409252333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.411716 containerd[2130]: time="2025-01-13T21:10:59.409284313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.411716 containerd[2130]: time="2025-01-13T21:10:59.409313509Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 21:10:59.411716 containerd[2130]: time="2025-01-13T21:10:59.409434205Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 21:10:59.411716 containerd[2130]: time="2025-01-13T21:10:59.409476601Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 21:10:59.411716 containerd[2130]: time="2025-01-13T21:10:59.409506949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 21:10:59.411716 containerd[2130]: time="2025-01-13T21:10:59.409539169Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 21:10:59.412313 containerd[2130]: time="2025-01-13T21:10:59.409565293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.412313 containerd[2130]: time="2025-01-13T21:10:59.409607965Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 21:10:59.412313 containerd[2130]: time="2025-01-13T21:10:59.409633909Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 21:10:59.412313 containerd[2130]: time="2025-01-13T21:10:59.409659841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 21:10:59.425393 containerd[2130]: time="2025-01-13T21:10:59.417383437Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 21:10:59.425393 containerd[2130]: time="2025-01-13T21:10:59.417583009Z" level=info msg="Connect containerd service" Jan 13 21:10:59.425393 containerd[2130]: time="2025-01-13T21:10:59.417663037Z" level=info msg="using legacy CRI server" Jan 13 21:10:59.425393 containerd[2130]: time="2025-01-13T21:10:59.417694225Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 21:10:59.425393 containerd[2130]: time="2025-01-13T21:10:59.417942805Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 21:10:59.438441 containerd[2130]: time="2025-01-13T21:10:59.438350858Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:10:59.449874 containerd[2130]: time="2025-01-13T21:10:59.449737838Z" level=info msg="Start subscribing containerd event" Jan 13 21:10:59.450004 containerd[2130]: time="2025-01-13T21:10:59.449895530Z" level=info msg="Start recovering state" Jan 13 21:10:59.450181 containerd[2130]: time="2025-01-13T21:10:59.450136922Z" level=info msg="Start event monitor" Jan 13 21:10:59.450245 containerd[2130]: time="2025-01-13T21:10:59.450182546Z" 
level=info msg="Start snapshots syncer" Jan 13 21:10:59.450245 containerd[2130]: time="2025-01-13T21:10:59.450212162Z" level=info msg="Start cni network conf syncer for default" Jan 13 21:10:59.450387 containerd[2130]: time="2025-01-13T21:10:59.450242870Z" level=info msg="Start streaming server" Jan 13 21:10:59.454544 containerd[2130]: time="2025-01-13T21:10:59.454464458Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 21:10:59.454710 containerd[2130]: time="2025-01-13T21:10:59.454664702Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 21:10:59.465920 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 21:10:59.470653 amazon-ssm-agent[2191]: 2025-01-13 21:10:59 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:10:59.476530 containerd[2130]: time="2025-01-13T21:10:59.476277446Z" level=info msg="containerd successfully booted in 0.458956s" Jan 13 21:10:59.496749 systemd-hostnamed[2148]: Hostname set to (transient) Jan 13 21:10:59.496774 systemd-resolved[2022]: System hostname changed to 'ip-172-31-24-5'. Jan 13 21:10:59.571418 amazon-ssm-agent[2191]: 2025-01-13 21:10:59 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 21:10:59.671101 amazon-ssm-agent[2191]: 2025-01-13 21:10:59 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 21:10:59.765305 sshd_keygen[2142]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 21:10:59.770503 amazon-ssm-agent[2191]: 2025-01-13 21:10:59 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 13 21:10:59.870812 amazon-ssm-agent[2191]: 2025-01-13 21:10:59 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 21:10:59.890987 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 21:10:59.917539 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:10:59.964774 systemd[1]: issuegen.service: Deactivated successfully. 
Jan 13 21:10:59.965408 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:10:59.979121 amazon-ssm-agent[2191]: 2025-01-13 21:10:59 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 21:10:59.982723 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:11:00.022641 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:11:00.047592 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:11:00.063541 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:11:00.068815 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:11:00.077058 amazon-ssm-agent[2191]: 2025-01-13 21:10:59 INFO [Registrar] Starting registrar module Jan 13 21:11:00.177859 amazon-ssm-agent[2191]: 2025-01-13 21:10:59 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 21:11:00.199368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:00.204549 (kubelet)[2366]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:11:00.441749 amazon-ssm-agent[2191]: 2025-01-13 21:11:00 INFO [EC2Identity] EC2 registration was successful. 
Jan 13 21:11:00.490251 amazon-ssm-agent[2191]: 2025-01-13 21:11:00 INFO [CredentialRefresher] credentialRefresher has started Jan 13 21:11:00.491856 amazon-ssm-agent[2191]: 2025-01-13 21:11:00 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 21:11:00.491856 amazon-ssm-agent[2191]: 2025-01-13 21:11:00 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 21:11:00.514646 tar[2125]: linux-arm64/LICENSE Jan 13 21:11:00.515666 tar[2125]: linux-arm64/README.md Jan 13 21:11:00.542833 amazon-ssm-agent[2191]: 2025-01-13 21:11:00 INFO [CredentialRefresher] Next credential rotation will be in 30.049949337 minutes Jan 13 21:11:00.551512 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 21:11:00.559656 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 21:11:00.565558 systemd[1]: Startup finished in 11.019s (kernel) + 9.982s (userspace) = 21.002s. Jan 13 21:11:01.045002 kubelet[2366]: E0113 21:11:01.044877 2366 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:11:01.049733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:11:01.051138 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 13 21:11:01.517058 amazon-ssm-agent[2191]: 2025-01-13 21:11:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 21:11:01.618304 amazon-ssm-agent[2191]: 2025-01-13 21:11:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2385) started Jan 13 21:11:01.719214 amazon-ssm-agent[2191]: 2025-01-13 21:11:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 21:11:05.085739 systemd-resolved[2022]: Clock change detected. Flushing caches. Jan 13 21:11:05.642937 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:11:05.653243 systemd[1]: Started sshd@0-172.31.24.5:22-139.178.89.65:54348.service - OpenSSH per-connection server daemon (139.178.89.65:54348). Jan 13 21:11:05.867867 sshd[2394]: Accepted publickey for core from 139.178.89.65 port 54348 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:11:05.871483 sshd[2394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:11:05.886360 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 21:11:05.897012 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 21:11:05.903544 systemd-logind[2111]: New session 1 of user core. Jan 13 21:11:05.923225 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 21:11:05.946756 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 21:11:05.953024 (systemd)[2400]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 21:11:06.167035 systemd[2400]: Queued start job for default target default.target. Jan 13 21:11:06.168242 systemd[2400]: Created slice app.slice - User Application Slice. 
Jan 13 21:11:06.168291 systemd[2400]: Reached target paths.target - Paths. Jan 13 21:11:06.168323 systemd[2400]: Reached target timers.target - Timers. Jan 13 21:11:06.176793 systemd[2400]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 21:11:06.192644 systemd[2400]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 21:11:06.192768 systemd[2400]: Reached target sockets.target - Sockets. Jan 13 21:11:06.192802 systemd[2400]: Reached target basic.target - Basic System. Jan 13 21:11:06.192910 systemd[2400]: Reached target default.target - Main User Target. Jan 13 21:11:06.192977 systemd[2400]: Startup finished in 227ms. Jan 13 21:11:06.193055 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 21:11:06.201185 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 21:11:06.356721 systemd[1]: Started sshd@1-172.31.24.5:22-139.178.89.65:54354.service - OpenSSH per-connection server daemon (139.178.89.65:54354). Jan 13 21:11:06.529126 sshd[2412]: Accepted publickey for core from 139.178.89.65 port 54354 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:11:06.531791 sshd[2412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:11:06.540252 systemd-logind[2111]: New session 2 of user core. Jan 13 21:11:06.547244 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 21:11:06.677716 sshd[2412]: pam_unix(sshd:session): session closed for user core Jan 13 21:11:06.682563 systemd-logind[2111]: Session 2 logged out. Waiting for processes to exit. Jan 13 21:11:06.685284 systemd[1]: sshd@1-172.31.24.5:22-139.178.89.65:54354.service: Deactivated successfully. Jan 13 21:11:06.688311 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 21:11:06.691192 systemd-logind[2111]: Removed session 2. 
Jan 13 21:11:06.708128 systemd[1]: Started sshd@2-172.31.24.5:22-139.178.89.65:54364.service - OpenSSH per-connection server daemon (139.178.89.65:54364). Jan 13 21:11:06.875716 sshd[2420]: Accepted publickey for core from 139.178.89.65 port 54364 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:11:06.878397 sshd[2420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:11:06.885891 systemd-logind[2111]: New session 3 of user core. Jan 13 21:11:06.894103 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 21:11:07.014024 sshd[2420]: pam_unix(sshd:session): session closed for user core Jan 13 21:11:07.021273 systemd[1]: sshd@2-172.31.24.5:22-139.178.89.65:54364.service: Deactivated successfully. Jan 13 21:11:07.026456 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 21:11:07.028445 systemd-logind[2111]: Session 3 logged out. Waiting for processes to exit. Jan 13 21:11:07.030499 systemd-logind[2111]: Removed session 3. Jan 13 21:11:07.042138 systemd[1]: Started sshd@3-172.31.24.5:22-139.178.89.65:54372.service - OpenSSH per-connection server daemon (139.178.89.65:54372). Jan 13 21:11:07.226382 sshd[2428]: Accepted publickey for core from 139.178.89.65 port 54372 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:11:07.229113 sshd[2428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:11:07.240454 systemd-logind[2111]: New session 4 of user core. Jan 13 21:11:07.248300 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 21:11:07.378973 sshd[2428]: pam_unix(sshd:session): session closed for user core Jan 13 21:11:07.385236 systemd[1]: sshd@3-172.31.24.5:22-139.178.89.65:54372.service: Deactivated successfully. Jan 13 21:11:07.391224 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 21:11:07.392573 systemd-logind[2111]: Session 4 logged out. Waiting for processes to exit. 
Jan 13 21:11:07.394478 systemd-logind[2111]: Removed session 4. Jan 13 21:11:07.410209 systemd[1]: Started sshd@4-172.31.24.5:22-139.178.89.65:54386.service - OpenSSH per-connection server daemon (139.178.89.65:54386). Jan 13 21:11:07.583115 sshd[2436]: Accepted publickey for core from 139.178.89.65 port 54386 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:11:07.585867 sshd[2436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:11:07.594968 systemd-logind[2111]: New session 5 of user core. Jan 13 21:11:07.603265 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 21:11:07.719093 sudo[2440]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:11:07.719805 sudo[2440]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:11:08.212069 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 21:11:08.214458 (dockerd)[2455]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 21:11:08.607314 dockerd[2455]: time="2025-01-13T21:11:08.607197613Z" level=info msg="Starting up" Jan 13 21:11:08.742415 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport331455427-merged.mount: Deactivated successfully. Jan 13 21:11:09.526520 dockerd[2455]: time="2025-01-13T21:11:09.526433930Z" level=info msg="Loading containers: start." Jan 13 21:11:09.696680 kernel: Initializing XFRM netlink socket Jan 13 21:11:09.728724 (udev-worker)[2479]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:11:09.811404 systemd-networkd[1690]: docker0: Link UP Jan 13 21:11:09.835976 dockerd[2455]: time="2025-01-13T21:11:09.835922523Z" level=info msg="Loading containers: done." 
Jan 13 21:11:09.873449 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1853717521-merged.mount: Deactivated successfully. Jan 13 21:11:09.902023 dockerd[2455]: time="2025-01-13T21:11:09.901952271Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 21:11:09.902377 dockerd[2455]: time="2025-01-13T21:11:09.902119359Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 13 21:11:09.902377 dockerd[2455]: time="2025-01-13T21:11:09.902339043Z" level=info msg="Daemon has completed initialization" Jan 13 21:11:09.977711 dockerd[2455]: time="2025-01-13T21:11:09.976952800Z" level=info msg="API listen on /run/docker.sock" Jan 13 21:11:09.977129 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 21:11:11.171332 containerd[2130]: time="2025-01-13T21:11:11.170994182Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 21:11:11.351356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 21:11:11.364944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:11.900167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:11.912402 (kubelet)[2613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:11:12.019805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3369374677.mount: Deactivated successfully. 
Jan 13 21:11:12.040376 kubelet[2613]: E0113 21:11:12.040279 2613 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:11:12.049372 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:11:12.050181 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:11:13.865215 containerd[2130]: time="2025-01-13T21:11:13.865150891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:13.869246 containerd[2130]: time="2025-01-13T21:11:13.869177923Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Jan 13 21:11:13.872632 containerd[2130]: time="2025-01-13T21:11:13.871880107Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:13.879772 containerd[2130]: time="2025-01-13T21:11:13.879701923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:13.882376 containerd[2130]: time="2025-01-13T21:11:13.882292315Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.711235241s" Jan 13 
21:11:13.882376 containerd[2130]: time="2025-01-13T21:11:13.882368323Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Jan 13 21:11:13.922191 containerd[2130]: time="2025-01-13T21:11:13.922111591Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 21:11:15.814044 containerd[2130]: time="2025-01-13T21:11:15.813984669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:15.817047 containerd[2130]: time="2025-01-13T21:11:15.816979701Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Jan 13 21:11:15.819021 containerd[2130]: time="2025-01-13T21:11:15.818923917Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:15.825588 containerd[2130]: time="2025-01-13T21:11:15.825492945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:15.828130 containerd[2130]: time="2025-01-13T21:11:15.828060021Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.905876706s" Jan 13 21:11:15.828472 containerd[2130]: time="2025-01-13T21:11:15.828317973Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Jan 13 21:11:15.872390 containerd[2130]: time="2025-01-13T21:11:15.872309949Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 21:11:17.104307 containerd[2130]: time="2025-01-13T21:11:17.104229007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:17.106475 containerd[2130]: time="2025-01-13T21:11:17.106413643Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Jan 13 21:11:17.109334 containerd[2130]: time="2025-01-13T21:11:17.109233439Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:17.115206 containerd[2130]: time="2025-01-13T21:11:17.115149295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:17.118184 containerd[2130]: time="2025-01-13T21:11:17.117617191Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.245211734s" Jan 13 21:11:17.118184 containerd[2130]: time="2025-01-13T21:11:17.117696991Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Jan 13 21:11:17.155623 
containerd[2130]: time="2025-01-13T21:11:17.155489971Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 21:11:18.719693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3056775960.mount: Deactivated successfully. Jan 13 21:11:19.264279 containerd[2130]: time="2025-01-13T21:11:19.264175846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:19.266035 containerd[2130]: time="2025-01-13T21:11:19.265950070Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Jan 13 21:11:19.268412 containerd[2130]: time="2025-01-13T21:11:19.268323490Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:19.273190 containerd[2130]: time="2025-01-13T21:11:19.273072058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:19.275045 containerd[2130]: time="2025-01-13T21:11:19.274813738Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 2.119238603s" Jan 13 21:11:19.275045 containerd[2130]: time="2025-01-13T21:11:19.274879462Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 13 21:11:19.314311 containerd[2130]: time="2025-01-13T21:11:19.314259838Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 21:11:19.913662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243502139.mount: Deactivated successfully. Jan 13 21:11:22.101813 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 21:11:22.114164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:22.569691 containerd[2130]: time="2025-01-13T21:11:22.569612510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:22.577710 containerd[2130]: time="2025-01-13T21:11:22.577382990Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 13 21:11:22.581889 containerd[2130]: time="2025-01-13T21:11:22.579300326Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:22.591045 containerd[2130]: time="2025-01-13T21:11:22.590979074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:22.592849 containerd[2130]: time="2025-01-13T21:11:22.592791554Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 3.278193976s" Jan 13 21:11:22.593041 containerd[2130]: time="2025-01-13T21:11:22.593008682Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference 
\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 21:11:22.613925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:22.624335 (kubelet)[2767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:11:22.647783 containerd[2130]: time="2025-01-13T21:11:22.646249515Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 21:11:22.710999 kubelet[2767]: E0113 21:11:22.710932 2767 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:11:22.716435 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:11:22.717094 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:11:23.170299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1125753572.mount: Deactivated successfully. 
Jan 13 21:11:23.177358 containerd[2130]: time="2025-01-13T21:11:23.177280921Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:23.180743 containerd[2130]: time="2025-01-13T21:11:23.180686305Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 13 21:11:23.182485 containerd[2130]: time="2025-01-13T21:11:23.182410777Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:23.189644 containerd[2130]: time="2025-01-13T21:11:23.187843693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:23.191224 containerd[2130]: time="2025-01-13T21:11:23.191139505Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 544.787906ms" Jan 13 21:11:23.191224 containerd[2130]: time="2025-01-13T21:11:23.191212513Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 21:11:23.232902 containerd[2130]: time="2025-01-13T21:11:23.232838834Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 21:11:23.875240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount186238045.mount: Deactivated successfully. 
Jan 13 21:11:25.962256 containerd[2130]: time="2025-01-13T21:11:25.961088863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:25.963637 containerd[2130]: time="2025-01-13T21:11:25.963544927Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jan 13 21:11:25.965763 containerd[2130]: time="2025-01-13T21:11:25.965675335Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:25.975011 containerd[2130]: time="2025-01-13T21:11:25.974906047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:11:25.977887 containerd[2130]: time="2025-01-13T21:11:25.977531635Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.744630977s" Jan 13 21:11:25.977887 containerd[2130]: time="2025-01-13T21:11:25.977589103Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 13 21:11:29.762653 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 21:11:32.851459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 21:11:32.860082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:33.431947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:11:33.450390 (kubelet)[2910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 21:11:33.548538 kubelet[2910]: E0113 21:11:33.548468 2910 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 21:11:33.560109 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 21:11:33.560531 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 21:11:36.021522 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:36.037116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:36.083501 systemd[1]: Reloading requested from client PID 2928 ('systemctl') (unit session-5.scope)... Jan 13 21:11:36.083529 systemd[1]: Reloading... Jan 13 21:11:36.286642 zram_generator::config[2971]: No configuration found. Jan 13 21:11:36.564190 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:11:36.736645 systemd[1]: Reloading finished in 652 ms. Jan 13 21:11:36.823031 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 21:11:36.823242 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 21:11:36.826023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:36.838004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:37.285999 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:11:37.304380 (kubelet)[3044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:11:37.399302 kubelet[3044]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:11:37.400688 kubelet[3044]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:11:37.400688 kubelet[3044]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:11:37.400688 kubelet[3044]: I0113 21:11:37.400093 3044 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:11:38.579643 kubelet[3044]: I0113 21:11:38.577730 3044 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:11:38.579643 kubelet[3044]: I0113 21:11:38.577783 3044 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:11:38.579643 kubelet[3044]: I0113 21:11:38.578231 3044 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:11:38.610640 kubelet[3044]: E0113 21:11:38.610540 3044 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:38.613113 kubelet[3044]: I0113 21:11:38.613018 3044 dynamic_cafile_content.go:157] 
"Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:11:38.631794 kubelet[3044]: I0113 21:11:38.631723 3044 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:11:38.632623 kubelet[3044]: I0113 21:11:38.632560 3044 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:11:38.633069 kubelet[3044]: I0113 21:11:38.632996 3044 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:11:38.633069 kubelet[3044]: I0113 21:11:38.633069 3044 
topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:11:38.633383 kubelet[3044]: I0113 21:11:38.633095 3044 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:11:38.636799 kubelet[3044]: I0113 21:11:38.636709 3044 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:11:38.641440 kubelet[3044]: I0113 21:11:38.641363 3044 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:11:38.641440 kubelet[3044]: I0113 21:11:38.641429 3044 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:11:38.643693 kubelet[3044]: I0113 21:11:38.641480 3044 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:11:38.643693 kubelet[3044]: I0113 21:11:38.641518 3044 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:11:38.645101 kubelet[3044]: W0113 21:11:38.644995 3044 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.24.5:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:38.645238 kubelet[3044]: E0113 21:11:38.645116 3044 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.5:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:38.646181 kubelet[3044]: W0113 21:11:38.646067 3044 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.24.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-5&limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:38.646322 kubelet[3044]: E0113 21:11:38.646196 3044 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://172.31.24.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-5&limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:38.646470 kubelet[3044]: I0113 21:11:38.646411 3044 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:11:38.647120 kubelet[3044]: I0113 21:11:38.647047 3044 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:11:38.647254 kubelet[3044]: W0113 21:11:38.647196 3044 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:11:38.648512 kubelet[3044]: I0113 21:11:38.648432 3044 server.go:1256] "Started kubelet" Jan 13 21:11:38.658267 kubelet[3044]: E0113 21:11:38.658225 3044 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.5:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-5.181a5cda39e077e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-5,UID:ip-172-31-24-5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-5,},FirstTimestamp:2025-01-13 21:11:38.648385506 +0000 UTC m=+1.336285291,LastTimestamp:2025-01-13 21:11:38.648385506 +0000 UTC m=+1.336285291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-5,}" Jan 13 21:11:38.659329 kubelet[3044]: I0113 21:11:38.659279 3044 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:11:38.665376 kubelet[3044]: I0113 21:11:38.665333 3044 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:11:38.667098 
kubelet[3044]: I0113 21:11:38.667061 3044 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:11:38.669143 kubelet[3044]: I0113 21:11:38.669101 3044 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:11:38.669646 kubelet[3044]: I0113 21:11:38.669574 3044 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:11:38.670945 kubelet[3044]: I0113 21:11:38.670904 3044 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:11:38.671335 kubelet[3044]: I0113 21:11:38.671305 3044 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:11:38.671574 kubelet[3044]: I0113 21:11:38.671547 3044 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:11:38.672461 kubelet[3044]: W0113 21:11:38.672375 3044 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.24.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:38.672720 kubelet[3044]: E0113 21:11:38.672693 3044 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:38.673958 kubelet[3044]: E0113 21:11:38.673908 3044 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-5?timeout=10s\": dial tcp 172.31.24.5:6443: connect: connection refused" interval="200ms" Jan 13 21:11:38.674685 kubelet[3044]: E0113 21:11:38.674629 3044 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:11:38.676884 kubelet[3044]: I0113 21:11:38.676845 3044 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:11:38.677257 kubelet[3044]: I0113 21:11:38.677225 3044 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:11:38.679744 kubelet[3044]: I0113 21:11:38.679687 3044 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:11:38.696357 kubelet[3044]: I0113 21:11:38.696294 3044 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:11:38.698738 kubelet[3044]: I0113 21:11:38.698683 3044 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:11:38.698738 kubelet[3044]: I0113 21:11:38.698733 3044 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:11:38.698946 kubelet[3044]: I0113 21:11:38.698772 3044 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:11:38.698946 kubelet[3044]: E0113 21:11:38.698856 3044 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:11:38.732229 kubelet[3044]: W0113 21:11:38.732128 3044 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.24.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:38.732229 kubelet[3044]: E0113 21:11:38.732240 3044 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://172.31.24.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:38.753541 kubelet[3044]: I0113 21:11:38.753170 3044 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:11:38.753541 kubelet[3044]: I0113 21:11:38.753206 3044 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:11:38.753541 kubelet[3044]: I0113 21:11:38.753238 3044 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:11:38.756545 kubelet[3044]: I0113 21:11:38.756498 3044 policy_none.go:49] "None policy: Start" Jan 13 21:11:38.758950 kubelet[3044]: I0113 21:11:38.758226 3044 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:11:38.758950 kubelet[3044]: I0113 21:11:38.758315 3044 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:11:38.769697 kubelet[3044]: I0113 21:11:38.768932 3044 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:11:38.769697 kubelet[3044]: I0113 21:11:38.769398 3044 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:11:38.781718 kubelet[3044]: I0113 21:11:38.781189 3044 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-5" Jan 13 21:11:38.782222 kubelet[3044]: E0113 21:11:38.782164 3044 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-5\" not found" Jan 13 21:11:38.782496 kubelet[3044]: E0113 21:11:38.782463 3044 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.5:6443/api/v1/nodes\": dial tcp 172.31.24.5:6443: connect: connection refused" node="ip-172-31-24-5" Jan 13 21:11:38.799896 kubelet[3044]: I0113 21:11:38.799741 3044 topology_manager.go:215] "Topology Admit Handler" podUID="35a4cb78f87596f5f9668a7d29dcf478" podNamespace="kube-system" 
podName="kube-apiserver-ip-172-31-24-5" Jan 13 21:11:38.802427 kubelet[3044]: I0113 21:11:38.802064 3044 topology_manager.go:215] "Topology Admit Handler" podUID="2c8d6724ca1d6f683ef5131d1a3f6e85" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-5" Jan 13 21:11:38.804857 kubelet[3044]: I0113 21:11:38.804784 3044 topology_manager.go:215] "Topology Admit Handler" podUID="95e20974b9daa8fd60919308dda4db65" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-5" Jan 13 21:11:38.875514 kubelet[3044]: E0113 21:11:38.875461 3044 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-5?timeout=10s\": dial tcp 172.31.24.5:6443: connect: connection refused" interval="400ms" Jan 13 21:11:38.876811 kubelet[3044]: I0113 21:11:38.876739 3044 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35a4cb78f87596f5f9668a7d29dcf478-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-5\" (UID: \"35a4cb78f87596f5f9668a7d29dcf478\") " pod="kube-system/kube-apiserver-ip-172-31-24-5" Jan 13 21:11:38.876968 kubelet[3044]: I0113 21:11:38.876832 3044 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35a4cb78f87596f5f9668a7d29dcf478-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-5\" (UID: \"35a4cb78f87596f5f9668a7d29dcf478\") " pod="kube-system/kube-apiserver-ip-172-31-24-5" Jan 13 21:11:38.876968 kubelet[3044]: I0113 21:11:38.876891 3044 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2c8d6724ca1d6f683ef5131d1a3f6e85-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-5\" (UID: \"2c8d6724ca1d6f683ef5131d1a3f6e85\") 
" pod="kube-system/kube-controller-manager-ip-172-31-24-5" Jan 13 21:11:38.876968 kubelet[3044]: I0113 21:11:38.876947 3044 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c8d6724ca1d6f683ef5131d1a3f6e85-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-5\" (UID: \"2c8d6724ca1d6f683ef5131d1a3f6e85\") " pod="kube-system/kube-controller-manager-ip-172-31-24-5" Jan 13 21:11:38.877132 kubelet[3044]: I0113 21:11:38.876994 3044 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95e20974b9daa8fd60919308dda4db65-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-5\" (UID: \"95e20974b9daa8fd60919308dda4db65\") " pod="kube-system/kube-scheduler-ip-172-31-24-5" Jan 13 21:11:38.877132 kubelet[3044]: I0113 21:11:38.877067 3044 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35a4cb78f87596f5f9668a7d29dcf478-ca-certs\") pod \"kube-apiserver-ip-172-31-24-5\" (UID: \"35a4cb78f87596f5f9668a7d29dcf478\") " pod="kube-system/kube-apiserver-ip-172-31-24-5" Jan 13 21:11:38.877132 kubelet[3044]: I0113 21:11:38.877115 3044 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c8d6724ca1d6f683ef5131d1a3f6e85-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-5\" (UID: \"2c8d6724ca1d6f683ef5131d1a3f6e85\") " pod="kube-system/kube-controller-manager-ip-172-31-24-5" Jan 13 21:11:38.877295 kubelet[3044]: I0113 21:11:38.877161 3044 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c8d6724ca1d6f683ef5131d1a3f6e85-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-5\" 
(UID: \"2c8d6724ca1d6f683ef5131d1a3f6e85\") " pod="kube-system/kube-controller-manager-ip-172-31-24-5" Jan 13 21:11:38.877295 kubelet[3044]: I0113 21:11:38.877222 3044 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c8d6724ca1d6f683ef5131d1a3f6e85-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-5\" (UID: \"2c8d6724ca1d6f683ef5131d1a3f6e85\") " pod="kube-system/kube-controller-manager-ip-172-31-24-5" Jan 13 21:11:38.985554 kubelet[3044]: I0113 21:11:38.985439 3044 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-5" Jan 13 21:11:38.986215 kubelet[3044]: E0113 21:11:38.986144 3044 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.5:6443/api/v1/nodes\": dial tcp 172.31.24.5:6443: connect: connection refused" node="ip-172-31-24-5" Jan 13 21:11:39.115671 containerd[2130]: time="2025-01-13T21:11:39.115547381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-5,Uid:35a4cb78f87596f5f9668a7d29dcf478,Namespace:kube-system,Attempt:0,}" Jan 13 21:11:39.120744 containerd[2130]: time="2025-01-13T21:11:39.120650993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-5,Uid:2c8d6724ca1d6f683ef5131d1a3f6e85,Namespace:kube-system,Attempt:0,}" Jan 13 21:11:39.127243 containerd[2130]: time="2025-01-13T21:11:39.126756569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-5,Uid:95e20974b9daa8fd60919308dda4db65,Namespace:kube-system,Attempt:0,}" Jan 13 21:11:39.276816 kubelet[3044]: E0113 21:11:39.276770 3044 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-5?timeout=10s\": dial tcp 172.31.24.5:6443: connect: connection refused" interval="800ms" Jan 13 21:11:39.388893 
kubelet[3044]: I0113 21:11:39.388727 3044 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-5" Jan 13 21:11:39.389695 kubelet[3044]: E0113 21:11:39.389648 3044 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.5:6443/api/v1/nodes\": dial tcp 172.31.24.5:6443: connect: connection refused" node="ip-172-31-24-5" Jan 13 21:11:39.503970 kubelet[3044]: W0113 21:11:39.503882 3044 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.24.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:39.504186 kubelet[3044]: E0113 21:11:39.504149 3044 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:39.600088 kubelet[3044]: E0113 21:11:39.600030 3044 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.5:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-5.181a5cda39e077e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-5,UID:ip-172-31-24-5,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-5,},FirstTimestamp:2025-01-13 21:11:38.648385506 +0000 UTC m=+1.336285291,LastTimestamp:2025-01-13 21:11:38.648385506 +0000 UTC m=+1.336285291,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-5,}" Jan 13 21:11:39.692316 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount767322446.mount: Deactivated successfully. Jan 13 21:11:39.704712 containerd[2130]: time="2025-01-13T21:11:39.704568703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:11:39.706437 containerd[2130]: time="2025-01-13T21:11:39.706357531Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:11:39.708394 containerd[2130]: time="2025-01-13T21:11:39.708316087Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:11:39.709321 containerd[2130]: time="2025-01-13T21:11:39.709258603Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 13 21:11:39.712863 containerd[2130]: time="2025-01-13T21:11:39.712124683Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:11:39.715046 containerd[2130]: time="2025-01-13T21:11:39.714758408Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:11:39.715046 containerd[2130]: time="2025-01-13T21:11:39.714881876Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:11:39.721590 containerd[2130]: time="2025-01-13T21:11:39.721416272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:11:39.726357 containerd[2130]: time="2025-01-13T21:11:39.726296648Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 605.531511ms" Jan 13 21:11:39.731159 containerd[2130]: time="2025-01-13T21:11:39.731071892Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 604.189755ms" Jan 13 21:11:39.732465 containerd[2130]: time="2025-01-13T21:11:39.732182972Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 616.479135ms" Jan 13 21:11:39.763696 kubelet[3044]: W0113 21:11:39.761152 3044 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.24.5:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:39.763696 kubelet[3044]: E0113 21:11:39.761246 3044 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.5:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:39.797967 kubelet[3044]: W0113 21:11:39.797876 3044 
reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.24.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-5&limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:39.797967 kubelet[3044]: E0113 21:11:39.797970 3044 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-5&limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:39.834309 kubelet[3044]: W0113 21:11:39.834253 3044 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.24.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:39.835787 kubelet[3044]: E0113 21:11:39.835696 3044 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.5:6443: connect: connection refused Jan 13 21:11:39.976315 containerd[2130]: time="2025-01-13T21:11:39.971852169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:11:39.976315 containerd[2130]: time="2025-01-13T21:11:39.971975637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:11:39.976315 containerd[2130]: time="2025-01-13T21:11:39.972015873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:39.976315 containerd[2130]: time="2025-01-13T21:11:39.975358581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:11:39.976315 containerd[2130]: time="2025-01-13T21:11:39.975983493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:11:39.976315 containerd[2130]: time="2025-01-13T21:11:39.976040517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:39.983929 containerd[2130]: time="2025-01-13T21:11:39.982319073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:39.984141 containerd[2130]: time="2025-01-13T21:11:39.981250953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:39.991999 containerd[2130]: time="2025-01-13T21:11:39.991026561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:11:39.991999 containerd[2130]: time="2025-01-13T21:11:39.991137621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:11:39.991999 containerd[2130]: time="2025-01-13T21:11:39.991174197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:39.991999 containerd[2130]: time="2025-01-13T21:11:39.991345557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:11:40.081936 kubelet[3044]: E0113 21:11:40.081843 3044 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-5?timeout=10s\": dial tcp 172.31.24.5:6443: connect: connection refused" interval="1.6s" Jan 13 21:11:40.136981 containerd[2130]: time="2025-01-13T21:11:40.136131798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-5,Uid:2c8d6724ca1d6f683ef5131d1a3f6e85,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e6524151f506ff67342b75b303af861802e10bd2f10c6d21a742357f9c0f486\"" Jan 13 21:11:40.152799 containerd[2130]: time="2025-01-13T21:11:40.152221494Z" level=info msg="CreateContainer within sandbox \"4e6524151f506ff67342b75b303af861802e10bd2f10c6d21a742357f9c0f486\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:11:40.189639 containerd[2130]: time="2025-01-13T21:11:40.187380246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-5,Uid:35a4cb78f87596f5f9668a7d29dcf478,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf6e51d9d14e043c20c06664f18e4187b4566d67d83d29235f5a3c82963e26b8\"" Jan 13 21:11:40.189639 containerd[2130]: time="2025-01-13T21:11:40.188795718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-5,Uid:95e20974b9daa8fd60919308dda4db65,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e5bd15ade297af575a1a1618ae3a26071e04a78918cb47e44299cdcfc853e47\"" Jan 13 21:11:40.195858 kubelet[3044]: I0113 21:11:40.195811 3044 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-5" Jan 13 21:11:40.197275 kubelet[3044]: E0113 21:11:40.197216 3044 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.5:6443/api/v1/nodes\": dial tcp 
172.31.24.5:6443: connect: connection refused" node="ip-172-31-24-5" Jan 13 21:11:40.198513 containerd[2130]: time="2025-01-13T21:11:40.198433698Z" level=info msg="CreateContainer within sandbox \"cf6e51d9d14e043c20c06664f18e4187b4566d67d83d29235f5a3c82963e26b8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:11:40.199280 containerd[2130]: time="2025-01-13T21:11:40.199203078Z" level=info msg="CreateContainer within sandbox \"3e5bd15ade297af575a1a1618ae3a26071e04a78918cb47e44299cdcfc853e47\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:11:40.202124 containerd[2130]: time="2025-01-13T21:11:40.202063494Z" level=info msg="CreateContainer within sandbox \"4e6524151f506ff67342b75b303af861802e10bd2f10c6d21a742357f9c0f486\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4a7d381cd718340534401e8bcc3f0ef353f1f8fa5062e2f268eac1aa08815fb3\"" Jan 13 21:11:40.214703 containerd[2130]: time="2025-01-13T21:11:40.214537446Z" level=info msg="StartContainer for \"4a7d381cd718340534401e8bcc3f0ef353f1f8fa5062e2f268eac1aa08815fb3\"" Jan 13 21:11:40.229558 containerd[2130]: time="2025-01-13T21:11:40.228386118Z" level=info msg="CreateContainer within sandbox \"cf6e51d9d14e043c20c06664f18e4187b4566d67d83d29235f5a3c82963e26b8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7cb16348cda7c40ab438d1b80a7284a5eae9e9a3b1991783df02f13769264c0a\"" Jan 13 21:11:40.230443 containerd[2130]: time="2025-01-13T21:11:40.230280666Z" level=info msg="StartContainer for \"7cb16348cda7c40ab438d1b80a7284a5eae9e9a3b1991783df02f13769264c0a\"" Jan 13 21:11:40.261930 containerd[2130]: time="2025-01-13T21:11:40.261283674Z" level=info msg="CreateContainer within sandbox \"3e5bd15ade297af575a1a1618ae3a26071e04a78918cb47e44299cdcfc853e47\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"72c3c82ae0ad059437f0b8c1ceb2b076991268fc15b5597036b26626f9d03839\"" Jan 13 
21:11:40.265084 containerd[2130]: time="2025-01-13T21:11:40.264996162Z" level=info msg="StartContainer for \"72c3c82ae0ad059437f0b8c1ceb2b076991268fc15b5597036b26626f9d03839\"" Jan 13 21:11:40.392502 containerd[2130]: time="2025-01-13T21:11:40.392439523Z" level=info msg="StartContainer for \"4a7d381cd718340534401e8bcc3f0ef353f1f8fa5062e2f268eac1aa08815fb3\" returns successfully" Jan 13 21:11:40.468992 containerd[2130]: time="2025-01-13T21:11:40.468911239Z" level=info msg="StartContainer for \"7cb16348cda7c40ab438d1b80a7284a5eae9e9a3b1991783df02f13769264c0a\" returns successfully" Jan 13 21:11:40.540891 containerd[2130]: time="2025-01-13T21:11:40.540090440Z" level=info msg="StartContainer for \"72c3c82ae0ad059437f0b8c1ceb2b076991268fc15b5597036b26626f9d03839\" returns successfully" Jan 13 21:11:41.802653 kubelet[3044]: I0113 21:11:41.802516 3044 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-5" Jan 13 21:11:43.808745 update_engine[2117]: I20250113 21:11:43.808648 2117 update_attempter.cc:509] Updating boot flags... 
Jan 13 21:11:44.028657 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3330) Jan 13 21:11:44.653152 kubelet[3044]: I0113 21:11:44.649859 3044 apiserver.go:52] "Watching apiserver" Jan 13 21:11:44.742163 kubelet[3044]: I0113 21:11:44.741935 3044 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-5" Jan 13 21:11:44.772945 kubelet[3044]: I0113 21:11:44.772768 3044 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:11:44.866340 kubelet[3044]: E0113 21:11:44.866267 3044 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 13 21:11:44.953883 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3331) Jan 13 21:11:47.498786 systemd[1]: Reloading requested from client PID 3500 ('systemctl') (unit session-5.scope)... Jan 13 21:11:47.498811 systemd[1]: Reloading... Jan 13 21:11:47.663649 zram_generator::config[3543]: No configuration found. Jan 13 21:11:47.949567 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:11:48.147248 systemd[1]: Reloading finished in 647 ms. Jan 13 21:11:48.223208 kubelet[3044]: I0113 21:11:48.222310 3044 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:11:48.222775 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:11:48.240168 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:11:48.241940 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:48.251361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 21:11:48.727029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:11:48.739356 (kubelet)[3610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:11:48.847573 kubelet[3610]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:11:48.847573 kubelet[3610]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:11:48.847573 kubelet[3610]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:11:48.848155 kubelet[3610]: I0113 21:11:48.847768 3610 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:11:48.869482 kubelet[3610]: I0113 21:11:48.869437 3610 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:11:48.869482 kubelet[3610]: I0113 21:11:48.869487 3610 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:11:48.869883 kubelet[3610]: I0113 21:11:48.869853 3610 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:11:48.875953 kubelet[3610]: I0113 21:11:48.875516 3610 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 13 21:11:48.879672 kubelet[3610]: I0113 21:11:48.879629 3610 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:11:48.892124 kubelet[3610]: I0113 21:11:48.892087 3610 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:11:48.893350 kubelet[3610]: I0113 21:11:48.893319 3610 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:11:48.893988 kubelet[3610]: I0113 21:11:48.893848 3610 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions"
:null} Jan 13 21:11:48.893988 kubelet[3610]: I0113 21:11:48.893896 3610 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:11:48.893988 kubelet[3610]: I0113 21:11:48.893916 3610 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:11:48.894642 kubelet[3610]: I0113 21:11:48.894330 3610 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:11:48.894642 kubelet[3610]: I0113 21:11:48.894548 3610 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:11:48.895228 kubelet[3610]: I0113 21:11:48.894582 3610 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:11:48.895391 kubelet[3610]: I0113 21:11:48.895368 3610 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:11:48.895496 kubelet[3610]: I0113 21:11:48.895478 3610 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:11:48.897433 kubelet[3610]: I0113 21:11:48.897211 3610 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:11:48.898916 kubelet[3610]: I0113 21:11:48.898870 3610 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:11:48.901191 kubelet[3610]: I0113 21:11:48.900846 3610 server.go:1256] "Started kubelet" Jan 13 21:11:48.917351 kubelet[3610]: I0113 21:11:48.917306 3610 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:11:48.927148 kubelet[3610]: I0113 21:11:48.926880 3610 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:11:48.931279 kubelet[3610]: I0113 21:11:48.931240 3610 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:11:48.936033 kubelet[3610]: I0113 21:11:48.935995 3610 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:11:48.941415 kubelet[3610]: I0113 
21:11:48.941377 3610 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:11:48.957774 kubelet[3610]: I0113 21:11:48.957723 3610 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:11:48.958455 kubelet[3610]: I0113 21:11:48.958409 3610 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:11:48.958780 kubelet[3610]: I0113 21:11:48.958748 3610 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:11:49.000701 kubelet[3610]: I0113 21:11:48.998240 3610 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:11:49.001204 kubelet[3610]: I0113 21:11:49.001050 3610 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:11:49.028885 kubelet[3610]: E0113 21:11:49.028808 3610 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:11:49.031840 kubelet[3610]: I0113 21:11:49.031804 3610 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:11:49.048008 kubelet[3610]: I0113 21:11:49.047949 3610 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:11:49.052991 kubelet[3610]: I0113 21:11:49.052930 3610 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:11:49.052991 kubelet[3610]: I0113 21:11:49.052981 3610 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:11:49.053214 kubelet[3610]: I0113 21:11:49.053013 3610 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:11:49.053214 kubelet[3610]: E0113 21:11:49.053099 3610 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:11:49.085444 kubelet[3610]: I0113 21:11:49.085343 3610 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-5" Jan 13 21:11:49.106868 kubelet[3610]: I0113 21:11:49.106703 3610 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-24-5" Jan 13 21:11:49.106868 kubelet[3610]: I0113 21:11:49.106809 3610 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-5" Jan 13 21:11:49.153638 kubelet[3610]: E0113 21:11:49.153574 3610 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:11:49.239747 kubelet[3610]: I0113 21:11:49.238997 3610 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:11:49.239747 kubelet[3610]: I0113 21:11:49.239146 3610 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:11:49.239747 kubelet[3610]: I0113 21:11:49.239187 3610 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:11:49.239747 kubelet[3610]: I0113 21:11:49.239474 3610 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:11:49.239747 kubelet[3610]: I0113 21:11:49.239524 3610 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:11:49.239747 kubelet[3610]: I0113 21:11:49.239542 3610 policy_none.go:49] "None policy: Start" Jan 13 21:11:49.242499 kubelet[3610]: I0113 21:11:49.242417 3610 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:11:49.242499 kubelet[3610]: I0113 
21:11:49.242479 3610 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:11:49.242856 kubelet[3610]: I0113 21:11:49.242800 3610 state_mem.go:75] "Updated machine memory state" Jan 13 21:11:49.249541 kubelet[3610]: I0113 21:11:49.248428 3610 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:11:49.250205 kubelet[3610]: I0113 21:11:49.250156 3610 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:11:49.354878 kubelet[3610]: I0113 21:11:49.354815 3610 topology_manager.go:215] "Topology Admit Handler" podUID="35a4cb78f87596f5f9668a7d29dcf478" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-5" Jan 13 21:11:49.355017 kubelet[3610]: I0113 21:11:49.354983 3610 topology_manager.go:215] "Topology Admit Handler" podUID="2c8d6724ca1d6f683ef5131d1a3f6e85" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-5" Jan 13 21:11:49.355667 kubelet[3610]: I0113 21:11:49.355101 3610 topology_manager.go:215] "Topology Admit Handler" podUID="95e20974b9daa8fd60919308dda4db65" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-5" Jan 13 21:11:49.367220 kubelet[3610]: I0113 21:11:49.365983 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35a4cb78f87596f5f9668a7d29dcf478-ca-certs\") pod \"kube-apiserver-ip-172-31-24-5\" (UID: \"35a4cb78f87596f5f9668a7d29dcf478\") " pod="kube-system/kube-apiserver-ip-172-31-24-5" Jan 13 21:11:49.367220 kubelet[3610]: I0113 21:11:49.366077 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2c8d6724ca1d6f683ef5131d1a3f6e85-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-5\" (UID: \"2c8d6724ca1d6f683ef5131d1a3f6e85\") " pod="kube-system/kube-controller-manager-ip-172-31-24-5" Jan 13 
21:11:49.367220 kubelet[3610]: I0113 21:11:49.366127 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c8d6724ca1d6f683ef5131d1a3f6e85-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-5\" (UID: \"2c8d6724ca1d6f683ef5131d1a3f6e85\") " pod="kube-system/kube-controller-manager-ip-172-31-24-5" Jan 13 21:11:49.367220 kubelet[3610]: I0113 21:11:49.366182 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2c8d6724ca1d6f683ef5131d1a3f6e85-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-5\" (UID: \"2c8d6724ca1d6f683ef5131d1a3f6e85\") " pod="kube-system/kube-controller-manager-ip-172-31-24-5" Jan 13 21:11:49.367220 kubelet[3610]: I0113 21:11:49.366228 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95e20974b9daa8fd60919308dda4db65-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-5\" (UID: \"95e20974b9daa8fd60919308dda4db65\") " pod="kube-system/kube-scheduler-ip-172-31-24-5" Jan 13 21:11:49.367637 kubelet[3610]: I0113 21:11:49.366271 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35a4cb78f87596f5f9668a7d29dcf478-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-5\" (UID: \"35a4cb78f87596f5f9668a7d29dcf478\") " pod="kube-system/kube-apiserver-ip-172-31-24-5" Jan 13 21:11:49.367637 kubelet[3610]: I0113 21:11:49.366317 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35a4cb78f87596f5f9668a7d29dcf478-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-5\" (UID: 
\"35a4cb78f87596f5f9668a7d29dcf478\") " pod="kube-system/kube-apiserver-ip-172-31-24-5" Jan 13 21:11:49.367637 kubelet[3610]: I0113 21:11:49.366368 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2c8d6724ca1d6f683ef5131d1a3f6e85-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-5\" (UID: \"2c8d6724ca1d6f683ef5131d1a3f6e85\") " pod="kube-system/kube-controller-manager-ip-172-31-24-5" Jan 13 21:11:49.367637 kubelet[3610]: I0113 21:11:49.366414 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2c8d6724ca1d6f683ef5131d1a3f6e85-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-5\" (UID: \"2c8d6724ca1d6f683ef5131d1a3f6e85\") " pod="kube-system/kube-controller-manager-ip-172-31-24-5" Jan 13 21:11:49.910444 kubelet[3610]: I0113 21:11:49.910331 3610 apiserver.go:52] "Watching apiserver" Jan 13 21:11:49.959125 kubelet[3610]: I0113 21:11:49.959040 3610 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:11:50.168463 kubelet[3610]: E0113 21:11:50.166452 3610 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-24-5\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-5" Jan 13 21:11:50.211983 kubelet[3610]: I0113 21:11:50.211110 3610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-5" podStartSLOduration=1.210982348 podStartE2EDuration="1.210982348s" podCreationTimestamp="2025-01-13 21:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:11:50.198527932 +0000 UTC m=+1.452588429" watchObservedRunningTime="2025-01-13 21:11:50.210982348 +0000 UTC m=+1.465042821" Jan 13 21:11:50.211983 kubelet[3610]: I0113 
21:11:50.211274 3610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-5" podStartSLOduration=1.2112382 podStartE2EDuration="1.2112382s" podCreationTimestamp="2025-01-13 21:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:11:50.210678208 +0000 UTC m=+1.464738705" watchObservedRunningTime="2025-01-13 21:11:50.2112382 +0000 UTC m=+1.465298661" Jan 13 21:11:50.294403 kubelet[3610]: I0113 21:11:50.293996 3610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-5" podStartSLOduration=1.293937256 podStartE2EDuration="1.293937256s" podCreationTimestamp="2025-01-13 21:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:11:50.232525036 +0000 UTC m=+1.486585509" watchObservedRunningTime="2025-01-13 21:11:50.293937256 +0000 UTC m=+1.547997729" Jan 13 21:11:50.589509 sudo[2440]: pam_unix(sudo:session): session closed for user root Jan 13 21:11:50.614110 sshd[2436]: pam_unix(sshd:session): session closed for user core Jan 13 21:11:50.620768 systemd[1]: sshd@4-172.31.24.5:22-139.178.89.65:54386.service: Deactivated successfully. Jan 13 21:11:50.627577 systemd-logind[2111]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:11:50.630081 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:11:50.634884 systemd-logind[2111]: Removed session 5. Jan 13 21:12:01.600989 kubelet[3610]: I0113 21:12:01.600937 3610 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:12:01.606095 containerd[2130]: time="2025-01-13T21:12:01.603976276Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 21:12:01.606751 kubelet[3610]: I0113 21:12:01.604388 3610 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:12:01.769630 kubelet[3610]: I0113 21:12:01.767443 3610 topology_manager.go:215] "Topology Admit Handler" podUID="fbf17bf7-8504-4de5-ac3f-19dc27328c82" podNamespace="kube-system" podName="kube-proxy-fdndq" Jan 13 21:12:01.789965 kubelet[3610]: W0113 21:12:01.789847 3610 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-24-5" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-5' and this object Jan 13 21:12:01.789965 kubelet[3610]: E0113 21:12:01.789911 3610 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-24-5" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-5' and this object Jan 13 21:12:01.791019 kubelet[3610]: W0113 21:12:01.789847 3610 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-24-5" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-5' and this object Jan 13 21:12:01.791019 kubelet[3610]: E0113 21:12:01.790971 3610 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-24-5" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-24-5' and this object Jan 13 21:12:01.792244 kubelet[3610]: I0113 21:12:01.792172 
3610 topology_manager.go:215] "Topology Admit Handler" podUID="1f9091b1-4a64-4bcb-9b50-aab33bd0eea3" podNamespace="kube-flannel" podName="kube-flannel-ds-gkctf" Jan 13 21:12:01.850524 kubelet[3610]: I0113 21:12:01.850450 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/1f9091b1-4a64-4bcb-9b50-aab33bd0eea3-cni-plugin\") pod \"kube-flannel-ds-gkctf\" (UID: \"1f9091b1-4a64-4bcb-9b50-aab33bd0eea3\") " pod="kube-flannel/kube-flannel-ds-gkctf" Jan 13 21:12:01.850724 kubelet[3610]: I0113 21:12:01.850538 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f9091b1-4a64-4bcb-9b50-aab33bd0eea3-xtables-lock\") pod \"kube-flannel-ds-gkctf\" (UID: \"1f9091b1-4a64-4bcb-9b50-aab33bd0eea3\") " pod="kube-flannel/kube-flannel-ds-gkctf" Jan 13 21:12:01.850724 kubelet[3610]: I0113 21:12:01.850625 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fbf17bf7-8504-4de5-ac3f-19dc27328c82-kube-proxy\") pod \"kube-proxy-fdndq\" (UID: \"fbf17bf7-8504-4de5-ac3f-19dc27328c82\") " pod="kube-system/kube-proxy-fdndq" Jan 13 21:12:01.850724 kubelet[3610]: I0113 21:12:01.850678 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/1f9091b1-4a64-4bcb-9b50-aab33bd0eea3-flannel-cfg\") pod \"kube-flannel-ds-gkctf\" (UID: \"1f9091b1-4a64-4bcb-9b50-aab33bd0eea3\") " pod="kube-flannel/kube-flannel-ds-gkctf" Jan 13 21:12:01.850724 kubelet[3610]: I0113 21:12:01.850726 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txjch\" (UniqueName: \"kubernetes.io/projected/1f9091b1-4a64-4bcb-9b50-aab33bd0eea3-kube-api-access-txjch\") pod 
\"kube-flannel-ds-gkctf\" (UID: \"1f9091b1-4a64-4bcb-9b50-aab33bd0eea3\") " pod="kube-flannel/kube-flannel-ds-gkctf" Jan 13 21:12:01.850960 kubelet[3610]: I0113 21:12:01.850774 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbf17bf7-8504-4de5-ac3f-19dc27328c82-xtables-lock\") pod \"kube-proxy-fdndq\" (UID: \"fbf17bf7-8504-4de5-ac3f-19dc27328c82\") " pod="kube-system/kube-proxy-fdndq" Jan 13 21:12:01.850960 kubelet[3610]: I0113 21:12:01.850819 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbf17bf7-8504-4de5-ac3f-19dc27328c82-lib-modules\") pod \"kube-proxy-fdndq\" (UID: \"fbf17bf7-8504-4de5-ac3f-19dc27328c82\") " pod="kube-system/kube-proxy-fdndq" Jan 13 21:12:01.850960 kubelet[3610]: I0113 21:12:01.850863 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9xcq\" (UniqueName: \"kubernetes.io/projected/fbf17bf7-8504-4de5-ac3f-19dc27328c82-kube-api-access-w9xcq\") pod \"kube-proxy-fdndq\" (UID: \"fbf17bf7-8504-4de5-ac3f-19dc27328c82\") " pod="kube-system/kube-proxy-fdndq" Jan 13 21:12:01.850960 kubelet[3610]: I0113 21:12:01.850909 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1f9091b1-4a64-4bcb-9b50-aab33bd0eea3-run\") pod \"kube-flannel-ds-gkctf\" (UID: \"1f9091b1-4a64-4bcb-9b50-aab33bd0eea3\") " pod="kube-flannel/kube-flannel-ds-gkctf" Jan 13 21:12:01.850960 kubelet[3610]: I0113 21:12:01.850950 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/1f9091b1-4a64-4bcb-9b50-aab33bd0eea3-cni\") pod \"kube-flannel-ds-gkctf\" (UID: \"1f9091b1-4a64-4bcb-9b50-aab33bd0eea3\") " 
pod="kube-flannel/kube-flannel-ds-gkctf" Jan 13 21:12:02.118483 containerd[2130]: time="2025-01-13T21:12:02.118390959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-gkctf,Uid:1f9091b1-4a64-4bcb-9b50-aab33bd0eea3,Namespace:kube-flannel,Attempt:0,}" Jan 13 21:12:02.180444 containerd[2130]: time="2025-01-13T21:12:02.179318919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:12:02.180444 containerd[2130]: time="2025-01-13T21:12:02.180352203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:12:02.180825 containerd[2130]: time="2025-01-13T21:12:02.180394851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:02.181326 containerd[2130]: time="2025-01-13T21:12:02.181191855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:02.293663 containerd[2130]: time="2025-01-13T21:12:02.293576968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-gkctf,Uid:1f9091b1-4a64-4bcb-9b50-aab33bd0eea3,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"25fc51c9bd9ad0ef5f0bfeb886a582fe1728593fc3636ea926d5a58346ef335c\"" Jan 13 21:12:02.298495 containerd[2130]: time="2025-01-13T21:12:02.298165120Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 13 21:12:02.962357 kubelet[3610]: E0113 21:12:02.962298 3610 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:12:02.962357 kubelet[3610]: E0113 21:12:02.962349 3610 projected.go:200] Error preparing data for projected volume kube-api-access-w9xcq for pod kube-system/kube-proxy-fdndq: failed to sync configmap cache: timed out waiting for the condition Jan 13 21:12:02.963927 kubelet[3610]: E0113 21:12:02.962794 3610 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fbf17bf7-8504-4de5-ac3f-19dc27328c82-kube-api-access-w9xcq podName:fbf17bf7-8504-4de5-ac3f-19dc27328c82 nodeName:}" failed. No retries permitted until 2025-01-13 21:12:03.462427187 +0000 UTC m=+14.716487648 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w9xcq" (UniqueName: "kubernetes.io/projected/fbf17bf7-8504-4de5-ac3f-19dc27328c82-kube-api-access-w9xcq") pod "kube-proxy-fdndq" (UID: "fbf17bf7-8504-4de5-ac3f-19dc27328c82") : failed to sync configmap cache: timed out waiting for the condition Jan 13 21:12:03.577127 containerd[2130]: time="2025-01-13T21:12:03.577068054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fdndq,Uid:fbf17bf7-8504-4de5-ac3f-19dc27328c82,Namespace:kube-system,Attempt:0,}" Jan 13 21:12:03.637929 containerd[2130]: time="2025-01-13T21:12:03.637782486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:12:03.637929 containerd[2130]: time="2025-01-13T21:12:03.637875390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:12:03.638402 containerd[2130]: time="2025-01-13T21:12:03.637903014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:03.638743 containerd[2130]: time="2025-01-13T21:12:03.638560878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:03.682004 systemd[1]: run-containerd-runc-k8s.io-114901f9c2e440b8d59517ee34b43d47e45b7b161fd41f70aaf4a1ad861a61f4-runc.ooyT4i.mount: Deactivated successfully. 
Jan 13 21:12:03.737669 containerd[2130]: time="2025-01-13T21:12:03.736419763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fdndq,Uid:fbf17bf7-8504-4de5-ac3f-19dc27328c82,Namespace:kube-system,Attempt:0,} returns sandbox id \"114901f9c2e440b8d59517ee34b43d47e45b7b161fd41f70aaf4a1ad861a61f4\"" Jan 13 21:12:03.742116 containerd[2130]: time="2025-01-13T21:12:03.742055107Z" level=info msg="CreateContainer within sandbox \"114901f9c2e440b8d59517ee34b43d47e45b7b161fd41f70aaf4a1ad861a61f4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:12:03.776976 containerd[2130]: time="2025-01-13T21:12:03.776791015Z" level=info msg="CreateContainer within sandbox \"114901f9c2e440b8d59517ee34b43d47e45b7b161fd41f70aaf4a1ad861a61f4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2a79f17b9f43d916f453696871c165168e819b097cf32ee5ccff45bbc50f8d42\"" Jan 13 21:12:03.779387 containerd[2130]: time="2025-01-13T21:12:03.778145923Z" level=info msg="StartContainer for \"2a79f17b9f43d916f453696871c165168e819b097cf32ee5ccff45bbc50f8d42\"" Jan 13 21:12:03.887313 containerd[2130]: time="2025-01-13T21:12:03.887166524Z" level=info msg="StartContainer for \"2a79f17b9f43d916f453696871c165168e819b097cf32ee5ccff45bbc50f8d42\" returns successfully" Jan 13 21:12:04.484526 containerd[2130]: time="2025-01-13T21:12:04.484438771Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:04.487247 containerd[2130]: time="2025-01-13T21:12:04.487157095Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531" Jan 13 21:12:04.491283 containerd[2130]: time="2025-01-13T21:12:04.490077835Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 
21:12:04.498208 containerd[2130]: time="2025-01-13T21:12:04.498134467Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:04.501305 containerd[2130]: time="2025-01-13T21:12:04.501224947Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.202992003s" Jan 13 21:12:04.501305 containerd[2130]: time="2025-01-13T21:12:04.501296827Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Jan 13 21:12:04.505694 containerd[2130]: time="2025-01-13T21:12:04.505636519Z" level=info msg="CreateContainer within sandbox \"25fc51c9bd9ad0ef5f0bfeb886a582fe1728593fc3636ea926d5a58346ef335c\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 13 21:12:04.528376 containerd[2130]: time="2025-01-13T21:12:04.528243487Z" level=info msg="CreateContainer within sandbox \"25fc51c9bd9ad0ef5f0bfeb886a582fe1728593fc3636ea926d5a58346ef335c\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"ab0be9d283ae2283da7c5095f3248eb4d02eb9b0ad8e95902a01f2135b9440f8\"" Jan 13 21:12:04.529920 containerd[2130]: time="2025-01-13T21:12:04.529833451Z" level=info msg="StartContainer for \"ab0be9d283ae2283da7c5095f3248eb4d02eb9b0ad8e95902a01f2135b9440f8\"" Jan 13 21:12:04.630891 containerd[2130]: time="2025-01-13T21:12:04.630731023Z" level=info msg="StartContainer for \"ab0be9d283ae2283da7c5095f3248eb4d02eb9b0ad8e95902a01f2135b9440f8\" returns 
successfully" Jan 13 21:12:04.681521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab0be9d283ae2283da7c5095f3248eb4d02eb9b0ad8e95902a01f2135b9440f8-rootfs.mount: Deactivated successfully. Jan 13 21:12:04.871581 containerd[2130]: time="2025-01-13T21:12:04.871408772Z" level=info msg="shim disconnected" id=ab0be9d283ae2283da7c5095f3248eb4d02eb9b0ad8e95902a01f2135b9440f8 namespace=k8s.io Jan 13 21:12:04.871581 containerd[2130]: time="2025-01-13T21:12:04.871484924Z" level=warning msg="cleaning up after shim disconnected" id=ab0be9d283ae2283da7c5095f3248eb4d02eb9b0ad8e95902a01f2135b9440f8 namespace=k8s.io Jan 13 21:12:04.871581 containerd[2130]: time="2025-01-13T21:12:04.871506992Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:12:05.219736 containerd[2130]: time="2025-01-13T21:12:05.219084474Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 13 21:12:05.244455 kubelet[3610]: I0113 21:12:05.243807 3610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fdndq" podStartSLOduration=4.243745374 podStartE2EDuration="4.243745374s" podCreationTimestamp="2025-01-13 21:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:12:04.229486301 +0000 UTC m=+15.483546846" watchObservedRunningTime="2025-01-13 21:12:05.243745374 +0000 UTC m=+16.497805871" Jan 13 21:12:07.157307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount369574156.mount: Deactivated successfully. 
Jan 13 21:12:08.384468 containerd[2130]: time="2025-01-13T21:12:08.384384334Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:08.387541 containerd[2130]: time="2025-01-13T21:12:08.387492742Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Jan 13 21:12:08.388571 containerd[2130]: time="2025-01-13T21:12:08.388493998Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:08.395167 containerd[2130]: time="2025-01-13T21:12:08.395076946Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:12:08.398636 containerd[2130]: time="2025-01-13T21:12:08.398441950Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.179231032s" Jan 13 21:12:08.398636 containerd[2130]: time="2025-01-13T21:12:08.398534254Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Jan 13 21:12:08.405024 containerd[2130]: time="2025-01-13T21:12:08.404884390Z" level=info msg="CreateContainer within sandbox \"25fc51c9bd9ad0ef5f0bfeb886a582fe1728593fc3636ea926d5a58346ef335c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:12:08.434354 containerd[2130]: time="2025-01-13T21:12:08.434209102Z" level=info msg="CreateContainer within 
sandbox \"25fc51c9bd9ad0ef5f0bfeb886a582fe1728593fc3636ea926d5a58346ef335c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8d76ac0b59cc75a514ae8c5377751e5499fcb8f4ea16b12ad18cb08e1137e087\"" Jan 13 21:12:08.437300 containerd[2130]: time="2025-01-13T21:12:08.435651250Z" level=info msg="StartContainer for \"8d76ac0b59cc75a514ae8c5377751e5499fcb8f4ea16b12ad18cb08e1137e087\"" Jan 13 21:12:08.543037 containerd[2130]: time="2025-01-13T21:12:08.542957603Z" level=info msg="StartContainer for \"8d76ac0b59cc75a514ae8c5377751e5499fcb8f4ea16b12ad18cb08e1137e087\" returns successfully" Jan 13 21:12:08.577073 kubelet[3610]: I0113 21:12:08.576919 3610 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:12:08.577498 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d76ac0b59cc75a514ae8c5377751e5499fcb8f4ea16b12ad18cb08e1137e087-rootfs.mount: Deactivated successfully. Jan 13 21:12:08.631626 kubelet[3610]: I0113 21:12:08.626027 3610 topology_manager.go:215] "Topology Admit Handler" podUID="3829fb30-fb7c-448a-9c8e-ba7ff359020d" podNamespace="kube-system" podName="coredns-76f75df574-nmhpx" Jan 13 21:12:08.631626 kubelet[3610]: I0113 21:12:08.628338 3610 topology_manager.go:215] "Topology Admit Handler" podUID="88e7f90a-2fe0-44e8-9f74-434adfad9eca" podNamespace="kube-system" podName="coredns-76f75df574-9p7gj" Jan 13 21:12:08.701464 kubelet[3610]: I0113 21:12:08.701318 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w76xd\" (UniqueName: \"kubernetes.io/projected/3829fb30-fb7c-448a-9c8e-ba7ff359020d-kube-api-access-w76xd\") pod \"coredns-76f75df574-nmhpx\" (UID: \"3829fb30-fb7c-448a-9c8e-ba7ff359020d\") " pod="kube-system/coredns-76f75df574-nmhpx" Jan 13 21:12:08.701464 kubelet[3610]: I0113 21:12:08.701399 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/3829fb30-fb7c-448a-9c8e-ba7ff359020d-config-volume\") pod \"coredns-76f75df574-nmhpx\" (UID: \"3829fb30-fb7c-448a-9c8e-ba7ff359020d\") " pod="kube-system/coredns-76f75df574-nmhpx" Jan 13 21:12:08.701684 kubelet[3610]: I0113 21:12:08.701470 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88e7f90a-2fe0-44e8-9f74-434adfad9eca-config-volume\") pod \"coredns-76f75df574-9p7gj\" (UID: \"88e7f90a-2fe0-44e8-9f74-434adfad9eca\") " pod="kube-system/coredns-76f75df574-9p7gj" Jan 13 21:12:08.701684 kubelet[3610]: I0113 21:12:08.701524 3610 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhlqm\" (UniqueName: \"kubernetes.io/projected/88e7f90a-2fe0-44e8-9f74-434adfad9eca-kube-api-access-qhlqm\") pod \"coredns-76f75df574-9p7gj\" (UID: \"88e7f90a-2fe0-44e8-9f74-434adfad9eca\") " pod="kube-system/coredns-76f75df574-9p7gj" Jan 13 21:12:08.958312 containerd[2130]: time="2025-01-13T21:12:08.957993133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9p7gj,Uid:88e7f90a-2fe0-44e8-9f74-434adfad9eca,Namespace:kube-system,Attempt:0,}" Jan 13 21:12:08.966682 containerd[2130]: time="2025-01-13T21:12:08.966567925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nmhpx,Uid:3829fb30-fb7c-448a-9c8e-ba7ff359020d,Namespace:kube-system,Attempt:0,}" Jan 13 21:12:09.016023 containerd[2130]: time="2025-01-13T21:12:09.015660261Z" level=info msg="shim disconnected" id=8d76ac0b59cc75a514ae8c5377751e5499fcb8f4ea16b12ad18cb08e1137e087 namespace=k8s.io Jan 13 21:12:09.016023 containerd[2130]: time="2025-01-13T21:12:09.015731037Z" level=warning msg="cleaning up after shim disconnected" id=8d76ac0b59cc75a514ae8c5377751e5499fcb8f4ea16b12ad18cb08e1137e087 namespace=k8s.io Jan 13 21:12:09.016023 containerd[2130]: time="2025-01-13T21:12:09.015751317Z" 
level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:12:09.083126 containerd[2130]: time="2025-01-13T21:12:09.082887945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9p7gj,Uid:88e7f90a-2fe0-44e8-9f74-434adfad9eca,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1983299bd016ceabd872a01312ebe67d5425d5524832c8e15e8737c6639a4470\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 21:12:09.083343 kubelet[3610]: E0113 21:12:09.083298 3610 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1983299bd016ceabd872a01312ebe67d5425d5524832c8e15e8737c6639a4470\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 21:12:09.083508 kubelet[3610]: E0113 21:12:09.083381 3610 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1983299bd016ceabd872a01312ebe67d5425d5524832c8e15e8737c6639a4470\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-9p7gj" Jan 13 21:12:09.083508 kubelet[3610]: E0113 21:12:09.083419 3610 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1983299bd016ceabd872a01312ebe67d5425d5524832c8e15e8737c6639a4470\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-9p7gj" Jan 13 21:12:09.083728 kubelet[3610]: E0113 21:12:09.083511 3610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-76f75df574-9p7gj_kube-system(88e7f90a-2fe0-44e8-9f74-434adfad9eca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-9p7gj_kube-system(88e7f90a-2fe0-44e8-9f74-434adfad9eca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1983299bd016ceabd872a01312ebe67d5425d5524832c8e15e8737c6639a4470\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-9p7gj" podUID="88e7f90a-2fe0-44e8-9f74-434adfad9eca" Jan 13 21:12:09.089911 containerd[2130]: time="2025-01-13T21:12:09.089833713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nmhpx,Uid:3829fb30-fb7c-448a-9c8e-ba7ff359020d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4b55cd162f6ab1d4b9e149ddb586e3d045e67368cd8a5dd3f7373d69f4bbe34\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 21:12:09.090332 kubelet[3610]: E0113 21:12:09.090192 3610 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4b55cd162f6ab1d4b9e149ddb586e3d045e67368cd8a5dd3f7373d69f4bbe34\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 21:12:09.090332 kubelet[3610]: E0113 21:12:09.090263 3610 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4b55cd162f6ab1d4b9e149ddb586e3d045e67368cd8a5dd3f7373d69f4bbe34\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-nmhpx" Jan 13 21:12:09.090332 kubelet[3610]: E0113 21:12:09.090303 3610 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4b55cd162f6ab1d4b9e149ddb586e3d045e67368cd8a5dd3f7373d69f4bbe34\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-nmhpx" Jan 13 21:12:09.090528 kubelet[3610]: E0113 21:12:09.090398 3610 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-nmhpx_kube-system(3829fb30-fb7c-448a-9c8e-ba7ff359020d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-nmhpx_kube-system(3829fb30-fb7c-448a-9c8e-ba7ff359020d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4b55cd162f6ab1d4b9e149ddb586e3d045e67368cd8a5dd3f7373d69f4bbe34\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-nmhpx" podUID="3829fb30-fb7c-448a-9c8e-ba7ff359020d" Jan 13 21:12:09.235507 containerd[2130]: time="2025-01-13T21:12:09.235053922Z" level=info msg="CreateContainer within sandbox \"25fc51c9bd9ad0ef5f0bfeb886a582fe1728593fc3636ea926d5a58346ef335c\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 13 21:12:09.261524 containerd[2130]: time="2025-01-13T21:12:09.261456238Z" level=info msg="CreateContainer within sandbox \"25fc51c9bd9ad0ef5f0bfeb886a582fe1728593fc3636ea926d5a58346ef335c\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"26c9ee5eb751c46906cb2971086a6c5a81eadeeaf19f75e97c95d18d79ed5a67\"" Jan 13 21:12:09.262573 containerd[2130]: time="2025-01-13T21:12:09.262498570Z" level=info msg="StartContainer for \"26c9ee5eb751c46906cb2971086a6c5a81eadeeaf19f75e97c95d18d79ed5a67\"" Jan 13 21:12:09.375271 containerd[2130]: time="2025-01-13T21:12:09.375200687Z" level=info 
msg="StartContainer for \"26c9ee5eb751c46906cb2971086a6c5a81eadeeaf19f75e97c95d18d79ed5a67\" returns successfully" Jan 13 21:12:10.254833 kubelet[3610]: I0113 21:12:10.252910 3610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-gkctf" podStartSLOduration=3.149009309 podStartE2EDuration="9.252850727s" podCreationTimestamp="2025-01-13 21:12:01 +0000 UTC" firstStartedPulling="2025-01-13 21:12:02.2954644 +0000 UTC m=+13.549524861" lastFinishedPulling="2025-01-13 21:12:08.399305806 +0000 UTC m=+19.653366279" observedRunningTime="2025-01-13 21:12:10.252655847 +0000 UTC m=+21.506716320" watchObservedRunningTime="2025-01-13 21:12:10.252850727 +0000 UTC m=+21.506911176" Jan 13 21:12:10.459410 (udev-worker)[4146]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:12:10.482010 systemd-networkd[1690]: flannel.1: Link UP Jan 13 21:12:10.482032 systemd-networkd[1690]: flannel.1: Gained carrier Jan 13 21:12:12.305941 systemd-networkd[1690]: flannel.1: Gained IPv6LL Jan 13 21:12:15.085580 ntpd[2089]: Listen normally on 6 flannel.1 192.168.0.0:123 Jan 13 21:12:15.085743 ntpd[2089]: Listen normally on 7 flannel.1 [fe80::b070:e2ff:fead:a22e%4]:123 Jan 13 21:12:15.086208 ntpd[2089]: 13 Jan 21:12:15 ntpd[2089]: Listen normally on 6 flannel.1 192.168.0.0:123 Jan 13 21:12:15.086208 ntpd[2089]: 13 Jan 21:12:15 ntpd[2089]: Listen normally on 7 flannel.1 [fe80::b070:e2ff:fead:a22e%4]:123 Jan 13 21:12:23.055963 containerd[2130]: time="2025-01-13T21:12:23.054731663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nmhpx,Uid:3829fb30-fb7c-448a-9c8e-ba7ff359020d,Namespace:kube-system,Attempt:0,}" Jan 13 21:12:23.057396 containerd[2130]: time="2025-01-13T21:12:23.057335279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9p7gj,Uid:88e7f90a-2fe0-44e8-9f74-434adfad9eca,Namespace:kube-system,Attempt:0,}" Jan 13 21:12:23.121103 systemd-networkd[1690]: cni0: Link 
UP Jan 13 21:12:23.121137 systemd-networkd[1690]: cni0: Gained carrier Jan 13 21:12:23.135290 (udev-worker)[4299]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:12:23.135329 systemd-networkd[1690]: veth3c1c3d0c: Link UP Jan 13 21:12:23.138932 kernel: cni0: port 1(veth3c1c3d0c) entered blocking state Jan 13 21:12:23.139219 kernel: cni0: port 1(veth3c1c3d0c) entered disabled state Jan 13 21:12:23.139158 systemd-networkd[1690]: cni0: Lost carrier Jan 13 21:12:23.141754 kernel: veth3c1c3d0c: entered allmulticast mode Jan 13 21:12:23.143394 kernel: veth3c1c3d0c: entered promiscuous mode Jan 13 21:12:23.144329 systemd-networkd[1690]: veth2ae034db: Link UP Jan 13 21:12:23.148649 (udev-worker)[4303]: Network interface NamePolicy= disabled on kernel command line. Jan 13 21:12:23.156329 kernel: cni0: port 2(veth2ae034db) entered blocking state Jan 13 21:12:23.156375 kernel: cni0: port 2(veth2ae034db) entered disabled state Jan 13 21:12:23.156431 kernel: veth2ae034db: entered allmulticast mode Jan 13 21:12:23.156471 kernel: veth2ae034db: entered promiscuous mode Jan 13 21:12:23.159248 kernel: cni0: port 2(veth2ae034db) entered blocking state Jan 13 21:12:23.159342 kernel: cni0: port 2(veth2ae034db) entered forwarding state Jan 13 21:12:23.159723 (udev-worker)[4304]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 21:12:23.163475 kernel: cni0: port 2(veth2ae034db) entered disabled state Jan 13 21:12:23.163655 kernel: cni0: port 1(veth3c1c3d0c) entered blocking state Jan 13 21:12:23.163709 kernel: cni0: port 1(veth3c1c3d0c) entered forwarding state Jan 13 21:12:23.165404 systemd-networkd[1690]: veth3c1c3d0c: Gained carrier Jan 13 21:12:23.169330 systemd-networkd[1690]: cni0: Gained carrier Jan 13 21:12:23.175483 containerd[2130]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001a938), "name":"cbr0", "type":"bridge"} Jan 13 21:12:23.175483 containerd[2130]: delegateAdd: netconf sent to delegate plugin: Jan 13 21:12:23.183516 kernel: cni0: port 2(veth2ae034db) entered blocking state Jan 13 21:12:23.183677 kernel: cni0: port 2(veth2ae034db) entered forwarding state Jan 13 21:12:23.184099 systemd-networkd[1690]: veth2ae034db: Gained carrier Jan 13 21:12:23.197137 containerd[2130]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"} Jan 13 21:12:23.197137 containerd[2130]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, 
"isGateway":true, "mtu":(*uint)(0x4000014938), "name":"cbr0", "type":"bridge"} Jan 13 21:12:23.197137 containerd[2130]: delegateAdd: netconf sent to delegate plugin: Jan 13 21:12:23.244678 containerd[2130]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-01-13T21:12:23.244472928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:12:23.247779 containerd[2130]: time="2025-01-13T21:12:23.247652268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:12:23.248090 containerd[2130]: time="2025-01-13T21:12:23.247963092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:23.248758 containerd[2130]: time="2025-01-13T21:12:23.248519592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:23.272400 containerd[2130]: time="2025-01-13T21:12:23.272234976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:12:23.273428 containerd[2130]: time="2025-01-13T21:12:23.273246048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:12:23.273428 containerd[2130]: time="2025-01-13T21:12:23.273363396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:23.273874 containerd[2130]: time="2025-01-13T21:12:23.273660060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:12:23.402143 containerd[2130]: time="2025-01-13T21:12:23.401957497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nmhpx,Uid:3829fb30-fb7c-448a-9c8e-ba7ff359020d,Namespace:kube-system,Attempt:0,} returns sandbox id \"088bb45440aee7a307b6e1a25c2fe635e6d3f9415274de6ab2165faf60ada2f1\"" Jan 13 21:12:23.413563 containerd[2130]: time="2025-01-13T21:12:23.413499445Z" level=info msg="CreateContainer within sandbox \"088bb45440aee7a307b6e1a25c2fe635e6d3f9415274de6ab2165faf60ada2f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:12:23.420166 containerd[2130]: time="2025-01-13T21:12:23.420116197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9p7gj,Uid:88e7f90a-2fe0-44e8-9f74-434adfad9eca,Namespace:kube-system,Attempt:0,} returns sandbox id \"84a5594172e0899bbfc976c5bffae4bce57102c016dc78517f6e0b7db74ddc74\"" Jan 13 21:12:23.429789 containerd[2130]: time="2025-01-13T21:12:23.429734641Z" level=info msg="CreateContainer within sandbox \"84a5594172e0899bbfc976c5bffae4bce57102c016dc78517f6e0b7db74ddc74\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:12:23.446034 containerd[2130]: time="2025-01-13T21:12:23.445880605Z" level=info msg="CreateContainer within sandbox \"088bb45440aee7a307b6e1a25c2fe635e6d3f9415274de6ab2165faf60ada2f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0c0fc704df558ee0820f5f67bbdcc9e5ec4bb918232bc4b7396f6dad32e9b5d\"" Jan 13 21:12:23.447848 containerd[2130]: time="2025-01-13T21:12:23.447767461Z" level=info msg="StartContainer for \"c0c0fc704df558ee0820f5f67bbdcc9e5ec4bb918232bc4b7396f6dad32e9b5d\"" Jan 13 21:12:23.470252 containerd[2130]: 
time="2025-01-13T21:12:23.470070481Z" level=info msg="CreateContainer within sandbox \"84a5594172e0899bbfc976c5bffae4bce57102c016dc78517f6e0b7db74ddc74\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3fdeb7e03bc21c7ddcd473d1be246850d1c8dbcef15e0c1fd469845581b92d6\"" Jan 13 21:12:23.472589 containerd[2130]: time="2025-01-13T21:12:23.471263209Z" level=info msg="StartContainer for \"b3fdeb7e03bc21c7ddcd473d1be246850d1c8dbcef15e0c1fd469845581b92d6\"" Jan 13 21:12:23.581714 containerd[2130]: time="2025-01-13T21:12:23.581632381Z" level=info msg="StartContainer for \"c0c0fc704df558ee0820f5f67bbdcc9e5ec4bb918232bc4b7396f6dad32e9b5d\" returns successfully" Jan 13 21:12:23.599900 containerd[2130]: time="2025-01-13T21:12:23.599845849Z" level=info msg="StartContainer for \"b3fdeb7e03bc21c7ddcd473d1be246850d1c8dbcef15e0c1fd469845581b92d6\" returns successfully" Jan 13 21:12:24.321008 kubelet[3610]: I0113 21:12:24.318100 3610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-nmhpx" podStartSLOduration=23.318041881 podStartE2EDuration="23.318041881s" podCreationTimestamp="2025-01-13 21:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:12:24.296991277 +0000 UTC m=+35.551051834" watchObservedRunningTime="2025-01-13 21:12:24.318041881 +0000 UTC m=+35.572102354" Jan 13 21:12:24.346635 kubelet[3610]: I0113 21:12:24.345890 3610 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9p7gj" podStartSLOduration=23.345830821 podStartE2EDuration="23.345830821s" podCreationTimestamp="2025-01-13 21:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:12:24.318233941 +0000 UTC m=+35.572294414" watchObservedRunningTime="2025-01-13 21:12:24.345830821 +0000 UTC 
m=+35.599891282" Jan 13 21:12:24.913887 systemd-networkd[1690]: cni0: Gained IPv6LL Jan 13 21:12:24.977887 systemd-networkd[1690]: veth3c1c3d0c: Gained IPv6LL Jan 13 21:12:25.041965 systemd-networkd[1690]: veth2ae034db: Gained IPv6LL Jan 13 21:12:27.085709 ntpd[2089]: Listen normally on 8 cni0 192.168.0.1:123 Jan 13 21:12:27.085855 ntpd[2089]: Listen normally on 9 cni0 [fe80::8cbb:c7ff:fe0a:4a80%5]:123 Jan 13 21:12:27.086324 ntpd[2089]: 13 Jan 21:12:27 ntpd[2089]: Listen normally on 8 cni0 192.168.0.1:123 Jan 13 21:12:27.086324 ntpd[2089]: 13 Jan 21:12:27 ntpd[2089]: Listen normally on 9 cni0 [fe80::8cbb:c7ff:fe0a:4a80%5]:123 Jan 13 21:12:27.086324 ntpd[2089]: 13 Jan 21:12:27 ntpd[2089]: Listen normally on 10 veth3c1c3d0c [fe80::fc91:a7ff:fe58:d0df%6]:123 Jan 13 21:12:27.086324 ntpd[2089]: 13 Jan 21:12:27 ntpd[2089]: Listen normally on 11 veth2ae034db [fe80::8cc9:fbff:fefd:519c%7]:123 Jan 13 21:12:27.085937 ntpd[2089]: Listen normally on 10 veth3c1c3d0c [fe80::fc91:a7ff:fe58:d0df%6]:123 Jan 13 21:12:27.086009 ntpd[2089]: Listen normally on 11 veth2ae034db [fe80::8cc9:fbff:fefd:519c%7]:123 Jan 13 21:12:30.646464 systemd[1]: Started sshd@5-172.31.24.5:22-139.178.89.65:51926.service - OpenSSH per-connection server daemon (139.178.89.65:51926). Jan 13 21:12:30.831621 sshd[4517]: Accepted publickey for core from 139.178.89.65 port 51926 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk Jan 13 21:12:30.834781 sshd[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:12:30.842566 systemd-logind[2111]: New session 6 of user core. Jan 13 21:12:30.850230 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:12:31.118297 sshd[4517]: pam_unix(sshd:session): session closed for user core Jan 13 21:12:31.126078 systemd[1]: sshd@5-172.31.24.5:22-139.178.89.65:51926.service: Deactivated successfully. Jan 13 21:12:31.133470 systemd[1]: session-6.scope: Deactivated successfully. 
Jan 13 21:12:31.135432 systemd-logind[2111]: Session 6 logged out. Waiting for processes to exit.
Jan 13 21:12:31.137234 systemd-logind[2111]: Removed session 6.
Jan 13 21:12:36.152090 systemd[1]: Started sshd@6-172.31.24.5:22-139.178.89.65:53538.service - OpenSSH per-connection server daemon (139.178.89.65:53538).
Jan 13 21:12:36.325923 sshd[4573]: Accepted publickey for core from 139.178.89.65 port 53538 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:12:36.328746 sshd[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:12:36.337152 systemd-logind[2111]: New session 7 of user core.
Jan 13 21:12:36.349210 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 21:12:36.589339 sshd[4573]: pam_unix(sshd:session): session closed for user core
Jan 13 21:12:36.595567 systemd-logind[2111]: Session 7 logged out. Waiting for processes to exit.
Jan 13 21:12:36.596006 systemd[1]: sshd@6-172.31.24.5:22-139.178.89.65:53538.service: Deactivated successfully.
Jan 13 21:12:36.603200 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 21:12:36.607273 systemd-logind[2111]: Removed session 7.
Jan 13 21:12:41.623760 systemd[1]: Started sshd@7-172.31.24.5:22-139.178.89.65:34008.service - OpenSSH per-connection server daemon (139.178.89.65:34008).
Jan 13 21:12:41.800103 sshd[4610]: Accepted publickey for core from 139.178.89.65 port 34008 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:12:41.802943 sshd[4610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:12:41.811053 systemd-logind[2111]: New session 8 of user core.
Jan 13 21:12:41.818415 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 21:12:42.082946 sshd[4610]: pam_unix(sshd:session): session closed for user core
Jan 13 21:12:42.091277 systemd[1]: sshd@7-172.31.24.5:22-139.178.89.65:34008.service: Deactivated successfully.
Jan 13 21:12:42.098812 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 21:12:42.100843 systemd-logind[2111]: Session 8 logged out. Waiting for processes to exit.
Jan 13 21:12:42.102804 systemd-logind[2111]: Removed session 8.
Jan 13 21:12:42.115125 systemd[1]: Started sshd@8-172.31.24.5:22-139.178.89.65:34018.service - OpenSSH per-connection server daemon (139.178.89.65:34018).
Jan 13 21:12:42.292762 sshd[4625]: Accepted publickey for core from 139.178.89.65 port 34018 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:12:42.295411 sshd[4625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:12:42.303721 systemd-logind[2111]: New session 9 of user core.
Jan 13 21:12:42.310367 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 21:12:42.635011 sshd[4625]: pam_unix(sshd:session): session closed for user core
Jan 13 21:12:42.647970 systemd[1]: sshd@8-172.31.24.5:22-139.178.89.65:34018.service: Deactivated successfully.
Jan 13 21:12:42.663389 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 21:12:42.664352 systemd-logind[2111]: Session 9 logged out. Waiting for processes to exit.
Jan 13 21:12:42.679793 systemd[1]: Started sshd@9-172.31.24.5:22-139.178.89.65:34026.service - OpenSSH per-connection server daemon (139.178.89.65:34026).
Jan 13 21:12:42.680938 systemd-logind[2111]: Removed session 9.
Jan 13 21:12:42.863816 sshd[4637]: Accepted publickey for core from 139.178.89.65 port 34026 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:12:42.867286 sshd[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:12:42.875708 systemd-logind[2111]: New session 10 of user core.
Jan 13 21:12:42.885087 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 21:12:43.145038 sshd[4637]: pam_unix(sshd:session): session closed for user core
Jan 13 21:12:43.152465 systemd[1]: sshd@9-172.31.24.5:22-139.178.89.65:34026.service: Deactivated successfully.
Jan 13 21:12:43.160513 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 21:12:43.162082 systemd-logind[2111]: Session 10 logged out. Waiting for processes to exit.
Jan 13 21:12:43.167199 systemd-logind[2111]: Removed session 10.
Jan 13 21:12:48.179132 systemd[1]: Started sshd@10-172.31.24.5:22-139.178.89.65:34028.service - OpenSSH per-connection server daemon (139.178.89.65:34028).
Jan 13 21:12:48.363228 sshd[4672]: Accepted publickey for core from 139.178.89.65 port 34028 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:12:48.366173 sshd[4672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:12:48.374702 systemd-logind[2111]: New session 11 of user core.
Jan 13 21:12:48.385124 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 21:12:48.637254 sshd[4672]: pam_unix(sshd:session): session closed for user core
Jan 13 21:12:48.643128 systemd[1]: sshd@10-172.31.24.5:22-139.178.89.65:34028.service: Deactivated successfully.
Jan 13 21:12:48.653525 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 21:12:48.656767 systemd-logind[2111]: Session 11 logged out. Waiting for processes to exit.
Jan 13 21:12:48.658741 systemd-logind[2111]: Removed session 11.
Jan 13 21:12:53.672081 systemd[1]: Started sshd@11-172.31.24.5:22-139.178.89.65:34010.service - OpenSSH per-connection server daemon (139.178.89.65:34010).
Jan 13 21:12:53.839397 sshd[4709]: Accepted publickey for core from 139.178.89.65 port 34010 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:12:53.842131 sshd[4709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:12:53.850504 systemd-logind[2111]: New session 12 of user core.
Jan 13 21:12:53.857089 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 21:12:54.101473 sshd[4709]: pam_unix(sshd:session): session closed for user core
Jan 13 21:12:54.108107 systemd[1]: sshd@11-172.31.24.5:22-139.178.89.65:34010.service: Deactivated successfully.
Jan 13 21:12:54.114891 systemd-logind[2111]: Session 12 logged out. Waiting for processes to exit.
Jan 13 21:12:54.115397 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 21:12:54.119007 systemd-logind[2111]: Removed session 12.
Jan 13 21:12:59.132946 systemd[1]: Started sshd@12-172.31.24.5:22-139.178.89.65:34014.service - OpenSSH per-connection server daemon (139.178.89.65:34014).
Jan 13 21:12:59.315082 sshd[4744]: Accepted publickey for core from 139.178.89.65 port 34014 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:12:59.318233 sshd[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:12:59.327272 systemd-logind[2111]: New session 13 of user core.
Jan 13 21:12:59.334247 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 21:12:59.585291 sshd[4744]: pam_unix(sshd:session): session closed for user core
Jan 13 21:12:59.595654 systemd[1]: sshd@12-172.31.24.5:22-139.178.89.65:34014.service: Deactivated successfully.
Jan 13 21:12:59.606567 systemd-logind[2111]: Session 13 logged out. Waiting for processes to exit.
Jan 13 21:12:59.606901 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 21:12:59.612719 systemd-logind[2111]: Removed session 13.
Jan 13 21:13:04.616083 systemd[1]: Started sshd@13-172.31.24.5:22-139.178.89.65:38798.service - OpenSSH per-connection server daemon (139.178.89.65:38798).
Jan 13 21:13:04.794385 sshd[4782]: Accepted publickey for core from 139.178.89.65 port 38798 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:04.797196 sshd[4782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:04.806519 systemd-logind[2111]: New session 14 of user core.
Jan 13 21:13:04.820012 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 21:13:05.074209 sshd[4782]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:05.081076 systemd[1]: sshd@13-172.31.24.5:22-139.178.89.65:38798.service: Deactivated successfully.
Jan 13 21:13:05.091403 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 21:13:05.091757 systemd-logind[2111]: Session 14 logged out. Waiting for processes to exit.
Jan 13 21:13:05.102125 systemd-logind[2111]: Removed session 14.
Jan 13 21:13:05.106075 systemd[1]: Started sshd@14-172.31.24.5:22-139.178.89.65:38812.service - OpenSSH per-connection server daemon (139.178.89.65:38812).
Jan 13 21:13:05.285075 sshd[4796]: Accepted publickey for core from 139.178.89.65 port 38812 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:05.287901 sshd[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:05.297350 systemd-logind[2111]: New session 15 of user core.
Jan 13 21:13:05.307287 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 21:13:05.599644 sshd[4796]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:05.605441 systemd[1]: sshd@14-172.31.24.5:22-139.178.89.65:38812.service: Deactivated successfully.
Jan 13 21:13:05.614424 systemd-logind[2111]: Session 15 logged out. Waiting for processes to exit.
Jan 13 21:13:05.615696 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 21:13:05.617735 systemd-logind[2111]: Removed session 15.
Jan 13 21:13:05.629119 systemd[1]: Started sshd@15-172.31.24.5:22-139.178.89.65:38820.service - OpenSSH per-connection server daemon (139.178.89.65:38820).
Jan 13 21:13:05.806702 sshd[4807]: Accepted publickey for core from 139.178.89.65 port 38820 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:05.809441 sshd[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:05.817675 systemd-logind[2111]: New session 16 of user core.
Jan 13 21:13:05.826981 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 21:13:08.187471 sshd[4807]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:08.202220 systemd[1]: sshd@15-172.31.24.5:22-139.178.89.65:38820.service: Deactivated successfully.
Jan 13 21:13:08.219514 systemd-logind[2111]: Session 16 logged out. Waiting for processes to exit.
Jan 13 21:13:08.225087 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 21:13:08.240508 systemd[1]: Started sshd@16-172.31.24.5:22-139.178.89.65:38834.service - OpenSSH per-connection server daemon (139.178.89.65:38834).
Jan 13 21:13:08.243626 systemd-logind[2111]: Removed session 16.
Jan 13 21:13:08.432632 sshd[4847]: Accepted publickey for core from 139.178.89.65 port 38834 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:08.435311 sshd[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:08.443220 systemd-logind[2111]: New session 17 of user core.
Jan 13 21:13:08.451355 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 21:13:08.936053 sshd[4847]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:08.944728 systemd[1]: sshd@16-172.31.24.5:22-139.178.89.65:38834.service: Deactivated successfully.
Jan 13 21:13:08.950208 systemd-logind[2111]: Session 17 logged out. Waiting for processes to exit.
Jan 13 21:13:08.951377 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 21:13:08.954157 systemd-logind[2111]: Removed session 17.
Jan 13 21:13:08.968095 systemd[1]: Started sshd@17-172.31.24.5:22-139.178.89.65:38838.service - OpenSSH per-connection server daemon (139.178.89.65:38838).
Jan 13 21:13:09.150994 sshd[4859]: Accepted publickey for core from 139.178.89.65 port 38838 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:09.153675 sshd[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:09.162721 systemd-logind[2111]: New session 18 of user core.
Jan 13 21:13:09.168114 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 21:13:09.418285 sshd[4859]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:09.424361 systemd[1]: sshd@17-172.31.24.5:22-139.178.89.65:38838.service: Deactivated successfully.
Jan 13 21:13:09.433554 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 21:13:09.435260 systemd-logind[2111]: Session 18 logged out. Waiting for processes to exit.
Jan 13 21:13:09.437668 systemd-logind[2111]: Removed session 18.
Jan 13 21:13:14.448076 systemd[1]: Started sshd@18-172.31.24.5:22-139.178.89.65:57972.service - OpenSSH per-connection server daemon (139.178.89.65:57972).
Jan 13 21:13:14.635438 sshd[4894]: Accepted publickey for core from 139.178.89.65 port 57972 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:14.638120 sshd[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:14.646916 systemd-logind[2111]: New session 19 of user core.
Jan 13 21:13:14.654244 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 21:13:14.896952 sshd[4894]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:14.904862 systemd[1]: sshd@18-172.31.24.5:22-139.178.89.65:57972.service: Deactivated successfully.
Jan 13 21:13:14.905233 systemd-logind[2111]: Session 19 logged out. Waiting for processes to exit.
Jan 13 21:13:14.911237 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 21:13:14.913111 systemd-logind[2111]: Removed session 19.
Jan 13 21:13:19.930281 systemd[1]: Started sshd@19-172.31.24.5:22-139.178.89.65:57980.service - OpenSSH per-connection server daemon (139.178.89.65:57980).
Jan 13 21:13:20.110283 sshd[4932]: Accepted publickey for core from 139.178.89.65 port 57980 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:20.112962 sshd[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:20.120667 systemd-logind[2111]: New session 20 of user core.
Jan 13 21:13:20.128156 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 21:13:20.366463 sshd[4932]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:20.373934 systemd-logind[2111]: Session 20 logged out. Waiting for processes to exit.
Jan 13 21:13:20.374842 systemd[1]: sshd@19-172.31.24.5:22-139.178.89.65:57980.service: Deactivated successfully.
Jan 13 21:13:20.381584 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 21:13:20.383830 systemd-logind[2111]: Removed session 20.
Jan 13 21:13:25.403096 systemd[1]: Started sshd@20-172.31.24.5:22-139.178.89.65:39176.service - OpenSSH per-connection server daemon (139.178.89.65:39176).
Jan 13 21:13:25.578358 sshd[4967]: Accepted publickey for core from 139.178.89.65 port 39176 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:25.581091 sshd[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:25.589572 systemd-logind[2111]: New session 21 of user core.
Jan 13 21:13:25.600438 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 21:13:25.840145 sshd[4967]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:25.849053 systemd[1]: sshd@20-172.31.24.5:22-139.178.89.65:39176.service: Deactivated successfully.
Jan 13 21:13:25.855248 systemd-logind[2111]: Session 21 logged out. Waiting for processes to exit.
Jan 13 21:13:25.855925 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 21:13:25.859426 systemd-logind[2111]: Removed session 21.
Jan 13 21:13:30.870321 systemd[1]: Started sshd@21-172.31.24.5:22-139.178.89.65:39190.service - OpenSSH per-connection server daemon (139.178.89.65:39190).
Jan 13 21:13:31.054236 sshd[5008]: Accepted publickey for core from 139.178.89.65 port 39190 ssh2: RSA SHA256:fVGKz89zwaBVvcm/Srq5AqRwfqW9vMWr4KnnKcc3jjk
Jan 13 21:13:31.058285 sshd[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:13:31.067137 systemd-logind[2111]: New session 22 of user core.
Jan 13 21:13:31.077239 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 21:13:31.331703 sshd[5008]: pam_unix(sshd:session): session closed for user core
Jan 13 21:13:31.337568 systemd[1]: sshd@21-172.31.24.5:22-139.178.89.65:39190.service: Deactivated successfully.
Jan 13 21:13:31.346282 systemd-logind[2111]: Session 22 logged out. Waiting for processes to exit.
Jan 13 21:13:31.347779 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 21:13:31.351422 systemd-logind[2111]: Removed session 22.
Jan 13 21:13:45.437932 containerd[2130]: time="2025-01-13T21:13:45.437848400Z" level=info msg="shim disconnected" id=4a7d381cd718340534401e8bcc3f0ef353f1f8fa5062e2f268eac1aa08815fb3 namespace=k8s.io
Jan 13 21:13:45.437932 containerd[2130]: time="2025-01-13T21:13:45.437929436Z" level=warning msg="cleaning up after shim disconnected" id=4a7d381cd718340534401e8bcc3f0ef353f1f8fa5062e2f268eac1aa08815fb3 namespace=k8s.io
Jan 13 21:13:45.439392 containerd[2130]: time="2025-01-13T21:13:45.437951384Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:45.438463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a7d381cd718340534401e8bcc3f0ef353f1f8fa5062e2f268eac1aa08815fb3-rootfs.mount: Deactivated successfully.
Jan 13 21:13:45.482756 kubelet[3610]: I0113 21:13:45.481832 3610 scope.go:117] "RemoveContainer" containerID="4a7d381cd718340534401e8bcc3f0ef353f1f8fa5062e2f268eac1aa08815fb3"
Jan 13 21:13:45.486981 containerd[2130]: time="2025-01-13T21:13:45.486904544Z" level=info msg="CreateContainer within sandbox \"4e6524151f506ff67342b75b303af861802e10bd2f10c6d21a742357f9c0f486\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 13 21:13:45.515953 containerd[2130]: time="2025-01-13T21:13:45.515855564Z" level=info msg="CreateContainer within sandbox \"4e6524151f506ff67342b75b303af861802e10bd2f10c6d21a742357f9c0f486\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4de3f58bb12cb4139ec2f0ca5c1e033f2ec2b15699dbac07600b9b146153792e\""
Jan 13 21:13:45.516717 containerd[2130]: time="2025-01-13T21:13:45.516645008Z" level=info msg="StartContainer for \"4de3f58bb12cb4139ec2f0ca5c1e033f2ec2b15699dbac07600b9b146153792e\""
Jan 13 21:13:45.573822 systemd[1]: run-containerd-runc-k8s.io-4de3f58bb12cb4139ec2f0ca5c1e033f2ec2b15699dbac07600b9b146153792e-runc.aBONSI.mount: Deactivated successfully.
Jan 13 21:13:45.638143 containerd[2130]: time="2025-01-13T21:13:45.638047545Z" level=info msg="StartContainer for \"4de3f58bb12cb4139ec2f0ca5c1e033f2ec2b15699dbac07600b9b146153792e\" returns successfully"
Jan 13 21:13:50.186981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72c3c82ae0ad059437f0b8c1ceb2b076991268fc15b5597036b26626f9d03839-rootfs.mount: Deactivated successfully.
Jan 13 21:13:50.199073 containerd[2130]: time="2025-01-13T21:13:50.198861780Z" level=info msg="shim disconnected" id=72c3c82ae0ad059437f0b8c1ceb2b076991268fc15b5597036b26626f9d03839 namespace=k8s.io
Jan 13 21:13:50.199073 containerd[2130]: time="2025-01-13T21:13:50.198998952Z" level=warning msg="cleaning up after shim disconnected" id=72c3c82ae0ad059437f0b8c1ceb2b076991268fc15b5597036b26626f9d03839 namespace=k8s.io
Jan 13 21:13:50.199073 containerd[2130]: time="2025-01-13T21:13:50.199021716Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:13:50.508217 kubelet[3610]: I0113 21:13:50.507470 3610 scope.go:117] "RemoveContainer" containerID="72c3c82ae0ad059437f0b8c1ceb2b076991268fc15b5597036b26626f9d03839"
Jan 13 21:13:50.511999 containerd[2130]: time="2025-01-13T21:13:50.511933573Z" level=info msg="CreateContainer within sandbox \"3e5bd15ade297af575a1a1618ae3a26071e04a78918cb47e44299cdcfc853e47\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 21:13:50.539704 containerd[2130]: time="2025-01-13T21:13:50.539639293Z" level=info msg="CreateContainer within sandbox \"3e5bd15ade297af575a1a1618ae3a26071e04a78918cb47e44299cdcfc853e47\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"0605a2ffdc1c1aaa46cea69b6bcd0234650da419f7d12818adc3c9a0a76f953d\""
Jan 13 21:13:50.540976 containerd[2130]: time="2025-01-13T21:13:50.540873241Z" level=info msg="StartContainer for \"0605a2ffdc1c1aaa46cea69b6bcd0234650da419f7d12818adc3c9a0a76f953d\""
Jan 13 21:13:50.655072 containerd[2130]: time="2025-01-13T21:13:50.654324746Z" level=info msg="StartContainer for \"0605a2ffdc1c1aaa46cea69b6bcd0234650da419f7d12818adc3c9a0a76f953d\" returns successfully"
Jan 13 21:13:50.945045 kubelet[3610]: E0113 21:13:50.944984 3610 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-5?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"