Dec 13 01:54:57.176947 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 13 01:54:57.176992 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:54:57.177017 kernel: KASLR disabled due to lack of seed
Dec 13 01:54:57.177034 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:54:57.177050 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Dec 13 01:54:57.177065 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:54:57.177083 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 13 01:54:57.177099 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 13 01:54:57.177115 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 01:54:57.177131 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Dec 13 01:54:57.177151 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 01:54:57.177167 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 13 01:54:57.177182 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 13 01:54:57.177198 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 13 01:54:57.177216 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 01:54:57.177255 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 13 01:54:57.177277 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 13 01:54:57.177294 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 13 01:54:57.177311 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 13 01:54:57.177328 kernel: printk: bootconsole [uart0] enabled
Dec 13 01:54:57.177345 kernel: NUMA: Failed to initialise from firmware
Dec 13 01:54:57.177362 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 01:54:57.177379 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Dec 13 01:54:57.179560 kernel: Zone ranges:
Dec 13 01:54:57.179579 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 01:54:57.179597 kernel: DMA32 empty
Dec 13 01:54:57.179622 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 13 01:54:57.179639 kernel: Movable zone start for each node
Dec 13 01:54:57.179655 kernel: Early memory node ranges
Dec 13 01:54:57.179672 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 13 01:54:57.179688 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 13 01:54:57.179704 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 13 01:54:57.179721 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 13 01:54:57.179737 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 13 01:54:57.179753 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 13 01:54:57.179769 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 13 01:54:57.179785 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 13 01:54:57.179801 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 01:54:57.179827 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 13 01:54:57.179844 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:54:57.179867 kernel: psci: PSCIv1.0 detected in firmware.
Dec 13 01:54:57.179885 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:54:57.179902 kernel: psci: Trusted OS migration not required
Dec 13 01:54:57.179924 kernel: psci: SMC Calling Convention v1.1
Dec 13 01:54:57.179941 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:54:57.179959 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:54:57.179976 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 01:54:57.179993 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:54:57.180011 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:54:57.180028 kernel: CPU features: detected: Spectre-v2
Dec 13 01:54:57.180045 kernel: CPU features: detected: Spectre-v3a
Dec 13 01:54:57.180062 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:54:57.180080 kernel: CPU features: detected: ARM erratum 1742098
Dec 13 01:54:57.180097 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 13 01:54:57.180118 kernel: alternatives: applying boot alternatives
Dec 13 01:54:57.180138 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:54:57.180157 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:54:57.180175 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:54:57.180192 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:54:57.180209 kernel: Fallback order for Node 0: 0
Dec 13 01:54:57.180227 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Dec 13 01:54:57.180275 kernel: Policy zone: Normal
Dec 13 01:54:57.180294 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:54:57.180312 kernel: software IO TLB: area num 2.
Dec 13 01:54:57.180329 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Dec 13 01:54:57.180354 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Dec 13 01:54:57.180372 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:54:57.180389 kernel: trace event string verifier disabled
Dec 13 01:54:57.180406 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:54:57.180425 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:54:57.180443 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:54:57.180460 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:54:57.180478 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:54:57.180496 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:54:57.180513 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:54:57.180530 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:54:57.180552 kernel: GICv3: 96 SPIs implemented
Dec 13 01:54:57.180570 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:54:57.180587 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:54:57.180604 kernel: GICv3: GICv3 features: 16 PPIs
Dec 13 01:54:57.180621 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 13 01:54:57.180638 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 13 01:54:57.180656 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 01:54:57.180673 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 01:54:57.180691 kernel: GICv3: using LPI property table @0x00000004000d0000
Dec 13 01:54:57.180708 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 13 01:54:57.180725 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Dec 13 01:54:57.180743 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:54:57.180764 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 13 01:54:57.180782 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 13 01:54:57.180800 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 13 01:54:57.180818 kernel: Console: colour dummy device 80x25
Dec 13 01:54:57.180838 kernel: printk: console [tty1] enabled
Dec 13 01:54:57.180856 kernel: ACPI: Core revision 20230628
Dec 13 01:54:57.180874 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 13 01:54:57.180894 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:54:57.180913 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:54:57.180932 kernel: landlock: Up and running.
Dec 13 01:54:57.180955 kernel: SELinux: Initializing.
Dec 13 01:54:57.180974 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:54:57.180992 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:54:57.181011 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:54:57.181029 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:54:57.181048 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:54:57.181067 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:54:57.181085 kernel: Platform MSI: ITS@0x10080000 domain created
Dec 13 01:54:57.181107 kernel: PCI/MSI: ITS@0x10080000 domain created
Dec 13 01:54:57.181126 kernel: Remapping and enabling EFI services.
Dec 13 01:54:57.181144 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:54:57.181162 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:54:57.181181 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 13 01:54:57.181199 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Dec 13 01:54:57.181217 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 13 01:54:57.181255 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:54:57.181303 kernel: SMP: Total of 2 processors activated.
Dec 13 01:54:57.181322 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:54:57.181348 kernel: CPU features: detected: 32-bit EL1 Support
Dec 13 01:54:57.181367 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:54:57.181398 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:54:57.181421 kernel: alternatives: applying system-wide alternatives
Dec 13 01:54:57.181440 kernel: devtmpfs: initialized
Dec 13 01:54:57.181458 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:54:57.181477 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:54:57.181495 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:54:57.181514 kernel: SMBIOS 3.0.0 present.
Dec 13 01:54:57.181538 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 13 01:54:57.181556 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:54:57.181575 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:54:57.181594 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:54:57.181613 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:54:57.181632 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:54:57.181650 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Dec 13 01:54:57.181673 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:54:57.181692 kernel: cpuidle: using governor menu
Dec 13 01:54:57.181710 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:54:57.181729 kernel: ASID allocator initialised with 65536 entries
Dec 13 01:54:57.181747 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:54:57.181780 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:54:57.181802 kernel: Modules: 17520 pages in range for non-PLT usage
Dec 13 01:54:57.181821 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:54:57.181840 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:54:57.181865 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:54:57.181884 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:54:57.181904 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:54:57.181922 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:54:57.181941 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:54:57.181959 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:54:57.181978 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:54:57.181996 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:54:57.182015 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:54:57.182037 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:54:57.182056 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:54:57.182074 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:54:57.182111 kernel: ACPI: Interpreter enabled
Dec 13 01:54:57.182132 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:54:57.182151 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 01:54:57.182169 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Dec 13 01:54:57.182478 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:54:57.182724 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 01:54:57.182930 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 01:54:57.183131 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Dec 13 01:54:57.184967 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Dec 13 01:54:57.185000 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 13 01:54:57.185020 kernel: acpiphp: Slot [1] registered
Dec 13 01:54:57.185039 kernel: acpiphp: Slot [2] registered
Dec 13 01:54:57.185057 kernel: acpiphp: Slot [3] registered
Dec 13 01:54:57.185087 kernel: acpiphp: Slot [4] registered
Dec 13 01:54:57.185106 kernel: acpiphp: Slot [5] registered
Dec 13 01:54:57.185125 kernel: acpiphp: Slot [6] registered
Dec 13 01:54:57.185143 kernel: acpiphp: Slot [7] registered
Dec 13 01:54:57.185162 kernel: acpiphp: Slot [8] registered
Dec 13 01:54:57.185180 kernel: acpiphp: Slot [9] registered
Dec 13 01:54:57.185198 kernel: acpiphp: Slot [10] registered
Dec 13 01:54:57.185216 kernel: acpiphp: Slot [11] registered
Dec 13 01:54:57.185254 kernel: acpiphp: Slot [12] registered
Dec 13 01:54:57.185277 kernel: acpiphp: Slot [13] registered
Dec 13 01:54:57.185303 kernel: acpiphp: Slot [14] registered
Dec 13 01:54:57.185321 kernel: acpiphp: Slot [15] registered
Dec 13 01:54:57.185340 kernel: acpiphp: Slot [16] registered
Dec 13 01:54:57.185358 kernel: acpiphp: Slot [17] registered
Dec 13 01:54:57.185376 kernel: acpiphp: Slot [18] registered
Dec 13 01:54:57.185395 kernel: acpiphp: Slot [19] registered
Dec 13 01:54:57.185413 kernel: acpiphp: Slot [20] registered
Dec 13 01:54:57.185432 kernel: acpiphp: Slot [21] registered
Dec 13 01:54:57.185450 kernel: acpiphp: Slot [22] registered
Dec 13 01:54:57.185472 kernel: acpiphp: Slot [23] registered
Dec 13 01:54:57.185491 kernel: acpiphp: Slot [24] registered
Dec 13 01:54:57.185509 kernel: acpiphp: Slot [25] registered
Dec 13 01:54:57.185527 kernel: acpiphp: Slot [26] registered
Dec 13 01:54:57.185545 kernel: acpiphp: Slot [27] registered
Dec 13 01:54:57.185564 kernel: acpiphp: Slot [28] registered
Dec 13 01:54:57.185582 kernel: acpiphp: Slot [29] registered
Dec 13 01:54:57.185600 kernel: acpiphp: Slot [30] registered
Dec 13 01:54:57.185618 kernel: acpiphp: Slot [31] registered
Dec 13 01:54:57.185636 kernel: PCI host bridge to bus 0000:00
Dec 13 01:54:57.185862 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 13 01:54:57.186073 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 01:54:57.186351 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 13 01:54:57.186616 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Dec 13 01:54:57.186878 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Dec 13 01:54:57.187159 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Dec 13 01:54:57.187548 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Dec 13 01:54:57.187775 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 01:54:57.187984 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Dec 13 01:54:57.188190 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:54:57.188935 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 01:54:57.189150 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Dec 13 01:54:57.189403 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Dec 13 01:54:57.189612 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Dec 13 01:54:57.189810 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:54:57.190052 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Dec 13 01:54:57.191116 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Dec 13 01:54:57.191391 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Dec 13 01:54:57.191614 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Dec 13 01:54:57.191845 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Dec 13 01:54:57.192067 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 13 01:54:57.192296 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 01:54:57.192505 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 13 01:54:57.192532 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 01:54:57.192552 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 01:54:57.192571 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 01:54:57.192591 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 01:54:57.192610 kernel: iommu: Default domain type: Translated
Dec 13 01:54:57.192635 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:54:57.192654 kernel: efivars: Registered efivars operations
Dec 13 01:54:57.192672 kernel: vgaarb: loaded
Dec 13 01:54:57.192691 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:54:57.192709 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:54:57.192728 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:54:57.192747 kernel: pnp: PnP ACPI init
Dec 13 01:54:57.192979 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 13 01:54:57.193013 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 01:54:57.193033 kernel: NET: Registered PF_INET protocol family
Dec 13 01:54:57.193053 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:54:57.193073 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:54:57.193091 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:54:57.193110 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:54:57.193129 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:54:57.193148 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:54:57.193168 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:54:57.193193 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:54:57.193212 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:54:57.198659 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:54:57.198709 kernel: kvm [1]: HYP mode not available
Dec 13 01:54:57.198731 kernel: Initialise system trusted keyrings
Dec 13 01:54:57.198751 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:54:57.198771 kernel: Key type asymmetric registered
Dec 13 01:54:57.198789 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:54:57.198808 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:54:57.198837 kernel: io scheduler mq-deadline registered
Dec 13 01:54:57.198856 kernel: io scheduler kyber registered
Dec 13 01:54:57.198874 kernel: io scheduler bfq registered
Dec 13 01:54:57.199140 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 13 01:54:57.199170 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 01:54:57.199191 kernel: ACPI: button: Power Button [PWRB]
Dec 13 01:54:57.199211 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 13 01:54:57.199258 kernel: ACPI: button: Sleep Button [SLPB]
Dec 13 01:54:57.199290 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:54:57.199311 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 01:54:57.199549 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 13 01:54:57.199578 kernel: printk: console [ttyS0] disabled
Dec 13 01:54:57.199598 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 13 01:54:57.199617 kernel: printk: console [ttyS0] enabled
Dec 13 01:54:57.199636 kernel: printk: bootconsole [uart0] disabled
Dec 13 01:54:57.199654 kernel: thunder_xcv, ver 1.0
Dec 13 01:54:57.199672 kernel: thunder_bgx, ver 1.0
Dec 13 01:54:57.199691 kernel: nicpf, ver 1.0
Dec 13 01:54:57.199715 kernel: nicvf, ver 1.0
Dec 13 01:54:57.199929 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:54:57.200122 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:54:56 UTC (1734054896)
Dec 13 01:54:57.200148 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:54:57.200168 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Dec 13 01:54:57.200187 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:54:57.200205 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:54:57.200230 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:54:57.202337 kernel: Segment Routing with IPv6
Dec 13 01:54:57.202361 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:54:57.202381 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:54:57.202401 kernel: Key type dns_resolver registered
Dec 13 01:54:57.202423 kernel: registered taskstats version 1
Dec 13 01:54:57.202444 kernel: Loading compiled-in X.509 certificates
Dec 13 01:54:57.202464 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:54:57.202482 kernel: Key type .fscrypt registered
Dec 13 01:54:57.202501 kernel: Key type fscrypt-provisioning registered
Dec 13 01:54:57.202532 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:54:57.202551 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:54:57.202571 kernel: ima: No architecture policies found
Dec 13 01:54:57.202590 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:54:57.202608 kernel: clk: Disabling unused clocks
Dec 13 01:54:57.202627 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:54:57.202645 kernel: Run /init as init process
Dec 13 01:54:57.202666 kernel: with arguments:
Dec 13 01:54:57.202685 kernel: /init
Dec 13 01:54:57.202709 kernel: with environment:
Dec 13 01:54:57.202728 kernel: HOME=/
Dec 13 01:54:57.202748 kernel: TERM=linux
Dec 13 01:54:57.202767 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:54:57.202791 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:54:57.202817 systemd[1]: Detected virtualization amazon.
Dec 13 01:54:57.202839 systemd[1]: Detected architecture arm64.
Dec 13 01:54:57.202863 systemd[1]: Running in initrd.
Dec 13 01:54:57.202884 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:54:57.202904 systemd[1]: Hostname set to .
Dec 13 01:54:57.202927 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:54:57.202947 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:54:57.202969 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:54:57.202990 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:54:57.203014 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:54:57.203041 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:54:57.203062 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:54:57.203084 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:54:57.203107 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:54:57.203128 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:54:57.203149 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:54:57.203170 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:54:57.203194 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:54:57.203215 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:54:57.204216 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:54:57.204414 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:54:57.204439 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:54:57.204459 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:54:57.204480 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:54:57.204501 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:54:57.204521 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:54:57.204551 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:54:57.204572 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:54:57.204592 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:54:57.204612 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:54:57.204632 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:54:57.204653 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:54:57.204673 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:54:57.204693 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:54:57.204717 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:54:57.204738 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:54:57.204759 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:54:57.204779 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:54:57.204852 systemd-journald[251]: Collecting audit messages is disabled.
Dec 13 01:54:57.204901 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:54:57.204924 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:54:57.204944 systemd-journald[251]: Journal started
Dec 13 01:54:57.204985 systemd-journald[251]: Runtime Journal (/run/log/journal/ec25afc3b3f8b43e01de19b81dbef613) is 8.0M, max 75.3M, 67.3M free.
Dec 13 01:54:57.187041 systemd-modules-load[252]: Inserted module 'overlay'
Dec 13 01:54:57.215930 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:54:57.230560 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:54:57.231902 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:54:57.236298 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:54:57.253585 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:54:57.260001 systemd-modules-load[252]: Inserted module 'br_netfilter'
Dec 13 01:54:57.262392 kernel: Bridge firewalling registered
Dec 13 01:54:57.262472 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:57.272347 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:54:57.284507 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:54:57.292784 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:54:57.298144 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:54:57.311931 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:54:57.332660 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:54:57.343622 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:54:57.348867 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:54:57.360931 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:54:57.393319 dracut-cmdline[287]: dracut-dracut-053
Dec 13 01:54:57.403756 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:54:57.426079 systemd-resolved[286]: Positive Trust Anchors:
Dec 13 01:54:57.426173 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:54:57.426266 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:54:57.579278 kernel: SCSI subsystem initialized
Dec 13 01:54:57.586273 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:54:57.599278 kernel: iscsi: registered transport (tcp)
Dec 13 01:54:57.621342 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:54:57.621420 kernel: QLogic iSCSI HBA Driver
Dec 13 01:54:57.678740 kernel: random: crng init done
Dec 13 01:54:57.678549 systemd-resolved[286]: Defaulting to hostname 'linux'.
Dec 13 01:54:57.680778 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:54:57.683525 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:54:57.709870 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:54:57.720560 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:54:57.757731 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:54:57.757809 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:54:57.759514 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:54:57.825301 kernel: raid6: neonx8 gen() 6711 MB/s
Dec 13 01:54:57.842271 kernel: raid6: neonx4 gen() 6525 MB/s
Dec 13 01:54:57.859272 kernel: raid6: neonx2 gen() 5450 MB/s
Dec 13 01:54:57.876271 kernel: raid6: neonx1 gen() 3949 MB/s
Dec 13 01:54:57.893290 kernel: raid6: int64x8 gen() 3788 MB/s
Dec 13 01:54:57.910269 kernel: raid6: int64x4 gen() 3704 MB/s
Dec 13 01:54:57.927273 kernel: raid6: int64x2 gen() 3599 MB/s
Dec 13 01:54:57.945182 kernel: raid6: int64x1 gen() 2772 MB/s
Dec 13 01:54:57.945250 kernel: raid6: using algorithm neonx8 gen() 6711 MB/s
Dec 13 01:54:57.963012 kernel: raid6: .... xor() 4878 MB/s, rmw enabled
Dec 13 01:54:57.963063 kernel: raid6: using neon recovery algorithm
Dec 13 01:54:57.971521 kernel: xor: measuring software checksum speed
Dec 13 01:54:57.971586 kernel: 8regs : 10978 MB/sec
Dec 13 01:54:57.972596 kernel: 32regs : 11945 MB/sec
Dec 13 01:54:57.973743 kernel: arm64_neon : 9585 MB/sec
Dec 13 01:54:57.973789 kernel: xor: using function: 32regs (11945 MB/sec)
Dec 13 01:54:58.058287 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:54:58.077822 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:54:58.087580 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:54:58.130203 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Dec 13 01:54:58.138852 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:54:58.153535 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:54:58.193743 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Dec 13 01:54:58.249873 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:54:58.259585 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:54:58.393001 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:54:58.413810 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:54:58.460761 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:54:58.470622 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:54:58.482426 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:54:58.492801 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:54:58.508602 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:54:58.545562 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:54:58.585392 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 01:54:58.585458 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Dec 13 01:54:58.612524 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 01:54:58.612791 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 01:54:58.615278 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:79:50:5a:45:99
Dec 13 01:54:58.617869 (udev-worker)[532]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:54:58.626883 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:54:58.627155 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:54:58.637865 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:54:58.642596 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:54:58.648542 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:58.656568 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 13 01:54:58.656629 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 01:54:58.657381 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:54:58.668261 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 01:54:58.670822 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:54:58.679271 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:54:58.679335 kernel: GPT:9289727 != 16777215
Dec 13 01:54:58.679362 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:54:58.683816 kernel: GPT:9289727 != 16777215
Dec 13 01:54:58.683898 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:54:58.683924 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:58.698318 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:54:58.709664 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:54:58.757492 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:54:58.806623 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by (udev-worker) (531)
Dec 13 01:54:58.827277 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/nvme0n1p3 scanned by (udev-worker) (524)
Dec 13 01:54:58.858721 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 13 01:54:58.928542 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 13 01:54:58.945475 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 13 01:54:58.947889 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 13 01:54:58.965942 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:54:58.985606 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:54:58.996572 disk-uuid[659]: Primary Header is updated.
Dec 13 01:54:58.996572 disk-uuid[659]: Secondary Entries is updated.
Dec 13 01:54:58.996572 disk-uuid[659]: Secondary Header is updated.
Dec 13 01:54:59.006275 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:59.012291 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:54:59.022281 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:55:00.026293 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:55:00.026518 disk-uuid[660]: The operation has completed successfully.
Dec 13 01:55:00.214126 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:55:00.216276 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:55:00.264550 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:55:00.275614 sh[1003]: Success
Dec 13 01:55:00.301352 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:55:00.411449 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:55:00.419663 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:55:00.438778 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:55:00.468723 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:55:00.468792 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:55:00.468831 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:55:00.468858 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:55:00.470009 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:55:00.535280 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:55:00.553465 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:55:00.557478 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:55:00.569509 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:55:00.576555 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:55:00.603875 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:55:00.603953 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:55:00.605432 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:55:00.622706 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:55:00.642272 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:55:00.644641 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:55:00.655638 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:55:00.668632 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:55:00.772348 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:55:00.784609 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:55:00.850314 systemd-networkd[1205]: lo: Link UP
Dec 13 01:55:00.850337 systemd-networkd[1205]: lo: Gained carrier
Dec 13 01:55:00.853583 systemd-networkd[1205]: Enumeration completed
Dec 13 01:55:00.854448 systemd-networkd[1205]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:55:00.854455 systemd-networkd[1205]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:55:00.855977 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:55:00.859359 systemd[1]: Reached target network.target - Network.
Dec 13 01:55:00.874714 systemd-networkd[1205]: eth0: Link UP
Dec 13 01:55:00.874727 systemd-networkd[1205]: eth0: Gained carrier
Dec 13 01:55:00.874746 systemd-networkd[1205]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:55:00.897334 systemd-networkd[1205]: eth0: DHCPv4 address 172.31.24.36/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:55:01.032410 ignition[1109]: Ignition 2.19.0
Dec 13 01:55:01.032431 ignition[1109]: Stage: fetch-offline
Dec 13 01:55:01.036439 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:55:01.032966 ignition[1109]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:55:01.032989 ignition[1109]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:55:01.033500 ignition[1109]: Ignition finished successfully
Dec 13 01:55:01.059630 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:55:01.084723 ignition[1217]: Ignition 2.19.0
Dec 13 01:55:01.084751 ignition[1217]: Stage: fetch
Dec 13 01:55:01.086429 ignition[1217]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:55:01.086455 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:55:01.087595 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:55:01.100216 ignition[1217]: PUT result: OK
Dec 13 01:55:01.103184 ignition[1217]: parsed url from cmdline: ""
Dec 13 01:55:01.103352 ignition[1217]: no config URL provided
Dec 13 01:55:01.103372 ignition[1217]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:55:01.103399 ignition[1217]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:55:01.103431 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:55:01.104962 ignition[1217]: PUT result: OK
Dec 13 01:55:01.105043 ignition[1217]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 01:55:01.113839 ignition[1217]: GET result: OK
Dec 13 01:55:01.113966 ignition[1217]: parsing config with SHA512: 0dfd176b518546ee07084756591c652edec41bb81b84dab316cbe4cae3efaa4061b5da49ca021d0ab2a54ffbacb398c47d688b2c842a6d22dabc06e80c61e1d9
Dec 13 01:55:01.121911 unknown[1217]: fetched base config from "system"
Dec 13 01:55:01.122165 unknown[1217]: fetched base config from "system"
Dec 13 01:55:01.122814 ignition[1217]: fetch: fetch complete
Dec 13 01:55:01.122179 unknown[1217]: fetched user config from "aws"
Dec 13 01:55:01.122825 ignition[1217]: fetch: fetch passed
Dec 13 01:55:01.122936 ignition[1217]: Ignition finished successfully
Dec 13 01:55:01.134541 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:55:01.149475 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:55:01.171745 ignition[1223]: Ignition 2.19.0
Dec 13 01:55:01.171778 ignition[1223]: Stage: kargs
Dec 13 01:55:01.172727 ignition[1223]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:55:01.172752 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:55:01.172901 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:55:01.175724 ignition[1223]: PUT result: OK
Dec 13 01:55:01.185680 ignition[1223]: kargs: kargs passed
Dec 13 01:55:01.185800 ignition[1223]: Ignition finished successfully
Dec 13 01:55:01.189599 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:55:01.199567 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:55:01.236153 ignition[1229]: Ignition 2.19.0
Dec 13 01:55:01.236176 ignition[1229]: Stage: disks
Dec 13 01:55:01.236819 ignition[1229]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:55:01.236843 ignition[1229]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:55:01.236999 ignition[1229]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:55:01.240523 ignition[1229]: PUT result: OK
Dec 13 01:55:01.250255 ignition[1229]: disks: disks passed
Dec 13 01:55:01.250366 ignition[1229]: Ignition finished successfully
Dec 13 01:55:01.256303 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:55:01.258867 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:55:01.262323 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:55:01.264583 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:55:01.266426 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:55:01.268340 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:55:01.285628 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:55:01.320375 systemd-fsck[1238]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:55:01.325272 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:55:01.336514 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:55:01.430259 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:55:01.431818 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:55:01.436227 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:55:01.453447 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:55:01.467369 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:55:01.468198 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:55:01.468383 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:55:01.468432 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:55:01.486990 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:55:01.495272 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1257)
Dec 13 01:55:01.500327 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:55:01.500400 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:55:01.500427 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:55:01.500718 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:55:01.517282 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:55:01.520777 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:55:01.776152 initrd-setup-root[1282]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:55:01.784769 initrd-setup-root[1289]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:55:01.793184 initrd-setup-root[1296]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:55:01.812649 initrd-setup-root[1303]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:55:02.107764 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:55:02.131610 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:55:02.138538 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:55:02.153479 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:55:02.158269 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:55:02.199227 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:55:02.203679 ignition[1371]: INFO : Ignition 2.19.0
Dec 13 01:55:02.203679 ignition[1371]: INFO : Stage: mount
Dec 13 01:55:02.203679 ignition[1371]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:55:02.203679 ignition[1371]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:55:02.203679 ignition[1371]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:55:02.217745 ignition[1371]: INFO : PUT result: OK
Dec 13 01:55:02.217745 ignition[1371]: INFO : mount: mount passed
Dec 13 01:55:02.217745 ignition[1371]: INFO : Ignition finished successfully
Dec 13 01:55:02.221534 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:55:02.241690 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:55:02.257656 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:55:02.287272 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1384)
Dec 13 01:55:02.290639 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:55:02.290678 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:55:02.290705 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:55:02.297274 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:55:02.300923 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:55:02.337287 ignition[1401]: INFO : Ignition 2.19.0
Dec 13 01:55:02.337287 ignition[1401]: INFO : Stage: files
Dec 13 01:55:02.341157 ignition[1401]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:55:02.341157 ignition[1401]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:55:02.341157 ignition[1401]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:55:02.347634 ignition[1401]: INFO : PUT result: OK
Dec 13 01:55:02.351881 ignition[1401]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:55:02.377992 ignition[1401]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:55:02.377992 ignition[1401]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:55:02.384118 ignition[1401]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:55:02.387001 ignition[1401]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:55:02.389921 unknown[1401]: wrote ssh authorized keys file for user: core
Dec 13 01:55:02.392106 ignition[1401]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:55:02.397166 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:55:02.397166 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 01:55:02.524055 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:55:02.662281 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:55:02.662281 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:55:02.668951 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:55:02.668951 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:55:02.668951 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:55:02.668951 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:55:02.668951 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:55:02.668951 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:55:02.668951 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:55:02.668951 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:55:02.668951 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:55:02.668951 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:55:02.668951 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:55:02.668951 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:55:02.668951 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 01:55:02.791420 systemd-networkd[1205]: eth0: Gained IPv6LL
Dec 13 01:55:03.132098 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:55:03.486900 ignition[1401]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 01:55:03.486900 ignition[1401]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 01:55:03.493734 ignition[1401]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:55:03.493734 ignition[1401]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:55:03.493734 ignition[1401]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:55:03.493734 ignition[1401]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:55:03.493734 ignition[1401]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:55:03.493734 ignition[1401]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:55:03.493734 ignition[1401]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:55:03.493734 ignition[1401]: INFO : files: files passed
Dec 13 01:55:03.493734 ignition[1401]: INFO : Ignition finished successfully
Dec 13 01:55:03.521297 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:55:03.532550 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:55:03.540538 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:55:03.546889 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:55:03.547095 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:55:03.575892 initrd-setup-root-after-ignition[1430]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:55:03.575892 initrd-setup-root-after-ignition[1430]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:55:03.582715 initrd-setup-root-after-ignition[1434]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:55:03.588968 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:55:03.593100 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:55:03.612641 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:55:03.662187 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:55:03.662656 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:55:03.669104 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:55:03.672995 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:55:03.675300 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:55:03.685528 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:55:03.717960 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:55:03.725794 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:55:03.752351 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:55:03.757041 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:55:03.759488 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:55:03.761384 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:55:03.761626 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:55:03.764640 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:55:03.774760 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:55:03.776573 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:55:03.778734 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:55:03.781045 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:55:03.783424 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:55:03.786980 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:55:03.789642 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:55:03.794175 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:55:03.807601 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:55:03.809415 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:55:03.809759 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:55:03.815312 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:55:03.821848 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:55:03.824344 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:55:03.827615 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:55:03.830796 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:55:03.831103 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:55:03.833615 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:55:03.833931 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:55:03.836834 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:55:03.837045 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:55:03.867492 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:55:03.869336 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:55:03.869778 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:55:03.888827 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:55:03.890638 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:55:03.891771 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:55:03.902782 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:55:03.903039 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:55:03.922061 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:55:03.925574 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:55:03.930392 ignition[1454]: INFO : Ignition 2.19.0
Dec 13 01:55:03.930392 ignition[1454]: INFO : Stage: umount
Dec 13 01:55:03.932519 ignition[1454]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:55:03.932519 ignition[1454]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:55:03.932519 ignition[1454]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:55:03.943304 ignition[1454]: INFO : PUT result: OK
Dec 13 01:55:03.948138 ignition[1454]: INFO : umount: umount passed
Dec 13 01:55:03.948138 ignition[1454]: INFO : Ignition finished successfully
Dec 13 01:55:03.953805 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:55:03.955679 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:55:03.958375 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:55:03.958549 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:55:03.961158 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:55:03.961306 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:55:03.961963 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:55:03.962528 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:55:03.962767 systemd[1]: Stopped target network.target - Network.
Dec 13 01:55:03.962995 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:55:03.963076 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:55:03.963654 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:55:03.963896 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:55:03.979900 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:55:03.982342 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:55:03.984446 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:55:03.986507 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:55:03.986628 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:55:03.991806 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:55:03.991890 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:55:03.994401 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:55:03.994497 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:55:03.996626 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:55:03.996718 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:55:03.999104 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:55:04.001393 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:55:04.011421 systemd-networkd[1205]: eth0: DHCPv6 lease lost
Dec 13 01:55:04.020023 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:55:04.021226 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:55:04.022590 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:55:04.026230 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:55:04.026553 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:55:04.030801 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:55:04.032580 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:55:04.062957 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:55:04.063163 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:55:04.073118 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:55:04.073226 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:55:04.090365 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:55:04.092885 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:55:04.093158 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:55:04.097158 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:55:04.097818 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:55:04.111109 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:55:04.111211 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:55:04.111395 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:55:04.111475 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:55:04.117614 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:55:04.144735 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:55:04.146838 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:55:04.153407 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:55:04.153929 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:55:04.160772 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:55:04.160865 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:55:04.162993 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:55:04.163063 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:55:04.165385 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:55:04.165476 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:55:04.167929 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:55:04.168010 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:55:04.170736 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:55:04.170826 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:55:04.197560 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:55:04.199696 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:55:04.199813 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:55:04.202330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:55:04.202429 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:55:04.235057 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:55:04.235485 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:55:04.244023 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:55:04.256528 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:55:04.280591 systemd[1]: Switching root.
Dec 13 01:55:04.331588 systemd-journald[251]: Journal stopped
Dec 13 01:55:06.441841 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:55:06.441969 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:55:06.442031 kernel: SELinux: policy capability open_perms=1
Dec 13 01:55:06.442069 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:55:06.442106 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:55:06.442138 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:55:06.442168 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:55:06.444080 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:55:06.444115 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:55:06.444148 kernel: audit: type=1403 audit(1734054904.763:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:55:06.444193 systemd[1]: Successfully loaded SELinux policy in 70.528ms.
Dec 13 01:55:06.447002 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.098ms.
Dec 13 01:55:06.447079 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:55:06.447115 systemd[1]: Detected virtualization amazon.
Dec 13 01:55:06.447148 systemd[1]: Detected architecture arm64.
Dec 13 01:55:06.447180 systemd[1]: Detected first boot.
Dec 13 01:55:06.447224 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:55:06.451412 zram_generator::config[1496]: No configuration found.
Dec 13 01:55:06.451464 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:55:06.451499 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:55:06.451533 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:55:06.451575 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:55:06.451609 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:55:06.451643 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:55:06.451675 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:55:06.451707 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:55:06.451744 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:55:06.451775 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:55:06.451808 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:55:06.451843 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:55:06.451876 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:55:06.451907 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:55:06.451936 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:55:06.451968 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:55:06.451998 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:55:06.452030 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:55:06.452061 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:55:06.452091 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:55:06.452126 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:55:06.452158 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:55:06.452188 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:55:06.452218 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:55:06.452270 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:55:06.452305 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:55:06.452338 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:55:06.452370 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:55:06.468205 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:55:06.468280 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:55:06.468316 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:55:06.468348 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:55:06.468380 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:55:06.468412 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:55:06.468443 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:55:06.468473 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:55:06.468505 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:55:06.468542 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:55:06.468574 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:55:06.468607 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:55:06.468638 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:55:06.468668 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:55:06.468698 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:55:06.468729 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:55:06.468761 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:55:06.468795 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:55:06.468828 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:55:06.468859 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:55:06.468889 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:55:06.468919 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:55:06.468950 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:55:06.468983 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:55:06.469017 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:55:06.469048 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:55:06.469081 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:55:06.469111 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:55:06.469140 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:55:06.469172 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:55:06.469200 kernel: loop: module loaded
Dec 13 01:55:06.469228 kernel: ACPI: bus type drm_connector registered
Dec 13 01:55:06.469279 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:55:06.469321 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:55:06.469352 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:55:06.469387 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:55:06.469418 systemd[1]: Stopped verity-setup.service.
Dec 13 01:55:06.469450 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:55:06.469482 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:55:06.469555 systemd-journald[1585]: Collecting audit messages is disabled.
Dec 13 01:55:06.469607 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:55:06.469640 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:55:06.469677 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:55:06.469708 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:55:06.469738 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:55:06.469768 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:55:06.469800 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:55:06.469834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:55:06.469872 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:55:06.469902 systemd-journald[1585]: Journal started
Dec 13 01:55:06.469949 systemd-journald[1585]: Runtime Journal (/run/log/journal/ec25afc3b3f8b43e01de19b81dbef613) is 8.0M, max 75.3M, 67.3M free.
Dec 13 01:55:05.899826 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:55:06.485415 kernel: fuse: init (API version 7.39)
Dec 13 01:55:06.485470 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:55:05.951948 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 13 01:55:05.952751 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:55:06.479518 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:55:06.479912 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:55:06.483065 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:55:06.486361 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:55:06.490513 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:55:06.490850 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:55:06.493897 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:55:06.494204 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:55:06.496937 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:55:06.499851 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:55:06.505434 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:55:06.511076 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:55:06.537189 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:55:06.547603 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:55:06.560108 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:55:06.562407 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:55:06.562466 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:55:06.567212 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:55:06.584883 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:55:06.590731 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:55:06.592900 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:55:06.604685 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:55:06.610537 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:55:06.612840 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:55:06.618634 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:55:06.619969 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:55:06.625582 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:55:06.633584 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:55:06.640608 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:55:06.645478 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:55:06.648053 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:55:06.653305 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:55:06.730707 systemd-journald[1585]: Time spent on flushing to /var/log/journal/ec25afc3b3f8b43e01de19b81dbef613 is 116.587ms for 907 entries.
Dec 13 01:55:06.730707 systemd-journald[1585]: System Journal (/var/log/journal/ec25afc3b3f8b43e01de19b81dbef613) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:55:06.862992 systemd-journald[1585]: Received client request to flush runtime journal.
Dec 13 01:55:06.863099 kernel: loop0: detected capacity change from 0 to 194512
Dec 13 01:55:06.739990 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:55:06.742510 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:55:06.757507 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:55:06.797208 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:55:06.812910 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:55:06.816852 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:55:06.869159 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:55:06.877158 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:55:06.882542 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:55:06.895687 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:55:06.901830 udevadm[1636]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:55:06.910164 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:55:06.929016 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:55:06.942289 kernel: loop1: detected capacity change from 0 to 52536
Dec 13 01:55:06.990937 systemd-tmpfiles[1644]: ACLs are not supported, ignoring.
Dec 13 01:55:06.992317 systemd-tmpfiles[1644]: ACLs are not supported, ignoring.
Dec 13 01:55:07.009036 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:55:07.074275 kernel: loop2: detected capacity change from 0 to 114328
Dec 13 01:55:07.194273 kernel: loop3: detected capacity change from 0 to 114432
Dec 13 01:55:07.296291 kernel: loop4: detected capacity change from 0 to 194512
Dec 13 01:55:07.327300 kernel: loop5: detected capacity change from 0 to 52536
Dec 13 01:55:07.342278 kernel: loop6: detected capacity change from 0 to 114328
Dec 13 01:55:07.358286 kernel: loop7: detected capacity change from 0 to 114432
Dec 13 01:55:07.368783 (sd-merge)[1650]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Dec 13 01:55:07.369780 (sd-merge)[1650]: Merged extensions into '/usr'.
Dec 13 01:55:07.377572 systemd[1]: Reloading requested from client PID 1625 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:55:07.377608 systemd[1]: Reloading...
Dec 13 01:55:07.587295 zram_generator::config[1679]: No configuration found.
Dec 13 01:55:07.871209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:55:08.002186 systemd[1]: Reloading finished in 623 ms.
Dec 13 01:55:08.043855 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:55:08.047912 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:55:08.064588 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:55:08.075361 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:55:08.083611 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:55:08.101628 ldconfig[1620]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:55:08.106206 systemd[1]: Reloading requested from client PID 1728 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:55:08.106291 systemd[1]: Reloading...
Dec 13 01:55:08.150603 systemd-tmpfiles[1729]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:55:08.153749 systemd-tmpfiles[1729]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:55:08.155872 systemd-tmpfiles[1729]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:55:08.156492 systemd-tmpfiles[1729]: ACLs are not supported, ignoring.
Dec 13 01:55:08.156654 systemd-tmpfiles[1729]: ACLs are not supported, ignoring.
Dec 13 01:55:08.169541 systemd-tmpfiles[1729]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:55:08.169569 systemd-tmpfiles[1729]: Skipping /boot
Dec 13 01:55:08.202673 systemd-udevd[1730]: Using default interface naming scheme 'v255'.
Dec 13 01:55:08.207421 systemd-tmpfiles[1729]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:55:08.207446 systemd-tmpfiles[1729]: Skipping /boot
Dec 13 01:55:08.312278 zram_generator::config[1760]: No configuration found.
Dec 13 01:55:08.421769 (udev-worker)[1784]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:55:08.438275 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1788)
Dec 13 01:55:08.520952 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1788)
Dec 13 01:55:08.700833 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:55:08.769272 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1810)
Dec 13 01:55:08.850676 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:55:08.851382 systemd[1]: Reloading finished in 744 ms.
Dec 13 01:55:08.890932 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:55:08.894224 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:55:08.896979 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:55:08.976797 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:55:08.993752 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:55:08.998668 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:55:09.003741 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:55:09.008603 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:55:09.014545 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:55:09.016846 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:55:09.021802 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:55:09.036862 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:55:09.053066 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:55:09.075770 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:55:09.082753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:55:09.090118 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:55:09.141892 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:55:09.142588 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:55:09.151486 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:55:09.151838 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:55:09.163572 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:55:09.164190 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:55:09.193422 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:55:09.197384 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:55:09.200528 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:55:09.212963 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:55:09.222598 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:55:09.234720 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:55:09.246577 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:55:09.259439 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:55:09.261938 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:55:09.264657 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:55:09.267769 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:55:09.281420 augenrules[1961]: No rules
Dec 13 01:55:09.283633 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:55:09.297893 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:55:09.301627 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:55:09.319659 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:55:09.320164 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:55:09.323203 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:55:09.330270 lvm[1956]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:55:09.331480 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:55:09.338570 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:55:09.361720 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:55:09.366060 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:55:09.368107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:55:09.370492 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:55:09.377791 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:55:09.381382 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:55:09.382095 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:55:09.406509 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:55:09.416404 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:55:09.421065 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:55:09.426706 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:55:09.438686 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:55:09.456469 lvm[1982]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:55:09.462812 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:55:09.498394 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:55:09.576270 systemd-networkd[1934]: lo: Link UP
Dec 13 01:55:09.576837 systemd-networkd[1934]: lo: Gained carrier
Dec 13 01:55:09.579840 systemd-networkd[1934]: Enumeration completed
Dec 13 01:55:09.580253 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:55:09.582534 systemd-networkd[1934]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:55:09.582635 systemd-networkd[1934]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:55:09.584867 systemd-networkd[1934]: eth0: Link UP Dec 13 01:55:09.585452 systemd-networkd[1934]: eth0: Gained carrier Dec 13 01:55:09.585608 systemd-networkd[1934]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:55:09.590569 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:55:09.597410 systemd-networkd[1934]: eth0: DHCPv4 address 172.31.24.36/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:55:09.599722 systemd-resolved[1938]: Positive Trust Anchors: Dec 13 01:55:09.599761 systemd-resolved[1938]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:55:09.599826 systemd-resolved[1938]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:55:09.609466 systemd-resolved[1938]: Defaulting to hostname 'linux'. Dec 13 01:55:09.614814 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:55:09.617422 systemd[1]: Reached target network.target - Network. Dec 13 01:55:09.621313 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:55:09.626559 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:55:09.629035 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Dec 13 01:55:09.631491 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:55:09.634384 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:55:09.636572 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:55:09.638895 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:55:09.641358 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:55:09.641411 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:55:09.643083 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:55:09.647109 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:55:09.651823 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:55:09.659685 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:55:09.663470 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:55:09.665753 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:55:09.667704 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:55:09.669499 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:55:09.669551 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:55:09.676541 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:55:09.686903 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:55:09.693793 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:55:09.700204 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Dec 13 01:55:09.718947 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:55:09.721009 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:55:09.724071 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:55:09.743913 jq[1997]: false Dec 13 01:55:09.742524 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:55:09.749472 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:55:09.755582 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:55:09.761801 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:55:09.793755 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:55:09.803802 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:55:09.806851 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:55:09.809084 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:55:09.810998 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:55:09.816479 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:55:09.824133 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:55:09.824608 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Dec 13 01:55:09.829359 extend-filesystems[1998]: Found loop4 Dec 13 01:55:09.829359 extend-filesystems[1998]: Found loop5 Dec 13 01:55:09.829359 extend-filesystems[1998]: Found loop6 Dec 13 01:55:09.829359 extend-filesystems[1998]: Found loop7 Dec 13 01:55:09.829359 extend-filesystems[1998]: Found nvme0n1 Dec 13 01:55:09.829359 extend-filesystems[1998]: Found nvme0n1p1 Dec 13 01:55:09.829359 extend-filesystems[1998]: Found nvme0n1p2 Dec 13 01:55:09.829359 extend-filesystems[1998]: Found nvme0n1p3 Dec 13 01:55:09.829359 extend-filesystems[1998]: Found usr Dec 13 01:55:09.829359 extend-filesystems[1998]: Found nvme0n1p4 Dec 13 01:55:09.829359 extend-filesystems[1998]: Found nvme0n1p6 Dec 13 01:55:09.829359 extend-filesystems[1998]: Found nvme0n1p7 Dec 13 01:55:09.829359 extend-filesystems[1998]: Found nvme0n1p9 Dec 13 01:55:09.829359 extend-filesystems[1998]: Checking size of /dev/nvme0n1p9 Dec 13 01:55:09.865617 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:55:09.866744 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:55:09.892957 jq[2016]: true Dec 13 01:55:09.887045 dbus-daemon[1996]: [system] SELinux support is enabled Dec 13 01:55:09.901412 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:55:09.911109 dbus-daemon[1996]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1934 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:55:09.919685 extend-filesystems[1998]: Resized partition /dev/nvme0n1p9 Dec 13 01:55:09.934372 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:55:09.937788 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 13 01:55:09.953095 extend-filesystems[2032]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:55:09.993266 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Dec 13 01:55:09.988737 dbus-daemon[1996]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 01:55:09.978123 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:55:09.980354 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:55:09.980411 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:55:09.983454 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:55:09.983495 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:55:09.997590 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 13 01:55:10.013744 (ntainerd)[2035]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:55:10.029796 ntpd[2000]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting
Dec 13 01:55:10.031792 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting
Dec 13 01:55:10.031792 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 13 01:55:10.031792 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: ----------------------------------------------------
Dec 13 01:55:10.031792 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: ntp-4 is maintained by Network Time Foundation,
Dec 13 01:55:10.031792 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 13 01:55:10.031792 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: corporation. Support and training for ntp-4 are
Dec 13 01:55:10.031792 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: available at https://www.nwtime.org/support
Dec 13 01:55:10.031792 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: ----------------------------------------------------
Dec 13 01:55:10.029860 ntpd[2000]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 13 01:55:10.047297 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: proto: precision = 0.096 usec (-23)
Dec 13 01:55:10.047297 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: basedate set to 2024-11-30
Dec 13 01:55:10.047297 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: gps base set to 2024-12-01 (week 2343)
Dec 13 01:55:10.047297 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: Listen and drop on 0 v6wildcard [::]:123
Dec 13 01:55:10.047297 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 13 01:55:10.047297 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: Listen normally on 2 lo 127.0.0.1:123
Dec 13 01:55:10.047297 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: Listen normally on 3 eth0 172.31.24.36:123
Dec 13 01:55:10.047297 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: Listen normally on 4 lo [::1]:123
Dec 13 01:55:10.047297 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: bind(21) AF_INET6 fe80::479:50ff:fe5a:4599%2#123 flags 0x11 failed: Cannot assign requested address
Dec 13 01:55:10.047297 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: unable to create socket on eth0 (5) for fe80::479:50ff:fe5a:4599%2#123
Dec 13 01:55:10.047297 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: failed to init interface for address fe80::479:50ff:fe5a:4599%2
Dec 13 01:55:10.047297 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: Listening on routing socket on fd #21 for interface updates
Dec 13 01:55:10.029882 ntpd[2000]: ----------------------------------------------------
Dec 13 01:55:10.029901 ntpd[2000]: ntp-4 is maintained by Network Time Foundation,
Dec 13 01:55:10.029920 ntpd[2000]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 13 01:55:10.029938 ntpd[2000]: corporation. Support and training for ntp-4 are
Dec 13 01:55:10.029957 ntpd[2000]: available at https://www.nwtime.org/support
Dec 13 01:55:10.056390 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:55:10.056390 ntpd[2000]: 13 Dec 01:55:10 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:55:10.029975 ntpd[2000]: ----------------------------------------------------
Dec 13 01:55:10.034670 ntpd[2000]: proto: precision = 0.096 usec (-23)
Dec 13 01:55:10.035108 ntpd[2000]: basedate set to 2024-11-30
Dec 13 01:55:10.035135 ntpd[2000]: gps base set to 2024-12-01 (week 2343)
Dec 13 01:55:10.038547 ntpd[2000]: Listen and drop on 0 v6wildcard [::]:123
Dec 13 01:55:10.038630 ntpd[2000]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 13 01:55:10.038897 ntpd[2000]: Listen normally on 2 lo 127.0.0.1:123
Dec 13 01:55:10.038969 ntpd[2000]: Listen normally on 3 eth0 172.31.24.36:123
Dec 13 01:55:10.039036 ntpd[2000]: Listen normally on 4 lo [::1]:123
Dec 13 01:55:10.041597 ntpd[2000]: bind(21) AF_INET6 fe80::479:50ff:fe5a:4599%2#123 flags 0x11 failed: Cannot assign requested address
Dec 13 01:55:10.041668 ntpd[2000]: unable to create socket on eth0 (5) for fe80::479:50ff:fe5a:4599%2#123
Dec 13 01:55:10.041703 ntpd[2000]: failed to init interface for address fe80::479:50ff:fe5a:4599%2
Dec 13 01:55:10.041777 ntpd[2000]: Listening on routing socket on fd #21 for interface updates
Dec 13 01:55:10.048399 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:55:10.070971 update_engine[2015]: I20241213 01:55:10.057760 2015 main.cc:92] Flatcar Update Engine starting
Dec 13 01:55:10.048452 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:55:10.074848 jq[2028]: true
Dec 13 01:55:10.077185 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:55:10.085224 update_engine[2015]: I20241213 01:55:10.083936 2015 update_check_scheduler.cc:74] Next update check in 5m57s
Dec 13 01:55:10.101704 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:55:10.114078 systemd[1]: Finished setup-oem.service - Setup OEM.
Dec 13 01:55:10.121640 tar[2027]: linux-arm64/helm
Dec 13 01:55:10.152046 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 01:55:10.154799 extend-filesystems[2032]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 01:55:10.154799 extend-filesystems[2032]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 01:55:10.154799 extend-filesystems[2032]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 01:55:10.162144 extend-filesystems[1998]: Resized filesystem in /dev/nvme0n1p9
Dec 13 01:55:10.186962 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:55:10.187970 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:55:10.286329 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1810)
Dec 13 01:55:10.378912 coreos-metadata[1995]: Dec 13 01:55:10.378 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 01:55:10.392599 coreos-metadata[1995]: Dec 13 01:55:10.387 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Dec 13 01:55:10.393534 coreos-metadata[1995]: Dec 13 01:55:10.393 INFO Fetch successful
Dec 13 01:55:10.393534 coreos-metadata[1995]: Dec 13 01:55:10.393 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Dec 13 01:55:10.397524 coreos-metadata[1995]: Dec 13 01:55:10.397 INFO Fetch successful
Dec 13 01:55:10.397524 coreos-metadata[1995]: Dec 13 01:55:10.397 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Dec 13 01:55:10.399485 coreos-metadata[1995]: Dec 13 01:55:10.399 INFO Fetch successful
Dec 13 01:55:10.399485 coreos-metadata[1995]: Dec 13 01:55:10.399 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Dec 13 01:55:10.400339 coreos-metadata[1995]: Dec 13 01:55:10.400 INFO Fetch successful
Dec 13 01:55:10.400339 coreos-metadata[1995]: Dec 13 01:55:10.400 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Dec 13 01:55:10.400819 coreos-metadata[1995]: Dec 13 01:55:10.400 INFO Fetch failed with 404: resource not found
Dec 13 01:55:10.400819 coreos-metadata[1995]: Dec 13 01:55:10.400 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Dec 13 01:55:10.402909 bash[2087]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:55:10.410704 coreos-metadata[1995]: Dec 13 01:55:10.410 INFO Fetch successful
Dec 13 01:55:10.410704 coreos-metadata[1995]: Dec 13 01:55:10.410 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Dec 13 01:55:10.415584 coreos-metadata[1995]: Dec 13 01:55:10.415 INFO Fetch successful
Dec 13 01:55:10.415584 coreos-metadata[1995]: Dec 13 01:55:10.415 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Dec 13 01:55:10.417594 coreos-metadata[1995]: Dec 13 01:55:10.416 INFO Fetch successful
Dec 13 01:55:10.417594 coreos-metadata[1995]: Dec 13 01:55:10.417 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Dec 13 01:55:10.420375 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:55:10.447147 coreos-metadata[1995]: Dec 13 01:55:10.446 INFO Fetch successful
Dec 13 01:55:10.447147 coreos-metadata[1995]: Dec 13 01:55:10.446 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Dec 13 01:55:10.448634 systemd[1]: Starting sshkeys.service...
Dec 13 01:55:10.454415 coreos-metadata[1995]: Dec 13 01:55:10.452 INFO Fetch successful
Dec 13 01:55:10.460127 systemd-logind[2013]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 01:55:10.460177 systemd-logind[2013]: Watching system buttons on /dev/input/event1 (Sleep Button)
Dec 13 01:55:10.461094 systemd-logind[2013]: New seat seat0.
Dec 13 01:55:10.469015 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:55:10.607110 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 01:55:10.614155 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 01:55:10.626803 dbus-daemon[1996]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 01:55:10.627074 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 13 01:55:10.629584 dbus-daemon[1996]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2042 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 01:55:10.641818 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 13 01:55:10.653402 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 01:55:10.657392 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:55:10.749894 containerd[2035]: time="2024-12-13T01:55:10.749756410Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:55:10.774578 polkitd[2144]: Started polkitd version 121
Dec 13 01:55:10.828988 polkitd[2144]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 01:55:10.829112 polkitd[2144]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 01:55:10.832661 polkitd[2144]: Finished loading, compiling and executing 2 rules
Dec 13 01:55:10.840908 dbus-daemon[1996]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 01:55:10.841560 systemd[1]: Started polkit.service - Authorization Manager.
Dec 13 01:55:10.845161 polkitd[2144]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 01:55:10.880843 locksmithd[2049]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:55:10.967518 systemd-hostnamed[2042]: Hostname set to (transient)
Dec 13 01:55:10.967692 systemd-resolved[1938]: System hostname changed to 'ip-172-31-24-36'.
Dec 13 01:55:11.013266 containerd[2035]: time="2024-12-13T01:55:11.011055404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:11.016155 coreos-metadata[2139]: Dec 13 01:55:11.016 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 01:55:11.017757 containerd[2035]: time="2024-12-13T01:55:11.017677928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:55:11.018035 containerd[2035]: time="2024-12-13T01:55:11.018001400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:55:11.019255 coreos-metadata[2139]: Dec 13 01:55:11.018 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Dec 13 01:55:11.019480 coreos-metadata[2139]: Dec 13 01:55:11.019 INFO Fetch successful
Dec 13 01:55:11.019544 coreos-metadata[2139]: Dec 13 01:55:11.019 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 01:55:11.021270 containerd[2035]: time="2024-12-13T01:55:11.019760348Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:55:11.021270 containerd[2035]: time="2024-12-13T01:55:11.020221532Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:55:11.021270 containerd[2035]: time="2024-12-13T01:55:11.020327636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:11.021270 containerd[2035]: time="2024-12-13T01:55:11.020481812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:55:11.021270 containerd[2035]: time="2024-12-13T01:55:11.020512904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:11.021270 containerd[2035]: time="2024-12-13T01:55:11.020821352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:55:11.021270 containerd[2035]: time="2024-12-13T01:55:11.020856164Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:11.021270 containerd[2035]: time="2024-12-13T01:55:11.020887352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:55:11.021270 containerd[2035]: time="2024-12-13T01:55:11.020912648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:11.021270 containerd[2035]: time="2024-12-13T01:55:11.021087260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:11.024256 coreos-metadata[2139]: Dec 13 01:55:11.022 INFO Fetch successful
Dec 13 01:55:11.025079 containerd[2035]: time="2024-12-13T01:55:11.024982364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:55:11.028225 containerd[2035]: time="2024-12-13T01:55:11.027538448Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:55:11.028225 containerd[2035]: time="2024-12-13T01:55:11.027586556Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:55:11.028225 containerd[2035]: time="2024-12-13T01:55:11.027808736Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:55:11.028225 containerd[2035]: time="2024-12-13T01:55:11.027905480Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:55:11.028452 unknown[2139]: wrote ssh authorized keys file for user: core
Dec 13 01:55:11.030554 ntpd[2000]: bind(24) AF_INET6 fe80::479:50ff:fe5a:4599%2#123 flags 0x11 failed: Cannot assign requested address
Dec 13 01:55:11.030624 ntpd[2000]: unable to create socket on eth0 (6) for fe80::479:50ff:fe5a:4599%2#123
Dec 13 01:55:11.031052 ntpd[2000]: 13 Dec 01:55:11 ntpd[2000]: bind(24) AF_INET6 fe80::479:50ff:fe5a:4599%2#123 flags 0x11 failed: Cannot assign requested address
Dec 13 01:55:11.031052 ntpd[2000]: 13 Dec 01:55:11 ntpd[2000]: unable to create socket on eth0 (6) for fe80::479:50ff:fe5a:4599%2#123
Dec 13 01:55:11.031052 ntpd[2000]: 13 Dec 01:55:11 ntpd[2000]: failed to init interface for address fe80::479:50ff:fe5a:4599%2
Dec 13 01:55:11.030654 ntpd[2000]: failed to init interface for address fe80::479:50ff:fe5a:4599%2
Dec 13 01:55:11.039363 containerd[2035]: time="2024-12-13T01:55:11.036190064Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:55:11.039363 containerd[2035]: time="2024-12-13T01:55:11.036317396Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:55:11.039363 containerd[2035]: time="2024-12-13T01:55:11.036354248Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:55:11.039363 containerd[2035]: time="2024-12-13T01:55:11.036477536Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:55:11.039363 containerd[2035]: time="2024-12-13T01:55:11.036511544Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:55:11.039363 containerd[2035]: time="2024-12-13T01:55:11.036792920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:55:11.039363 containerd[2035]: time="2024-12-13T01:55:11.037276904Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:55:11.039363 containerd[2035]: time="2024-12-13T01:55:11.037516856Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:55:11.039363 containerd[2035]: time="2024-12-13T01:55:11.037551392Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:55:11.039363 containerd[2035]: time="2024-12-13T01:55:11.037584272Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:55:11.039363 containerd[2035]: time="2024-12-13T01:55:11.037615760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:55:11.039363 containerd[2035]: time="2024-12-13T01:55:11.037646180Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:55:11.039363 containerd[2035]: time="2024-12-13T01:55:11.037678136Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:55:11.039363 containerd[2035]: time="2024-12-13T01:55:11.037710212Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:55:11.039982 containerd[2035]: time="2024-12-13T01:55:11.037743476Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:55:11.039982 containerd[2035]: time="2024-12-13T01:55:11.037774940Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:55:11.039982 containerd[2035]: time="2024-12-13T01:55:11.037804508Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:55:11.039982 containerd[2035]: time="2024-12-13T01:55:11.037835972Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:55:11.039982 containerd[2035]: time="2024-12-13T01:55:11.037877948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.039982 containerd[2035]: time="2024-12-13T01:55:11.037909628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.039982 containerd[2035]: time="2024-12-13T01:55:11.037938992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.039982 containerd[2035]: time="2024-12-13T01:55:11.037989344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.039982 containerd[2035]: time="2024-12-13T01:55:11.038030396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.039982 containerd[2035]: time="2024-12-13T01:55:11.038102168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.039982 containerd[2035]: time="2024-12-13T01:55:11.038134808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.039982 containerd[2035]: time="2024-12-13T01:55:11.038166200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.039982 containerd[2035]: time="2024-12-13T01:55:11.038196380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.044269 containerd[2035]: time="2024-12-13T01:55:11.042310616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.044269 containerd[2035]: time="2024-12-13T01:55:11.042379628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.044269 containerd[2035]: time="2024-12-13T01:55:11.042416204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.044269 containerd[2035]: time="2024-12-13T01:55:11.042452372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.044269 containerd[2035]: time="2024-12-13T01:55:11.042493256Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:55:11.044269 containerd[2035]: time="2024-12-13T01:55:11.042543500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.044269 containerd[2035]: time="2024-12-13T01:55:11.042585932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.044269 containerd[2035]: time="2024-12-13T01:55:11.042614696Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:55:11.044269 containerd[2035]: time="2024-12-13T01:55:11.042739136Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:55:11.044269 containerd[2035]: time="2024-12-13T01:55:11.042784196Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:55:11.044269 containerd[2035]: time="2024-12-13T01:55:11.042810668Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:55:11.044269 containerd[2035]: time="2024-12-13T01:55:11.042839936Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:55:11.044269 containerd[2035]: time="2024-12-13T01:55:11.042864764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.044889 containerd[2035]: time="2024-12-13T01:55:11.042893432Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:55:11.044889 containerd[2035]: time="2024-12-13T01:55:11.042917264Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:55:11.044889 containerd[2035]: time="2024-12-13T01:55:11.042948776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:55:11.045029 containerd[2035]: time="2024-12-13T01:55:11.043508120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:55:11.045029 containerd[2035]: time="2024-12-13T01:55:11.043622324Z" level=info msg="Connect containerd service" Dec 13 01:55:11.045029 containerd[2035]: time="2024-12-13T01:55:11.043678316Z" level=info msg="using legacy CRI server" Dec 13 01:55:11.045029 containerd[2035]: time="2024-12-13T01:55:11.043698308Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:55:11.045029 containerd[2035]: time="2024-12-13T01:55:11.043865468Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:55:11.047513 systemd-networkd[1934]: eth0: Gained IPv6LL Dec 13 01:55:11.054296 containerd[2035]: time="2024-12-13T01:55:11.053042648Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:55:11.054296 containerd[2035]: time="2024-12-13T01:55:11.053335772Z" level=info msg="Start subscribing containerd event" Dec 13 01:55:11.054296 containerd[2035]: time="2024-12-13T01:55:11.053429168Z" level=info msg="Start recovering state" Dec 13 01:55:11.054296 containerd[2035]: time="2024-12-13T01:55:11.053554088Z" level=info msg="Start event monitor" Dec 13 01:55:11.054296 
containerd[2035]: time="2024-12-13T01:55:11.053580944Z" level=info msg="Start snapshots syncer" Dec 13 01:55:11.054296 containerd[2035]: time="2024-12-13T01:55:11.053603660Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:55:11.054296 containerd[2035]: time="2024-12-13T01:55:11.053622536Z" level=info msg="Start streaming server" Dec 13 01:55:11.054296 containerd[2035]: time="2024-12-13T01:55:11.053829536Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:55:11.054296 containerd[2035]: time="2024-12-13T01:55:11.053943200Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:55:11.054296 containerd[2035]: time="2024-12-13T01:55:11.054062840Z" level=info msg="containerd successfully booted in 0.315121s" Dec 13 01:55:11.054195 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:55:11.059214 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:55:11.067310 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:55:11.083641 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 01:55:11.097434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:11.109030 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:55:11.126439 update-ssh-keys[2196]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:55:11.133750 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:55:11.142320 systemd[1]: Finished sshkeys.service. Dec 13 01:55:11.220499 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Dec 13 01:55:11.252855 amazon-ssm-agent[2198]: Initializing new seelog logger Dec 13 01:55:11.253355 amazon-ssm-agent[2198]: New Seelog Logger Creation Complete Dec 13 01:55:11.253412 amazon-ssm-agent[2198]: 2024/12/13 01:55:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:11.253412 amazon-ssm-agent[2198]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:11.255266 amazon-ssm-agent[2198]: 2024/12/13 01:55:11 processing appconfig overrides Dec 13 01:55:11.255266 amazon-ssm-agent[2198]: 2024/12/13 01:55:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:11.255266 amazon-ssm-agent[2198]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:11.255266 amazon-ssm-agent[2198]: 2024/12/13 01:55:11 processing appconfig overrides Dec 13 01:55:11.255266 amazon-ssm-agent[2198]: 2024/12/13 01:55:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:11.255266 amazon-ssm-agent[2198]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:11.255266 amazon-ssm-agent[2198]: 2024/12/13 01:55:11 processing appconfig overrides Dec 13 01:55:11.256093 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO Proxy environment variables: Dec 13 01:55:11.262305 amazon-ssm-agent[2198]: 2024/12/13 01:55:11 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:55:11.262305 amazon-ssm-agent[2198]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Dec 13 01:55:11.262305 amazon-ssm-agent[2198]: 2024/12/13 01:55:11 processing appconfig overrides Dec 13 01:55:11.355875 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO https_proxy: Dec 13 01:55:11.459364 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO http_proxy: Dec 13 01:55:11.559399 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO no_proxy: Dec 13 01:55:11.658263 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO Checking if agent identity type OnPrem can be assumed Dec 13 01:55:11.760260 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO Checking if agent identity type EC2 can be assumed Dec 13 01:55:11.866413 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO Agent will take identity from EC2 Dec 13 01:55:11.898324 tar[2027]: linux-arm64/LICENSE Dec 13 01:55:11.899018 tar[2027]: linux-arm64/README.md Dec 13 01:55:11.943871 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:55:11.966590 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:55:12.067332 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:55:12.166831 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:55:12.251646 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 01:55:12.251646 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 13 01:55:12.251646 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 01:55:12.251646 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Dec 13 01:55:12.251646 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO [Registrar] Starting registrar module Dec 13 01:55:12.251646 amazon-ssm-agent[2198]: 2024-12-13 01:55:11 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 01:55:12.251646 amazon-ssm-agent[2198]: 2024-12-13 01:55:12 INFO [EC2Identity] EC2 registration was successful. Dec 13 01:55:12.251646 amazon-ssm-agent[2198]: 2024-12-13 01:55:12 INFO [CredentialRefresher] credentialRefresher has started Dec 13 01:55:12.251646 amazon-ssm-agent[2198]: 2024-12-13 01:55:12 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 01:55:12.251646 amazon-ssm-agent[2198]: 2024-12-13 01:55:12 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 01:55:12.268284 amazon-ssm-agent[2198]: 2024-12-13 01:55:12 INFO [CredentialRefresher] Next credential rotation will be in 30.5249896819 minutes Dec 13 01:55:12.343228 sshd_keygen[2036]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:55:12.384840 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:55:12.397631 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:55:12.409917 systemd[1]: Started sshd@0-172.31.24.36:22-139.178.68.195:57104.service - OpenSSH per-connection server daemon (139.178.68.195:57104). Dec 13 01:55:12.425872 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:55:12.427347 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:55:12.440665 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:55:12.475178 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:55:12.491000 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:55:12.496457 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Dec 13 01:55:12.499045 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:55:12.663197 sshd[2231]: Accepted publickey for core from 139.178.68.195 port 57104 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:12.665830 sshd[2231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:12.686350 systemd-logind[2013]: New session 1 of user core. Dec 13 01:55:12.687868 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:55:12.696790 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:55:12.736210 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:55:12.754810 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:55:12.772441 (systemd)[2242]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:55:12.995303 systemd[2242]: Queued start job for default target default.target. Dec 13 01:55:13.004746 systemd[2242]: Created slice app.slice - User Application Slice. Dec 13 01:55:13.004957 systemd[2242]: Reached target paths.target - Paths. Dec 13 01:55:13.005115 systemd[2242]: Reached target timers.target - Timers. Dec 13 01:55:13.008217 systemd[2242]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:55:13.035223 systemd[2242]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:55:13.035512 systemd[2242]: Reached target sockets.target - Sockets. Dec 13 01:55:13.035545 systemd[2242]: Reached target basic.target - Basic System. Dec 13 01:55:13.035629 systemd[2242]: Reached target default.target - Main User Target. Dec 13 01:55:13.035692 systemd[2242]: Startup finished in 250ms. Dec 13 01:55:13.036023 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:55:13.050533 systemd[1]: Started session-1.scope - Session 1 of User core. 
Dec 13 01:55:13.216465 systemd[1]: Started sshd@1-172.31.24.36:22-139.178.68.195:57110.service - OpenSSH per-connection server daemon (139.178.68.195:57110). Dec 13 01:55:13.279673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:13.285861 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:55:13.288173 systemd[1]: Startup finished in 1.156s (kernel) + 7.945s (initrd) + 8.593s (userspace) = 17.695s. Dec 13 01:55:13.293565 (kubelet)[2260]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:55:13.302301 amazon-ssm-agent[2198]: 2024-12-13 01:55:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 01:55:13.405379 amazon-ssm-agent[2198]: 2024-12-13 01:55:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2262) started Dec 13 01:55:13.416805 sshd[2253]: Accepted publickey for core from 139.178.68.195 port 57110 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:13.420103 sshd[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:13.431274 systemd-logind[2013]: New session 2 of user core. Dec 13 01:55:13.438163 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:55:13.506624 amazon-ssm-agent[2198]: 2024-12-13 01:55:13 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 01:55:13.577008 sshd[2253]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:13.584899 systemd-logind[2013]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:55:13.585344 systemd[1]: sshd@1-172.31.24.36:22-139.178.68.195:57110.service: Deactivated successfully. Dec 13 01:55:13.590218 systemd[1]: session-2.scope: Deactivated successfully. 
Dec 13 01:55:13.596200 systemd-logind[2013]: Removed session 2. Dec 13 01:55:13.617944 systemd[1]: Started sshd@2-172.31.24.36:22-139.178.68.195:57116.service - OpenSSH per-connection server daemon (139.178.68.195:57116). Dec 13 01:55:13.788564 sshd[2280]: Accepted publickey for core from 139.178.68.195 port 57116 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:13.791135 sshd[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:13.802573 systemd-logind[2013]: New session 3 of user core. Dec 13 01:55:13.813542 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:55:13.934116 sshd[2280]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:13.941229 systemd[1]: sshd@2-172.31.24.36:22-139.178.68.195:57116.service: Deactivated successfully. Dec 13 01:55:13.946461 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:55:13.949422 systemd-logind[2013]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:55:13.951709 systemd-logind[2013]: Removed session 3. Dec 13 01:55:13.970715 systemd[1]: Started sshd@3-172.31.24.36:22-139.178.68.195:57120.service - OpenSSH per-connection server daemon (139.178.68.195:57120). Dec 13 01:55:14.030645 ntpd[2000]: Listen normally on 7 eth0 [fe80::479:50ff:fe5a:4599%2]:123 Dec 13 01:55:14.031087 ntpd[2000]: 13 Dec 01:55:14 ntpd[2000]: Listen normally on 7 eth0 [fe80::479:50ff:fe5a:4599%2]:123 Dec 13 01:55:14.138451 sshd[2291]: Accepted publickey for core from 139.178.68.195 port 57120 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:14.141694 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:14.149700 systemd-logind[2013]: New session 4 of user core. Dec 13 01:55:14.156532 systemd[1]: Started session-4.scope - Session 4 of User core. 
Dec 13 01:55:14.283922 sshd[2291]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:14.289928 systemd[1]: sshd@3-172.31.24.36:22-139.178.68.195:57120.service: Deactivated successfully. Dec 13 01:55:14.294140 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:55:14.297675 systemd-logind[2013]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:55:14.300681 systemd-logind[2013]: Removed session 4. Dec 13 01:55:14.328846 systemd[1]: Started sshd@4-172.31.24.36:22-139.178.68.195:57130.service - OpenSSH per-connection server daemon (139.178.68.195:57130). Dec 13 01:55:14.520938 sshd[2298]: Accepted publickey for core from 139.178.68.195 port 57130 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:14.524169 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:14.534657 systemd-logind[2013]: New session 5 of user core. Dec 13 01:55:14.541562 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:55:14.684212 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:55:14.684878 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:55:14.693900 kubelet[2260]: E1213 01:55:14.693198 2260 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:55:14.701851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:55:14.702292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:55:14.704366 systemd[1]: kubelet.service: Consumed 1.341s CPU time. Dec 13 01:55:15.275724 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 13 01:55:15.289806 (dockerd)[2319]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:55:15.767310 dockerd[2319]: time="2024-12-13T01:55:15.766803795Z" level=info msg="Starting up" Dec 13 01:55:15.923539 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport555852575-merged.mount: Deactivated successfully. Dec 13 01:55:16.083709 dockerd[2319]: time="2024-12-13T01:55:16.083353105Z" level=info msg="Loading containers: start." Dec 13 01:55:16.308281 kernel: Initializing XFRM netlink socket Dec 13 01:55:16.387529 (udev-worker)[2343]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:16.472347 systemd-networkd[1934]: docker0: Link UP Dec 13 01:55:16.502737 dockerd[2319]: time="2024-12-13T01:55:16.502583643Z" level=info msg="Loading containers: done." Dec 13 01:55:16.526141 dockerd[2319]: time="2024-12-13T01:55:16.526060095Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:55:16.526481 dockerd[2319]: time="2024-12-13T01:55:16.526220667Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:55:16.526481 dockerd[2319]: time="2024-12-13T01:55:16.526447911Z" level=info msg="Daemon has completed initialization" Dec 13 01:55:16.579563 dockerd[2319]: time="2024-12-13T01:55:16.579418827Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:55:16.579676 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:55:16.920378 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3850493887-merged.mount: Deactivated successfully. Dec 13 01:55:17.527017 systemd-resolved[1938]: Clock change detected. Flushing caches. 
Dec 13 01:55:18.623743 containerd[2035]: time="2024-12-13T01:55:18.623269589Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:55:19.348758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount595406227.mount: Deactivated successfully. Dec 13 01:55:21.040468 containerd[2035]: time="2024-12-13T01:55:21.040384277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:21.043335 containerd[2035]: time="2024-12-13T01:55:21.043277825Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Dec 13 01:55:21.045590 containerd[2035]: time="2024-12-13T01:55:21.045189545Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:21.056783 containerd[2035]: time="2024-12-13T01:55:21.056697977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:21.059333 containerd[2035]: time="2024-12-13T01:55:21.059262185Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.435927244s" Dec 13 01:55:21.059447 containerd[2035]: time="2024-12-13T01:55:21.059330417Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 01:55:21.097797 containerd[2035]: 
time="2024-12-13T01:55:21.097739369Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:55:22.925647 containerd[2035]: time="2024-12-13T01:55:22.925538842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:22.927473 containerd[2035]: time="2024-12-13T01:55:22.927394222Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Dec 13 01:55:22.929513 containerd[2035]: time="2024-12-13T01:55:22.929452114Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:22.935987 containerd[2035]: time="2024-12-13T01:55:22.935901586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:22.938877 containerd[2035]: time="2024-12-13T01:55:22.938594062Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.840749537s" Dec 13 01:55:22.938877 containerd[2035]: time="2024-12-13T01:55:22.938687482Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 01:55:22.980334 containerd[2035]: time="2024-12-13T01:55:22.979910579Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 
01:55:24.281704 containerd[2035]: time="2024-12-13T01:55:24.281437833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:24.283163 containerd[2035]: time="2024-12-13T01:55:24.283039437Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Dec 13 01:55:24.284251 containerd[2035]: time="2024-12-13T01:55:24.284163213Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:24.289811 containerd[2035]: time="2024-12-13T01:55:24.289725297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:24.292375 containerd[2035]: time="2024-12-13T01:55:24.292152669Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.312180302s" Dec 13 01:55:24.292375 containerd[2035]: time="2024-12-13T01:55:24.292212429Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 01:55:24.330053 containerd[2035]: time="2024-12-13T01:55:24.329755233Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:55:25.448968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:55:25.459193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 01:55:25.608589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1895047974.mount: Deactivated successfully. Dec 13 01:55:25.913765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:25.928523 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:55:26.051314 kubelet[2558]: E1213 01:55:26.051222 2558 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:55:26.060737 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:55:26.061057 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:55:26.251477 containerd[2035]: time="2024-12-13T01:55:26.251286839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:26.254582 containerd[2035]: time="2024-12-13T01:55:26.254487335Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Dec 13 01:55:26.255995 containerd[2035]: time="2024-12-13T01:55:26.255877547Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:26.259591 containerd[2035]: time="2024-12-13T01:55:26.259461515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:26.261375 containerd[2035]: time="2024-12-13T01:55:26.261173687Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.931310154s" Dec 13 01:55:26.261375 containerd[2035]: time="2024-12-13T01:55:26.261231791Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 01:55:26.299227 containerd[2035]: time="2024-12-13T01:55:26.299117771Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:55:26.844652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3609228585.mount: Deactivated successfully. Dec 13 01:55:27.989709 containerd[2035]: time="2024-12-13T01:55:27.989625579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:27.992393 containerd[2035]: time="2024-12-13T01:55:27.992303919Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 01:55:27.993906 containerd[2035]: time="2024-12-13T01:55:27.993825435Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:28.002369 containerd[2035]: time="2024-12-13T01:55:28.002281776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:28.004611 containerd[2035]: time="2024-12-13T01:55:28.004529424Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with 
image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.705346097s"
Dec 13 01:55:28.004749 containerd[2035]: time="2024-12-13T01:55:28.004610376Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Dec 13 01:55:28.042142 containerd[2035]: time="2024-12-13T01:55:28.042089916Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 01:55:28.544801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount931056048.mount: Deactivated successfully.
Dec 13 01:55:28.551286 containerd[2035]: time="2024-12-13T01:55:28.551174342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:28.553090 containerd[2035]: time="2024-12-13T01:55:28.553018706Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Dec 13 01:55:28.554661 containerd[2035]: time="2024-12-13T01:55:28.554587838Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:28.558753 containerd[2035]: time="2024-12-13T01:55:28.558690878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:28.561419 containerd[2035]: time="2024-12-13T01:55:28.561241658Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 519.09377ms"
Dec 13 01:55:28.561419 containerd[2035]: time="2024-12-13T01:55:28.561295214Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Dec 13 01:55:28.600899 containerd[2035]: time="2024-12-13T01:55:28.600761007Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 01:55:29.183622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3953569181.mount: Deactivated successfully.
Dec 13 01:55:31.612442 containerd[2035]: time="2024-12-13T01:55:31.612381605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:31.614626 containerd[2035]: time="2024-12-13T01:55:31.613759301Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786"
Dec 13 01:55:31.615537 containerd[2035]: time="2024-12-13T01:55:31.615436781Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:31.621832 containerd[2035]: time="2024-12-13T01:55:31.621764454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:31.625133 containerd[2035]: time="2024-12-13T01:55:31.624931134Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.024097695s"
Dec 13 01:55:31.625133 containerd[2035]: time="2024-12-13T01:55:31.624993942Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Dec 13 01:55:36.221514 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:55:36.232099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:55:36.526930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:55:36.541061 (kubelet)[2742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:55:36.628085 kubelet[2742]: E1213 01:55:36.628016 2742 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:55:36.634937 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:55:36.635240 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:55:39.744764 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:55:39.762029 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:55:39.797388 systemd[1]: Reloading requested from client PID 2759 ('systemctl') (unit session-5.scope)...
Dec 13 01:55:39.797656 systemd[1]: Reloading...
Dec 13 01:55:40.022707 zram_generator::config[2803]: No configuration found.
Dec 13 01:55:40.269776 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:55:40.442470 systemd[1]: Reloading finished in 644 ms.
Dec 13 01:55:40.547478 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:55:40.555898 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:55:40.556274 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:55:40.564354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:55:40.834527 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:55:40.849104 (kubelet)[2865]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:55:40.928728 kubelet[2865]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:55:40.928728 kubelet[2865]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:55:40.928728 kubelet[2865]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:55:40.930687 kubelet[2865]: I1213 01:55:40.930546 2865 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:55:41.499932 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 01:55:41.956015 kubelet[2865]: I1213 01:55:41.953984 2865 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:55:41.956015 kubelet[2865]: I1213 01:55:41.954028 2865 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:55:41.956015 kubelet[2865]: I1213 01:55:41.954381 2865 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:55:41.984587 kubelet[2865]: I1213 01:55:41.984056 2865 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:55:41.985022 kubelet[2865]: E1213 01:55:41.984997 2865 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.36:6443: connect: connection refused
Dec 13 01:55:42.000943 kubelet[2865]: I1213 01:55:42.000899 2865 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:55:42.001416 kubelet[2865]: I1213 01:55:42.001386 2865 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:55:42.001751 kubelet[2865]: I1213 01:55:42.001716 2865 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:55:42.001911 kubelet[2865]: I1213 01:55:42.001763 2865 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:55:42.001911 kubelet[2865]: I1213 01:55:42.001784 2865 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:55:42.003480 kubelet[2865]: I1213 01:55:42.003427 2865 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:55:42.008775 kubelet[2865]: I1213 01:55:42.008720 2865 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:55:42.008775 kubelet[2865]: I1213 01:55:42.008772 2865 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:55:42.009634 kubelet[2865]: I1213 01:55:42.008817 2865 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:55:42.009634 kubelet[2865]: I1213 01:55:42.008849 2865 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:55:42.013605 kubelet[2865]: W1213 01:55:42.012291 2865 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.24.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused
Dec 13 01:55:42.013605 kubelet[2865]: E1213 01:55:42.012374 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused
Dec 13 01:55:42.013605 kubelet[2865]: W1213 01:55:42.012846 2865 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.24.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-36&limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused
Dec 13 01:55:42.013605 kubelet[2865]: E1213 01:55:42.012895 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-36&limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused
Dec 13 01:55:42.013605 kubelet[2865]: I1213 01:55:42.013023 2865 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:55:42.013605 kubelet[2865]: I1213 01:55:42.013535 2865 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:55:42.015211 kubelet[2865]: W1213 01:55:42.015173 2865 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 01:55:42.016914 kubelet[2865]: I1213 01:55:42.016872 2865 server.go:1256] "Started kubelet"
Dec 13 01:55:42.020518 kubelet[2865]: I1213 01:55:42.020472 2865 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:55:42.030083 kubelet[2865]: I1213 01:55:42.030013 2865 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:55:42.031635 kubelet[2865]: I1213 01:55:42.031498 2865 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:55:42.032626 kubelet[2865]: I1213 01:55:42.032592 2865 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:55:42.033287 kubelet[2865]: I1213 01:55:42.033235 2865 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:55:42.033542 kubelet[2865]: I1213 01:55:42.033504 2865 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:55:42.039819 kubelet[2865]: I1213 01:55:42.039778 2865 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:55:42.041756 kubelet[2865]: E1213 01:55:42.040611 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-36?timeout=10s\": dial tcp 172.31.24.36:6443: connect: connection refused" interval="200ms"
Dec 13 01:55:42.043461 kubelet[2865]: I1213 01:55:42.043400 2865 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:55:42.045711 kubelet[2865]: E1213 01:55:42.045654 2865 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.36:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.36:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-36.181099c847ec9141 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-36,UID:ip-172-31-24-36,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-36,},FirstTimestamp:2024-12-13 01:55:42.016835905 +0000 UTC m=+1.160374639,LastTimestamp:2024-12-13 01:55:42.016835905 +0000 UTC m=+1.160374639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-36,}"
Dec 13 01:55:42.045915 kubelet[2865]: I1213 01:55:42.045870 2865 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:55:42.047493 kubelet[2865]: I1213 01:55:42.045998 2865 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:55:42.047714 kubelet[2865]: W1213 01:55:42.047598 2865 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.24.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused
Dec 13 01:55:42.047714 kubelet[2865]: E1213 01:55:42.047675 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused
Dec 13 01:55:42.048777 kubelet[2865]: E1213 01:55:42.048726 2865 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:55:42.049650 kubelet[2865]: I1213 01:55:42.049539 2865 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:55:42.062975 kubelet[2865]: I1213 01:55:42.062774 2865 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:55:42.065149 kubelet[2865]: I1213 01:55:42.065112 2865 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:55:42.065787 kubelet[2865]: I1213 01:55:42.065317 2865 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:55:42.065787 kubelet[2865]: I1213 01:55:42.065354 2865 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:55:42.065787 kubelet[2865]: E1213 01:55:42.065433 2865 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:55:42.078457 kubelet[2865]: W1213 01:55:42.078389 2865 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.24.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused
Dec 13 01:55:42.078726 kubelet[2865]: E1213 01:55:42.078705 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused
Dec 13 01:55:42.095968 kubelet[2865]: I1213 01:55:42.095914 2865 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:55:42.095968 kubelet[2865]: I1213 01:55:42.095960 2865 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:55:42.096177 kubelet[2865]: I1213 01:55:42.095998 2865 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:55:42.098203 kubelet[2865]: I1213 01:55:42.098127 2865 policy_none.go:49] "None policy: Start"
Dec 13 01:55:42.099368 kubelet[2865]: I1213 01:55:42.099314 2865 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:55:42.099503 kubelet[2865]: I1213 01:55:42.099396 2865 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:55:42.112740 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 01:55:42.126670 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 01:55:42.134455 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 01:55:42.137060 kubelet[2865]: I1213 01:55:42.136192 2865 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-36"
Dec 13 01:55:42.137060 kubelet[2865]: E1213 01:55:42.137001 2865 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.36:6443/api/v1/nodes\": dial tcp 172.31.24.36:6443: connect: connection refused" node="ip-172-31-24-36"
Dec 13 01:55:42.145311 kubelet[2865]: I1213 01:55:42.144802 2865 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:55:42.148110 kubelet[2865]: I1213 01:55:42.148063 2865 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:55:42.159411 kubelet[2865]: E1213 01:55:42.159377 2865 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-36\" not found"
Dec 13 01:55:42.166611 kubelet[2865]: I1213 01:55:42.166380 2865 topology_manager.go:215] "Topology Admit Handler" podUID="334326bcef32058f9acc561b7d3a70e5" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-36"
Dec 13 01:55:42.169891 kubelet[2865]: I1213 01:55:42.169854 2865 topology_manager.go:215] "Topology Admit Handler" podUID="5a6f735b49739075fe40f1d85c6c7052" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-36"
Dec 13 01:55:42.173680 kubelet[2865]: I1213 01:55:42.173285 2865 topology_manager.go:215] "Topology Admit Handler" podUID="6de4ad502abff6c04510b71ecca9fe3b" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-36"
Dec 13 01:55:42.188849 systemd[1]: Created slice kubepods-burstable-pod334326bcef32058f9acc561b7d3a70e5.slice - libcontainer container kubepods-burstable-pod334326bcef32058f9acc561b7d3a70e5.slice.
Dec 13 01:55:42.207147 systemd[1]: Created slice kubepods-burstable-pod5a6f735b49739075fe40f1d85c6c7052.slice - libcontainer container kubepods-burstable-pod5a6f735b49739075fe40f1d85c6c7052.slice.
Dec 13 01:55:42.223064 systemd[1]: Created slice kubepods-burstable-pod6de4ad502abff6c04510b71ecca9fe3b.slice - libcontainer container kubepods-burstable-pod6de4ad502abff6c04510b71ecca9fe3b.slice.
Dec 13 01:55:42.242252 kubelet[2865]: E1213 01:55:42.242199 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-36?timeout=10s\": dial tcp 172.31.24.36:6443: connect: connection refused" interval="400ms"
Dec 13 01:55:42.244699 kubelet[2865]: I1213 01:55:42.244639 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/334326bcef32058f9acc561b7d3a70e5-ca-certs\") pod \"kube-apiserver-ip-172-31-24-36\" (UID: \"334326bcef32058f9acc561b7d3a70e5\") " pod="kube-system/kube-apiserver-ip-172-31-24-36"
Dec 13 01:55:42.244828 kubelet[2865]: I1213 01:55:42.244726 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/334326bcef32058f9acc561b7d3a70e5-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-36\" (UID: \"334326bcef32058f9acc561b7d3a70e5\") " pod="kube-system/kube-apiserver-ip-172-31-24-36"
Dec 13 01:55:42.244828 kubelet[2865]: I1213 01:55:42.244779 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/334326bcef32058f9acc561b7d3a70e5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-36\" (UID: \"334326bcef32058f9acc561b7d3a70e5\") " pod="kube-system/kube-apiserver-ip-172-31-24-36"
Dec 13 01:55:42.244948 kubelet[2865]: I1213 01:55:42.244832 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a6f735b49739075fe40f1d85c6c7052-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-36\" (UID: \"5a6f735b49739075fe40f1d85c6c7052\") " pod="kube-system/kube-controller-manager-ip-172-31-24-36"
Dec 13 01:55:42.244948 kubelet[2865]: I1213 01:55:42.244876 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a6f735b49739075fe40f1d85c6c7052-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-36\" (UID: \"5a6f735b49739075fe40f1d85c6c7052\") " pod="kube-system/kube-controller-manager-ip-172-31-24-36"
Dec 13 01:55:42.244948 kubelet[2865]: I1213 01:55:42.244920 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5a6f735b49739075fe40f1d85c6c7052-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-36\" (UID: \"5a6f735b49739075fe40f1d85c6c7052\") " pod="kube-system/kube-controller-manager-ip-172-31-24-36"
Dec 13 01:55:42.245098 kubelet[2865]: I1213 01:55:42.245011 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a6f735b49739075fe40f1d85c6c7052-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-36\" (UID: \"5a6f735b49739075fe40f1d85c6c7052\") " pod="kube-system/kube-controller-manager-ip-172-31-24-36"
Dec 13 01:55:42.245098 kubelet[2865]: I1213 01:55:42.245059 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a6f735b49739075fe40f1d85c6c7052-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-36\" (UID: \"5a6f735b49739075fe40f1d85c6c7052\") " pod="kube-system/kube-controller-manager-ip-172-31-24-36"
Dec 13 01:55:42.245201 kubelet[2865]: I1213 01:55:42.245119 2865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6de4ad502abff6c04510b71ecca9fe3b-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-36\" (UID: \"6de4ad502abff6c04510b71ecca9fe3b\") " pod="kube-system/kube-scheduler-ip-172-31-24-36"
Dec 13 01:55:42.340534 kubelet[2865]: I1213 01:55:42.340055 2865 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-36"
Dec 13 01:55:42.340534 kubelet[2865]: E1213 01:55:42.340479 2865 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.36:6443/api/v1/nodes\": dial tcp 172.31.24.36:6443: connect: connection refused" node="ip-172-31-24-36"
Dec 13 01:55:42.503212 containerd[2035]: time="2024-12-13T01:55:42.502839928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-36,Uid:334326bcef32058f9acc561b7d3a70e5,Namespace:kube-system,Attempt:0,}"
Dec 13 01:55:42.520843 containerd[2035]: time="2024-12-13T01:55:42.520781008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-36,Uid:5a6f735b49739075fe40f1d85c6c7052,Namespace:kube-system,Attempt:0,}"
Dec 13 01:55:42.532412 containerd[2035]: time="2024-12-13T01:55:42.532036240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-36,Uid:6de4ad502abff6c04510b71ecca9fe3b,Namespace:kube-system,Attempt:0,}"
Dec 13 01:55:42.642852 kubelet[2865]: E1213 01:55:42.642814 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-36?timeout=10s\": dial tcp 172.31.24.36:6443: connect: connection refused" interval="800ms"
Dec 13 01:55:42.743356 kubelet[2865]: I1213 01:55:42.743310 2865 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-36"
Dec 13 01:55:42.743897 kubelet[2865]: E1213 01:55:42.743814 2865 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.36:6443/api/v1/nodes\": dial tcp 172.31.24.36:6443: connect: connection refused" node="ip-172-31-24-36"
Dec 13 01:55:43.007269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2889733608.mount: Deactivated successfully.
Dec 13 01:55:43.014742 containerd[2035]: time="2024-12-13T01:55:43.014663630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:55:43.016437 containerd[2035]: time="2024-12-13T01:55:43.016363682Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:55:43.018296 containerd[2035]: time="2024-12-13T01:55:43.018244370Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:55:43.018421 containerd[2035]: time="2024-12-13T01:55:43.018309566Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Dec 13 01:55:43.020762 containerd[2035]: time="2024-12-13T01:55:43.020550698Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 01:55:43.020762 containerd[2035]: time="2024-12-13T01:55:43.020684834Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:55:43.026515 containerd[2035]: time="2024-12-13T01:55:43.026444858Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:55:43.030322 containerd[2035]: time="2024-12-13T01:55:43.029969450Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 527.025986ms"
Dec 13 01:55:43.033850 containerd[2035]: time="2024-12-13T01:55:43.033542126Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 501.395642ms"
Dec 13 01:55:43.035333 containerd[2035]: time="2024-12-13T01:55:43.034697042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 01:55:43.042922 containerd[2035]: time="2024-12-13T01:55:43.042836942Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 521.942582ms"
Dec 13 01:55:43.173133 kubelet[2865]: W1213 01:55:43.173007 2865 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.24.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused
Dec 13 01:55:43.173133 kubelet[2865]: E1213 01:55:43.173097 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused
Dec 13 01:55:43.223150 containerd[2035]: time="2024-12-13T01:55:43.222789015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:55:43.224920 containerd[2035]: time="2024-12-13T01:55:43.223648311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:55:43.224920 containerd[2035]: time="2024-12-13T01:55:43.224631939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:43.224920 containerd[2035]: time="2024-12-13T01:55:43.224805459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:43.225851 kubelet[2865]: W1213 01:55:43.225679 2865 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.24.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused
Dec 13 01:55:43.225851 kubelet[2865]: E1213 01:55:43.225785 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused
Dec 13 01:55:43.229583 containerd[2035]: time="2024-12-13T01:55:43.229415667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:55:43.229939 containerd[2035]: time="2024-12-13T01:55:43.229508739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:55:43.230858 containerd[2035]: time="2024-12-13T01:55:43.230380719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:43.235675 containerd[2035]: time="2024-12-13T01:55:43.234849699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:43.241052 containerd[2035]: time="2024-12-13T01:55:43.240870447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:55:43.241052 containerd[2035]: time="2024-12-13T01:55:43.240982251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:55:43.241415 containerd[2035]: time="2024-12-13T01:55:43.241012671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:43.245826 containerd[2035]: time="2024-12-13T01:55:43.242946639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:55:43.276071 systemd[1]: Started cri-containerd-3c2955be7dee99d7c51ce20333201faf9baff04289863809b01746dae426546c.scope - libcontainer container 3c2955be7dee99d7c51ce20333201faf9baff04289863809b01746dae426546c.
Dec 13 01:55:43.293250 systemd[1]: Started cri-containerd-621f487f67f5eb4d3fee91c2d1728b49767031d4b44103adcc6c948e82d5eff9.scope - libcontainer container 621f487f67f5eb4d3fee91c2d1728b49767031d4b44103adcc6c948e82d5eff9.
Dec 13 01:55:43.314931 systemd[1]: Started cri-containerd-1166a0ccf1da7628b8f78da091241bf9c113fee43a78b55cceffc8a1f8a1e14c.scope - libcontainer container 1166a0ccf1da7628b8f78da091241bf9c113fee43a78b55cceffc8a1f8a1e14c.
Dec 13 01:55:43.402343 kubelet[2865]: W1213 01:55:43.402169 2865 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.24.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-36&limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused Dec 13 01:55:43.402343 kubelet[2865]: E1213 01:55:43.402293 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-36&limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused Dec 13 01:55:43.417295 containerd[2035]: time="2024-12-13T01:55:43.416554528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-36,Uid:5a6f735b49739075fe40f1d85c6c7052,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c2955be7dee99d7c51ce20333201faf9baff04289863809b01746dae426546c\"" Dec 13 01:55:43.426389 containerd[2035]: time="2024-12-13T01:55:43.425939476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-36,Uid:334326bcef32058f9acc561b7d3a70e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"621f487f67f5eb4d3fee91c2d1728b49767031d4b44103adcc6c948e82d5eff9\"" Dec 13 01:55:43.435970 containerd[2035]: time="2024-12-13T01:55:43.435898108Z" level=info msg="CreateContainer within sandbox \"621f487f67f5eb4d3fee91c2d1728b49767031d4b44103adcc6c948e82d5eff9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:55:43.441898 containerd[2035]: time="2024-12-13T01:55:43.441762604Z" level=info msg="CreateContainer within sandbox \"3c2955be7dee99d7c51ce20333201faf9baff04289863809b01746dae426546c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:55:43.442146 containerd[2035]: time="2024-12-13T01:55:43.442034656Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-36,Uid:6de4ad502abff6c04510b71ecca9fe3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1166a0ccf1da7628b8f78da091241bf9c113fee43a78b55cceffc8a1f8a1e14c\"" Dec 13 01:55:43.444058 kubelet[2865]: E1213 01:55:43.444008 2865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-36?timeout=10s\": dial tcp 172.31.24.36:6443: connect: connection refused" interval="1.6s" Dec 13 01:55:43.448788 containerd[2035]: time="2024-12-13T01:55:43.448418620Z" level=info msg="CreateContainer within sandbox \"1166a0ccf1da7628b8f78da091241bf9c113fee43a78b55cceffc8a1f8a1e14c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:55:43.467109 containerd[2035]: time="2024-12-13T01:55:43.467024260Z" level=info msg="CreateContainer within sandbox \"621f487f67f5eb4d3fee91c2d1728b49767031d4b44103adcc6c948e82d5eff9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"43a87b3d7cdc603b988c67744e1c74bfaf6ebf14d8e4cf827e7584de1587ae87\"" Dec 13 01:55:43.468612 containerd[2035]: time="2024-12-13T01:55:43.468067096Z" level=info msg="StartContainer for \"43a87b3d7cdc603b988c67744e1c74bfaf6ebf14d8e4cf827e7584de1587ae87\"" Dec 13 01:55:43.474612 containerd[2035]: time="2024-12-13T01:55:43.473762656Z" level=info msg="CreateContainer within sandbox \"3c2955be7dee99d7c51ce20333201faf9baff04289863809b01746dae426546c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"347f9b8f16ccd085030d3e29f71216a333fb90d578170f532bdec0426960cb85\"" Dec 13 01:55:43.475406 containerd[2035]: time="2024-12-13T01:55:43.475363636Z" level=info msg="StartContainer for \"347f9b8f16ccd085030d3e29f71216a333fb90d578170f532bdec0426960cb85\"" Dec 13 01:55:43.483979 containerd[2035]: time="2024-12-13T01:55:43.483794296Z" level=info 
msg="CreateContainer within sandbox \"1166a0ccf1da7628b8f78da091241bf9c113fee43a78b55cceffc8a1f8a1e14c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"87eca29d67eb0f915e177da3154a9149f4feb8f214c8fd7bd9f34e83daaaf25b\"" Dec 13 01:55:43.485359 containerd[2035]: time="2024-12-13T01:55:43.485275492Z" level=info msg="StartContainer for \"87eca29d67eb0f915e177da3154a9149f4feb8f214c8fd7bd9f34e83daaaf25b\"" Dec 13 01:55:43.521583 kubelet[2865]: W1213 01:55:43.521495 2865 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.24.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused Dec 13 01:55:43.521791 kubelet[2865]: E1213 01:55:43.521768 2865 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.36:6443: connect: connection refused Dec 13 01:55:43.541536 systemd[1]: Started cri-containerd-43a87b3d7cdc603b988c67744e1c74bfaf6ebf14d8e4cf827e7584de1587ae87.scope - libcontainer container 43a87b3d7cdc603b988c67744e1c74bfaf6ebf14d8e4cf827e7584de1587ae87. Dec 13 01:55:43.549410 kubelet[2865]: I1213 01:55:43.547858 2865 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-36" Dec 13 01:55:43.549410 kubelet[2865]: E1213 01:55:43.548309 2865 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.36:6443/api/v1/nodes\": dial tcp 172.31.24.36:6443: connect: connection refused" node="ip-172-31-24-36" Dec 13 01:55:43.564037 systemd[1]: Started cri-containerd-347f9b8f16ccd085030d3e29f71216a333fb90d578170f532bdec0426960cb85.scope - libcontainer container 347f9b8f16ccd085030d3e29f71216a333fb90d578170f532bdec0426960cb85. 
Dec 13 01:55:43.589883 systemd[1]: Started cri-containerd-87eca29d67eb0f915e177da3154a9149f4feb8f214c8fd7bd9f34e83daaaf25b.scope - libcontainer container 87eca29d67eb0f915e177da3154a9149f4feb8f214c8fd7bd9f34e83daaaf25b. Dec 13 01:55:43.687675 containerd[2035]: time="2024-12-13T01:55:43.687441569Z" level=info msg="StartContainer for \"43a87b3d7cdc603b988c67744e1c74bfaf6ebf14d8e4cf827e7584de1587ae87\" returns successfully" Dec 13 01:55:43.702526 containerd[2035]: time="2024-12-13T01:55:43.702438414Z" level=info msg="StartContainer for \"347f9b8f16ccd085030d3e29f71216a333fb90d578170f532bdec0426960cb85\" returns successfully" Dec 13 01:55:43.731713 containerd[2035]: time="2024-12-13T01:55:43.731203590Z" level=info msg="StartContainer for \"87eca29d67eb0f915e177da3154a9149f4feb8f214c8fd7bd9f34e83daaaf25b\" returns successfully" Dec 13 01:55:45.153608 kubelet[2865]: I1213 01:55:45.152151 2865 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-36" Dec 13 01:55:47.285263 kubelet[2865]: E1213 01:55:47.285197 2865 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-36\" not found" node="ip-172-31-24-36" Dec 13 01:55:47.368093 kubelet[2865]: I1213 01:55:47.367911 2865 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-36" Dec 13 01:55:47.467937 kubelet[2865]: E1213 01:55:47.467675 2865 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-24-36.181099c847ec9141 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-36,UID:ip-172-31-24-36,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-36,},FirstTimestamp:2024-12-13 01:55:42.016835905 +0000 UTC m=+1.160374639,LastTimestamp:2024-12-13 01:55:42.016835905 +0000 UTC 
m=+1.160374639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-36,}" Dec 13 01:55:47.539298 kubelet[2865]: E1213 01:55:47.537703 2865 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-24-36\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-36" Dec 13 01:55:47.539769 kubelet[2865]: E1213 01:55:47.539737 2865 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-24-36.181099c849d2c699 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-36,UID:ip-172-31-24-36,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-24-36,},FirstTimestamp:2024-12-13 01:55:42.048700057 +0000 UTC m=+1.192238827,LastTimestamp:2024-12-13 01:55:42.048700057 +0000 UTC m=+1.192238827,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-36,}" Dec 13 01:55:48.016132 kubelet[2865]: I1213 01:55:48.014646 2865 apiserver.go:52] "Watching apiserver" Dec 13 01:55:48.044192 kubelet[2865]: I1213 01:55:48.044115 2865 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:55:50.029622 systemd[1]: Reloading requested from client PID 3147 ('systemctl') (unit session-5.scope)... Dec 13 01:55:50.030176 systemd[1]: Reloading... Dec 13 01:55:50.237679 zram_generator::config[3199]: No configuration found. 
Dec 13 01:55:50.449018 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:55:50.655282 systemd[1]: Reloading finished in 624 ms. Dec 13 01:55:50.743171 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:50.743757 kubelet[2865]: I1213 01:55:50.743703 2865 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:55:50.761369 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:55:50.762013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:50.762084 systemd[1]: kubelet.service: Consumed 1.867s CPU time, 112.8M memory peak, 0B memory swap peak. Dec 13 01:55:50.770191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:55:51.071503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:55:51.090404 (kubelet)[3247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:55:51.239814 kubelet[3247]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:55:51.239814 kubelet[3247]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:55:51.239814 kubelet[3247]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:55:51.242089 kubelet[3247]: I1213 01:55:51.239913 3247 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:55:51.255069 kubelet[3247]: I1213 01:55:51.254993 3247 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:55:51.255069 kubelet[3247]: I1213 01:55:51.255055 3247 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:55:51.256599 kubelet[3247]: I1213 01:55:51.255421 3247 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:55:51.258673 kubelet[3247]: I1213 01:55:51.258614 3247 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:55:51.265653 kubelet[3247]: I1213 01:55:51.264499 3247 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:55:51.280759 kubelet[3247]: I1213 01:55:51.280718 3247 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:55:51.281506 kubelet[3247]: I1213 01:55:51.281474 3247 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:55:51.282072 kubelet[3247]: I1213 01:55:51.282036 3247 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:55:51.282300 kubelet[3247]: I1213 01:55:51.282278 3247 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:55:51.282417 kubelet[3247]: I1213 01:55:51.282397 3247 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:55:51.282598 kubelet[3247]: I1213 
01:55:51.282539 3247 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:55:51.283267 kubelet[3247]: I1213 01:55:51.283242 3247 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:55:51.283662 kubelet[3247]: I1213 01:55:51.283640 3247 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:55:51.283820 kubelet[3247]: I1213 01:55:51.283800 3247 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:55:51.285024 kubelet[3247]: I1213 01:55:51.284967 3247 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:55:51.292024 kubelet[3247]: I1213 01:55:51.291152 3247 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:55:51.292024 kubelet[3247]: I1213 01:55:51.291516 3247 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:55:51.292434 kubelet[3247]: I1213 01:55:51.292393 3247 server.go:1256] "Started kubelet" Dec 13 01:55:51.299384 kubelet[3247]: I1213 01:55:51.298822 3247 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:55:51.299787 kubelet[3247]: I1213 01:55:51.299705 3247 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:55:51.301439 kubelet[3247]: I1213 01:55:51.301368 3247 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:55:51.312477 kubelet[3247]: I1213 01:55:51.309251 3247 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:55:51.312477 kubelet[3247]: I1213 01:55:51.309867 3247 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:55:51.320355 kubelet[3247]: I1213 01:55:51.320312 3247 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:55:51.321776 kubelet[3247]: I1213 01:55:51.321736 3247 desired_state_of_world_populator.go:151] "Desired 
state populator starts to run" Dec 13 01:55:51.322302 kubelet[3247]: I1213 01:55:51.322273 3247 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:55:51.342375 kubelet[3247]: I1213 01:55:51.342221 3247 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:55:51.342905 kubelet[3247]: I1213 01:55:51.342661 3247 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:55:51.349612 kubelet[3247]: I1213 01:55:51.348541 3247 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:55:51.366753 kubelet[3247]: E1213 01:55:51.366709 3247 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:55:51.412077 kubelet[3247]: I1213 01:55:51.411131 3247 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:55:51.434153 kubelet[3247]: I1213 01:55:51.434114 3247 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:55:51.434362 kubelet[3247]: I1213 01:55:51.434339 3247 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:55:51.434489 kubelet[3247]: I1213 01:55:51.434468 3247 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:55:51.435994 kubelet[3247]: E1213 01:55:51.435958 3247 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:55:51.441379 kubelet[3247]: I1213 01:55:51.441093 3247 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-36" Dec 13 01:55:51.491809 kubelet[3247]: I1213 01:55:51.491748 3247 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-24-36" Dec 13 01:55:51.491940 kubelet[3247]: I1213 01:55:51.491879 3247 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-36" Dec 13 01:55:51.544613 kubelet[3247]: E1213 01:55:51.544459 3247 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:55:51.549510 kubelet[3247]: I1213 01:55:51.549378 3247 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:55:51.549832 kubelet[3247]: I1213 01:55:51.549782 3247 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:55:51.549915 kubelet[3247]: I1213 01:55:51.549839 3247 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:55:51.551885 kubelet[3247]: I1213 01:55:51.551836 3247 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:55:51.552203 kubelet[3247]: I1213 01:55:51.551900 3247 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:55:51.552203 kubelet[3247]: I1213 01:55:51.551920 3247 policy_none.go:49] "None policy: Start" Dec 13 01:55:51.555515 kubelet[3247]: I1213 01:55:51.555465 3247 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:55:51.555667 kubelet[3247]: I1213 
01:55:51.555526 3247 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:55:51.556187 kubelet[3247]: I1213 01:55:51.555862 3247 state_mem.go:75] "Updated machine memory state" Dec 13 01:55:51.566025 kubelet[3247]: I1213 01:55:51.565691 3247 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:55:51.567083 kubelet[3247]: I1213 01:55:51.566921 3247 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:55:51.745584 kubelet[3247]: I1213 01:55:51.745294 3247 topology_manager.go:215] "Topology Admit Handler" podUID="334326bcef32058f9acc561b7d3a70e5" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-36" Dec 13 01:55:51.745584 kubelet[3247]: I1213 01:55:51.745427 3247 topology_manager.go:215] "Topology Admit Handler" podUID="5a6f735b49739075fe40f1d85c6c7052" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-36" Dec 13 01:55:51.745584 kubelet[3247]: I1213 01:55:51.745525 3247 topology_manager.go:215] "Topology Admit Handler" podUID="6de4ad502abff6c04510b71ecca9fe3b" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-36" Dec 13 01:55:51.835753 kubelet[3247]: I1213 01:55:51.835024 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/334326bcef32058f9acc561b7d3a70e5-ca-certs\") pod \"kube-apiserver-ip-172-31-24-36\" (UID: \"334326bcef32058f9acc561b7d3a70e5\") " pod="kube-system/kube-apiserver-ip-172-31-24-36" Dec 13 01:55:51.835753 kubelet[3247]: I1213 01:55:51.835183 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5a6f735b49739075fe40f1d85c6c7052-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-36\" (UID: \"5a6f735b49739075fe40f1d85c6c7052\") " pod="kube-system/kube-controller-manager-ip-172-31-24-36" Dec 13 
01:55:51.835753 kubelet[3247]: I1213 01:55:51.835242 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5a6f735b49739075fe40f1d85c6c7052-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-36\" (UID: \"5a6f735b49739075fe40f1d85c6c7052\") " pod="kube-system/kube-controller-manager-ip-172-31-24-36" Dec 13 01:55:51.835753 kubelet[3247]: I1213 01:55:51.835287 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6de4ad502abff6c04510b71ecca9fe3b-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-36\" (UID: \"6de4ad502abff6c04510b71ecca9fe3b\") " pod="kube-system/kube-scheduler-ip-172-31-24-36" Dec 13 01:55:51.835753 kubelet[3247]: I1213 01:55:51.835331 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/334326bcef32058f9acc561b7d3a70e5-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-36\" (UID: \"334326bcef32058f9acc561b7d3a70e5\") " pod="kube-system/kube-apiserver-ip-172-31-24-36" Dec 13 01:55:51.836142 kubelet[3247]: I1213 01:55:51.835380 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/334326bcef32058f9acc561b7d3a70e5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-36\" (UID: \"334326bcef32058f9acc561b7d3a70e5\") " pod="kube-system/kube-apiserver-ip-172-31-24-36" Dec 13 01:55:51.836142 kubelet[3247]: I1213 01:55:51.835422 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5a6f735b49739075fe40f1d85c6c7052-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-36\" (UID: \"5a6f735b49739075fe40f1d85c6c7052\") " 
pod="kube-system/kube-controller-manager-ip-172-31-24-36" Dec 13 01:55:51.836142 kubelet[3247]: I1213 01:55:51.835464 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5a6f735b49739075fe40f1d85c6c7052-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-36\" (UID: \"5a6f735b49739075fe40f1d85c6c7052\") " pod="kube-system/kube-controller-manager-ip-172-31-24-36" Dec 13 01:55:51.836142 kubelet[3247]: I1213 01:55:51.835516 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5a6f735b49739075fe40f1d85c6c7052-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-36\" (UID: \"5a6f735b49739075fe40f1d85c6c7052\") " pod="kube-system/kube-controller-manager-ip-172-31-24-36" Dec 13 01:55:52.288091 kubelet[3247]: I1213 01:55:52.287987 3247 apiserver.go:52] "Watching apiserver" Dec 13 01:55:52.324021 kubelet[3247]: I1213 01:55:52.322994 3247 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:55:52.500044 kubelet[3247]: E1213 01:55:52.499993 3247 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-24-36\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-36" Dec 13 01:55:52.535464 kubelet[3247]: I1213 01:55:52.535192 3247 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-36" podStartSLOduration=1.535097221 podStartE2EDuration="1.535097221s" podCreationTimestamp="2024-12-13 01:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:52.532476709 +0000 UTC m=+1.428535544" watchObservedRunningTime="2024-12-13 01:55:52.535097221 +0000 UTC m=+1.431156032" Dec 13 01:55:52.552480 
kubelet[3247]: I1213 01:55:52.552151 3247 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-36" podStartSLOduration=1.552092569 podStartE2EDuration="1.552092569s" podCreationTimestamp="2024-12-13 01:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:52.551721733 +0000 UTC m=+1.447780556" watchObservedRunningTime="2024-12-13 01:55:52.552092569 +0000 UTC m=+1.448151416" Dec 13 01:55:52.573633 kubelet[3247]: I1213 01:55:52.573573 3247 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-36" podStartSLOduration=1.5734909099999999 podStartE2EDuration="1.57349091s" podCreationTimestamp="2024-12-13 01:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:52.573067238 +0000 UTC m=+1.469126061" watchObservedRunningTime="2024-12-13 01:55:52.57349091 +0000 UTC m=+1.469549721" Dec 13 01:55:52.993385 sudo[2303]: pam_unix(sudo:session): session closed for user root Dec 13 01:55:53.016814 sshd[2298]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:53.025361 systemd[1]: sshd@4-172.31.24.36:22-139.178.68.195:57130.service: Deactivated successfully. Dec 13 01:55:53.030070 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:55:53.030501 systemd[1]: session-5.scope: Consumed 10.269s CPU time, 187.1M memory peak, 0B memory swap peak. Dec 13 01:55:53.031509 systemd-logind[2013]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:55:53.033393 systemd-logind[2013]: Removed session 5. Dec 13 01:55:56.091671 update_engine[2015]: I20241213 01:55:56.091550 2015 update_attempter.cc:509] Updating boot flags... 
Dec 13 01:55:56.177642 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3318) Dec 13 01:55:56.451683 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3322) Dec 13 01:56:05.101453 kubelet[3247]: I1213 01:56:05.101395 3247 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:56:05.102372 containerd[2035]: time="2024-12-13T01:56:05.102175764Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:56:05.102930 kubelet[3247]: I1213 01:56:05.102500 3247 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:56:05.866939 kubelet[3247]: I1213 01:56:05.865791 3247 topology_manager.go:215] "Topology Admit Handler" podUID="e65383c0-03c3-4a0e-be66-210a9b1939e2" podNamespace="kube-system" podName="kube-proxy-mt9vg" Dec 13 01:56:05.886273 systemd[1]: Created slice kubepods-besteffort-pode65383c0_03c3_4a0e_be66_210a9b1939e2.slice - libcontainer container kubepods-besteffort-pode65383c0_03c3_4a0e_be66_210a9b1939e2.slice. 
Dec 13 01:56:05.896333 kubelet[3247]: I1213 01:56:05.896273 3247 topology_manager.go:215] "Topology Admit Handler" podUID="bde13259-7dc5-4fcd-959d-01ece68715a6" podNamespace="kube-flannel" podName="kube-flannel-ds-b6ht8" Dec 13 01:56:05.911447 kubelet[3247]: W1213 01:56:05.911393 3247 reflector.go:539] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-24-36" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-24-36' and this object Dec 13 01:56:05.911447 kubelet[3247]: E1213 01:56:05.911449 3247 reflector.go:147] object-"kube-flannel"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-24-36" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-24-36' and this object Dec 13 01:56:05.911701 kubelet[3247]: W1213 01:56:05.911518 3247 reflector.go:539] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ip-172-31-24-36" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-24-36' and this object Dec 13 01:56:05.911701 kubelet[3247]: E1213 01:56:05.911551 3247 reflector.go:147] object-"kube-flannel"/"kube-flannel-cfg": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ip-172-31-24-36" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-24-36' and this object Dec 13 01:56:05.921142 systemd[1]: Created slice kubepods-burstable-podbde13259_7dc5_4fcd_959d_01ece68715a6.slice - libcontainer container 
kubepods-burstable-podbde13259_7dc5_4fcd_959d_01ece68715a6.slice. Dec 13 01:56:05.924984 kubelet[3247]: I1213 01:56:05.924929 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e65383c0-03c3-4a0e-be66-210a9b1939e2-kube-proxy\") pod \"kube-proxy-mt9vg\" (UID: \"e65383c0-03c3-4a0e-be66-210a9b1939e2\") " pod="kube-system/kube-proxy-mt9vg" Dec 13 01:56:05.926733 kubelet[3247]: I1213 01:56:05.926193 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfjrh\" (UniqueName: \"kubernetes.io/projected/bde13259-7dc5-4fcd-959d-01ece68715a6-kube-api-access-xfjrh\") pod \"kube-flannel-ds-b6ht8\" (UID: \"bde13259-7dc5-4fcd-959d-01ece68715a6\") " pod="kube-flannel/kube-flannel-ds-b6ht8" Dec 13 01:56:05.927844 kubelet[3247]: I1213 01:56:05.926836 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77r82\" (UniqueName: \"kubernetes.io/projected/e65383c0-03c3-4a0e-be66-210a9b1939e2-kube-api-access-77r82\") pod \"kube-proxy-mt9vg\" (UID: \"e65383c0-03c3-4a0e-be66-210a9b1939e2\") " pod="kube-system/kube-proxy-mt9vg" Dec 13 01:56:05.927844 kubelet[3247]: I1213 01:56:05.926924 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e65383c0-03c3-4a0e-be66-210a9b1939e2-lib-modules\") pod \"kube-proxy-mt9vg\" (UID: \"e65383c0-03c3-4a0e-be66-210a9b1939e2\") " pod="kube-system/kube-proxy-mt9vg" Dec 13 01:56:05.927844 kubelet[3247]: I1213 01:56:05.926994 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/bde13259-7dc5-4fcd-959d-01ece68715a6-cni-plugin\") pod \"kube-flannel-ds-b6ht8\" (UID: \"bde13259-7dc5-4fcd-959d-01ece68715a6\") " 
pod="kube-flannel/kube-flannel-ds-b6ht8" Dec 13 01:56:05.927844 kubelet[3247]: I1213 01:56:05.927065 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/bde13259-7dc5-4fcd-959d-01ece68715a6-flannel-cfg\") pod \"kube-flannel-ds-b6ht8\" (UID: \"bde13259-7dc5-4fcd-959d-01ece68715a6\") " pod="kube-flannel/kube-flannel-ds-b6ht8" Dec 13 01:56:05.927844 kubelet[3247]: I1213 01:56:05.927141 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bde13259-7dc5-4fcd-959d-01ece68715a6-xtables-lock\") pod \"kube-flannel-ds-b6ht8\" (UID: \"bde13259-7dc5-4fcd-959d-01ece68715a6\") " pod="kube-flannel/kube-flannel-ds-b6ht8" Dec 13 01:56:05.928243 kubelet[3247]: I1213 01:56:05.927194 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e65383c0-03c3-4a0e-be66-210a9b1939e2-xtables-lock\") pod \"kube-proxy-mt9vg\" (UID: \"e65383c0-03c3-4a0e-be66-210a9b1939e2\") " pod="kube-system/kube-proxy-mt9vg" Dec 13 01:56:05.928243 kubelet[3247]: I1213 01:56:05.927683 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bde13259-7dc5-4fcd-959d-01ece68715a6-run\") pod \"kube-flannel-ds-b6ht8\" (UID: \"bde13259-7dc5-4fcd-959d-01ece68715a6\") " pod="kube-flannel/kube-flannel-ds-b6ht8" Dec 13 01:56:05.928243 kubelet[3247]: I1213 01:56:05.927756 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/bde13259-7dc5-4fcd-959d-01ece68715a6-cni\") pod \"kube-flannel-ds-b6ht8\" (UID: \"bde13259-7dc5-4fcd-959d-01ece68715a6\") " pod="kube-flannel/kube-flannel-ds-b6ht8" Dec 13 01:56:06.062480 kubelet[3247]: E1213 
01:56:06.062390 3247 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 01:56:06.062684 kubelet[3247]: E1213 01:56:06.062595 3247 projected.go:200] Error preparing data for projected volume kube-api-access-77r82 for pod kube-system/kube-proxy-mt9vg: configmap "kube-root-ca.crt" not found Dec 13 01:56:06.063065 kubelet[3247]: E1213 01:56:06.063017 3247 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e65383c0-03c3-4a0e-be66-210a9b1939e2-kube-api-access-77r82 podName:e65383c0-03c3-4a0e-be66-210a9b1939e2 nodeName:}" failed. No retries permitted until 2024-12-13 01:56:06.562842189 +0000 UTC m=+15.458901012 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-77r82" (UniqueName: "kubernetes.io/projected/e65383c0-03c3-4a0e-be66-210a9b1939e2-kube-api-access-77r82") pod "kube-proxy-mt9vg" (UID: "e65383c0-03c3-4a0e-be66-210a9b1939e2") : configmap "kube-root-ca.crt" not found Dec 13 01:56:06.804014 containerd[2035]: time="2024-12-13T01:56:06.803945788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mt9vg,Uid:e65383c0-03c3-4a0e-be66-210a9b1939e2,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:06.852068 containerd[2035]: time="2024-12-13T01:56:06.851770084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:06.852068 containerd[2035]: time="2024-12-13T01:56:06.851884924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:06.852068 containerd[2035]: time="2024-12-13T01:56:06.851923708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:06.852521 containerd[2035]: time="2024-12-13T01:56:06.852096797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:06.895905 systemd[1]: Started cri-containerd-8149f11acef3d09dde029d756e3584f5109bc43dbfaa86b55a4ff0743b4f37fe.scope - libcontainer container 8149f11acef3d09dde029d756e3584f5109bc43dbfaa86b55a4ff0743b4f37fe. Dec 13 01:56:06.938841 containerd[2035]: time="2024-12-13T01:56:06.938779193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mt9vg,Uid:e65383c0-03c3-4a0e-be66-210a9b1939e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8149f11acef3d09dde029d756e3584f5109bc43dbfaa86b55a4ff0743b4f37fe\"" Dec 13 01:56:06.945433 containerd[2035]: time="2024-12-13T01:56:06.945348497Z" level=info msg="CreateContainer within sandbox \"8149f11acef3d09dde029d756e3584f5109bc43dbfaa86b55a4ff0743b4f37fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:56:06.984736 containerd[2035]: time="2024-12-13T01:56:06.984647753Z" level=info msg="CreateContainer within sandbox \"8149f11acef3d09dde029d756e3584f5109bc43dbfaa86b55a4ff0743b4f37fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4eac82971763d6214ecac9b9b9a3c167910fdbda6897fd18e1c2f418acaa756e\"" Dec 13 01:56:06.987102 containerd[2035]: time="2024-12-13T01:56:06.985813481Z" level=info msg="StartContainer for \"4eac82971763d6214ecac9b9b9a3c167910fdbda6897fd18e1c2f418acaa756e\"" Dec 13 01:56:07.034889 systemd[1]: Started cri-containerd-4eac82971763d6214ecac9b9b9a3c167910fdbda6897fd18e1c2f418acaa756e.scope - libcontainer container 4eac82971763d6214ecac9b9b9a3c167910fdbda6897fd18e1c2f418acaa756e. 
Dec 13 01:56:07.044974 kubelet[3247]: E1213 01:56:07.044920 3247 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:56:07.046010 kubelet[3247]: E1213 01:56:07.044984 3247 projected.go:200] Error preparing data for projected volume kube-api-access-xfjrh for pod kube-flannel/kube-flannel-ds-b6ht8: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:56:07.046010 kubelet[3247]: E1213 01:56:07.045072 3247 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bde13259-7dc5-4fcd-959d-01ece68715a6-kube-api-access-xfjrh podName:bde13259-7dc5-4fcd-959d-01ece68715a6 nodeName:}" failed. No retries permitted until 2024-12-13 01:56:07.545043141 +0000 UTC m=+16.441101952 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xfjrh" (UniqueName: "kubernetes.io/projected/bde13259-7dc5-4fcd-959d-01ece68715a6-kube-api-access-xfjrh") pod "kube-flannel-ds-b6ht8" (UID: "bde13259-7dc5-4fcd-959d-01ece68715a6") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:56:07.094694 containerd[2035]: time="2024-12-13T01:56:07.094193798Z" level=info msg="StartContainer for \"4eac82971763d6214ecac9b9b9a3c167910fdbda6897fd18e1c2f418acaa756e\" returns successfully" Dec 13 01:56:07.541697 kubelet[3247]: I1213 01:56:07.540683 3247 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mt9vg" podStartSLOduration=2.54055396 podStartE2EDuration="2.54055396s" podCreationTimestamp="2024-12-13 01:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:56:07.540049408 +0000 UTC m=+16.436108207" watchObservedRunningTime="2024-12-13 01:56:07.54055396 +0000 UTC m=+16.436612783" Dec 13 01:56:07.643959 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3808427757.mount: Deactivated successfully. Dec 13 01:56:07.729922 containerd[2035]: time="2024-12-13T01:56:07.729380945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-b6ht8,Uid:bde13259-7dc5-4fcd-959d-01ece68715a6,Namespace:kube-flannel,Attempt:0,}" Dec 13 01:56:07.778956 containerd[2035]: time="2024-12-13T01:56:07.778795529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:07.778956 containerd[2035]: time="2024-12-13T01:56:07.778903481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:07.779188 containerd[2035]: time="2024-12-13T01:56:07.778940573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:07.779188 containerd[2035]: time="2024-12-13T01:56:07.779095049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:07.816916 systemd[1]: Started cri-containerd-0d1f34379376b36da5b5d4cd46e507869a0c16ab319ee0729d4e83839a53b473.scope - libcontainer container 0d1f34379376b36da5b5d4cd46e507869a0c16ab319ee0729d4e83839a53b473. Dec 13 01:56:07.879473 containerd[2035]: time="2024-12-13T01:56:07.879326670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-b6ht8,Uid:bde13259-7dc5-4fcd-959d-01ece68715a6,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"0d1f34379376b36da5b5d4cd46e507869a0c16ab319ee0729d4e83839a53b473\"" Dec 13 01:56:07.884409 containerd[2035]: time="2024-12-13T01:56:07.884340078Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 01:56:09.852157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1355255822.mount: Deactivated successfully. 
Dec 13 01:56:09.919120 containerd[2035]: time="2024-12-13T01:56:09.919060928Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:09.921294 containerd[2035]: time="2024-12-13T01:56:09.921174944Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532" Dec 13 01:56:09.923518 containerd[2035]: time="2024-12-13T01:56:09.923405024Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:09.930460 containerd[2035]: time="2024-12-13T01:56:09.930348572Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:09.933237 containerd[2035]: time="2024-12-13T01:56:09.932033564Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.04760057s" Dec 13 01:56:09.933237 containerd[2035]: time="2024-12-13T01:56:09.932102120Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Dec 13 01:56:09.936141 containerd[2035]: time="2024-12-13T01:56:09.936091808Z" level=info msg="CreateContainer within sandbox \"0d1f34379376b36da5b5d4cd46e507869a0c16ab319ee0729d4e83839a53b473\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 01:56:09.963731 containerd[2035]: 
time="2024-12-13T01:56:09.963674588Z" level=info msg="CreateContainer within sandbox \"0d1f34379376b36da5b5d4cd46e507869a0c16ab319ee0729d4e83839a53b473\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"863db6a7d14eaa000f98202ced1539a80d516a018e6a1fe20e26497f1ba57f0a\"" Dec 13 01:56:09.965433 containerd[2035]: time="2024-12-13T01:56:09.964526132Z" level=info msg="StartContainer for \"863db6a7d14eaa000f98202ced1539a80d516a018e6a1fe20e26497f1ba57f0a\"" Dec 13 01:56:10.013932 systemd[1]: Started cri-containerd-863db6a7d14eaa000f98202ced1539a80d516a018e6a1fe20e26497f1ba57f0a.scope - libcontainer container 863db6a7d14eaa000f98202ced1539a80d516a018e6a1fe20e26497f1ba57f0a. Dec 13 01:56:10.059294 containerd[2035]: time="2024-12-13T01:56:10.059214448Z" level=info msg="StartContainer for \"863db6a7d14eaa000f98202ced1539a80d516a018e6a1fe20e26497f1ba57f0a\" returns successfully" Dec 13 01:56:10.061260 systemd[1]: cri-containerd-863db6a7d14eaa000f98202ced1539a80d516a018e6a1fe20e26497f1ba57f0a.scope: Deactivated successfully. Dec 13 01:56:10.134237 containerd[2035]: time="2024-12-13T01:56:10.134050685Z" level=info msg="shim disconnected" id=863db6a7d14eaa000f98202ced1539a80d516a018e6a1fe20e26497f1ba57f0a namespace=k8s.io Dec 13 01:56:10.134237 containerd[2035]: time="2024-12-13T01:56:10.134127461Z" level=warning msg="cleaning up after shim disconnected" id=863db6a7d14eaa000f98202ced1539a80d516a018e6a1fe20e26497f1ba57f0a namespace=k8s.io Dec 13 01:56:10.134237 containerd[2035]: time="2024-12-13T01:56:10.134149769Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:10.539307 containerd[2035]: time="2024-12-13T01:56:10.538940539Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 01:56:10.709211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-863db6a7d14eaa000f98202ced1539a80d516a018e6a1fe20e26497f1ba57f0a-rootfs.mount: Deactivated successfully. 
Dec 13 01:56:12.691178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount448864263.mount: Deactivated successfully. Dec 13 01:56:13.876670 containerd[2035]: time="2024-12-13T01:56:13.875793059Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:13.878152 containerd[2035]: time="2024-12-13T01:56:13.878072831Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261" Dec 13 01:56:13.880622 containerd[2035]: time="2024-12-13T01:56:13.880509551Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:13.888164 containerd[2035]: time="2024-12-13T01:56:13.888068675Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:56:13.891068 containerd[2035]: time="2024-12-13T01:56:13.890720003Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.351639364s" Dec 13 01:56:13.891068 containerd[2035]: time="2024-12-13T01:56:13.890782739Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Dec 13 01:56:13.897483 containerd[2035]: time="2024-12-13T01:56:13.897403823Z" level=info msg="CreateContainer within sandbox \"0d1f34379376b36da5b5d4cd46e507869a0c16ab319ee0729d4e83839a53b473\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:56:13.921441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount367524890.mount: Deactivated successfully. Dec 13 01:56:13.926840 containerd[2035]: time="2024-12-13T01:56:13.926766360Z" level=info msg="CreateContainer within sandbox \"0d1f34379376b36da5b5d4cd46e507869a0c16ab319ee0729d4e83839a53b473\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"05fab7fd1d5360e3239e4be694aa1fd71812a4825945362e6338bb12403d5869\"" Dec 13 01:56:13.929063 containerd[2035]: time="2024-12-13T01:56:13.928601604Z" level=info msg="StartContainer for \"05fab7fd1d5360e3239e4be694aa1fd71812a4825945362e6338bb12403d5869\"" Dec 13 01:56:13.979883 systemd[1]: Started cri-containerd-05fab7fd1d5360e3239e4be694aa1fd71812a4825945362e6338bb12403d5869.scope - libcontainer container 05fab7fd1d5360e3239e4be694aa1fd71812a4825945362e6338bb12403d5869. Dec 13 01:56:14.025889 systemd[1]: cri-containerd-05fab7fd1d5360e3239e4be694aa1fd71812a4825945362e6338bb12403d5869.scope: Deactivated successfully. Dec 13 01:56:14.032377 containerd[2035]: time="2024-12-13T01:56:14.032296484Z" level=info msg="StartContainer for \"05fab7fd1d5360e3239e4be694aa1fd71812a4825945362e6338bb12403d5869\" returns successfully" Dec 13 01:56:14.065478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05fab7fd1d5360e3239e4be694aa1fd71812a4825945362e6338bb12403d5869-rootfs.mount: Deactivated successfully. 
Dec 13 01:56:14.099049 kubelet[3247]: I1213 01:56:14.097944 3247 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:56:14.160709 kubelet[3247]: I1213 01:56:14.158785 3247 topology_manager.go:215] "Topology Admit Handler" podUID="63aefb7d-a866-49a2-8487-dba7bd3b7758" podNamespace="kube-system" podName="coredns-76f75df574-xbv85" Dec 13 01:56:14.162406 kubelet[3247]: I1213 01:56:14.162310 3247 topology_manager.go:215] "Topology Admit Handler" podUID="ba5b7218-6f75-4a65-900a-01fd3c58de1f" podNamespace="kube-system" podName="coredns-76f75df574-hcf76" Dec 13 01:56:14.186148 kubelet[3247]: I1213 01:56:14.185743 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba5b7218-6f75-4a65-900a-01fd3c58de1f-config-volume\") pod \"coredns-76f75df574-hcf76\" (UID: \"ba5b7218-6f75-4a65-900a-01fd3c58de1f\") " pod="kube-system/coredns-76f75df574-hcf76" Dec 13 01:56:14.186148 kubelet[3247]: I1213 01:56:14.185854 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvr8j\" (UniqueName: \"kubernetes.io/projected/ba5b7218-6f75-4a65-900a-01fd3c58de1f-kube-api-access-vvr8j\") pod \"coredns-76f75df574-hcf76\" (UID: \"ba5b7218-6f75-4a65-900a-01fd3c58de1f\") " pod="kube-system/coredns-76f75df574-hcf76" Dec 13 01:56:14.186148 kubelet[3247]: I1213 01:56:14.185915 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63aefb7d-a866-49a2-8487-dba7bd3b7758-config-volume\") pod \"coredns-76f75df574-xbv85\" (UID: \"63aefb7d-a866-49a2-8487-dba7bd3b7758\") " pod="kube-system/coredns-76f75df574-xbv85" Dec 13 01:56:14.186148 kubelet[3247]: I1213 01:56:14.185986 3247 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5mfb\" (UniqueName: 
\"kubernetes.io/projected/63aefb7d-a866-49a2-8487-dba7bd3b7758-kube-api-access-m5mfb\") pod \"coredns-76f75df574-xbv85\" (UID: \"63aefb7d-a866-49a2-8487-dba7bd3b7758\") " pod="kube-system/coredns-76f75df574-xbv85" Dec 13 01:56:14.191845 systemd[1]: Created slice kubepods-burstable-pod63aefb7d_a866_49a2_8487_dba7bd3b7758.slice - libcontainer container kubepods-burstable-pod63aefb7d_a866_49a2_8487_dba7bd3b7758.slice. Dec 13 01:56:14.208086 containerd[2035]: time="2024-12-13T01:56:14.207880905Z" level=info msg="shim disconnected" id=05fab7fd1d5360e3239e4be694aa1fd71812a4825945362e6338bb12403d5869 namespace=k8s.io Dec 13 01:56:14.208086 containerd[2035]: time="2024-12-13T01:56:14.208019133Z" level=warning msg="cleaning up after shim disconnected" id=05fab7fd1d5360e3239e4be694aa1fd71812a4825945362e6338bb12403d5869 namespace=k8s.io Dec 13 01:56:14.208086 containerd[2035]: time="2024-12-13T01:56:14.208040985Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:14.214897 systemd[1]: Created slice kubepods-burstable-podba5b7218_6f75_4a65_900a_01fd3c58de1f.slice - libcontainer container kubepods-burstable-podba5b7218_6f75_4a65_900a_01fd3c58de1f.slice. 
Dec 13 01:56:14.513464 containerd[2035]: time="2024-12-13T01:56:14.513330671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xbv85,Uid:63aefb7d-a866-49a2-8487-dba7bd3b7758,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:14.528192 containerd[2035]: time="2024-12-13T01:56:14.527654195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hcf76,Uid:ba5b7218-6f75-4a65-900a-01fd3c58de1f,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:14.592676 containerd[2035]: time="2024-12-13T01:56:14.591281015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xbv85,Uid:63aefb7d-a866-49a2-8487-dba7bd3b7758,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c6e29d4e37e32967aea51785e095a265d9508e81ac5ee6013a5e2712ecafb92\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:56:14.592676 containerd[2035]: time="2024-12-13T01:56:14.591366395Z" level=info msg="CreateContainer within sandbox \"0d1f34379376b36da5b5d4cd46e507869a0c16ab319ee0729d4e83839a53b473\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 01:56:14.592920 kubelet[3247]: E1213 01:56:14.591755 3247 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c6e29d4e37e32967aea51785e095a265d9508e81ac5ee6013a5e2712ecafb92\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:56:14.592920 kubelet[3247]: E1213 01:56:14.591834 3247 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c6e29d4e37e32967aea51785e095a265d9508e81ac5ee6013a5e2712ecafb92\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no 
such file or directory" pod="kube-system/coredns-76f75df574-xbv85" Dec 13 01:56:14.592920 kubelet[3247]: E1213 01:56:14.591871 3247 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c6e29d4e37e32967aea51785e095a265d9508e81ac5ee6013a5e2712ecafb92\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-xbv85" Dec 13 01:56:14.592920 kubelet[3247]: E1213 01:56:14.591954 3247 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-xbv85_kube-system(63aefb7d-a866-49a2-8487-dba7bd3b7758)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-xbv85_kube-system(63aefb7d-a866-49a2-8487-dba7bd3b7758)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c6e29d4e37e32967aea51785e095a265d9508e81ac5ee6013a5e2712ecafb92\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-xbv85" podUID="63aefb7d-a866-49a2-8487-dba7bd3b7758" Dec 13 01:56:14.608511 containerd[2035]: time="2024-12-13T01:56:14.608273831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hcf76,Uid:ba5b7218-6f75-4a65-900a-01fd3c58de1f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"948d2c85375d774adbca9d67a6ddf6361087060242d304c42cb8333515015c87\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:56:14.608921 kubelet[3247]: E1213 01:56:14.608883 3247 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"948d2c85375d774adbca9d67a6ddf6361087060242d304c42cb8333515015c87\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 01:56:14.609056 kubelet[3247]: E1213 01:56:14.608983 3247 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"948d2c85375d774adbca9d67a6ddf6361087060242d304c42cb8333515015c87\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-hcf76" Dec 13 01:56:14.609173 kubelet[3247]: E1213 01:56:14.609139 3247 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"948d2c85375d774adbca9d67a6ddf6361087060242d304c42cb8333515015c87\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-hcf76" Dec 13 01:56:14.610092 kubelet[3247]: E1213 01:56:14.609372 3247 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-hcf76_kube-system(ba5b7218-6f75-4a65-900a-01fd3c58de1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-hcf76_kube-system(ba5b7218-6f75-4a65-900a-01fd3c58de1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"948d2c85375d774adbca9d67a6ddf6361087060242d304c42cb8333515015c87\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-hcf76" podUID="ba5b7218-6f75-4a65-900a-01fd3c58de1f" Dec 13 01:56:14.615011 containerd[2035]: time="2024-12-13T01:56:14.614811971Z" level=info msg="CreateContainer within sandbox 
\"0d1f34379376b36da5b5d4cd46e507869a0c16ab319ee0729d4e83839a53b473\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"a69958d74bffa98961d74258ba84f8d2b5b245e35689301043ac1f1966ab31fb\"" Dec 13 01:56:14.617847 containerd[2035]: time="2024-12-13T01:56:14.617776943Z" level=info msg="StartContainer for \"a69958d74bffa98961d74258ba84f8d2b5b245e35689301043ac1f1966ab31fb\"" Dec 13 01:56:14.665901 systemd[1]: Started cri-containerd-a69958d74bffa98961d74258ba84f8d2b5b245e35689301043ac1f1966ab31fb.scope - libcontainer container a69958d74bffa98961d74258ba84f8d2b5b245e35689301043ac1f1966ab31fb. Dec 13 01:56:14.720399 containerd[2035]: time="2024-12-13T01:56:14.720226272Z" level=info msg="StartContainer for \"a69958d74bffa98961d74258ba84f8d2b5b245e35689301043ac1f1966ab31fb\" returns successfully" Dec 13 01:56:15.589328 kubelet[3247]: I1213 01:56:15.588997 3247 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-b6ht8" podStartSLOduration=4.580313371 podStartE2EDuration="10.58893918s" podCreationTimestamp="2024-12-13 01:56:05 +0000 UTC" firstStartedPulling="2024-12-13 01:56:07.882486774 +0000 UTC m=+16.778545597" lastFinishedPulling="2024-12-13 01:56:13.891112595 +0000 UTC m=+22.787171406" observedRunningTime="2024-12-13 01:56:15.588640116 +0000 UTC m=+24.484698951" watchObservedRunningTime="2024-12-13 01:56:15.58893918 +0000 UTC m=+24.484998003" Dec 13 01:56:15.790188 (udev-worker)[3976]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:56:15.807982 systemd-networkd[1934]: flannel.1: Link UP Dec 13 01:56:15.808818 systemd-networkd[1934]: flannel.1: Gained carrier Dec 13 01:56:17.143803 systemd-networkd[1934]: flannel.1: Gained IPv6LL Dec 13 01:56:19.526858 ntpd[2000]: Listen normally on 8 flannel.1 192.168.0.0:123 Dec 13 01:56:19.526994 ntpd[2000]: Listen normally on 9 flannel.1 [fe80::c6c:9cff:fe03:68bb%4]:123 Dec 13 01:56:19.527442 ntpd[2000]: 13 Dec 01:56:19 ntpd[2000]: Listen normally on 8 flannel.1 192.168.0.0:123 Dec 13 01:56:19.527442 ntpd[2000]: 13 Dec 01:56:19 ntpd[2000]: Listen normally on 9 flannel.1 [fe80::c6c:9cff:fe03:68bb%4]:123 Dec 13 01:56:25.438223 containerd[2035]: time="2024-12-13T01:56:25.437493069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hcf76,Uid:ba5b7218-6f75-4a65-900a-01fd3c58de1f,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:25.477071 systemd-networkd[1934]: cni0: Link UP Dec 13 01:56:25.477086 systemd-networkd[1934]: cni0: Gained carrier Dec 13 01:56:25.484290 (udev-worker)[4090]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:56:25.484933 systemd-networkd[1934]: cni0: Lost carrier Dec 13 01:56:25.496791 systemd-networkd[1934]: veth6c620384: Link UP Dec 13 01:56:25.498171 kernel: cni0: port 1(veth6c620384) entered blocking state Dec 13 01:56:25.498272 kernel: cni0: port 1(veth6c620384) entered disabled state Dec 13 01:56:25.498313 kernel: veth6c620384: entered allmulticast mode Dec 13 01:56:25.499751 kernel: veth6c620384: entered promiscuous mode Dec 13 01:56:25.501744 kernel: cni0: port 1(veth6c620384) entered blocking state Dec 13 01:56:25.501823 kernel: cni0: port 1(veth6c620384) entered forwarding state Dec 13 01:56:25.504151 kernel: cni0: port 1(veth6c620384) entered disabled state Dec 13 01:56:25.504259 (udev-worker)[4093]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:56:25.520993 kernel: cni0: port 1(veth6c620384) entered blocking state Dec 13 01:56:25.521069 kernel: cni0: port 1(veth6c620384) entered forwarding state Dec 13 01:56:25.521528 systemd-networkd[1934]: veth6c620384: Gained carrier Dec 13 01:56:25.523339 systemd-networkd[1934]: cni0: Gained carrier Dec 13 01:56:25.533296 containerd[2035]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Dec 13 01:56:25.533296 containerd[2035]: delegateAdd: netconf sent to delegate plugin: Dec 13 01:56:25.565062 containerd[2035]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2024-12-13T01:56:25.564721245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:25.565062 containerd[2035]: time="2024-12-13T01:56:25.564811485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:25.565062 containerd[2035]: time="2024-12-13T01:56:25.564859137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:25.565958 containerd[2035]: time="2024-12-13T01:56:25.565723245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:25.607879 systemd[1]: Started cri-containerd-2779bc99d01346ab07bdafacdd73aa33e3cc871416427a7e1a2636ad93251fda.scope - libcontainer container 2779bc99d01346ab07bdafacdd73aa33e3cc871416427a7e1a2636ad93251fda. Dec 13 01:56:25.679156 containerd[2035]: time="2024-12-13T01:56:25.679032586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hcf76,Uid:ba5b7218-6f75-4a65-900a-01fd3c58de1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2779bc99d01346ab07bdafacdd73aa33e3cc871416427a7e1a2636ad93251fda\"" Dec 13 01:56:25.685174 containerd[2035]: time="2024-12-13T01:56:25.685105774Z" level=info msg="CreateContainer within sandbox \"2779bc99d01346ab07bdafacdd73aa33e3cc871416427a7e1a2636ad93251fda\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:56:25.719606 containerd[2035]: time="2024-12-13T01:56:25.719301994Z" level=info msg="CreateContainer within sandbox \"2779bc99d01346ab07bdafacdd73aa33e3cc871416427a7e1a2636ad93251fda\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fb466ca2d16bb7061866c1cbd0f84bcbfa4cffecd3f7ef070f01d907e7494066\"" Dec 13 01:56:25.721773 containerd[2035]: time="2024-12-13T01:56:25.720662038Z" level=info msg="StartContainer for \"fb466ca2d16bb7061866c1cbd0f84bcbfa4cffecd3f7ef070f01d907e7494066\"" Dec 13 01:56:25.763900 systemd[1]: Started cri-containerd-fb466ca2d16bb7061866c1cbd0f84bcbfa4cffecd3f7ef070f01d907e7494066.scope - libcontainer container fb466ca2d16bb7061866c1cbd0f84bcbfa4cffecd3f7ef070f01d907e7494066. Dec 13 01:56:25.815010 containerd[2035]: time="2024-12-13T01:56:25.814931519Z" level=info msg="StartContainer for \"fb466ca2d16bb7061866c1cbd0f84bcbfa4cffecd3f7ef070f01d907e7494066\" returns successfully" Dec 13 01:56:26.110099 systemd[1]: Started sshd@5-172.31.24.36:22-139.178.68.195:54834.service - OpenSSH per-connection server daemon (139.178.68.195:54834). 
Dec 13 01:56:26.277450 sshd[4205]: Accepted publickey for core from 139.178.68.195 port 54834 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:26.280188 sshd[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:26.288497 systemd-logind[2013]: New session 6 of user core. Dec 13 01:56:26.295843 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:56:26.437682 containerd[2035]: time="2024-12-13T01:56:26.437032930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xbv85,Uid:63aefb7d-a866-49a2-8487-dba7bd3b7758,Namespace:kube-system,Attempt:0,}" Dec 13 01:56:26.504266 systemd-networkd[1934]: veth9e6eae46: Link UP Dec 13 01:56:26.505893 (udev-worker)[4101]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:56:26.511600 kernel: cni0: port 2(veth9e6eae46) entered blocking state Dec 13 01:56:26.511714 kernel: cni0: port 2(veth9e6eae46) entered disabled state Dec 13 01:56:26.514766 kernel: veth9e6eae46: entered allmulticast mode Dec 13 01:56:26.517381 kernel: veth9e6eae46: entered promiscuous mode Dec 13 01:56:26.547643 kernel: cni0: port 2(veth9e6eae46) entered blocking state Dec 13 01:56:26.547729 kernel: cni0: port 2(veth9e6eae46) entered forwarding state Dec 13 01:56:26.547812 systemd-networkd[1934]: veth9e6eae46: Gained carrier Dec 13 01:56:26.554812 containerd[2035]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"} Dec 13 01:56:26.554812 containerd[2035]: delegateAdd: netconf sent to delegate 
plugin: Dec 13 01:56:26.613552 containerd[2035]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2024-12-13T01:56:26.612952991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:56:26.619150 containerd[2035]: time="2024-12-13T01:56:26.615348203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:56:26.619150 containerd[2035]: time="2024-12-13T01:56:26.615398951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:26.619150 containerd[2035]: time="2024-12-13T01:56:26.615585191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:56:26.634609 sshd[4205]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:26.648616 systemd[1]: sshd@5-172.31.24.36:22-139.178.68.195:54834.service: Deactivated successfully. Dec 13 01:56:26.658059 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:56:26.662356 systemd-logind[2013]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:56:26.680481 systemd-logind[2013]: Removed session 6. Dec 13 01:56:26.700483 systemd[1]: Started cri-containerd-bbb60173392b03d69de0004f56b7be82d238044fa69e1e69affa6cdb5cd10eb2.scope - libcontainer container bbb60173392b03d69de0004f56b7be82d238044fa69e1e69affa6cdb5cd10eb2. 
Dec 13 01:56:26.725965 kubelet[3247]: I1213 01:56:26.725884 3247 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hcf76" podStartSLOduration=20.725820839 podStartE2EDuration="20.725820839s" podCreationTimestamp="2024-12-13 01:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:56:26.669233195 +0000 UTC m=+35.565292018" watchObservedRunningTime="2024-12-13 01:56:26.725820839 +0000 UTC m=+35.621879674" Dec 13 01:56:26.792146 containerd[2035]: time="2024-12-13T01:56:26.792067344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xbv85,Uid:63aefb7d-a866-49a2-8487-dba7bd3b7758,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbb60173392b03d69de0004f56b7be82d238044fa69e1e69affa6cdb5cd10eb2\"" Dec 13 01:56:26.800305 containerd[2035]: time="2024-12-13T01:56:26.800238444Z" level=info msg="CreateContainer within sandbox \"bbb60173392b03d69de0004f56b7be82d238044fa69e1e69affa6cdb5cd10eb2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:56:26.828821 containerd[2035]: time="2024-12-13T01:56:26.828765564Z" level=info msg="CreateContainer within sandbox \"bbb60173392b03d69de0004f56b7be82d238044fa69e1e69affa6cdb5cd10eb2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9c5e77ecc7fc0274496949b4123840321ef22b7dc8e0c0e6094b8a98b129dd4d\"" Dec 13 01:56:26.830521 containerd[2035]: time="2024-12-13T01:56:26.830443692Z" level=info msg="StartContainer for \"9c5e77ecc7fc0274496949b4123840321ef22b7dc8e0c0e6094b8a98b129dd4d\"" Dec 13 01:56:26.875929 systemd[1]: Started cri-containerd-9c5e77ecc7fc0274496949b4123840321ef22b7dc8e0c0e6094b8a98b129dd4d.scope - libcontainer container 9c5e77ecc7fc0274496949b4123840321ef22b7dc8e0c0e6094b8a98b129dd4d. 
Dec 13 01:56:26.930053 containerd[2035]: time="2024-12-13T01:56:26.929767020Z" level=info msg="StartContainer for \"9c5e77ecc7fc0274496949b4123840321ef22b7dc8e0c0e6094b8a98b129dd4d\" returns successfully" Dec 13 01:56:27.256087 systemd-networkd[1934]: veth6c620384: Gained IPv6LL Dec 13 01:56:27.256521 systemd-networkd[1934]: cni0: Gained IPv6LL Dec 13 01:56:27.637363 kubelet[3247]: I1213 01:56:27.637191 3247 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xbv85" podStartSLOduration=21.637133496 podStartE2EDuration="21.637133496s" podCreationTimestamp="2024-12-13 01:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:56:27.63643062 +0000 UTC m=+36.532489455" watchObservedRunningTime="2024-12-13 01:56:27.637133496 +0000 UTC m=+36.533192319" Dec 13 01:56:27.959891 systemd-networkd[1934]: veth9e6eae46: Gained IPv6LL Dec 13 01:56:30.526912 ntpd[2000]: Listen normally on 10 cni0 192.168.0.1:123 Dec 13 01:56:30.527063 ntpd[2000]: Listen normally on 11 cni0 [fe80::f057:71ff:fe43:181a%5]:123 Dec 13 01:56:30.527147 ntpd[2000]: Listen normally on 12 veth6c620384 [fe80::645f:56ff:fecf:1de1%6]:123 Dec 13 01:56:30.527215 ntpd[2000]: Listen normally on 13 veth9e6eae46 [fe80::f09e:88ff:fe01:2734%7]:123 Dec 13 01:56:31.675117 systemd[1]: Started sshd@6-172.31.24.36:22-139.178.68.195:54836.service - OpenSSH per-connection server daemon 
(139.178.68.195:54836). Dec 13 01:56:31.852484 sshd[4349]: Accepted publickey for core from 139.178.68.195 port 54836 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:31.855138 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:31.864167 systemd-logind[2013]: New session 7 of user core. Dec 13 01:56:31.872882 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:56:32.119375 sshd[4349]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:32.125953 systemd[1]: sshd@6-172.31.24.36:22-139.178.68.195:54836.service: Deactivated successfully. Dec 13 01:56:32.129425 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:56:32.133686 systemd-logind[2013]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:56:32.137314 systemd-logind[2013]: Removed session 7. Dec 13 01:56:37.164208 systemd[1]: Started sshd@7-172.31.24.36:22-139.178.68.195:36292.service - OpenSSH per-connection server daemon (139.178.68.195:36292). Dec 13 01:56:37.335621 sshd[4388]: Accepted publickey for core from 139.178.68.195 port 36292 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:37.338288 sshd[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:37.346507 systemd-logind[2013]: New session 8 of user core. Dec 13 01:56:37.356841 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:56:37.596670 sshd[4388]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:37.603887 systemd[1]: sshd@7-172.31.24.36:22-139.178.68.195:36292.service: Deactivated successfully. Dec 13 01:56:37.609844 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:56:37.611299 systemd-logind[2013]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:56:37.613395 systemd-logind[2013]: Removed session 8. 
Dec 13 01:56:37.634109 systemd[1]: Started sshd@8-172.31.24.36:22-139.178.68.195:36300.service - OpenSSH per-connection server daemon (139.178.68.195:36300). Dec 13 01:56:37.805683 sshd[4404]: Accepted publickey for core from 139.178.68.195 port 36300 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:37.808319 sshd[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:37.818706 systemd-logind[2013]: New session 9 of user core. Dec 13 01:56:37.829845 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:56:38.150439 sshd[4404]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:38.160891 systemd-logind[2013]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:56:38.161840 systemd[1]: sshd@8-172.31.24.36:22-139.178.68.195:36300.service: Deactivated successfully. Dec 13 01:56:38.169956 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:56:38.187869 systemd-logind[2013]: Removed session 9. Dec 13 01:56:38.196306 systemd[1]: Started sshd@9-172.31.24.36:22-139.178.68.195:36310.service - OpenSSH per-connection server daemon (139.178.68.195:36310). Dec 13 01:56:38.376727 sshd[4415]: Accepted publickey for core from 139.178.68.195 port 36310 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:38.379374 sshd[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:38.387802 systemd-logind[2013]: New session 10 of user core. Dec 13 01:56:38.395820 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:56:38.633076 sshd[4415]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:38.639890 systemd[1]: sshd@9-172.31.24.36:22-139.178.68.195:36310.service: Deactivated successfully. Dec 13 01:56:38.643213 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:56:38.647018 systemd-logind[2013]: Session 10 logged out. Waiting for processes to exit. 
Dec 13 01:56:38.649242 systemd-logind[2013]: Removed session 10. Dec 13 01:56:43.674111 systemd[1]: Started sshd@10-172.31.24.36:22-139.178.68.195:36324.service - OpenSSH per-connection server daemon (139.178.68.195:36324). Dec 13 01:56:43.857868 sshd[4450]: Accepted publickey for core from 139.178.68.195 port 36324 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:43.860644 sshd[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:43.869598 systemd-logind[2013]: New session 11 of user core. Dec 13 01:56:43.878029 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:56:44.115941 sshd[4450]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:44.123020 systemd[1]: sshd@10-172.31.24.36:22-139.178.68.195:36324.service: Deactivated successfully. Dec 13 01:56:44.127140 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:56:44.128733 systemd-logind[2013]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:56:44.132102 systemd-logind[2013]: Removed session 11. Dec 13 01:56:44.157114 systemd[1]: Started sshd@11-172.31.24.36:22-139.178.68.195:36340.service - OpenSSH per-connection server daemon (139.178.68.195:36340). Dec 13 01:56:44.330105 sshd[4463]: Accepted publickey for core from 139.178.68.195 port 36340 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:44.332751 sshd[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:44.342201 systemd-logind[2013]: New session 12 of user core. Dec 13 01:56:44.347860 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:56:44.646890 sshd[4463]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:44.653824 systemd[1]: sshd@11-172.31.24.36:22-139.178.68.195:36340.service: Deactivated successfully. Dec 13 01:56:44.657141 systemd[1]: session-12.scope: Deactivated successfully. 
Dec 13 01:56:44.659718 systemd-logind[2013]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:56:44.662500 systemd-logind[2013]: Removed session 12. Dec 13 01:56:44.690105 systemd[1]: Started sshd@12-172.31.24.36:22-139.178.68.195:36342.service - OpenSSH per-connection server daemon (139.178.68.195:36342). Dec 13 01:56:44.869599 sshd[4474]: Accepted publickey for core from 139.178.68.195 port 36342 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:44.873517 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:44.882040 systemd-logind[2013]: New session 13 of user core. Dec 13 01:56:44.890968 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:56:47.163032 sshd[4474]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:47.169321 systemd[1]: sshd@12-172.31.24.36:22-139.178.68.195:36342.service: Deactivated successfully. Dec 13 01:56:47.177117 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:56:47.182642 systemd-logind[2013]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:56:47.209314 systemd[1]: Started sshd@13-172.31.24.36:22-139.178.68.195:38146.service - OpenSSH per-connection server daemon (139.178.68.195:38146). Dec 13 01:56:47.211956 systemd-logind[2013]: Removed session 13. Dec 13 01:56:47.391092 sshd[4513]: Accepted publickey for core from 139.178.68.195 port 38146 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:47.393849 sshd[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:47.402847 systemd-logind[2013]: New session 14 of user core. Dec 13 01:56:47.412931 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:56:47.898026 sshd[4513]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:47.907803 systemd[1]: sshd@13-172.31.24.36:22-139.178.68.195:38146.service: Deactivated successfully. 
Dec 13 01:56:47.913437 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:56:47.915613 systemd-logind[2013]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:56:47.935183 systemd[1]: Started sshd@14-172.31.24.36:22-139.178.68.195:38160.service - OpenSSH per-connection server daemon (139.178.68.195:38160). Dec 13 01:56:47.937839 systemd-logind[2013]: Removed session 14. Dec 13 01:56:48.110485 sshd[4524]: Accepted publickey for core from 139.178.68.195 port 38160 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:48.113133 sshd[4524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:48.120910 systemd-logind[2013]: New session 15 of user core. Dec 13 01:56:48.130809 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:56:48.373903 sshd[4524]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:48.379998 systemd[1]: sshd@14-172.31.24.36:22-139.178.68.195:38160.service: Deactivated successfully. Dec 13 01:56:48.384097 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:56:48.386960 systemd-logind[2013]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:56:48.389779 systemd-logind[2013]: Removed session 15. Dec 13 01:56:53.414145 systemd[1]: Started sshd@15-172.31.24.36:22-139.178.68.195:38174.service - OpenSSH per-connection server daemon (139.178.68.195:38174). Dec 13 01:56:53.594021 sshd[4560]: Accepted publickey for core from 139.178.68.195 port 38174 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:53.596965 sshd[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:53.604238 systemd-logind[2013]: New session 16 of user core. Dec 13 01:56:53.617849 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 13 01:56:53.857291 sshd[4560]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:53.863777 systemd[1]: sshd@15-172.31.24.36:22-139.178.68.195:38174.service: Deactivated successfully. Dec 13 01:56:53.866986 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:56:53.868524 systemd-logind[2013]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:56:53.871127 systemd-logind[2013]: Removed session 16. Dec 13 01:56:58.894111 systemd[1]: Started sshd@16-172.31.24.36:22-139.178.68.195:32838.service - OpenSSH per-connection server daemon (139.178.68.195:32838). Dec 13 01:56:59.073352 sshd[4597]: Accepted publickey for core from 139.178.68.195 port 32838 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:59.076090 sshd[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:59.085066 systemd-logind[2013]: New session 17 of user core. Dec 13 01:56:59.089834 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:56:59.326950 sshd[4597]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:59.332549 systemd[1]: sshd@16-172.31.24.36:22-139.178.68.195:32838.service: Deactivated successfully. Dec 13 01:56:59.333274 systemd-logind[2013]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:56:59.338739 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:56:59.343666 systemd-logind[2013]: Removed session 17. Dec 13 01:57:04.374462 systemd[1]: Started sshd@17-172.31.24.36:22-139.178.68.195:32852.service - OpenSSH per-connection server daemon (139.178.68.195:32852). Dec 13 01:57:04.546682 sshd[4631]: Accepted publickey for core from 139.178.68.195 port 32852 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:57:04.549772 sshd[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:04.557206 systemd-logind[2013]: New session 18 of user core. 
Dec 13 01:57:04.567918 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:57:04.806468 sshd[4631]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:04.812362 systemd[1]: sshd@17-172.31.24.36:22-139.178.68.195:32852.service: Deactivated successfully. Dec 13 01:57:04.816974 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:57:04.819993 systemd-logind[2013]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:57:04.822200 systemd-logind[2013]: Removed session 18. Dec 13 01:57:09.850213 systemd[1]: Started sshd@18-172.31.24.36:22-139.178.68.195:48420.service - OpenSSH per-connection server daemon (139.178.68.195:48420). Dec 13 01:57:10.027390 sshd[4666]: Accepted publickey for core from 139.178.68.195 port 48420 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:57:10.030065 sshd[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:57:10.037826 systemd-logind[2013]: New session 19 of user core. Dec 13 01:57:10.043825 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:57:10.283204 sshd[4666]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:10.289954 systemd-logind[2013]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:57:10.291431 systemd[1]: sshd@18-172.31.24.36:22-139.178.68.195:48420.service: Deactivated successfully. Dec 13 01:57:10.296312 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:57:10.299905 systemd-logind[2013]: Removed session 19. Dec 13 01:57:24.414658 systemd[1]: cri-containerd-347f9b8f16ccd085030d3e29f71216a333fb90d578170f532bdec0426960cb85.scope: Deactivated successfully. Dec 13 01:57:24.415250 systemd[1]: cri-containerd-347f9b8f16ccd085030d3e29f71216a333fb90d578170f532bdec0426960cb85.scope: Consumed 4.535s CPU time, 22.1M memory peak, 0B memory swap peak. 
Dec 13 01:57:24.453972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-347f9b8f16ccd085030d3e29f71216a333fb90d578170f532bdec0426960cb85-rootfs.mount: Deactivated successfully. Dec 13 01:57:24.467967 containerd[2035]: time="2024-12-13T01:57:24.467870730Z" level=info msg="shim disconnected" id=347f9b8f16ccd085030d3e29f71216a333fb90d578170f532bdec0426960cb85 namespace=k8s.io Dec 13 01:57:24.467967 containerd[2035]: time="2024-12-13T01:57:24.467951826Z" level=warning msg="cleaning up after shim disconnected" id=347f9b8f16ccd085030d3e29f71216a333fb90d578170f532bdec0426960cb85 namespace=k8s.io Dec 13 01:57:24.469066 containerd[2035]: time="2024-12-13T01:57:24.467973858Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:57:24.757945 kubelet[3247]: I1213 01:57:24.757820 3247 scope.go:117] "RemoveContainer" containerID="347f9b8f16ccd085030d3e29f71216a333fb90d578170f532bdec0426960cb85" Dec 13 01:57:24.763224 containerd[2035]: time="2024-12-13T01:57:24.763150363Z" level=info msg="CreateContainer within sandbox \"3c2955be7dee99d7c51ce20333201faf9baff04289863809b01746dae426546c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 13 01:57:24.793773 containerd[2035]: time="2024-12-13T01:57:24.793622732Z" level=info msg="CreateContainer within sandbox \"3c2955be7dee99d7c51ce20333201faf9baff04289863809b01746dae426546c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"18ca6d651575d8296d7375b9934cb58c5d952b8642fa1c7b343919f6b95f1214\"" Dec 13 01:57:24.794573 containerd[2035]: time="2024-12-13T01:57:24.794517956Z" level=info msg="StartContainer for \"18ca6d651575d8296d7375b9934cb58c5d952b8642fa1c7b343919f6b95f1214\"" Dec 13 01:57:24.848879 systemd[1]: Started cri-containerd-18ca6d651575d8296d7375b9934cb58c5d952b8642fa1c7b343919f6b95f1214.scope - libcontainer container 18ca6d651575d8296d7375b9934cb58c5d952b8642fa1c7b343919f6b95f1214. 
Dec 13 01:57:24.915342 containerd[2035]: time="2024-12-13T01:57:24.915274832Z" level=info msg="StartContainer for \"18ca6d651575d8296d7375b9934cb58c5d952b8642fa1c7b343919f6b95f1214\" returns successfully" Dec 13 01:57:25.455021 systemd[1]: run-containerd-runc-k8s.io-18ca6d651575d8296d7375b9934cb58c5d952b8642fa1c7b343919f6b95f1214-runc.Vn7iNl.mount: Deactivated successfully. Dec 13 01:57:30.103316 systemd[1]: cri-containerd-87eca29d67eb0f915e177da3154a9149f4feb8f214c8fd7bd9f34e83daaaf25b.scope: Deactivated successfully. Dec 13 01:57:30.103859 systemd[1]: cri-containerd-87eca29d67eb0f915e177da3154a9149f4feb8f214c8fd7bd9f34e83daaaf25b.scope: Consumed 2.657s CPU time, 16.0M memory peak, 0B memory swap peak. Dec 13 01:57:30.144543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87eca29d67eb0f915e177da3154a9149f4feb8f214c8fd7bd9f34e83daaaf25b-rootfs.mount: Deactivated successfully. Dec 13 01:57:30.168067 containerd[2035]: time="2024-12-13T01:57:30.167884606Z" level=info msg="shim disconnected" id=87eca29d67eb0f915e177da3154a9149f4feb8f214c8fd7bd9f34e83daaaf25b namespace=k8s.io Dec 13 01:57:30.168067 containerd[2035]: time="2024-12-13T01:57:30.168054346Z" level=warning msg="cleaning up after shim disconnected" id=87eca29d67eb0f915e177da3154a9149f4feb8f214c8fd7bd9f34e83daaaf25b namespace=k8s.io Dec 13 01:57:30.168801 containerd[2035]: time="2024-12-13T01:57:30.168100630Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:57:30.777496 kubelet[3247]: I1213 01:57:30.777435 3247 scope.go:117] "RemoveContainer" containerID="87eca29d67eb0f915e177da3154a9149f4feb8f214c8fd7bd9f34e83daaaf25b" Dec 13 01:57:30.781288 containerd[2035]: time="2024-12-13T01:57:30.781080505Z" level=info msg="CreateContainer within sandbox \"1166a0ccf1da7628b8f78da091241bf9c113fee43a78b55cceffc8a1f8a1e14c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 13 01:57:30.803672 containerd[2035]: time="2024-12-13T01:57:30.803531113Z" level=info 
msg="CreateContainer within sandbox \"1166a0ccf1da7628b8f78da091241bf9c113fee43a78b55cceffc8a1f8a1e14c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"69033a17eba84a9d8324ed5d24fb257f0c3476a4b4f706056b87624465036e68\"" Dec 13 01:57:30.804342 containerd[2035]: time="2024-12-13T01:57:30.804291457Z" level=info msg="StartContainer for \"69033a17eba84a9d8324ed5d24fb257f0c3476a4b4f706056b87624465036e68\"" Dec 13 01:57:30.866906 systemd[1]: Started cri-containerd-69033a17eba84a9d8324ed5d24fb257f0c3476a4b4f706056b87624465036e68.scope - libcontainer container 69033a17eba84a9d8324ed5d24fb257f0c3476a4b4f706056b87624465036e68. Dec 13 01:57:30.933103 containerd[2035]: time="2024-12-13T01:57:30.932982782Z" level=info msg="StartContainer for \"69033a17eba84a9d8324ed5d24fb257f0c3476a4b4f706056b87624465036e68\" returns successfully" Dec 13 01:57:31.146537 systemd[1]: run-containerd-runc-k8s.io-69033a17eba84a9d8324ed5d24fb257f0c3476a4b4f706056b87624465036e68-runc.jwDHJ6.mount: Deactivated successfully. Dec 13 01:57:33.333882 kubelet[3247]: E1213 01:57:33.333818 3247 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-36?timeout=10s\": context deadline exceeded" Dec 13 01:57:43.334680 kubelet[3247]: E1213 01:57:43.334614 3247 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-36?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"