Feb 13 15:16:43.213994 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 15:16:43.214039 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025
Feb 13 15:16:43.214063 kernel: KASLR disabled due to lack of seed
Feb 13 15:16:43.214079 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:16:43.214095 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x78503d98 
Feb 13 15:16:43.214110 kernel: secureboot: Secure boot disabled
Feb 13 15:16:43.214127 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:16:43.214170 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 15:16:43.214189 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001      01000013)
Feb 13 15:16:43.214205 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:16:43.214227 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 15:16:43.214243 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:16:43.214345 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 15:16:43.214522 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 15:16:43.214542 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 15:16:43.214564 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:16:43.214581 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 15:16:43.214597 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 15:16:43.214613 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 15:16:43.214629 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 15:16:43.214646 kernel: printk: bootconsole [uart0] enabled
Feb 13 15:16:43.214662 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:16:43.214679 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:16:43.214695 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 15:16:43.214711 kernel: Zone ranges:
Feb 13 15:16:43.214727 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 15:16:43.214747 kernel:   DMA32    empty
Feb 13 15:16:43.214764 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 15:16:43.214780 kernel: Movable zone start for each node
Feb 13 15:16:43.214796 kernel: Early memory node ranges
Feb 13 15:16:43.214812 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 15:16:43.214828 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 15:16:43.214844 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 15:16:43.214860 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 15:16:43.214876 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 15:16:43.214892 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 15:16:43.214908 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 15:16:43.214924 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 15:16:43.214944 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 15:16:43.214961 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 15:16:43.214984 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:16:43.215001 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 15:16:43.215018 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:16:43.215039 kernel: psci: Trusted OS migration not required
Feb 13 15:16:43.215057 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:16:43.215074 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:16:43.215091 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:16:43.215108 kernel: pcpu-alloc: [0] 0 [0] 1 
Feb 13 15:16:43.215125 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:16:43.215195 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:16:43.215216 kernel: CPU features: detected: Spectre-v2
Feb 13 15:16:43.215233 kernel: CPU features: detected: Spectre-v3a
Feb 13 15:16:43.215250 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:16:43.215266 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 15:16:43.215283 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 15:16:43.215306 kernel: alternatives: applying boot alternatives
Feb 13 15:16:43.215326 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:16:43.215344 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:16:43.215362 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:16:43.215379 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:16:43.215396 kernel: Fallback order for Node 0: 0 
Feb 13 15:16:43.215413 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Feb 13 15:16:43.215430 kernel: Policy zone: Normal
Feb 13 15:16:43.215446 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:16:43.215463 kernel: software IO TLB: area num 2.
Feb 13 15:16:43.215485 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 15:16:43.215503 kernel: Memory: 3819960K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 210504K reserved, 0K cma-reserved)
Feb 13 15:16:43.215520 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:16:43.215537 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:16:43.215555 kernel: rcu:         RCU event tracing is enabled.
Feb 13 15:16:43.215573 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:16:43.215591 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 15:16:43.215608 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 15:16:43.215625 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:16:43.215642 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:16:43.215659 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:16:43.215680 kernel: GICv3: 96 SPIs implemented
Feb 13 15:16:43.215697 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:16:43.215714 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:16:43.215731 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 15:16:43.215748 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 15:16:43.215765 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 15:16:43.215782 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:16:43.215799 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:16:43.215816 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 15:16:43.215833 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 15:16:43.215850 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 15:16:43.215868 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:16:43.215889 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 15:16:43.215906 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 15:16:43.215923 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 15:16:43.215941 kernel: Console: colour dummy device 80x25
Feb 13 15:16:43.215959 kernel: printk: console [tty1] enabled
Feb 13 15:16:43.215976 kernel: ACPI: Core revision 20230628
Feb 13 15:16:43.215994 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 15:16:43.216011 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:16:43.216029 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:16:43.216046 kernel: landlock: Up and running.
Feb 13 15:16:43.216068 kernel: SELinux:  Initializing.
Feb 13 15:16:43.216085 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:16:43.216103 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:16:43.216120 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:16:43.217362 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:16:43.217402 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:16:43.217421 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 15:16:43.217439 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 15:16:43.217466 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 15:16:43.217484 kernel: Remapping and enabling EFI services.
Feb 13 15:16:43.217502 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:16:43.217519 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:16:43.217537 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 15:16:43.217555 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 15:16:43.217573 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 15:16:43.217591 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:16:43.217608 kernel: SMP: Total of 2 processors activated.
Feb 13 15:16:43.217626 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:16:43.217648 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 15:16:43.217666 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:16:43.217694 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:16:43.217717 kernel: alternatives: applying system-wide alternatives
Feb 13 15:16:43.217735 kernel: devtmpfs: initialized
Feb 13 15:16:43.217754 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:16:43.217772 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:16:43.217790 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:16:43.217808 kernel: SMBIOS 3.0.0 present.
Feb 13 15:16:43.217830 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 15:16:43.217848 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:16:43.217867 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:16:43.217885 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:16:43.217903 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:16:43.217921 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:16:43.217940 kernel: audit: type=2000 audit(0.224:1): state=initialized audit_enabled=0 res=1
Feb 13 15:16:43.217962 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:16:43.217980 kernel: cpuidle: using governor menu
Feb 13 15:16:43.217999 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:16:43.218017 kernel: ASID allocator initialised with 65536 entries
Feb 13 15:16:43.218035 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:16:43.218053 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:16:43.218071 kernel: Modules: 17440 pages in range for non-PLT usage
Feb 13 15:16:43.218089 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 15:16:43.218107 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:16:43.218130 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:16:43.218172 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:16:43.218191 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:16:43.218210 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:16:43.218228 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:16:43.218246 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:16:43.218264 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:16:43.218282 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:16:43.218300 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:16:43.218324 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:16:43.218343 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:16:43.218361 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:16:43.218379 kernel: ACPI: Interpreter enabled
Feb 13 15:16:43.218397 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:16:43.218414 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:16:43.218433 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 15:16:43.218718 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:16:43.218929 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:16:43.219127 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:16:43.219386 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 15:16:43.219588 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 15:16:43.219613 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io  0x0000-0xffff window]
Feb 13 15:16:43.219632 kernel: acpiphp: Slot [1] registered
Feb 13 15:16:43.219651 kernel: acpiphp: Slot [2] registered
Feb 13 15:16:43.219669 kernel: acpiphp: Slot [3] registered
Feb 13 15:16:43.219694 kernel: acpiphp: Slot [4] registered
Feb 13 15:16:43.219712 kernel: acpiphp: Slot [5] registered
Feb 13 15:16:43.219730 kernel: acpiphp: Slot [6] registered
Feb 13 15:16:43.219748 kernel: acpiphp: Slot [7] registered
Feb 13 15:16:43.219766 kernel: acpiphp: Slot [8] registered
Feb 13 15:16:43.219784 kernel: acpiphp: Slot [9] registered
Feb 13 15:16:43.219802 kernel: acpiphp: Slot [10] registered
Feb 13 15:16:43.219821 kernel: acpiphp: Slot [11] registered
Feb 13 15:16:43.219839 kernel: acpiphp: Slot [12] registered
Feb 13 15:16:43.219857 kernel: acpiphp: Slot [13] registered
Feb 13 15:16:43.219879 kernel: acpiphp: Slot [14] registered
Feb 13 15:16:43.219898 kernel: acpiphp: Slot [15] registered
Feb 13 15:16:43.219916 kernel: acpiphp: Slot [16] registered
Feb 13 15:16:43.219934 kernel: acpiphp: Slot [17] registered
Feb 13 15:16:43.219952 kernel: acpiphp: Slot [18] registered
Feb 13 15:16:43.219969 kernel: acpiphp: Slot [19] registered
Feb 13 15:16:43.219987 kernel: acpiphp: Slot [20] registered
Feb 13 15:16:43.220005 kernel: acpiphp: Slot [21] registered
Feb 13 15:16:43.220023 kernel: acpiphp: Slot [22] registered
Feb 13 15:16:43.220045 kernel: acpiphp: Slot [23] registered
Feb 13 15:16:43.220064 kernel: acpiphp: Slot [24] registered
Feb 13 15:16:43.220081 kernel: acpiphp: Slot [25] registered
Feb 13 15:16:43.220100 kernel: acpiphp: Slot [26] registered
Feb 13 15:16:43.220117 kernel: acpiphp: Slot [27] registered
Feb 13 15:16:43.220356 kernel: acpiphp: Slot [28] registered
Feb 13 15:16:43.220949 kernel: acpiphp: Slot [29] registered
Feb 13 15:16:43.220973 kernel: acpiphp: Slot [30] registered
Feb 13 15:16:43.220991 kernel: acpiphp: Slot [31] registered
Feb 13 15:16:43.221009 kernel: PCI host bridge to bus 0000:00
Feb 13 15:16:43.221680 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 15:16:43.222228 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Feb 13 15:16:43.222545 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:16:43.222729 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 15:16:43.222962 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 15:16:43.224577 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 15:16:43.224835 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 15:16:43.225053 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:16:43.225286 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 15:16:43.225489 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:16:43.225700 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:16:43.225907 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 15:16:43.226111 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 15:16:43.228430 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 15:16:43.228674 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 15:16:43.228882 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 15:16:43.229089 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 15:16:43.229356 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 15:16:43.229565 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 15:16:43.229775 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 15:16:43.229976 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 15:16:43.230271 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Feb 13 15:16:43.230466 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 15:16:43.230495 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:16:43.230543 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:16:43.230589 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:16:43.230640 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:16:43.230664 kernel: iommu: Default domain type: Translated
Feb 13 15:16:43.230709 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:16:43.230729 kernel: efivars: Registered efivars operations
Feb 13 15:16:43.230747 kernel: vgaarb: loaded
Feb 13 15:16:43.230766 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:16:43.230784 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:16:43.230802 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:16:43.230821 kernel: pnp: PnP ACPI init
Feb 13 15:16:43.231159 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 15:16:43.231222 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:16:43.231242 kernel: NET: Registered PF_INET protocol family
Feb 13 15:16:43.231261 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:16:43.231281 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:16:43.231299 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:16:43.231318 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:16:43.231337 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:16:43.231355 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:16:43.231374 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:16:43.231397 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:16:43.231415 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:16:43.231434 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:16:43.231452 kernel: kvm [1]: HYP mode not available
Feb 13 15:16:43.231470 kernel: Initialise system trusted keyrings
Feb 13 15:16:43.231489 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:16:43.231507 kernel: Key type asymmetric registered
Feb 13 15:16:43.231525 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:16:43.231543 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:16:43.231567 kernel: io scheduler mq-deadline registered
Feb 13 15:16:43.231586 kernel: io scheduler kyber registered
Feb 13 15:16:43.231604 kernel: io scheduler bfq registered
Feb 13 15:16:43.231843 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 15:16:43.231873 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:16:43.231893 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:16:43.231912 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 15:16:43.231932 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 15:16:43.231958 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:16:43.231979 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 15:16:43.232406 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 15:16:43.232444 kernel: printk: console [ttyS0] disabled
Feb 13 15:16:43.232481 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 15:16:43.232506 kernel: printk: console [ttyS0] enabled
Feb 13 15:16:43.232526 kernel: printk: bootconsole [uart0] disabled
Feb 13 15:16:43.232545 kernel: thunder_xcv, ver 1.0
Feb 13 15:16:43.232564 kernel: thunder_bgx, ver 1.0
Feb 13 15:16:43.232582 kernel: nicpf, ver 1.0
Feb 13 15:16:43.232610 kernel: nicvf, ver 1.0
Feb 13 15:16:43.232882 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:16:43.233106 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:16:42 UTC (1739459802)
Feb 13 15:16:43.233133 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:16:43.233259 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 15:16:43.233281 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:16:43.233299 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:16:43.233327 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:16:43.233346 kernel: Segment Routing with IPv6
Feb 13 15:16:43.233364 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:16:43.233383 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:16:43.233401 kernel: Key type dns_resolver registered
Feb 13 15:16:43.233419 kernel: registered taskstats version 1
Feb 13 15:16:43.233437 kernel: Loading compiled-in X.509 certificates
Feb 13 15:16:43.233455 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51'
Feb 13 15:16:43.233473 kernel: Key type .fscrypt registered
Feb 13 15:16:43.233491 kernel: Key type fscrypt-provisioning registered
Feb 13 15:16:43.233513 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:16:43.233532 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:16:43.233550 kernel: ima: No architecture policies found
Feb 13 15:16:43.233568 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:16:43.233586 kernel: clk: Disabling unused clocks
Feb 13 15:16:43.233604 kernel: Freeing unused kernel memory: 39680K
Feb 13 15:16:43.233622 kernel: Run /init as init process
Feb 13 15:16:43.233641 kernel:   with arguments:
Feb 13 15:16:43.233659 kernel:     /init
Feb 13 15:16:43.233680 kernel:   with environment:
Feb 13 15:16:43.233698 kernel:     HOME=/
Feb 13 15:16:43.233716 kernel:     TERM=linux
Feb 13 15:16:43.233734 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:16:43.233757 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:16:43.233780 systemd[1]: Detected virtualization amazon.
Feb 13 15:16:43.233801 systemd[1]: Detected architecture arm64.
Feb 13 15:16:43.233825 systemd[1]: Running in initrd.
Feb 13 15:16:43.233845 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:16:43.233864 systemd[1]: Hostname set to <localhost>.
Feb 13 15:16:43.233884 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:16:43.233904 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:16:43.233924 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:16:43.233944 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:16:43.233965 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:16:43.233990 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:16:43.234010 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:16:43.234031 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:16:43.234053 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:16:43.234074 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:16:43.234094 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:16:43.234114 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:16:43.234263 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:16:43.234289 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:16:43.234310 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:16:43.234330 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:16:43.234350 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:16:43.234370 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:16:43.234390 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:16:43.234410 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:16:43.234430 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:16:43.234457 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:16:43.234477 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:16:43.234498 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:16:43.234518 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:16:43.234538 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:16:43.234558 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:16:43.234577 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:16:43.234598 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:16:43.234622 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:16:43.234642 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:16:43.234662 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:16:43.234682 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:16:43.234702 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:16:43.234765 systemd-journald[252]: Collecting audit messages is disabled.
Feb 13 15:16:43.234813 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:16:43.234834 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:16:43.234854 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:16:43.234877 systemd-journald[252]: Journal started
Feb 13 15:16:43.234914 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2ba0cafa76789b85495d53eac823a2) is 8.0M, max 75.3M, 67.3M free.
Feb 13 15:16:43.237762 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:16:43.197091 systemd-modules-load[253]: Inserted module 'overlay'
Feb 13 15:16:43.242499 systemd-modules-load[253]: Inserted module 'br_netfilter'
Feb 13 15:16:43.244312 kernel: Bridge firewalling registered
Feb 13 15:16:43.252178 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:16:43.255274 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:16:43.267475 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:16:43.277107 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:16:43.281582 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:16:43.285561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:16:43.317509 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:16:43.326720 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:16:43.339941 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:16:43.350606 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:16:43.356081 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:16:43.368523 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:16:43.391376 dracut-cmdline[287]: dracut-dracut-053
Feb 13 15:16:43.399332 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:16:43.446729 systemd-resolved[289]: Positive Trust Anchors:
Feb 13 15:16:43.448281 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:16:43.448348 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:16:43.591407 kernel: SCSI subsystem initialized
Feb 13 15:16:43.598271 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:16:43.611267 kernel: iscsi: registered transport (tcp)
Feb 13 15:16:43.633317 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:16:43.633397 kernel: QLogic iSCSI HBA Driver
Feb 13 15:16:43.710271 kernel: random: crng init done
Feb 13 15:16:43.710641 systemd-resolved[289]: Defaulting to hostname 'linux'.
Feb 13 15:16:43.712630 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:16:43.716644 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:16:43.742952 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:16:43.753484 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:16:43.796710 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:16:43.796787 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:16:43.796815 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:16:43.864201 kernel: raid6: neonx8   gen()  6577 MB/s
Feb 13 15:16:43.881177 kernel: raid6: neonx4   gen()  6458 MB/s
Feb 13 15:16:43.898184 kernel: raid6: neonx2   gen()  5364 MB/s
Feb 13 15:16:43.915182 kernel: raid6: neonx1   gen()  3927 MB/s
Feb 13 15:16:43.932177 kernel: raid6: int64x8  gen()  3763 MB/s
Feb 13 15:16:43.949185 kernel: raid6: int64x4  gen()  3713 MB/s
Feb 13 15:16:43.966186 kernel: raid6: int64x2  gen()  3590 MB/s
Feb 13 15:16:43.984200 kernel: raid6: int64x1  gen()  2768 MB/s
Feb 13 15:16:43.984303 kernel: raid6: using algorithm neonx8 gen() 6577 MB/s
Feb 13 15:16:44.001922 kernel: raid6: .... xor() 4918 MB/s, rmw enabled
Feb 13 15:16:44.001978 kernel: raid6: using neon recovery algorithm
Feb 13 15:16:44.010588 kernel: xor: measuring software checksum speed
Feb 13 15:16:44.010677 kernel:    8regs           : 10967 MB/sec
Feb 13 15:16:44.011704 kernel:    32regs          : 11969 MB/sec
Feb 13 15:16:44.012904 kernel:    arm64_neon      :  9561 MB/sec
Feb 13 15:16:44.012937 kernel: xor: using function: 32regs (11969 MB/sec)
Feb 13 15:16:44.099218 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:16:44.122245 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:16:44.132803 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:16:44.175201 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Feb 13 15:16:44.184036 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:16:44.197433 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:16:44.234736 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Feb 13 15:16:44.293372 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:16:44.302385 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:16:44.427289 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:16:44.440610 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:16:44.499360 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:16:44.505049 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:16:44.511798 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:16:44.517668 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:16:44.542446 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:16:44.588676 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:16:44.657377 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:16:44.657452 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 15:16:44.694637 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:16:44.695124 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:16:44.695549 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 15:16:44.695579 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:16:44.695834 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ac:bb:05:3a:65
Feb 13 15:16:44.696062 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:16:44.665825 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:16:44.666055 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:16:44.670155 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:16:44.672336 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:16:44.672610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:16:44.674843 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:16:44.684709 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:16:44.720351 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:16:44.720497 kernel: GPT:9289727 != 16777215
Feb 13 15:16:44.720527 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:16:44.720552 kernel: GPT:9289727 != 16777215
Feb 13 15:16:44.721953 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:16:44.722966 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:16:44.730230 (udev-worker)[525]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:16:44.738507 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:16:44.750579 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:16:44.794477 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:16:44.846203 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (520)
Feb 13 15:16:44.859183 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (531)
Feb 13 15:16:44.902290 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:16:44.957902 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:16:44.984867 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:16:44.990804 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:16:45.008092 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:16:45.021431 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:16:45.033888 disk-uuid[662]: Primary Header is updated.
Feb 13 15:16:45.033888 disk-uuid[662]: Secondary Entries is updated.
Feb 13 15:16:45.033888 disk-uuid[662]: Secondary Header is updated.
Feb 13 15:16:45.044200 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:16:45.052172 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:16:46.060322 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:16:46.063231 disk-uuid[663]: The operation has completed successfully.
Feb 13 15:16:46.260231 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:16:46.262398 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:16:46.322508 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:16:46.333406 sh[923]: Success
Feb 13 15:16:46.360190 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:16:46.467513 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:16:46.487433 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:16:46.497381 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:16:46.520099 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06
Feb 13 15:16:46.520178 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:16:46.520206 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:16:46.523200 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:16:46.523250 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:16:46.613178 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:16:46.638457 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:16:46.642598 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:16:46.653440 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:16:46.661699 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:16:46.690047 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:16:46.690592 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:16:46.690635 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:16:46.699212 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:16:46.717264 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:16:46.720284 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:16:46.731255 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:16:46.755593 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:16:46.853967 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:16:46.868564 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:16:46.930708 systemd-networkd[1115]: lo: Link UP
Feb 13 15:16:46.930731 systemd-networkd[1115]: lo: Gained carrier
Feb 13 15:16:46.936201 systemd-networkd[1115]: Enumeration completed
Feb 13 15:16:46.936964 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:16:46.937039 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:16:46.937046 systemd-networkd[1115]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:16:46.940028 systemd[1]: Reached target network.target - Network.
Feb 13 15:16:46.954796 systemd-networkd[1115]: eth0: Link UP
Feb 13 15:16:46.954821 systemd-networkd[1115]: eth0: Gained carrier
Feb 13 15:16:46.954842 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:16:46.974649 systemd-networkd[1115]: eth0: DHCPv4 address 172.31.23.200/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:16:47.186549 ignition[1041]: Ignition 2.20.0
Feb 13 15:16:47.186586 ignition[1041]: Stage: fetch-offline
Feb 13 15:16:47.187296 ignition[1041]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:47.187324 ignition[1041]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:16:47.190322 ignition[1041]: Ignition finished successfully
Feb 13 15:16:47.198584 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:16:47.207535 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:16:47.246350 ignition[1125]: Ignition 2.20.0
Feb 13 15:16:47.246381 ignition[1125]: Stage: fetch
Feb 13 15:16:47.247587 ignition[1125]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:47.247898 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:16:47.248377 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:16:47.271789 ignition[1125]: PUT result: OK
Feb 13 15:16:47.280188 ignition[1125]: parsed url from cmdline: ""
Feb 13 15:16:47.280213 ignition[1125]: no config URL provided
Feb 13 15:16:47.280230 ignition[1125]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:16:47.280259 ignition[1125]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:16:47.280296 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:16:47.283805 ignition[1125]: PUT result: OK
Feb 13 15:16:47.284546 ignition[1125]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:16:47.292127 ignition[1125]: GET result: OK
Feb 13 15:16:47.292306 ignition[1125]: parsing config with SHA512: 12f96be98ab8ed757fd257a2e028e6a44a1ed521c62a44c9abee7767cb1bfbcd4d2ead06f34c3c4a5aec2950ea5feea30405a7aa97f2a99765705b1e90cb59e1
Feb 13 15:16:47.307175 unknown[1125]: fetched base config from "system"
Feb 13 15:16:47.307198 unknown[1125]: fetched base config from "system"
Feb 13 15:16:47.307222 unknown[1125]: fetched user config from "aws"
Feb 13 15:16:47.312866 ignition[1125]: fetch: fetch complete
Feb 13 15:16:47.312881 ignition[1125]: fetch: fetch passed
Feb 13 15:16:47.312998 ignition[1125]: Ignition finished successfully
Feb 13 15:16:47.320303 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:16:47.331442 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:16:47.364266 ignition[1131]: Ignition 2.20.0
Feb 13 15:16:47.364296 ignition[1131]: Stage: kargs
Feb 13 15:16:47.366248 ignition[1131]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:47.366284 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:16:47.366467 ignition[1131]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:16:47.372350 ignition[1131]: PUT result: OK
Feb 13 15:16:47.385388 ignition[1131]: kargs: kargs passed
Feb 13 15:16:47.385585 ignition[1131]: Ignition finished successfully
Feb 13 15:16:47.390234 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:16:47.397506 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:16:47.435093 ignition[1138]: Ignition 2.20.0
Feb 13 15:16:47.435124 ignition[1138]: Stage: disks
Feb 13 15:16:47.436396 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:47.436423 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:16:47.436862 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:16:47.439011 ignition[1138]: PUT result: OK
Feb 13 15:16:47.450208 ignition[1138]: disks: disks passed
Feb 13 15:16:47.451491 ignition[1138]: Ignition finished successfully
Feb 13 15:16:47.455312 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:16:47.459731 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:16:47.463052 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:16:47.466545 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:16:47.472664 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:16:47.479386 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:16:47.490533 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:16:47.540416 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:16:47.545908 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:16:47.559322 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:16:47.665187 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none.
Feb 13 15:16:47.667197 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:16:47.671418 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:16:47.692318 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:16:47.697856 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:16:47.707709 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:16:47.707792 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:16:47.707846 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:16:47.732771 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1165)
Feb 13 15:16:47.739370 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:16:47.739446 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:16:47.739486 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:16:47.746540 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:16:47.754187 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:16:47.761106 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:16:47.768642 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:16:48.213403 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:16:48.223619 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:16:48.243458 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:16:48.252997 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:16:48.585316 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:16:48.596474 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:16:48.602410 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:16:48.633798 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:16:48.636710 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:16:48.678084 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:16:48.682510 ignition[1278]: INFO     : Ignition 2.20.0
Feb 13 15:16:48.682510 ignition[1278]: INFO     : Stage: mount
Feb 13 15:16:48.688640 ignition[1278]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:48.688640 ignition[1278]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:16:48.688640 ignition[1278]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:16:48.696658 ignition[1278]: INFO     : PUT result: OK
Feb 13 15:16:48.700931 ignition[1278]: INFO     : mount: mount passed
Feb 13 15:16:48.703033 ignition[1278]: INFO     : Ignition finished successfully
Feb 13 15:16:48.707722 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:16:48.722426 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:16:48.755613 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:16:48.779193 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1289)
Feb 13 15:16:48.783453 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:16:48.783505 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:16:48.783530 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:16:48.791228 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:16:48.793556 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:16:48.839239 ignition[1306]: INFO     : Ignition 2.20.0
Feb 13 15:16:48.839239 ignition[1306]: INFO     : Stage: files
Feb 13 15:16:48.843569 ignition[1306]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:48.843569 ignition[1306]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:16:48.848265 ignition[1306]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:16:48.851592 ignition[1306]: INFO     : PUT result: OK
Feb 13 15:16:48.857715 ignition[1306]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 15:16:48.870129 ignition[1306]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 15:16:48.870129 ignition[1306]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:16:48.913762 ignition[1306]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:16:48.916659 ignition[1306]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 15:16:48.919958 unknown[1306]: wrote ssh authorized keys file for user: core
Feb 13 15:16:48.922274 ignition[1306]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:16:48.926952 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:16:48.930613 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:16:48.961291 systemd-networkd[1115]: eth0: Gained IPv6LL
Feb 13 15:16:49.042789 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:16:49.188301 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:16:49.188301 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 15:16:49.195755 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:16:49.195755 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:16:49.195755 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:16:49.195755 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:16:49.195755 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:16:49.195755 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:16:49.195755 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:16:49.195755 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:16:49.195755 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:16:49.195755 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:16:49.195755 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:16:49.195755 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:16:49.195755 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Feb 13 15:16:49.537772 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 15:16:50.475319 ignition[1306]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:16:50.475319 ignition[1306]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Feb 13 15:16:50.486233 ignition[1306]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:16:50.489974 ignition[1306]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:16:50.489974 ignition[1306]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 15:16:50.489974 ignition[1306]: INFO     : files: op(d): [started]  setting preset to enabled for "prepare-helm.service"
Feb 13 15:16:50.489974 ignition[1306]: INFO     : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:16:50.504072 ignition[1306]: INFO     : files: createResultFile: createFiles: op(e): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:16:50.504072 ignition[1306]: INFO     : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:16:50.504072 ignition[1306]: INFO     : files: files passed
Feb 13 15:16:50.504072 ignition[1306]: INFO     : Ignition finished successfully
Feb 13 15:16:50.501810 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:16:50.533857 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:16:50.542065 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:16:50.547707 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:16:50.549168 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:16:50.597018 initrd-setup-root-after-ignition[1335]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:16:50.600749 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:16:50.603964 initrd-setup-root-after-ignition[1335]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:16:50.609690 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:16:50.614735 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:16:50.623698 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:16:50.688478 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:16:50.689651 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:16:50.695670 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:16:50.698201 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:16:50.714411 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:16:50.721458 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:16:50.756052 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:16:50.769407 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:16:50.791989 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:16:50.793954 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:16:50.794958 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:16:50.795775 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:16:50.796070 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:16:50.797263 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:16:50.797630 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:16:50.798242 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:16:50.798815 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:16:50.799689 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:16:50.800292 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:16:50.801173 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:16:50.801775 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:16:50.802366 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:16:50.802929 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:16:50.803427 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:16:50.803704 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:16:50.804898 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:16:50.805257 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:16:50.805418 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:16:50.822279 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:16:50.822490 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:16:50.822697 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:16:50.823434 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:16:50.823639 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:16:50.824067 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:16:50.824654 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:16:50.857986 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:16:50.867043 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:16:50.867723 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:16:50.898017 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:16:50.904319 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:16:50.904885 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:16:50.911578 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:16:50.911807 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:16:50.926843 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:16:50.929431 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:16:50.955684 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:16:50.961775 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:16:50.964338 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:16:50.969256 ignition[1359]: INFO     : Ignition 2.20.0
Feb 13 15:16:50.969256 ignition[1359]: INFO     : Stage: umount
Feb 13 15:16:50.972457 ignition[1359]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:16:50.972457 ignition[1359]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:16:50.972457 ignition[1359]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:16:50.979697 ignition[1359]: INFO     : PUT result: OK
Feb 13 15:16:50.985344 ignition[1359]: INFO     : umount: umount passed
Feb 13 15:16:50.988118 ignition[1359]: INFO     : Ignition finished successfully
Feb 13 15:16:50.990618 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:16:50.991748 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:16:50.997628 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:16:50.997737 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:16:50.999814 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:16:50.999893 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:16:51.001855 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:16:51.001930 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:16:51.004049 systemd[1]: Stopped target network.target - Network.
Feb 13 15:16:51.005737 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:16:51.005819 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:16:51.008021 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:16:51.009667 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:16:51.016562 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:16:51.019620 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:16:51.021415 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:16:51.023322 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:16:51.023402 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:16:51.025507 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:16:51.025574 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:16:51.027548 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:16:51.027941 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:16:51.031095 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:16:51.031198 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:16:51.033601 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:16:51.033742 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:16:51.064583 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:16:51.066651 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:16:51.078657 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:16:51.078856 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:16:51.078918 systemd-networkd[1115]: eth0: DHCPv6 lease lost
Feb 13 15:16:51.090329 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:16:51.090594 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:16:51.098060 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:16:51.099930 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:16:51.124395 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:16:51.128012 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:16:51.128118 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:16:51.137401 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:16:51.137515 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:16:51.141316 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:16:51.141499 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:16:51.145085 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:16:51.145185 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:16:51.147649 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:16:51.178507 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:16:51.179372 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:16:51.187153 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:16:51.187303 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:16:51.191396 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:16:51.191473 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:16:51.201461 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:16:51.201615 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:16:51.207337 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:16:51.207437 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:16:51.213326 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:16:51.213448 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:16:51.238520 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:16:51.240753 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:16:51.240881 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:16:51.243670 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:16:51.243793 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:16:51.248681 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:16:51.250231 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:16:51.278576 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:16:51.279000 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:16:51.286661 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:16:51.295508 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:16:51.324896 systemd[1]: Switching root.
Feb 13 15:16:51.372516 systemd-journald[252]: Journal stopped
Feb 13 15:16:53.658822 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:16:53.658962 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 15:16:53.659006 kernel: SELinux:  policy capability open_perms=1
Feb 13 15:16:53.659038 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 15:16:53.659072 kernel: SELinux:  policy capability always_check_network=0
Feb 13 15:16:53.659101 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 15:16:53.659209 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 15:16:53.659246 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 15:16:53.659279 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 15:16:53.659312 kernel: audit: type=1403 audit(1739459811.820:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:16:53.659362 systemd[1]: Successfully loaded SELinux policy in 71.844ms.
Feb 13 15:16:53.659426 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.934ms.
Feb 13 15:16:53.659556 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:16:53.659748 systemd[1]: Detected virtualization amazon.
Feb 13 15:16:53.659788 systemd[1]: Detected architecture arm64.
Feb 13 15:16:53.659817 systemd[1]: Detected first boot.
Feb 13 15:16:53.659850 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:16:53.659880 zram_generator::config[1402]: No configuration found.
Feb 13 15:16:53.659914 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:16:53.659945 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:16:53.659976 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:16:53.660006 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:16:53.660038 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:16:53.660073 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:16:53.660105 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:16:53.662208 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:16:53.662278 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:16:53.662310 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:16:53.662345 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:16:53.662379 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:16:53.662412 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:16:53.662452 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:16:53.662481 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:16:53.662514 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:16:53.662545 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:16:53.662577 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:16:53.662609 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:16:53.662639 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:16:53.662672 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:16:53.662715 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:16:53.662749 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:16:53.662791 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:16:53.662822 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:16:53.662855 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:16:53.662886 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:16:53.662917 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:16:53.662946 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:16:53.662978 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:16:53.663012 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:16:53.663043 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:16:53.663076 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:16:53.663105 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:16:53.663134 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:16:53.665248 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:16:53.665281 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:16:53.665311 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:16:53.665340 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:16:53.665379 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:16:53.665413 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:16:53.665444 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:16:53.665475 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:16:53.665505 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:16:53.665536 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:16:53.665565 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:16:53.665594 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:16:53.665626 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:16:53.665656 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:16:53.665684 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:16:53.665713 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:16:53.665743 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:16:53.665773 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:16:53.665802 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:16:53.665830 kernel: loop: module loaded
Feb 13 15:16:53.665862 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:16:53.665895 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:16:53.665926 kernel: fuse: init (API version 7.39)
Feb 13 15:16:53.665958 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:16:53.666004 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:16:53.666041 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:16:53.666073 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:16:53.666102 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:16:53.666135 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:16:53.666231 systemd[1]: Stopped verity-setup.service.
Feb 13 15:16:53.666268 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:16:53.666298 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:16:53.666329 kernel: ACPI: bus type drm_connector registered
Feb 13 15:16:53.666358 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:16:53.666392 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:16:53.666423 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:16:53.666458 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:16:53.666490 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:16:53.666520 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:16:53.666551 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:16:53.668198 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:16:53.668274 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:16:53.668306 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:16:53.668348 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:16:53.668378 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:16:53.668407 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:16:53.668453 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:16:53.668488 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:16:53.668522 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:16:53.668553 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:16:53.668591 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:16:53.668623 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:16:53.668656 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:16:53.668685 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:16:53.668770 systemd-journald[1491]: Collecting audit messages is disabled.
Feb 13 15:16:53.668830 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:16:53.668864 systemd-journald[1491]: Journal started
Feb 13 15:16:53.668914 systemd-journald[1491]: Runtime Journal (/run/log/journal/ec2ba0cafa76789b85495d53eac823a2) is 8.0M, max 75.3M, 67.3M free.
Feb 13 15:16:53.040879 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:16:53.066552 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 15:16:53.067673 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:16:53.683976 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:16:53.684061 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:16:53.698283 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:16:53.704122 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:16:53.707707 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:16:53.714373 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:16:53.717259 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:16:53.766965 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:16:53.767100 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:16:53.774037 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:16:53.785653 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:16:53.817482 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:16:53.820492 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:16:53.832457 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:16:53.846297 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:16:53.850601 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:16:53.854915 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:16:53.860155 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:16:53.875672 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:16:53.882400 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:16:53.886948 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:16:53.891350 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:16:53.932727 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:16:53.940880 kernel: loop0: detected capacity change from 0 to 194512
Feb 13 15:16:53.951404 systemd-journald[1491]: Time spent on flushing to /var/log/journal/ec2ba0cafa76789b85495d53eac823a2 is 65.861ms for 910 entries.
Feb 13 15:16:53.951404 systemd-journald[1491]: System Journal (/var/log/journal/ec2ba0cafa76789b85495d53eac823a2) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:16:54.050089 systemd-journald[1491]: Received client request to flush runtime journal.
Feb 13 15:16:54.050231 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:16:53.997474 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:16:54.003941 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:16:54.018501 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:16:54.031538 udevadm[1540]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:16:54.059260 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:16:54.068527 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:16:54.090181 kernel: loop1: detected capacity change from 0 to 53784
Feb 13 15:16:54.105097 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:16:54.117112 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:16:54.125862 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:16:54.200864 systemd-tmpfiles[1549]: ACLs are not supported, ignoring.
Feb 13 15:16:54.200915 systemd-tmpfiles[1549]: ACLs are not supported, ignoring.
Feb 13 15:16:54.219969 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:16:54.233234 kernel: loop2: detected capacity change from 0 to 116808
Feb 13 15:16:54.386282 kernel: loop3: detected capacity change from 0 to 113536
Feb 13 15:16:54.504967 kernel: loop4: detected capacity change from 0 to 194512
Feb 13 15:16:54.556118 kernel: loop5: detected capacity change from 0 to 53784
Feb 13 15:16:54.583311 kernel: loop6: detected capacity change from 0 to 116808
Feb 13 15:16:54.599241 kernel: loop7: detected capacity change from 0 to 113536
Feb 13 15:16:54.613710 (sd-merge)[1558]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 15:16:54.614840 (sd-merge)[1558]: Merged extensions into '/usr'.
Feb 13 15:16:54.635773 systemd[1]: Reloading requested from client PID 1535 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:16:54.635810 systemd[1]: Reloading...
Feb 13 15:16:54.794268 ldconfig[1531]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:16:54.831195 zram_generator::config[1585]: No configuration found.
Feb 13 15:16:55.179418 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:16:55.293854 systemd[1]: Reloading finished in 656 ms.
Feb 13 15:16:55.347622 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:16:55.350877 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:16:55.354093 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:16:55.371550 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:16:55.383821 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:16:55.390679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:16:55.415456 systemd[1]: Reloading requested from client PID 1638 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:16:55.415490 systemd[1]: Reloading...
Feb 13 15:16:55.459403 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:16:55.463058 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:16:55.467452 systemd-tmpfiles[1639]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:16:55.470988 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Feb 13 15:16:55.471190 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Feb 13 15:16:55.482081 systemd-tmpfiles[1639]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:16:55.482399 systemd-tmpfiles[1639]: Skipping /boot
Feb 13 15:16:55.511758 systemd-tmpfiles[1639]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:16:55.511960 systemd-tmpfiles[1639]: Skipping /boot
Feb 13 15:16:55.544699 systemd-udevd[1640]: Using default interface naming scheme 'v255'.
Feb 13 15:16:55.582187 zram_generator::config[1666]: No configuration found.
Feb 13 15:16:55.820664 (udev-worker)[1696]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:16:56.005369 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:16:56.112408 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1692)
Feb 13 15:16:56.184653 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:16:56.186034 systemd[1]: Reloading finished in 769 ms.
Feb 13 15:16:56.218547 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:16:56.242467 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:16:56.295369 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:16:56.316696 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:16:56.330472 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:16:56.334705 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:16:56.343520 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:16:56.350535 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:16:56.358524 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:16:56.364585 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:16:56.367642 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:16:56.370784 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:16:56.379547 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:16:56.389800 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:16:56.392462 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:16:56.402384 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:16:56.409426 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:16:56.428732 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:16:56.432011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:16:56.545307 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:16:56.549193 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:16:56.550518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:16:56.555377 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:16:56.557712 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:16:56.560548 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:16:56.560931 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:16:56.569331 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:16:56.569536 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:16:56.582892 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:16:56.585804 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:16:56.603703 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:16:56.627970 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:16:56.637790 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:16:56.664301 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:16:56.677979 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:16:56.698534 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:16:56.714821 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:16:56.716566 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:16:56.755873 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:16:56.773284 augenrules[1880]: No rules
Feb 13 15:16:56.778306 lvm[1874]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:16:56.779888 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:16:56.783273 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:16:56.816229 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:16:56.820594 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:16:56.823131 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:16:56.837918 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:16:56.857175 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:16:56.874408 lvm[1893]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:16:56.920660 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:16:56.960131 systemd-networkd[1839]: lo: Link UP
Feb 13 15:16:56.960169 systemd-networkd[1839]: lo: Gained carrier
Feb 13 15:16:56.963620 systemd-networkd[1839]: Enumeration completed
Feb 13 15:16:56.963922 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:16:56.967754 systemd-networkd[1839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:16:56.968758 systemd-networkd[1839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:16:56.973233 systemd-networkd[1839]: eth0: Link UP
Feb 13 15:16:56.973548 systemd-networkd[1839]: eth0: Gained carrier
Feb 13 15:16:56.973582 systemd-networkd[1839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:16:56.974172 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:16:56.982270 systemd-networkd[1839]: eth0: DHCPv4 address 172.31.23.200/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:16:57.001254 systemd-resolved[1841]: Positive Trust Anchors:
Feb 13 15:16:57.001321 systemd-resolved[1841]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:16:57.001384 systemd-resolved[1841]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:16:57.010648 systemd-resolved[1841]: Defaulting to hostname 'linux'.
Feb 13 15:16:57.013769 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:16:57.016155 systemd[1]: Reached target network.target - Network.
Feb 13 15:16:57.017902 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:16:57.020248 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:16:57.022469 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:16:57.024924 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:16:57.027668 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:16:57.030061 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:16:57.032550 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:16:57.034891 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:16:57.034940 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:16:57.036626 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:16:57.039181 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:16:57.043926 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:16:57.055520 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:16:57.058673 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:16:57.060911 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:16:57.062809 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:16:57.065058 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:16:57.065319 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:16:57.073466 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:16:57.084500 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 15:16:57.090581 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:16:57.097800 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:16:57.103562 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:16:57.106454 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:16:57.113513 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:16:57.137929 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 15:16:57.148381 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:16:57.174969 jq[1905]: false
Feb 13 15:16:57.177598 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 15:16:57.183361 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:16:57.190992 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:16:57.200684 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:16:57.204778 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:16:57.207694 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:16:57.214736 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:16:57.226501 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:16:57.237077 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:16:57.237741 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:16:57.288761 jq[1921]: true
Feb 13 15:16:57.315280 extend-filesystems[1906]: Found loop4
Feb 13 15:16:57.315280 extend-filesystems[1906]: Found loop5
Feb 13 15:16:57.315280 extend-filesystems[1906]: Found loop6
Feb 13 15:16:57.315280 extend-filesystems[1906]: Found loop7
Feb 13 15:16:57.315280 extend-filesystems[1906]: Found nvme0n1
Feb 13 15:16:57.315280 extend-filesystems[1906]: Found nvme0n1p2
Feb 13 15:16:57.315280 extend-filesystems[1906]: Found nvme0n1p3
Feb 13 15:16:57.315280 extend-filesystems[1906]: Found usr
Feb 13 15:16:57.315280 extend-filesystems[1906]: Found nvme0n1p4
Feb 13 15:16:57.315280 extend-filesystems[1906]: Found nvme0n1p6
Feb 13 15:16:57.356724 extend-filesystems[1906]: Found nvme0n1p7
Feb 13 15:16:57.356724 extend-filesystems[1906]: Found nvme0n1p9
Feb 13 15:16:57.356724 extend-filesystems[1906]: Checking size of /dev/nvme0n1p9
Feb 13 15:16:57.344720 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:16:57.345099 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:16:57.377612 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:16:57.381701 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:16:57.387466 dbus-daemon[1904]: [system] SELinux support is enabled
Feb 13 15:16:57.402390 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:16:57.405431 ntpd[1908]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:47 UTC 2025 (1): Starting
Feb 13 15:16:57.407098 (ntainerd)[1939]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:16:57.405483 ntpd[1908]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:16:57.405503 ntpd[1908]: ----------------------------------------------------
Feb 13 15:16:57.405522 ntpd[1908]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:16:57.405539 ntpd[1908]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:16:57.405559 ntpd[1908]: corporation.  Support and training for ntp-4 are
Feb 13 15:16:57.405578 ntpd[1908]: available at https://www.nwtime.org/support
Feb 13 15:16:57.405596 ntpd[1908]: ----------------------------------------------------
Feb 13 15:16:57.408704 dbus-daemon[1904]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1839 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 15:16:57.414687 ntpd[1908]: proto: precision = 0.096 usec (-23)
Feb 13 15:16:57.416918 ntpd[1908]: basedate set to 2025-02-01
Feb 13 15:16:57.416950 ntpd[1908]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:16:57.421756 ntpd[1908]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:16:57.421860 ntpd[1908]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:16:57.423512 ntpd[1908]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:16:57.423659 ntpd[1908]: Listen normally on 3 eth0 172.31.23.200:123
Feb 13 15:16:57.423743 ntpd[1908]: Listen normally on 4 lo [::1]:123
Feb 13 15:16:57.423834 ntpd[1908]: bind(21) AF_INET6 fe80::4ac:bbff:fe05:3a65%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:16:57.423879 ntpd[1908]: unable to create socket on eth0 (5) for fe80::4ac:bbff:fe05:3a65%2#123
Feb 13 15:16:57.443077 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:16:57.423907 ntpd[1908]: failed to init interface for address fe80::4ac:bbff:fe05:3a65%2
Feb 13 15:16:57.443199 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:16:57.423974 ntpd[1908]: Listening on routing socket on fd #21 for interface updates
Feb 13 15:16:57.429122 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:16:57.429211 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:16:57.445721 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:16:57.460773 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 15:16:57.471777 tar[1931]: linux-arm64/helm
Feb 13 15:16:57.445770 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:16:57.473833 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 15:16:57.510599 extend-filesystems[1906]: Resized partition /dev/nvme0n1p9
Feb 13 15:16:57.518567 jq[1933]: true
Feb 13 15:16:57.559562 update_engine[1920]: I20250213 15:16:57.555429  1920 main.cc:92] Flatcar Update Engine starting
Feb 13 15:16:57.560005 extend-filesystems[1953]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:16:57.588703 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 15:16:57.593641 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:16:57.600401 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 15:16:57.600544 update_engine[1920]: I20250213 15:16:57.597252  1920 update_check_scheduler.cc:74] Next update check in 7m0s
Feb 13 15:16:57.622482 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:16:57.713180 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 15:16:57.748959 extend-filesystems[1953]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 15:16:57.748959 extend-filesystems[1953]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:16:57.748959 extend-filesystems[1953]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 15:16:57.765856 extend-filesystems[1906]: Resized filesystem in /dev/nvme0n1p9
Feb 13 15:16:57.765856 extend-filesystems[1906]: Found nvme0n1p1
Feb 13 15:16:57.750429 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:16:57.750912 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:16:57.775268 coreos-metadata[1903]: Feb 13 15:16:57.775 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 15:16:57.787787 coreos-metadata[1903]: Feb 13 15:16:57.787 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 15:16:57.792891 coreos-metadata[1903]: Feb 13 15:16:57.792 INFO Fetch successful
Feb 13 15:16:57.792891 coreos-metadata[1903]: Feb 13 15:16:57.792 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 15:16:57.793815 coreos-metadata[1903]: Feb 13 15:16:57.793 INFO Fetch successful
Feb 13 15:16:57.793815 coreos-metadata[1903]: Feb 13 15:16:57.793 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 15:16:57.795300 coreos-metadata[1903]: Feb 13 15:16:57.795 INFO Fetch successful
Feb 13 15:16:57.795694 coreos-metadata[1903]: Feb 13 15:16:57.795 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 15:16:57.798701 coreos-metadata[1903]: Feb 13 15:16:57.798 INFO Fetch successful
Feb 13 15:16:57.798701 coreos-metadata[1903]: Feb 13 15:16:57.798 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 15:16:57.799622 coreos-metadata[1903]: Feb 13 15:16:57.799 INFO Fetch failed with 404: resource not found
Feb 13 15:16:57.799622 coreos-metadata[1903]: Feb 13 15:16:57.799 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 15:16:57.800774 coreos-metadata[1903]: Feb 13 15:16:57.800 INFO Fetch successful
Feb 13 15:16:57.800774 coreos-metadata[1903]: Feb 13 15:16:57.800 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 13 15:16:57.801425 coreos-metadata[1903]: Feb 13 15:16:57.801 INFO Fetch successful
Feb 13 15:16:57.801425 coreos-metadata[1903]: Feb 13 15:16:57.801 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 15:16:57.802476 coreos-metadata[1903]: Feb 13 15:16:57.802 INFO Fetch successful
Feb 13 15:16:57.802476 coreos-metadata[1903]: Feb 13 15:16:57.802 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 15:16:57.806630 coreos-metadata[1903]: Feb 13 15:16:57.806 INFO Fetch successful
Feb 13 15:16:57.806960 coreos-metadata[1903]: Feb 13 15:16:57.806 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 15:16:57.820586 coreos-metadata[1903]: Feb 13 15:16:57.817 INFO Fetch successful
Feb 13 15:16:57.880267 bash[1985]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:16:57.923189 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:16:57.935315 systemd[1]: Starting sshkeys.service...
Feb 13 15:16:57.968998 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 15:16:57.979572 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 15:16:57.999302 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1705)
Feb 13 15:16:58.046621 systemd-logind[1918]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 15:16:58.190699 containerd[1939]: time="2025-02-13T15:16:58.180756477Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:16:58.046681 systemd-logind[1918]: Watching system buttons on /dev/input/event1 (Sleep Button)
Feb 13 15:16:58.049521 systemd-logind[1918]: New seat seat0.
Feb 13 15:16:58.050308 systemd-networkd[1839]: eth0: Gained IPv6LL
Feb 13 15:16:58.188990 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:16:58.200640 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:16:58.205414 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 15:16:58.226518 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:16:58.229895 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:16:58.243835 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 15:16:58.254648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:16:58.263764 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:16:58.266261 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:16:58.313438 locksmithd[1959]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:16:58.370994 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 15:16:58.377641 dbus-daemon[1904]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1948 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 15:16:58.390459 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 15:16:58.400401 coreos-metadata[1996]: Feb 13 15:16:58.392 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 15:16:58.400401 coreos-metadata[1996]: Feb 13 15:16:58.392 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Feb 13 15:16:58.416976 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 15:16:58.424815 coreos-metadata[1996]: Feb 13 15:16:58.421 INFO Fetch successful
Feb 13 15:16:58.424815 coreos-metadata[1996]: Feb 13 15:16:58.421 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 15:16:58.428065 coreos-metadata[1996]: Feb 13 15:16:58.425 INFO Fetch successful
Feb 13 15:16:58.433659 unknown[1996]: wrote ssh authorized keys file for user: core
Feb 13 15:16:58.511722 amazon-ssm-agent[2026]: Initializing new seelog logger
Feb 13 15:16:58.511722 amazon-ssm-agent[2026]: New Seelog Logger Creation Complete
Feb 13 15:16:58.511722 amazon-ssm-agent[2026]: 2025/02/13 15:16:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:58.511722 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:58.534898 amazon-ssm-agent[2026]: 2025/02/13 15:16:58 processing appconfig overrides
Feb 13 15:16:58.534898 amazon-ssm-agent[2026]: 2025/02/13 15:16:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:58.534898 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:58.534898 amazon-ssm-agent[2026]: 2025/02/13 15:16:58 processing appconfig overrides
Feb 13 15:16:58.534898 amazon-ssm-agent[2026]: 2025/02/13 15:16:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:58.534898 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:58.534898 amazon-ssm-agent[2026]: 2025/02/13 15:16:58 processing appconfig overrides
Feb 13 15:16:58.534898 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO Proxy environment variables:
Feb 13 15:16:58.534898 amazon-ssm-agent[2026]: 2025/02/13 15:16:58 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:58.534898 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:16:58.534898 amazon-ssm-agent[2026]: 2025/02/13 15:16:58 processing appconfig overrides
Feb 13 15:16:58.530898 polkitd[2043]: Started polkitd version 121
Feb 13 15:16:58.562811 containerd[1939]: time="2025-02-13T15:16:58.559705979Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:16:58.563040 update-ssh-keys[2052]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:16:58.582096 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 15:16:58.585044 polkitd[2043]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 15:16:58.588287 polkitd[2043]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 15:16:58.595322 systemd[1]: Finished sshkeys.service.
Feb 13 15:16:58.602568 polkitd[2043]: Finished loading, compiling and executing 2 rules
Feb 13 15:16:58.618388 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO https_proxy:
Feb 13 15:16:58.624084 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 15:16:58.626099 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 15:16:58.630301 polkitd[2043]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 15:16:58.633504 containerd[1939]: time="2025-02-13T15:16:58.628546044Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:16:58.633504 containerd[1939]: time="2025-02-13T15:16:58.628609080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:16:58.633504 containerd[1939]: time="2025-02-13T15:16:58.628644876Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:16:58.633504 containerd[1939]: time="2025-02-13T15:16:58.628999992Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:16:58.633504 containerd[1939]: time="2025-02-13T15:16:58.629047116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:16:58.631262 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:16:58.636887 containerd[1939]: time="2025-02-13T15:16:58.631252092Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:16:58.636887 containerd[1939]: time="2025-02-13T15:16:58.635494092Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:16:58.643928 containerd[1939]: time="2025-02-13T15:16:58.642330468Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:16:58.643928 containerd[1939]: time="2025-02-13T15:16:58.642412044Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:16:58.643928 containerd[1939]: time="2025-02-13T15:16:58.642448920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:16:58.643928 containerd[1939]: time="2025-02-13T15:16:58.643101876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:16:58.650613 containerd[1939]: time="2025-02-13T15:16:58.649701684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:16:58.650613 containerd[1939]: time="2025-02-13T15:16:58.650533500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:16:58.658501 containerd[1939]: time="2025-02-13T15:16:58.658393512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:16:58.659780 containerd[1939]: time="2025-02-13T15:16:58.658707396Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:16:58.659780 containerd[1939]: time="2025-02-13T15:16:58.658993680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:16:58.659780 containerd[1939]: time="2025-02-13T15:16:58.659184684Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:16:58.671642 containerd[1939]: time="2025-02-13T15:16:58.671504796Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:16:58.674193 containerd[1939]: time="2025-02-13T15:16:58.671864496Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:16:58.674193 containerd[1939]: time="2025-02-13T15:16:58.671922360Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:16:58.674193 containerd[1939]: time="2025-02-13T15:16:58.671961840Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:16:58.674193 containerd[1939]: time="2025-02-13T15:16:58.672003000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:16:58.674193 containerd[1939]: time="2025-02-13T15:16:58.672343728Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:16:58.674193 containerd[1939]: time="2025-02-13T15:16:58.672831612Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:16:58.681765 containerd[1939]: time="2025-02-13T15:16:58.679312176Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:16:58.681765 containerd[1939]: time="2025-02-13T15:16:58.679387200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:16:58.681765 containerd[1939]: time="2025-02-13T15:16:58.679426488Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:16:58.681765 containerd[1939]: time="2025-02-13T15:16:58.679459392Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:16:58.681765 containerd[1939]: time="2025-02-13T15:16:58.679490928Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:16:58.681765 containerd[1939]: time="2025-02-13T15:16:58.679521996Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:16:58.681765 containerd[1939]: time="2025-02-13T15:16:58.679555716Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:16:58.681765 containerd[1939]: time="2025-02-13T15:16:58.679592028Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:16:58.681765 containerd[1939]: time="2025-02-13T15:16:58.679623036Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:16:58.681765 containerd[1939]: time="2025-02-13T15:16:58.679655280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:16:58.681765 containerd[1939]: time="2025-02-13T15:16:58.679690116Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:16:58.681765 containerd[1939]: time="2025-02-13T15:16:58.679758036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.681765 containerd[1939]: time="2025-02-13T15:16:58.679795260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.681765 containerd[1939]: time="2025-02-13T15:16:58.679844232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.682522 containerd[1939]: time="2025-02-13T15:16:58.679880136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.682522 containerd[1939]: time="2025-02-13T15:16:58.679940424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.682522 containerd[1939]: time="2025-02-13T15:16:58.679978248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.682522 containerd[1939]: time="2025-02-13T15:16:58.680008572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.682522 containerd[1939]: time="2025-02-13T15:16:58.680041056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.682522 containerd[1939]: time="2025-02-13T15:16:58.680073288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.682522 containerd[1939]: time="2025-02-13T15:16:58.680125488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.682522 containerd[1939]: time="2025-02-13T15:16:58.680190264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.682522 containerd[1939]: time="2025-02-13T15:16:58.680247756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.682522 containerd[1939]: time="2025-02-13T15:16:58.680280804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.682522 containerd[1939]: time="2025-02-13T15:16:58.680320032Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:16:58.682522 containerd[1939]: time="2025-02-13T15:16:58.680378832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.682522 containerd[1939]: time="2025-02-13T15:16:58.680413800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.682522 containerd[1939]: time="2025-02-13T15:16:58.680465136Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:16:58.688565 containerd[1939]: time="2025-02-13T15:16:58.680604408Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:16:58.688565 containerd[1939]: time="2025-02-13T15:16:58.680646384Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:16:58.688565 containerd[1939]: time="2025-02-13T15:16:58.680671656Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:16:58.688565 containerd[1939]: time="2025-02-13T15:16:58.680708628Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:16:58.688565 containerd[1939]: time="2025-02-13T15:16:58.680733072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.688565 containerd[1939]: time="2025-02-13T15:16:58.680763588Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:16:58.688565 containerd[1939]: time="2025-02-13T15:16:58.680788236Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:16:58.688565 containerd[1939]: time="2025-02-13T15:16:58.680817276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:16:58.698654 containerd[1939]: time="2025-02-13T15:16:58.695755092Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:16:58.698654 containerd[1939]: time="2025-02-13T15:16:58.695878416Z" level=info msg="Connect containerd service"
Feb 13 15:16:58.698654 containerd[1939]: time="2025-02-13T15:16:58.695949996Z" level=info msg="using legacy CRI server"
Feb 13 15:16:58.698654 containerd[1939]: time="2025-02-13T15:16:58.695969052Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:16:58.698654 containerd[1939]: time="2025-02-13T15:16:58.696990396Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:16:58.710447 containerd[1939]: time="2025-02-13T15:16:58.706711692Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:16:58.710447 containerd[1939]: time="2025-02-13T15:16:58.707404740Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:16:58.710447 containerd[1939]: time="2025-02-13T15:16:58.707536968Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:16:58.710447 containerd[1939]: time="2025-02-13T15:16:58.707630364Z" level=info msg="Start subscribing containerd event"
Feb 13 15:16:58.710447 containerd[1939]: time="2025-02-13T15:16:58.707694816Z" level=info msg="Start recovering state"
Feb 13 15:16:58.710447 containerd[1939]: time="2025-02-13T15:16:58.707831700Z" level=info msg="Start event monitor"
Feb 13 15:16:58.710447 containerd[1939]: time="2025-02-13T15:16:58.707863584Z" level=info msg="Start snapshots syncer"
Feb 13 15:16:58.710447 containerd[1939]: time="2025-02-13T15:16:58.707886780Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:16:58.710447 containerd[1939]: time="2025-02-13T15:16:58.707906604Z" level=info msg="Start streaming server"
Feb 13 15:16:58.708248 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:16:58.724709 systemd-hostnamed[1948]: Hostname set to <ip-172-31-23-200> (transient)
Feb 13 15:16:58.727753 systemd-resolved[1841]: System hostname changed to 'ip-172-31-23-200'.
Feb 13 15:16:58.735178 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO http_proxy:
Feb 13 15:16:58.737566 containerd[1939]: time="2025-02-13T15:16:58.737492832Z" level=info msg="containerd successfully booted in 0.569290s"
Feb 13 15:16:58.831564 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO no_proxy:
Feb 13 15:16:58.933653 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 15:16:59.032756 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO Checking if agent identity type EC2 can be assumed
Feb 13 15:16:59.131105 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO Agent will take identity from EC2
Feb 13 15:16:59.232640 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:16:59.318193 sshd_keygen[1951]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:16:59.330869 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:16:59.422583 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:16:59.433307 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:16:59.437737 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:16:59.451729 systemd[1]: Started sshd@0-172.31.23.200:22-139.178.68.195:54506.service - OpenSSH per-connection server daemon (139.178.68.195:54506).
Feb 13 15:16:59.517950 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:16:59.518395 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:16:59.529831 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 15:16:59.537215 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:16:59.628943 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:16:59.631369 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Feb 13 15:16:59.644871 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:16:59.658516 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:16:59.661273 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:16:59.733075 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 15:16:59.816881 sshd[2135]: Accepted publickey for core from 139.178.68.195 port 54506 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:16:59.825920 sshd-session[2135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:16:59.833987 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 15:16:59.858345 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:16:59.868793 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:16:59.885228 systemd-logind[1918]: New session 1 of user core.
Feb 13 15:16:59.935409 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:16:59.942450 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO [Registrar] Starting registrar module
Feb 13 15:16:59.955899 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:16:59.985027 tar[1931]: linux-arm64/LICENSE
Feb 13 15:16:59.985027 tar[1931]: linux-arm64/README.md
Feb 13 15:16:59.984093 (systemd)[2146]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:17:00.042251 amazon-ssm-agent[2026]: 2025-02-13 15:16:58 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 15:17:00.052252 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 15:17:00.313528 systemd[2146]: Queued start job for default target default.target.
Feb 13 15:17:00.320904 systemd[2146]: Created slice app.slice - User Application Slice.
Feb 13 15:17:00.320999 systemd[2146]: Reached target paths.target - Paths.
Feb 13 15:17:00.321037 systemd[2146]: Reached target timers.target - Timers.
Feb 13 15:17:00.326609 systemd[2146]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:17:00.366918 systemd[2146]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:17:00.371257 systemd[2146]: Reached target sockets.target - Sockets.
Feb 13 15:17:00.371493 systemd[2146]: Reached target basic.target - Basic System.
Feb 13 15:17:00.371704 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:17:00.372846 systemd[2146]: Reached target default.target - Main User Target.
Feb 13 15:17:00.372952 systemd[2146]: Startup finished in 357ms.
Feb 13 15:17:00.383651 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:17:00.406613 ntpd[1908]: Listen normally on 6 eth0 [fe80::4ac:bbff:fe05:3a65%2]:123
Feb 13 15:17:00.407095 ntpd[1908]: 13 Feb 15:17:00 ntpd[1908]: Listen normally on 6 eth0 [fe80::4ac:bbff:fe05:3a65%2]:123
Feb 13 15:17:00.512620 amazon-ssm-agent[2026]: 2025-02-13 15:17:00 INFO [EC2Identity] EC2 registration was successful.
Feb 13 15:17:00.552750 systemd[1]: Started sshd@1-172.31.23.200:22-139.178.68.195:40272.service - OpenSSH per-connection server daemon (139.178.68.195:40272).
Feb 13 15:17:00.556538 amazon-ssm-agent[2026]: 2025-02-13 15:17:00 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 15:17:00.556538 amazon-ssm-agent[2026]: 2025-02-13 15:17:00 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 15:17:00.556538 amazon-ssm-agent[2026]: 2025-02-13 15:17:00 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 15:17:00.613539 amazon-ssm-agent[2026]: 2025-02-13 15:17:00 INFO [CredentialRefresher] Next credential rotation will be in 32.30827655943333 minutes
Feb 13 15:17:00.666492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:17:00.670528 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:17:00.673068 systemd[1]: Startup finished in 1.184s (kernel) + 8.996s (initrd) + 8.922s (userspace) = 19.103s.
Feb 13 15:17:00.685129 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:17:00.782216 sshd[2160]: Accepted publickey for core from 139.178.68.195 port 40272 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:17:00.785228 sshd-session[2160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:17:00.799252 systemd-logind[1918]: New session 2 of user core.
Feb 13 15:17:00.811466 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:17:00.944304 sshd[2172]: Connection closed by 139.178.68.195 port 40272
Feb 13 15:17:00.944788 sshd-session[2160]: pam_unix(sshd:session): session closed for user core
Feb 13 15:17:00.955447 systemd[1]: sshd@1-172.31.23.200:22-139.178.68.195:40272.service: Deactivated successfully.
Feb 13 15:17:00.960087 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:17:00.963594 systemd-logind[1918]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:17:00.984762 systemd[1]: Started sshd@2-172.31.23.200:22-139.178.68.195:40276.service - OpenSSH per-connection server daemon (139.178.68.195:40276).
Feb 13 15:17:00.988396 systemd-logind[1918]: Removed session 2.
Feb 13 15:17:01.180859 sshd[2181]: Accepted publickey for core from 139.178.68.195 port 40276 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:17:01.183066 sshd-session[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:17:01.195294 systemd-logind[1918]: New session 3 of user core.
Feb 13 15:17:01.207510 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:17:01.336375 sshd[2183]: Connection closed by 139.178.68.195 port 40276
Feb 13 15:17:01.335527 sshd-session[2181]: pam_unix(sshd:session): session closed for user core
Feb 13 15:17:01.341968 systemd-logind[1918]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:17:01.342629 systemd[1]: sshd@2-172.31.23.200:22-139.178.68.195:40276.service: Deactivated successfully.
Feb 13 15:17:01.349086 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:17:01.355346 systemd-logind[1918]: Removed session 3.
Feb 13 15:17:01.380180 systemd[1]: Started sshd@3-172.31.23.200:22-139.178.68.195:40280.service - OpenSSH per-connection server daemon (139.178.68.195:40280).
Feb 13 15:17:01.582308 sshd[2188]: Accepted publickey for core from 139.178.68.195 port 40280 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:17:01.586750 sshd-session[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:17:01.602519 amazon-ssm-agent[2026]: 2025-02-13 15:17:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 15:17:01.609595 systemd-logind[1918]: New session 4 of user core.
Feb 13 15:17:01.615374 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:17:01.702200 amazon-ssm-agent[2026]: 2025-02-13 15:17:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2192) started
Feb 13 15:17:01.761213 sshd[2193]: Connection closed by 139.178.68.195 port 40280
Feb 13 15:17:01.762533 sshd-session[2188]: pam_unix(sshd:session): session closed for user core
Feb 13 15:17:01.772770 systemd[1]: sshd@3-172.31.23.200:22-139.178.68.195:40280.service: Deactivated successfully.
Feb 13 15:17:01.777823 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:17:01.784470 systemd-logind[1918]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:17:01.801447 amazon-ssm-agent[2026]: 2025-02-13 15:17:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 15:17:01.811835 systemd[1]: Started sshd@4-172.31.23.200:22-139.178.68.195:40290.service - OpenSSH per-connection server daemon (139.178.68.195:40290).
Feb 13 15:17:01.815275 systemd-logind[1918]: Removed session 4.
Feb 13 15:17:01.886208 kubelet[2167]: E0213 15:17:01.885474    2167 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:17:01.893019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:17:01.894420 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:17:01.895061 systemd[1]: kubelet.service: Consumed 1.469s CPU time.
Feb 13 15:17:02.031297 sshd[2204]: Accepted publickey for core from 139.178.68.195 port 40290 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:17:02.034336 sshd-session[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:17:02.044260 systemd-logind[1918]: New session 5 of user core.
Feb 13 15:17:02.055478 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:17:02.179427 sudo[2212]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:17:02.181037 sudo[2212]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:17:02.757760 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 15:17:02.759350 (dockerd)[2231]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 15:17:03.132220 dockerd[2231]: time="2025-02-13T15:17:03.132012410Z" level=info msg="Starting up"
Feb 13 15:17:03.268344 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2539155675-merged.mount: Deactivated successfully.
Feb 13 15:17:03.465904 dockerd[2231]: time="2025-02-13T15:17:03.465040396Z" level=info msg="Loading containers: start."
Feb 13 15:17:03.772206 kernel: Initializing XFRM netlink socket
Feb 13 15:17:03.809922 (udev-worker)[2253]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:17:03.933822 systemd-networkd[1839]: docker0: Link UP
Feb 13 15:17:03.978106 dockerd[2231]: time="2025-02-13T15:17:03.977944158Z" level=info msg="Loading containers: done."
Feb 13 15:17:04.014535 dockerd[2231]: time="2025-02-13T15:17:04.014456162Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:17:04.014769 dockerd[2231]: time="2025-02-13T15:17:04.014630282Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Feb 13 15:17:04.014931 dockerd[2231]: time="2025-02-13T15:17:04.014886074Z" level=info msg="Daemon has completed initialization"
Feb 13 15:17:04.076495 dockerd[2231]: time="2025-02-13T15:17:04.076160031Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:17:04.077000 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:17:05.322209 containerd[1939]: time="2025-02-13T15:17:05.322017178Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\""
Feb 13 15:17:05.985563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1443034251.mount: Deactivated successfully.
Feb 13 15:17:09.115907 containerd[1939]: time="2025-02-13T15:17:09.115811225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:09.118309 containerd[1939]: time="2025-02-13T15:17:09.118176791Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205861"
Feb 13 15:17:09.119192 containerd[1939]: time="2025-02-13T15:17:09.119034500Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:09.126102 containerd[1939]: time="2025-02-13T15:17:09.126037108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:09.129290 containerd[1939]: time="2025-02-13T15:17:09.128957387Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 3.805980942s"
Feb 13 15:17:09.129290 containerd[1939]: time="2025-02-13T15:17:09.129033085Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\""
Feb 13 15:17:09.172046 containerd[1939]: time="2025-02-13T15:17:09.171977935Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\""
Feb 13 15:17:11.863207 containerd[1939]: time="2025-02-13T15:17:11.862759624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:11.864892 containerd[1939]: time="2025-02-13T15:17:11.864807200Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383091"
Feb 13 15:17:11.866896 containerd[1939]: time="2025-02-13T15:17:11.866813522Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:11.872254 containerd[1939]: time="2025-02-13T15:17:11.872170179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:11.874503 containerd[1939]: time="2025-02-13T15:17:11.874438941Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 2.702398094s"
Feb 13 15:17:11.874659 containerd[1939]: time="2025-02-13T15:17:11.874498599Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\""
Feb 13 15:17:11.920199 containerd[1939]: time="2025-02-13T15:17:11.919669001Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\""
Feb 13 15:17:12.142020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:17:12.149657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:17:12.450212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:17:12.468761 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:17:12.550123 kubelet[2499]: E0213 15:17:12.550018    2499 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:17:12.557090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:17:12.557458 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:17:13.884557 containerd[1939]: time="2025-02-13T15:17:13.884492996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:13.886692 containerd[1939]: time="2025-02-13T15:17:13.886583709Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15766980"
Feb 13 15:17:13.888300 containerd[1939]: time="2025-02-13T15:17:13.888240933Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:13.894200 containerd[1939]: time="2025-02-13T15:17:13.893924729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:13.896718 containerd[1939]: time="2025-02-13T15:17:13.896348237Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 1.976569153s"
Feb 13 15:17:13.896718 containerd[1939]: time="2025-02-13T15:17:13.896417823Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\""
Feb 13 15:17:13.936436 containerd[1939]: time="2025-02-13T15:17:13.936371186Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\""
Feb 13 15:17:15.534515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3838890818.mount: Deactivated successfully.
Feb 13 15:17:16.019403 containerd[1939]: time="2025-02-13T15:17:16.019210188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:16.020656 containerd[1939]: time="2025-02-13T15:17:16.020546300Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273375"
Feb 13 15:17:16.023002 containerd[1939]: time="2025-02-13T15:17:16.022931195Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:16.028248 containerd[1939]: time="2025-02-13T15:17:16.028187915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:16.029366 containerd[1939]: time="2025-02-13T15:17:16.029315542Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 2.092883689s"
Feb 13 15:17:16.029478 containerd[1939]: time="2025-02-13T15:17:16.029371478Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\""
Feb 13 15:17:16.070803 containerd[1939]: time="2025-02-13T15:17:16.070481751Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 15:17:16.708658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount896479591.mount: Deactivated successfully.
Feb 13 15:17:17.857393 containerd[1939]: time="2025-02-13T15:17:17.857332522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:17.860467 containerd[1939]: time="2025-02-13T15:17:17.860390763Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Feb 13 15:17:17.861615 containerd[1939]: time="2025-02-13T15:17:17.861508245Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:17.868206 containerd[1939]: time="2025-02-13T15:17:17.868092000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:17.870986 containerd[1939]: time="2025-02-13T15:17:17.870913470Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.80036661s"
Feb 13 15:17:17.871797 containerd[1939]: time="2025-02-13T15:17:17.871233033Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 15:17:17.911306 containerd[1939]: time="2025-02-13T15:17:17.911245046Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 15:17:18.377679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1203162514.mount: Deactivated successfully.
Feb 13 15:17:18.388893 containerd[1939]: time="2025-02-13T15:17:18.387322040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:18.388893 containerd[1939]: time="2025-02-13T15:17:18.388822142Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Feb 13 15:17:18.390022 containerd[1939]: time="2025-02-13T15:17:18.389955196Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:18.399191 containerd[1939]: time="2025-02-13T15:17:18.398547434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:18.402985 containerd[1939]: time="2025-02-13T15:17:18.402930081Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 491.614392ms"
Feb 13 15:17:18.403301 containerd[1939]: time="2025-02-13T15:17:18.403268242Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 13 15:17:18.446962 containerd[1939]: time="2025-02-13T15:17:18.446901636Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Feb 13 15:17:18.961588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1799923768.mount: Deactivated successfully.
Feb 13 15:17:22.807948 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 15:17:22.815572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:17:23.486768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:17:23.499669 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:17:23.604079 kubelet[2638]: E0213 15:17:23.604001    2638 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:17:23.616935 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:17:23.618275 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:17:24.293736 containerd[1939]: time="2025-02-13T15:17:24.293644019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:24.296609 containerd[1939]: time="2025-02-13T15:17:24.296524918Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786"
Feb 13 15:17:24.299096 containerd[1939]: time="2025-02-13T15:17:24.299019705Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:24.305327 containerd[1939]: time="2025-02-13T15:17:24.305248538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:24.307908 containerd[1939]: time="2025-02-13T15:17:24.307691387Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 5.86031148s"
Feb 13 15:17:24.307908 containerd[1939]: time="2025-02-13T15:17:24.307743793Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Feb 13 15:17:28.759395 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 15:17:31.837399 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:17:31.852992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:17:31.889250 systemd[1]: Reloading requested from client PID 2714 ('systemctl') (unit session-5.scope)...
Feb 13 15:17:31.889276 systemd[1]: Reloading...
Feb 13 15:17:32.112193 zram_generator::config[2757]: No configuration found.
Feb 13 15:17:32.366615 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:17:32.530624 systemd[1]: Reloading finished in 640 ms.
Feb 13 15:17:32.632508 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 15:17:32.632718 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 15:17:32.633576 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:17:32.648626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:17:33.040424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:17:33.055017 (kubelet)[2817]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:17:33.147014 kubelet[2817]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:17:33.147575 kubelet[2817]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:17:33.147667 kubelet[2817]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:17:33.147913 kubelet[2817]: I0213 15:17:33.147856    2817 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:17:33.670270 kubelet[2817]: I0213 15:17:33.670208    2817 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:17:33.670270 kubelet[2817]: I0213 15:17:33.670261    2817 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:17:33.670662 kubelet[2817]: I0213 15:17:33.670630    2817 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:17:33.704919 kubelet[2817]: I0213 15:17:33.704853    2817 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:17:33.706397 kubelet[2817]: E0213 15:17:33.706229    2817 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.200:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:33.723026 kubelet[2817]: I0213 15:17:33.722978    2817 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 15:17:33.724638 kubelet[2817]: I0213 15:17:33.724567    2817 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:17:33.725159 kubelet[2817]: I0213 15:17:33.725099    2817 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:17:33.725352 kubelet[2817]: I0213 15:17:33.725184    2817 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:17:33.725352 kubelet[2817]: I0213 15:17:33.725208    2817 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:17:33.725485 kubelet[2817]: I0213 15:17:33.725470    2817 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:17:33.730386 kubelet[2817]: I0213 15:17:33.730299    2817 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:17:33.730386 kubelet[2817]: I0213 15:17:33.730380    2817 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:17:33.731293 kubelet[2817]: I0213 15:17:33.730439    2817 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:17:33.731293 kubelet[2817]: I0213 15:17:33.730479    2817 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:17:33.735202 kubelet[2817]: I0213 15:17:33.734234    2817 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:17:33.735202 kubelet[2817]: I0213 15:17:33.734887    2817 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:17:33.735202 kubelet[2817]: W0213 15:17:33.734991    2817 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:17:33.736980 kubelet[2817]: W0213 15:17:33.736798    2817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.200:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:33.737254 kubelet[2817]: E0213 15:17:33.737232    2817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.200:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:33.738820 kubelet[2817]: I0213 15:17:33.738762    2817 server.go:1256] "Started kubelet"
Feb 13 15:17:33.744935 kubelet[2817]: W0213 15:17:33.744659    2817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.200:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-200&limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:33.744935 kubelet[2817]: E0213 15:17:33.744801    2817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.200:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-200&limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:33.745290 kubelet[2817]: I0213 15:17:33.744981    2817 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:17:33.749176 kubelet[2817]: I0213 15:17:33.747789    2817 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:17:33.750997 kubelet[2817]: I0213 15:17:33.750902    2817 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:17:33.751650 kubelet[2817]: I0213 15:17:33.751595    2817 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:17:33.757726 kubelet[2817]: E0213 15:17:33.757517    2817 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.200:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.200:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-200.1823cd855b25a493  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-200,UID:ip-172-31-23-200,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-200,},FirstTimestamp:2025-02-13 15:17:33.738681491 +0000 UTC m=+0.675890334,LastTimestamp:2025-02-13 15:17:33.738681491 +0000 UTC m=+0.675890334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-200,}"
Feb 13 15:17:33.761391 kubelet[2817]: I0213 15:17:33.761326    2817 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:17:33.764074 kubelet[2817]: I0213 15:17:33.764023    2817 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:17:33.767952 kubelet[2817]: I0213 15:17:33.767887    2817 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:17:33.770030 kubelet[2817]: I0213 15:17:33.769973    2817 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:17:33.772051 kubelet[2817]: W0213 15:17:33.771950    2817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.23.200:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:33.772287 kubelet[2817]: E0213 15:17:33.772076    2817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.200:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:33.772386 kubelet[2817]: E0213 15:17:33.772345    2817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-200?timeout=10s\": dial tcp 172.31.23.200:6443: connect: connection refused" interval="200ms"
Feb 13 15:17:33.773350 kubelet[2817]: E0213 15:17:33.773115    2817 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:17:33.779192 kubelet[2817]: I0213 15:17:33.777204    2817 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:17:33.779192 kubelet[2817]: I0213 15:17:33.777239    2817 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:17:33.779192 kubelet[2817]: I0213 15:17:33.777396    2817 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:17:33.811634 kubelet[2817]: I0213 15:17:33.811599    2817 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:17:33.811844 kubelet[2817]: I0213 15:17:33.811822    2817 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:17:33.811969 kubelet[2817]: I0213 15:17:33.811949    2817 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:17:33.814043 kubelet[2817]: I0213 15:17:33.813990    2817 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:17:33.817830 kubelet[2817]: I0213 15:17:33.817785    2817 policy_none.go:49] "None policy: Start"
Feb 13 15:17:33.818045 kubelet[2817]: I0213 15:17:33.818006    2817 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:17:33.818045 kubelet[2817]: I0213 15:17:33.818043    2817 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:17:33.818178 kubelet[2817]: I0213 15:17:33.818076    2817 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:17:33.818256 kubelet[2817]: E0213 15:17:33.818224    2817 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:17:33.826375 kubelet[2817]: I0213 15:17:33.826334    2817 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:17:33.826634 kubelet[2817]: I0213 15:17:33.826610    2817 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:17:33.827014 kubelet[2817]: W0213 15:17:33.826947    2817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.200:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:33.827120 kubelet[2817]: E0213 15:17:33.827029    2817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.200:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:33.836201 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:17:33.854092 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:17:33.861529 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:17:33.871433 kubelet[2817]: I0213 15:17:33.871339    2817 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:17:33.872860 kubelet[2817]: I0213 15:17:33.871748    2817 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:17:33.876914 kubelet[2817]: I0213 15:17:33.874569    2817 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-200"
Feb 13 15:17:33.876914 kubelet[2817]: E0213 15:17:33.876235    2817 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-200\" not found"
Feb 13 15:17:33.876914 kubelet[2817]: E0213 15:17:33.876834    2817 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.200:6443/api/v1/nodes\": dial tcp 172.31.23.200:6443: connect: connection refused" node="ip-172-31-23-200"
Feb 13 15:17:33.919180 kubelet[2817]: I0213 15:17:33.919115    2817 topology_manager.go:215] "Topology Admit Handler" podUID="dafc5e6efcb1ac97b582b6655f4fb4e9" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-200"
Feb 13 15:17:33.921390 kubelet[2817]: I0213 15:17:33.921251    2817 topology_manager.go:215] "Topology Admit Handler" podUID="ff8d5b1c67db3280c23eba9fb63b2a5a" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-200"
Feb 13 15:17:33.926275 kubelet[2817]: I0213 15:17:33.926228    2817 topology_manager.go:215] "Topology Admit Handler" podUID="2e4baaea13d2a8e7e2d41e16543220f7" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-200"
Feb 13 15:17:33.942548 systemd[1]: Created slice kubepods-burstable-poddafc5e6efcb1ac97b582b6655f4fb4e9.slice - libcontainer container kubepods-burstable-poddafc5e6efcb1ac97b582b6655f4fb4e9.slice.
Feb 13 15:17:33.966746 systemd[1]: Created slice kubepods-burstable-podff8d5b1c67db3280c23eba9fb63b2a5a.slice - libcontainer container kubepods-burstable-podff8d5b1c67db3280c23eba9fb63b2a5a.slice.
Feb 13 15:17:33.973971 kubelet[2817]: E0213 15:17:33.973500    2817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-200?timeout=10s\": dial tcp 172.31.23.200:6443: connect: connection refused" interval="400ms"
Feb 13 15:17:33.981067 systemd[1]: Created slice kubepods-burstable-pod2e4baaea13d2a8e7e2d41e16543220f7.slice - libcontainer container kubepods-burstable-pod2e4baaea13d2a8e7e2d41e16543220f7.slice.
Feb 13 15:17:34.071646 kubelet[2817]: I0213 15:17:34.071592    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dafc5e6efcb1ac97b582b6655f4fb4e9-ca-certs\") pod \"kube-apiserver-ip-172-31-23-200\" (UID: \"dafc5e6efcb1ac97b582b6655f4fb4e9\") " pod="kube-system/kube-apiserver-ip-172-31-23-200"
Feb 13 15:17:34.071981 kubelet[2817]: I0213 15:17:34.071763    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff8d5b1c67db3280c23eba9fb63b2a5a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-200\" (UID: \"ff8d5b1c67db3280c23eba9fb63b2a5a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-200"
Feb 13 15:17:34.071981 kubelet[2817]: I0213 15:17:34.071856    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff8d5b1c67db3280c23eba9fb63b2a5a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-200\" (UID: \"ff8d5b1c67db3280c23eba9fb63b2a5a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-200"
Feb 13 15:17:34.071981 kubelet[2817]: I0213 15:17:34.071904    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dafc5e6efcb1ac97b582b6655f4fb4e9-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-200\" (UID: \"dafc5e6efcb1ac97b582b6655f4fb4e9\") " pod="kube-system/kube-apiserver-ip-172-31-23-200"
Feb 13 15:17:34.071981 kubelet[2817]: I0213 15:17:34.071960    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dafc5e6efcb1ac97b582b6655f4fb4e9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-200\" (UID: \"dafc5e6efcb1ac97b582b6655f4fb4e9\") " pod="kube-system/kube-apiserver-ip-172-31-23-200"
Feb 13 15:17:34.072245 kubelet[2817]: I0213 15:17:34.072005    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff8d5b1c67db3280c23eba9fb63b2a5a-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-200\" (UID: \"ff8d5b1c67db3280c23eba9fb63b2a5a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-200"
Feb 13 15:17:34.072245 kubelet[2817]: I0213 15:17:34.072050    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ff8d5b1c67db3280c23eba9fb63b2a5a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-200\" (UID: \"ff8d5b1c67db3280c23eba9fb63b2a5a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-200"
Feb 13 15:17:34.072245 kubelet[2817]: I0213 15:17:34.072103    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff8d5b1c67db3280c23eba9fb63b2a5a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-200\" (UID: \"ff8d5b1c67db3280c23eba9fb63b2a5a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-200"
Feb 13 15:17:34.072245 kubelet[2817]: I0213 15:17:34.072176    2817 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e4baaea13d2a8e7e2d41e16543220f7-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-200\" (UID: \"2e4baaea13d2a8e7e2d41e16543220f7\") " pod="kube-system/kube-scheduler-ip-172-31-23-200"
Feb 13 15:17:34.079634 kubelet[2817]: I0213 15:17:34.079575    2817 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-200"
Feb 13 15:17:34.080189 kubelet[2817]: E0213 15:17:34.080080    2817 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.200:6443/api/v1/nodes\": dial tcp 172.31.23.200:6443: connect: connection refused" node="ip-172-31-23-200"
Feb 13 15:17:34.263223 containerd[1939]: time="2025-02-13T15:17:34.262979024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-200,Uid:dafc5e6efcb1ac97b582b6655f4fb4e9,Namespace:kube-system,Attempt:0,}"
Feb 13 15:17:34.276771 containerd[1939]: time="2025-02-13T15:17:34.276220538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-200,Uid:ff8d5b1c67db3280c23eba9fb63b2a5a,Namespace:kube-system,Attempt:0,}"
Feb 13 15:17:34.287474 containerd[1939]: time="2025-02-13T15:17:34.287398064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-200,Uid:2e4baaea13d2a8e7e2d41e16543220f7,Namespace:kube-system,Attempt:0,}"
Feb 13 15:17:34.374286 kubelet[2817]: E0213 15:17:34.374213    2817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-200?timeout=10s\": dial tcp 172.31.23.200:6443: connect: connection refused" interval="800ms"
Feb 13 15:17:34.483755 kubelet[2817]: I0213 15:17:34.483692    2817 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-200"
Feb 13 15:17:34.484475 kubelet[2817]: E0213 15:17:34.484437    2817 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.200:6443/api/v1/nodes\": dial tcp 172.31.23.200:6443: connect: connection refused" node="ip-172-31-23-200"
Feb 13 15:17:34.874047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1016343838.mount: Deactivated successfully.
Feb 13 15:17:34.880784 containerd[1939]: time="2025-02-13T15:17:34.880705354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:17:34.883306 containerd[1939]: time="2025-02-13T15:17:34.883179406Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Feb 13 15:17:34.885776 containerd[1939]: time="2025-02-13T15:17:34.885660170Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:17:34.888069 containerd[1939]: time="2025-02-13T15:17:34.887967075Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:17:34.889840 containerd[1939]: time="2025-02-13T15:17:34.889769151Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:17:34.892180 containerd[1939]: time="2025-02-13T15:17:34.891931444Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:17:34.893585 containerd[1939]: time="2025-02-13T15:17:34.893531099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:17:34.894266 containerd[1939]: time="2025-02-13T15:17:34.893834635Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:17:34.899179 containerd[1939]: time="2025-02-13T15:17:34.897107194Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 633.973161ms"
Feb 13 15:17:34.906087 containerd[1939]: time="2025-02-13T15:17:34.906016979Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 629.645886ms"
Feb 13 15:17:34.907458 containerd[1939]: time="2025-02-13T15:17:34.907383490Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 619.865835ms"
Feb 13 15:17:34.982832 kubelet[2817]: W0213 15:17:34.982764    2817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.200:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-200&limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:34.983025 kubelet[2817]: E0213 15:17:34.982863    2817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.200:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-200&limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:35.130962 containerd[1939]: time="2025-02-13T15:17:35.129512285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:17:35.132606 containerd[1939]: time="2025-02-13T15:17:35.132241164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:17:35.132606 containerd[1939]: time="2025-02-13T15:17:35.132292970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:35.132606 containerd[1939]: time="2025-02-13T15:17:35.132445266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:35.142900 containerd[1939]: time="2025-02-13T15:17:35.141836023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:17:35.142900 containerd[1939]: time="2025-02-13T15:17:35.141922154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:17:35.142900 containerd[1939]: time="2025-02-13T15:17:35.141947307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:35.142900 containerd[1939]: time="2025-02-13T15:17:35.142067751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:35.146980 containerd[1939]: time="2025-02-13T15:17:35.145786226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:17:35.146980 containerd[1939]: time="2025-02-13T15:17:35.145870220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:17:35.146980 containerd[1939]: time="2025-02-13T15:17:35.145896609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:35.146980 containerd[1939]: time="2025-02-13T15:17:35.146029155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:35.175686 kubelet[2817]: E0213 15:17:35.175649    2817 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-200?timeout=10s\": dial tcp 172.31.23.200:6443: connect: connection refused" interval="1.6s"
Feb 13 15:17:35.178716 kubelet[2817]: W0213 15:17:35.178669    2817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.200:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:35.179035 kubelet[2817]: E0213 15:17:35.178985    2817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.200:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:35.187196 kubelet[2817]: E0213 15:17:35.186230    2817 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.200:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.200:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-200.1823cd855b25a493  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-200,UID:ip-172-31-23-200,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-200,},FirstTimestamp:2025-02-13 15:17:33.738681491 +0000 UTC m=+0.675890334,LastTimestamp:2025-02-13 15:17:33.738681491 +0000 UTC m=+0.675890334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-200,}"
Feb 13 15:17:35.198453 systemd[1]: Started cri-containerd-299e6b625d046bfd25583d30498cefa323bc786446d798d484d1ca4b7e183506.scope - libcontainer container 299e6b625d046bfd25583d30498cefa323bc786446d798d484d1ca4b7e183506.
Feb 13 15:17:35.217296 systemd[1]: Started cri-containerd-0b956dcd9701d1bfa2a8d97733f64b5ac6d93e3441fa6e80bff7e0b289ea76ce.scope - libcontainer container 0b956dcd9701d1bfa2a8d97733f64b5ac6d93e3441fa6e80bff7e0b289ea76ce.
Feb 13 15:17:35.222285 systemd[1]: Started cri-containerd-d6cb2092722eb3b959802200266212f740b2f408e49aa132ac78a9ef29160af3.scope - libcontainer container d6cb2092722eb3b959802200266212f740b2f408e49aa132ac78a9ef29160af3.
Feb 13 15:17:35.286880 kubelet[2817]: W0213 15:17:35.285277    2817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.200:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:35.287905 kubelet[2817]: E0213 15:17:35.287866    2817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.200:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:35.291116 kubelet[2817]: I0213 15:17:35.290767    2817 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-200"
Feb 13 15:17:35.292285 kubelet[2817]: E0213 15:17:35.291339    2817 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.200:6443/api/v1/nodes\": dial tcp 172.31.23.200:6443: connect: connection refused" node="ip-172-31-23-200"
Feb 13 15:17:35.314194 containerd[1939]: time="2025-02-13T15:17:35.313666149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-200,Uid:ff8d5b1c67db3280c23eba9fb63b2a5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b956dcd9701d1bfa2a8d97733f64b5ac6d93e3441fa6e80bff7e0b289ea76ce\""
Feb 13 15:17:35.346094 containerd[1939]: time="2025-02-13T15:17:35.345592564Z" level=info msg="CreateContainer within sandbox \"0b956dcd9701d1bfa2a8d97733f64b5ac6d93e3441fa6e80bff7e0b289ea76ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 15:17:35.367726 containerd[1939]: time="2025-02-13T15:17:35.367664887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-200,Uid:2e4baaea13d2a8e7e2d41e16543220f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"299e6b625d046bfd25583d30498cefa323bc786446d798d484d1ca4b7e183506\""
Feb 13 15:17:35.369307 kubelet[2817]: W0213 15:17:35.369108    2817 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.23.200:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:35.369307 kubelet[2817]: E0213 15:17:35.369245    2817 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.200:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.200:6443: connect: connection refused
Feb 13 15:17:35.376648 containerd[1939]: time="2025-02-13T15:17:35.376333351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-200,Uid:dafc5e6efcb1ac97b582b6655f4fb4e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6cb2092722eb3b959802200266212f740b2f408e49aa132ac78a9ef29160af3\""
Feb 13 15:17:35.382493 containerd[1939]: time="2025-02-13T15:17:35.382282768Z" level=info msg="CreateContainer within sandbox \"299e6b625d046bfd25583d30498cefa323bc786446d798d484d1ca4b7e183506\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 15:17:35.388398 containerd[1939]: time="2025-02-13T15:17:35.388050883Z" level=info msg="CreateContainer within sandbox \"d6cb2092722eb3b959802200266212f740b2f408e49aa132ac78a9ef29160af3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 15:17:35.389843 containerd[1939]: time="2025-02-13T15:17:35.389751641Z" level=info msg="CreateContainer within sandbox \"0b956dcd9701d1bfa2a8d97733f64b5ac6d93e3441fa6e80bff7e0b289ea76ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b9de39691ff23da8a2c3a55b5cc59280a655e7fb2ca268223914174a183eba0d\""
Feb 13 15:17:35.393470 containerd[1939]: time="2025-02-13T15:17:35.393228483Z" level=info msg="StartContainer for \"b9de39691ff23da8a2c3a55b5cc59280a655e7fb2ca268223914174a183eba0d\""
Feb 13 15:17:35.411947 containerd[1939]: time="2025-02-13T15:17:35.411748074Z" level=info msg="CreateContainer within sandbox \"299e6b625d046bfd25583d30498cefa323bc786446d798d484d1ca4b7e183506\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"75982614f5012cd8de048f26fa06f872ddbb94fd4f559166cdbe59e4cce3ff53\""
Feb 13 15:17:35.412758 containerd[1939]: time="2025-02-13T15:17:35.412667325Z" level=info msg="StartContainer for \"75982614f5012cd8de048f26fa06f872ddbb94fd4f559166cdbe59e4cce3ff53\""
Feb 13 15:17:35.421529 containerd[1939]: time="2025-02-13T15:17:35.421294345Z" level=info msg="CreateContainer within sandbox \"d6cb2092722eb3b959802200266212f740b2f408e49aa132ac78a9ef29160af3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3587eab6eefc83fb3b01298fc4c63f95db79dfd381a6faa78870775cfd7d5083\""
Feb 13 15:17:35.422569 containerd[1939]: time="2025-02-13T15:17:35.422425814Z" level=info msg="StartContainer for \"3587eab6eefc83fb3b01298fc4c63f95db79dfd381a6faa78870775cfd7d5083\""
Feb 13 15:17:35.466465 systemd[1]: Started cri-containerd-b9de39691ff23da8a2c3a55b5cc59280a655e7fb2ca268223914174a183eba0d.scope - libcontainer container b9de39691ff23da8a2c3a55b5cc59280a655e7fb2ca268223914174a183eba0d.
Feb 13 15:17:35.503503 systemd[1]: Started cri-containerd-75982614f5012cd8de048f26fa06f872ddbb94fd4f559166cdbe59e4cce3ff53.scope - libcontainer container 75982614f5012cd8de048f26fa06f872ddbb94fd4f559166cdbe59e4cce3ff53.
Feb 13 15:17:35.538643 systemd[1]: Started cri-containerd-3587eab6eefc83fb3b01298fc4c63f95db79dfd381a6faa78870775cfd7d5083.scope - libcontainer container 3587eab6eefc83fb3b01298fc4c63f95db79dfd381a6faa78870775cfd7d5083.
Feb 13 15:17:35.602032 containerd[1939]: time="2025-02-13T15:17:35.601971908Z" level=info msg="StartContainer for \"b9de39691ff23da8a2c3a55b5cc59280a655e7fb2ca268223914174a183eba0d\" returns successfully"
Feb 13 15:17:35.656730 containerd[1939]: time="2025-02-13T15:17:35.656455724Z" level=info msg="StartContainer for \"75982614f5012cd8de048f26fa06f872ddbb94fd4f559166cdbe59e4cce3ff53\" returns successfully"
Feb 13 15:17:35.667829 containerd[1939]: time="2025-02-13T15:17:35.667373825Z" level=info msg="StartContainer for \"3587eab6eefc83fb3b01298fc4c63f95db79dfd381a6faa78870775cfd7d5083\" returns successfully"
Feb 13 15:17:36.897342 kubelet[2817]: I0213 15:17:36.894755    2817 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-200"
Feb 13 15:17:39.052187 kubelet[2817]: E0213 15:17:39.050296    2817 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-200\" not found" node="ip-172-31-23-200"
Feb 13 15:17:39.100841 kubelet[2817]: I0213 15:17:39.100785    2817 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-200"
Feb 13 15:17:39.737321 kubelet[2817]: I0213 15:17:39.737233    2817 apiserver.go:52] "Watching apiserver"
Feb 13 15:17:39.770992 kubelet[2817]: I0213 15:17:39.770909    2817 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:17:42.238684 systemd[1]: Reloading requested from client PID 3094 ('systemctl') (unit session-5.scope)...
Feb 13 15:17:42.239206 systemd[1]: Reloading...
Feb 13 15:17:42.415261 zram_generator::config[3137]: No configuration found.
Feb 13 15:17:42.577271 update_engine[1920]: I20250213 15:17:42.576208  1920 update_attempter.cc:509] Updating boot flags...
Feb 13 15:17:42.695474 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:17:42.702121 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3197)
Feb 13 15:17:42.954339 systemd[1]: Reloading finished in 714 ms.
Feb 13 15:17:43.127127 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:17:43.132242 kubelet[2817]: I0213 15:17:43.127562    2817 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:17:43.159169 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3201)
Feb 13 15:17:43.163015 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:17:43.163973 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:17:43.164054 systemd[1]: kubelet.service: Consumed 1.435s CPU time, 112.8M memory peak, 0B memory swap peak.
Feb 13 15:17:43.187104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:17:43.660382 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:17:43.677741 (kubelet)[3376]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:17:43.803159 kubelet[3376]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:17:43.805132 kubelet[3376]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:17:43.805132 kubelet[3376]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:17:43.805132 kubelet[3376]: I0213 15:17:43.804085    3376 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:17:43.827712 kubelet[3376]: I0213 15:17:43.827589    3376 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:17:43.827990 kubelet[3376]: I0213 15:17:43.827965    3376 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:17:43.828718 kubelet[3376]: I0213 15:17:43.828687    3376 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:17:43.833255 kubelet[3376]: I0213 15:17:43.832814    3376 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 15:17:43.838291 kubelet[3376]: I0213 15:17:43.838237    3376 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:17:43.855949 kubelet[3376]: I0213 15:17:43.855883    3376 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 15:17:43.857197 kubelet[3376]: I0213 15:17:43.856791    3376 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:17:43.857699 kubelet[3376]: I0213 15:17:43.857664    3376 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:17:43.858348 kubelet[3376]: I0213 15:17:43.857926    3376 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:17:43.858348 kubelet[3376]: I0213 15:17:43.857957    3376 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:17:43.858348 kubelet[3376]: I0213 15:17:43.858023    3376 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:17:43.858761 kubelet[3376]: I0213 15:17:43.858637    3376 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:17:43.859646 kubelet[3376]: I0213 15:17:43.859253    3376 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:17:43.859646 kubelet[3376]: I0213 15:17:43.859352    3376 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:17:43.859646 kubelet[3376]: I0213 15:17:43.859396    3376 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:17:43.873154 kubelet[3376]: I0213 15:17:43.868352    3376 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:17:43.873154 kubelet[3376]: I0213 15:17:43.868694    3376 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:17:43.873154 kubelet[3376]: I0213 15:17:43.869507    3376 server.go:1256] "Started kubelet"
Feb 13 15:17:43.882760 kubelet[3376]: I0213 15:17:43.882716    3376 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:17:43.890629 kubelet[3376]: I0213 15:17:43.890456    3376 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:17:43.895194 kubelet[3376]: I0213 15:17:43.893937    3376 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:17:43.895894 kubelet[3376]: I0213 15:17:43.895849    3376 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:17:43.896252 kubelet[3376]: I0213 15:17:43.896211    3376 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:17:43.901167 kubelet[3376]: I0213 15:17:43.900565    3376 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:17:43.903988 kubelet[3376]: I0213 15:17:43.903464    3376 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:17:43.918379 kubelet[3376]: I0213 15:17:43.915132    3376 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:17:43.918379 kubelet[3376]: I0213 15:17:43.915517    3376 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:17:43.921218 kubelet[3376]: I0213 15:17:43.919721    3376 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:17:43.966018 kubelet[3376]: I0213 15:17:43.965924    3376 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:17:43.998727 kubelet[3376]: E0213 15:17:43.998620    3376 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:17:44.009697 kubelet[3376]: I0213 15:17:44.009603    3376 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:17:44.021035 kubelet[3376]: I0213 15:17:44.021000    3376 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:17:44.021505 kubelet[3376]: I0213 15:17:44.021477    3376 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:17:44.021751 kubelet[3376]: I0213 15:17:44.021731    3376 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:17:44.022839 kubelet[3376]: E0213 15:17:44.022790    3376 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:17:44.024273 kubelet[3376]: I0213 15:17:44.023825    3376 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-200"
Feb 13 15:17:44.057330 kubelet[3376]: I0213 15:17:44.056595    3376 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-200"
Feb 13 15:17:44.057330 kubelet[3376]: I0213 15:17:44.056726    3376 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-200"
Feb 13 15:17:44.124555 kubelet[3376]: E0213 15:17:44.124502    3376 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:17:44.125431 kubelet[3376]: I0213 15:17:44.125387    3376 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:17:44.125431 kubelet[3376]: I0213 15:17:44.125431    3376 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:17:44.125617 kubelet[3376]: I0213 15:17:44.125465    3376 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:17:44.126764 kubelet[3376]: I0213 15:17:44.125715    3376 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 15:17:44.126764 kubelet[3376]: I0213 15:17:44.125766    3376 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 15:17:44.126764 kubelet[3376]: I0213 15:17:44.125784    3376 policy_none.go:49] "None policy: Start"
Feb 13 15:17:44.127360 kubelet[3376]: I0213 15:17:44.127317    3376 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:17:44.127439 kubelet[3376]: I0213 15:17:44.127373    3376 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:17:44.127781 kubelet[3376]: I0213 15:17:44.127649    3376 state_mem.go:75] "Updated machine memory state"
Feb 13 15:17:44.149391 kubelet[3376]: I0213 15:17:44.149007    3376 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:17:44.151815 kubelet[3376]: I0213 15:17:44.150484    3376 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:17:44.325921 kubelet[3376]: I0213 15:17:44.325734    3376 topology_manager.go:215] "Topology Admit Handler" podUID="ff8d5b1c67db3280c23eba9fb63b2a5a" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-200"
Feb 13 15:17:44.325921 kubelet[3376]: I0213 15:17:44.325892    3376 topology_manager.go:215] "Topology Admit Handler" podUID="2e4baaea13d2a8e7e2d41e16543220f7" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-200"
Feb 13 15:17:44.326185 kubelet[3376]: I0213 15:17:44.326042    3376 topology_manager.go:215] "Topology Admit Handler" podUID="dafc5e6efcb1ac97b582b6655f4fb4e9" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-200"
Feb 13 15:17:44.341657 kubelet[3376]: E0213 15:17:44.341549    3376 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-23-200\" already exists" pod="kube-system/kube-scheduler-ip-172-31-23-200"
Feb 13 15:17:44.421987 kubelet[3376]: I0213 15:17:44.421879    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ff8d5b1c67db3280c23eba9fb63b2a5a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-200\" (UID: \"ff8d5b1c67db3280c23eba9fb63b2a5a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-200"
Feb 13 15:17:44.422834 kubelet[3376]: I0213 15:17:44.422322    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dafc5e6efcb1ac97b582b6655f4fb4e9-ca-certs\") pod \"kube-apiserver-ip-172-31-23-200\" (UID: \"dafc5e6efcb1ac97b582b6655f4fb4e9\") " pod="kube-system/kube-apiserver-ip-172-31-23-200"
Feb 13 15:17:44.422834 kubelet[3376]: I0213 15:17:44.422425    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dafc5e6efcb1ac97b582b6655f4fb4e9-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-200\" (UID: \"dafc5e6efcb1ac97b582b6655f4fb4e9\") " pod="kube-system/kube-apiserver-ip-172-31-23-200"
Feb 13 15:17:44.422834 kubelet[3376]: I0213 15:17:44.422477    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dafc5e6efcb1ac97b582b6655f4fb4e9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-200\" (UID: \"dafc5e6efcb1ac97b582b6655f4fb4e9\") " pod="kube-system/kube-apiserver-ip-172-31-23-200"
Feb 13 15:17:44.422834 kubelet[3376]: I0213 15:17:44.422525    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff8d5b1c67db3280c23eba9fb63b2a5a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-200\" (UID: \"ff8d5b1c67db3280c23eba9fb63b2a5a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-200"
Feb 13 15:17:44.422834 kubelet[3376]: I0213 15:17:44.422589    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ff8d5b1c67db3280c23eba9fb63b2a5a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-200\" (UID: \"ff8d5b1c67db3280c23eba9fb63b2a5a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-200"
Feb 13 15:17:44.423135 kubelet[3376]: I0213 15:17:44.422639    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ff8d5b1c67db3280c23eba9fb63b2a5a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-200\" (UID: \"ff8d5b1c67db3280c23eba9fb63b2a5a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-200"
Feb 13 15:17:44.423135 kubelet[3376]: I0213 15:17:44.422689    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e4baaea13d2a8e7e2d41e16543220f7-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-200\" (UID: \"2e4baaea13d2a8e7e2d41e16543220f7\") " pod="kube-system/kube-scheduler-ip-172-31-23-200"
Feb 13 15:17:44.423135 kubelet[3376]: I0213 15:17:44.422735    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ff8d5b1c67db3280c23eba9fb63b2a5a-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-200\" (UID: \"ff8d5b1c67db3280c23eba9fb63b2a5a\") " pod="kube-system/kube-controller-manager-ip-172-31-23-200"
Feb 13 15:17:44.879688 kubelet[3376]: I0213 15:17:44.879548    3376 apiserver.go:52] "Watching apiserver"
Feb 13 15:17:44.906206 kubelet[3376]: I0213 15:17:44.905452    3376 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:17:44.967990 kubelet[3376]: I0213 15:17:44.967114    3376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-200" podStartSLOduration=0.967047775 podStartE2EDuration="967.047775ms" podCreationTimestamp="2025-02-13 15:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:44.95465721 +0000 UTC m=+1.267269423" watchObservedRunningTime="2025-02-13 15:17:44.967047775 +0000 UTC m=+1.279659988"
Feb 13 15:17:44.984245 kubelet[3376]: I0213 15:17:44.984127    3376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-200" podStartSLOduration=3.984065775 podStartE2EDuration="3.984065775s" podCreationTimestamp="2025-02-13 15:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:44.967611108 +0000 UTC m=+1.280223333" watchObservedRunningTime="2025-02-13 15:17:44.984065775 +0000 UTC m=+1.296677976"
Feb 13 15:17:45.003936 kubelet[3376]: I0213 15:17:45.003859    3376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-200" podStartSLOduration=1.003793699 podStartE2EDuration="1.003793699s" podCreationTimestamp="2025-02-13 15:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:44.985297123 +0000 UTC m=+1.297909312" watchObservedRunningTime="2025-02-13 15:17:45.003793699 +0000 UTC m=+1.316405900"
Feb 13 15:17:45.521593 sudo[2212]: pam_unix(sudo:session): session closed for user root
Feb 13 15:17:45.545093 sshd[2211]: Connection closed by 139.178.68.195 port 40290
Feb 13 15:17:45.546678 sshd-session[2204]: pam_unix(sshd:session): session closed for user core
Feb 13 15:17:45.554243 systemd[1]: sshd@4-172.31.23.200:22-139.178.68.195:40290.service: Deactivated successfully.
Feb 13 15:17:45.557909 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:17:45.558938 systemd[1]: session-5.scope: Consumed 9.955s CPU time, 188.1M memory peak, 0B memory swap peak.
Feb 13 15:17:45.560108 systemd-logind[1918]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:17:45.562282 systemd-logind[1918]: Removed session 5.
Feb 13 15:17:54.604497 kubelet[3376]: I0213 15:17:54.604461    3376 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 15:17:54.605953 containerd[1939]: time="2025-02-13T15:17:54.605886941Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:17:54.606567 kubelet[3376]: I0213 15:17:54.606213    3376 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 15:17:55.156380 kubelet[3376]: I0213 15:17:55.155040    3376 topology_manager.go:215] "Topology Admit Handler" podUID="be6f0ec8-b32e-4094-a251-418097fee3ef" podNamespace="kube-system" podName="kube-proxy-mdptf"
Feb 13 15:17:55.176992 systemd[1]: Created slice kubepods-besteffort-podbe6f0ec8_b32e_4094_a251_418097fee3ef.slice - libcontainer container kubepods-besteffort-podbe6f0ec8_b32e_4094_a251_418097fee3ef.slice.
Feb 13 15:17:55.189572 kubelet[3376]: I0213 15:17:55.189482    3376 topology_manager.go:215] "Topology Admit Handler" podUID="97caa5b7-37ef-456b-8ccf-39443ca278e6" podNamespace="kube-flannel" podName="kube-flannel-ds-dx7pz"
Feb 13 15:17:55.195901 kubelet[3376]: I0213 15:17:55.195636    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be6f0ec8-b32e-4094-a251-418097fee3ef-lib-modules\") pod \"kube-proxy-mdptf\" (UID: \"be6f0ec8-b32e-4094-a251-418097fee3ef\") " pod="kube-system/kube-proxy-mdptf"
Feb 13 15:17:55.195901 kubelet[3376]: I0213 15:17:55.195707    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be6f0ec8-b32e-4094-a251-418097fee3ef-kube-proxy\") pod \"kube-proxy-mdptf\" (UID: \"be6f0ec8-b32e-4094-a251-418097fee3ef\") " pod="kube-system/kube-proxy-mdptf"
Feb 13 15:17:55.195901 kubelet[3376]: I0213 15:17:55.195758    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be6f0ec8-b32e-4094-a251-418097fee3ef-xtables-lock\") pod \"kube-proxy-mdptf\" (UID: \"be6f0ec8-b32e-4094-a251-418097fee3ef\") " pod="kube-system/kube-proxy-mdptf"
Feb 13 15:17:55.195901 kubelet[3376]: I0213 15:17:55.195809    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxnbs\" (UniqueName: \"kubernetes.io/projected/be6f0ec8-b32e-4094-a251-418097fee3ef-kube-api-access-zxnbs\") pod \"kube-proxy-mdptf\" (UID: \"be6f0ec8-b32e-4094-a251-418097fee3ef\") " pod="kube-system/kube-proxy-mdptf"
Feb 13 15:17:55.214326 systemd[1]: Created slice kubepods-burstable-pod97caa5b7_37ef_456b_8ccf_39443ca278e6.slice - libcontainer container kubepods-burstable-pod97caa5b7_37ef_456b_8ccf_39443ca278e6.slice.
Feb 13 15:17:55.297019 kubelet[3376]: I0213 15:17:55.296949    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d847x\" (UniqueName: \"kubernetes.io/projected/97caa5b7-37ef-456b-8ccf-39443ca278e6-kube-api-access-d847x\") pod \"kube-flannel-ds-dx7pz\" (UID: \"97caa5b7-37ef-456b-8ccf-39443ca278e6\") " pod="kube-flannel/kube-flannel-ds-dx7pz"
Feb 13 15:17:55.298084 kubelet[3376]: I0213 15:17:55.297357    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/97caa5b7-37ef-456b-8ccf-39443ca278e6-flannel-cfg\") pod \"kube-flannel-ds-dx7pz\" (UID: \"97caa5b7-37ef-456b-8ccf-39443ca278e6\") " pod="kube-flannel/kube-flannel-ds-dx7pz"
Feb 13 15:17:55.298084 kubelet[3376]: I0213 15:17:55.297414    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97caa5b7-37ef-456b-8ccf-39443ca278e6-xtables-lock\") pod \"kube-flannel-ds-dx7pz\" (UID: \"97caa5b7-37ef-456b-8ccf-39443ca278e6\") " pod="kube-flannel/kube-flannel-ds-dx7pz"
Feb 13 15:17:55.298084 kubelet[3376]: I0213 15:17:55.297498    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97caa5b7-37ef-456b-8ccf-39443ca278e6-run\") pod \"kube-flannel-ds-dx7pz\" (UID: \"97caa5b7-37ef-456b-8ccf-39443ca278e6\") " pod="kube-flannel/kube-flannel-ds-dx7pz"
Feb 13 15:17:55.298084 kubelet[3376]: I0213 15:17:55.297554    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/97caa5b7-37ef-456b-8ccf-39443ca278e6-cni\") pod \"kube-flannel-ds-dx7pz\" (UID: \"97caa5b7-37ef-456b-8ccf-39443ca278e6\") " pod="kube-flannel/kube-flannel-ds-dx7pz"
Feb 13 15:17:55.298084 kubelet[3376]: I0213 15:17:55.297739    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/97caa5b7-37ef-456b-8ccf-39443ca278e6-cni-plugin\") pod \"kube-flannel-ds-dx7pz\" (UID: \"97caa5b7-37ef-456b-8ccf-39443ca278e6\") " pod="kube-flannel/kube-flannel-ds-dx7pz"
Feb 13 15:17:55.502037 containerd[1939]: time="2025-02-13T15:17:55.501007796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdptf,Uid:be6f0ec8-b32e-4094-a251-418097fee3ef,Namespace:kube-system,Attempt:0,}"
Feb 13 15:17:55.523318 containerd[1939]: time="2025-02-13T15:17:55.523261169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-dx7pz,Uid:97caa5b7-37ef-456b-8ccf-39443ca278e6,Namespace:kube-flannel,Attempt:0,}"
Feb 13 15:17:55.609007 containerd[1939]: time="2025-02-13T15:17:55.608504298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:17:55.609007 containerd[1939]: time="2025-02-13T15:17:55.608615533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:17:55.609007 containerd[1939]: time="2025-02-13T15:17:55.608642211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:55.609007 containerd[1939]: time="2025-02-13T15:17:55.608776630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:55.643114 containerd[1939]: time="2025-02-13T15:17:55.641411291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:17:55.643114 containerd[1939]: time="2025-02-13T15:17:55.641498179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:17:55.643114 containerd[1939]: time="2025-02-13T15:17:55.641523764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:55.643114 containerd[1939]: time="2025-02-13T15:17:55.641673202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:17:55.691654 systemd[1]: Started cri-containerd-f5b3ca360166e867164e376e962bb060feb83d22e806b1c3a7c25ff654d8f5b6.scope - libcontainer container f5b3ca360166e867164e376e962bb060feb83d22e806b1c3a7c25ff654d8f5b6.
Feb 13 15:17:55.704921 systemd[1]: Started cri-containerd-1738c2bdb8855c88f6e2d1384f503e808881a98518b964b5f4252a3bf943a287.scope - libcontainer container 1738c2bdb8855c88f6e2d1384f503e808881a98518b964b5f4252a3bf943a287.
Feb 13 15:17:55.843999 containerd[1939]: time="2025-02-13T15:17:55.843935320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mdptf,Uid:be6f0ec8-b32e-4094-a251-418097fee3ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5b3ca360166e867164e376e962bb060feb83d22e806b1c3a7c25ff654d8f5b6\""
Feb 13 15:17:55.860386 containerd[1939]: time="2025-02-13T15:17:55.858949303Z" level=info msg="CreateContainer within sandbox \"f5b3ca360166e867164e376e962bb060feb83d22e806b1c3a7c25ff654d8f5b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:17:55.871804 containerd[1939]: time="2025-02-13T15:17:55.871516848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-dx7pz,Uid:97caa5b7-37ef-456b-8ccf-39443ca278e6,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"1738c2bdb8855c88f6e2d1384f503e808881a98518b964b5f4252a3bf943a287\""
Feb 13 15:17:55.876747 containerd[1939]: time="2025-02-13T15:17:55.876486948Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 15:17:55.910306 containerd[1939]: time="2025-02-13T15:17:55.910236883Z" level=info msg="CreateContainer within sandbox \"f5b3ca360166e867164e376e962bb060feb83d22e806b1c3a7c25ff654d8f5b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ce010a6cb0197131dbac3aa0fff28d65a031f26aadb96cd56754bc66bf71a8bb\""
Feb 13 15:17:55.913505 containerd[1939]: time="2025-02-13T15:17:55.913415471Z" level=info msg="StartContainer for \"ce010a6cb0197131dbac3aa0fff28d65a031f26aadb96cd56754bc66bf71a8bb\""
Feb 13 15:17:55.970456 systemd[1]: Started cri-containerd-ce010a6cb0197131dbac3aa0fff28d65a031f26aadb96cd56754bc66bf71a8bb.scope - libcontainer container ce010a6cb0197131dbac3aa0fff28d65a031f26aadb96cd56754bc66bf71a8bb.
Feb 13 15:17:56.043699 containerd[1939]: time="2025-02-13T15:17:56.043592088Z" level=info msg="StartContainer for \"ce010a6cb0197131dbac3aa0fff28d65a031f26aadb96cd56754bc66bf71a8bb\" returns successfully"
Feb 13 15:17:58.034564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2527369778.mount: Deactivated successfully.
Feb 13 15:17:58.102248 containerd[1939]: time="2025-02-13T15:17:58.101692967Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:58.104053 containerd[1939]: time="2025-02-13T15:17:58.103964394Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673532"
Feb 13 15:17:58.106563 containerd[1939]: time="2025-02-13T15:17:58.106467849Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:58.112985 containerd[1939]: time="2025-02-13T15:17:58.112916163Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:17:58.115055 containerd[1939]: time="2025-02-13T15:17:58.114843379Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.238294167s"
Feb 13 15:17:58.115055 containerd[1939]: time="2025-02-13T15:17:58.114919401Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\""
Feb 13 15:17:58.121297 containerd[1939]: time="2025-02-13T15:17:58.121007655Z" level=info msg="CreateContainer within sandbox \"1738c2bdb8855c88f6e2d1384f503e808881a98518b964b5f4252a3bf943a287\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Feb 13 15:17:58.147488 containerd[1939]: time="2025-02-13T15:17:58.147414372Z" level=info msg="CreateContainer within sandbox \"1738c2bdb8855c88f6e2d1384f503e808881a98518b964b5f4252a3bf943a287\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"9d11b2a03fd208c23f033a780a92a7872e88077b54ff54667ef71ff3f3eda496\""
Feb 13 15:17:58.149340 containerd[1939]: time="2025-02-13T15:17:58.148388851Z" level=info msg="StartContainer for \"9d11b2a03fd208c23f033a780a92a7872e88077b54ff54667ef71ff3f3eda496\""
Feb 13 15:17:58.205447 systemd[1]: Started cri-containerd-9d11b2a03fd208c23f033a780a92a7872e88077b54ff54667ef71ff3f3eda496.scope - libcontainer container 9d11b2a03fd208c23f033a780a92a7872e88077b54ff54667ef71ff3f3eda496.
Feb 13 15:17:58.263122 systemd[1]: cri-containerd-9d11b2a03fd208c23f033a780a92a7872e88077b54ff54667ef71ff3f3eda496.scope: Deactivated successfully.
Feb 13 15:17:58.266495 containerd[1939]: time="2025-02-13T15:17:58.266431148Z" level=info msg="StartContainer for \"9d11b2a03fd208c23f033a780a92a7872e88077b54ff54667ef71ff3f3eda496\" returns successfully"
Feb 13 15:17:58.341826 containerd[1939]: time="2025-02-13T15:17:58.341746659Z" level=info msg="shim disconnected" id=9d11b2a03fd208c23f033a780a92a7872e88077b54ff54667ef71ff3f3eda496 namespace=k8s.io
Feb 13 15:17:58.341826 containerd[1939]: time="2025-02-13T15:17:58.341824890Z" level=warning msg="cleaning up after shim disconnected" id=9d11b2a03fd208c23f033a780a92a7872e88077b54ff54667ef71ff3f3eda496 namespace=k8s.io
Feb 13 15:17:58.341826 containerd[1939]: time="2025-02-13T15:17:58.341845901Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:17:58.367225 containerd[1939]: time="2025-02-13T15:17:58.367054827Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:17:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:17:58.879704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d11b2a03fd208c23f033a780a92a7872e88077b54ff54667ef71ff3f3eda496-rootfs.mount: Deactivated successfully.
Feb 13 15:17:59.125919 containerd[1939]: time="2025-02-13T15:17:59.125778156Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 15:17:59.161308 kubelet[3376]: I0213 15:17:59.159293    3376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mdptf" podStartSLOduration=4.159233392 podStartE2EDuration="4.159233392s" podCreationTimestamp="2025-02-13 15:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:17:56.13684604 +0000 UTC m=+12.449458265" watchObservedRunningTime="2025-02-13 15:17:59.159233392 +0000 UTC m=+15.471845593"
Feb 13 15:18:01.266879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount46993743.mount: Deactivated successfully.
Feb 13 15:18:02.901878 containerd[1939]: time="2025-02-13T15:18:02.901717505Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:18:02.904510 containerd[1939]: time="2025-02-13T15:18:02.904436600Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874260"
Feb 13 15:18:02.907521 containerd[1939]: time="2025-02-13T15:18:02.907371466Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:18:02.925604 containerd[1939]: time="2025-02-13T15:18:02.925535656Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:18:02.929510 containerd[1939]: time="2025-02-13T15:18:02.929064952Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.803179942s"
Feb 13 15:18:02.929510 containerd[1939]: time="2025-02-13T15:18:02.929204089Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
Feb 13 15:18:02.934945 containerd[1939]: time="2025-02-13T15:18:02.933638145Z" level=info msg="CreateContainer within sandbox \"1738c2bdb8855c88f6e2d1384f503e808881a98518b964b5f4252a3bf943a287\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 15:18:02.962736 containerd[1939]: time="2025-02-13T15:18:02.962661210Z" level=info msg="CreateContainer within sandbox \"1738c2bdb8855c88f6e2d1384f503e808881a98518b964b5f4252a3bf943a287\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"98f5dfdb9a9bd073ebacde51c205557d64549ce366741db93992aa063fbd5909\""
Feb 13 15:18:02.964980 containerd[1939]: time="2025-02-13T15:18:02.964738296Z" level=info msg="StartContainer for \"98f5dfdb9a9bd073ebacde51c205557d64549ce366741db93992aa063fbd5909\""
Feb 13 15:18:03.020638 systemd[1]: Started cri-containerd-98f5dfdb9a9bd073ebacde51c205557d64549ce366741db93992aa063fbd5909.scope - libcontainer container 98f5dfdb9a9bd073ebacde51c205557d64549ce366741db93992aa063fbd5909.
Feb 13 15:18:03.067862 systemd[1]: cri-containerd-98f5dfdb9a9bd073ebacde51c205557d64549ce366741db93992aa063fbd5909.scope: Deactivated successfully.
Feb 13 15:18:03.072881 containerd[1939]: time="2025-02-13T15:18:03.072398104Z" level=info msg="StartContainer for \"98f5dfdb9a9bd073ebacde51c205557d64549ce366741db93992aa063fbd5909\" returns successfully"
Feb 13 15:18:03.113055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98f5dfdb9a9bd073ebacde51c205557d64549ce366741db93992aa063fbd5909-rootfs.mount: Deactivated successfully.
Feb 13 15:18:03.127336 kubelet[3376]: I0213 15:18:03.126994    3376 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 15:18:03.199454 kubelet[3376]: I0213 15:18:03.197106    3376 topology_manager.go:215] "Topology Admit Handler" podUID="a497b22b-1bf7-4622-8ef7-5b976f9bd1cf" podNamespace="kube-system" podName="coredns-76f75df574-jmt7q"
Feb 13 15:18:03.199454 kubelet[3376]: I0213 15:18:03.197362    3376 topology_manager.go:215] "Topology Admit Handler" podUID="495dbc89-dc63-424c-9419-f77a8bf63db6" podNamespace="kube-system" podName="coredns-76f75df574-s7m99"
Feb 13 15:18:03.226708 systemd[1]: Created slice kubepods-burstable-pod495dbc89_dc63_424c_9419_f77a8bf63db6.slice - libcontainer container kubepods-burstable-pod495dbc89_dc63_424c_9419_f77a8bf63db6.slice.
Feb 13 15:18:03.253634 kubelet[3376]: I0213 15:18:03.253577    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xngv\" (UniqueName: \"kubernetes.io/projected/495dbc89-dc63-424c-9419-f77a8bf63db6-kube-api-access-2xngv\") pod \"coredns-76f75df574-s7m99\" (UID: \"495dbc89-dc63-424c-9419-f77a8bf63db6\") " pod="kube-system/coredns-76f75df574-s7m99"
Feb 13 15:18:03.254066 kubelet[3376]: I0213 15:18:03.253981    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a497b22b-1bf7-4622-8ef7-5b976f9bd1cf-config-volume\") pod \"coredns-76f75df574-jmt7q\" (UID: \"a497b22b-1bf7-4622-8ef7-5b976f9bd1cf\") " pod="kube-system/coredns-76f75df574-jmt7q"
Feb 13 15:18:03.257723 kubelet[3376]: I0213 15:18:03.254235    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbss6\" (UniqueName: \"kubernetes.io/projected/a497b22b-1bf7-4622-8ef7-5b976f9bd1cf-kube-api-access-bbss6\") pod \"coredns-76f75df574-jmt7q\" (UID: \"a497b22b-1bf7-4622-8ef7-5b976f9bd1cf\") " pod="kube-system/coredns-76f75df574-jmt7q"
Feb 13 15:18:03.257723 kubelet[3376]: I0213 15:18:03.254463    3376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/495dbc89-dc63-424c-9419-f77a8bf63db6-config-volume\") pod \"coredns-76f75df574-s7m99\" (UID: \"495dbc89-dc63-424c-9419-f77a8bf63db6\") " pod="kube-system/coredns-76f75df574-s7m99"
Feb 13 15:18:03.257961 containerd[1939]: time="2025-02-13T15:18:03.257281657Z" level=info msg="shim disconnected" id=98f5dfdb9a9bd073ebacde51c205557d64549ce366741db93992aa063fbd5909 namespace=k8s.io
Feb 13 15:18:03.257961 containerd[1939]: time="2025-02-13T15:18:03.257356970Z" level=warning msg="cleaning up after shim disconnected" id=98f5dfdb9a9bd073ebacde51c205557d64549ce366741db93992aa063fbd5909 namespace=k8s.io
Feb 13 15:18:03.257961 containerd[1939]: time="2025-02-13T15:18:03.257375892Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:18:03.256033 systemd[1]: Created slice kubepods-burstable-poda497b22b_1bf7_4622_8ef7_5b976f9bd1cf.slice - libcontainer container kubepods-burstable-poda497b22b_1bf7_4622_8ef7_5b976f9bd1cf.slice.
Feb 13 15:18:03.542770 containerd[1939]: time="2025-02-13T15:18:03.542605629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s7m99,Uid:495dbc89-dc63-424c-9419-f77a8bf63db6,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:03.569213 containerd[1939]: time="2025-02-13T15:18:03.568903812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jmt7q,Uid:a497b22b-1bf7-4622-8ef7-5b976f9bd1cf,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:03.602852 containerd[1939]: time="2025-02-13T15:18:03.602691397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s7m99,Uid:495dbc89-dc63-424c-9419-f77a8bf63db6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"980847027effeeb69bbe549dcdb5709e36c3e4bdd75c3b57dfe0bf1c5fce49a9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:18:03.603865 kubelet[3376]: E0213 15:18:03.603814    3376 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"980847027effeeb69bbe549dcdb5709e36c3e4bdd75c3b57dfe0bf1c5fce49a9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:18:03.604006 kubelet[3376]: E0213 15:18:03.603900    3376 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"980847027effeeb69bbe549dcdb5709e36c3e4bdd75c3b57dfe0bf1c5fce49a9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-s7m99"
Feb 13 15:18:03.604006 kubelet[3376]: E0213 15:18:03.603939    3376 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"980847027effeeb69bbe549dcdb5709e36c3e4bdd75c3b57dfe0bf1c5fce49a9\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-s7m99"
Feb 13 15:18:03.604117 kubelet[3376]: E0213 15:18:03.604031    3376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-s7m99_kube-system(495dbc89-dc63-424c-9419-f77a8bf63db6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-s7m99_kube-system(495dbc89-dc63-424c-9419-f77a8bf63db6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"980847027effeeb69bbe549dcdb5709e36c3e4bdd75c3b57dfe0bf1c5fce49a9\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-s7m99" podUID="495dbc89-dc63-424c-9419-f77a8bf63db6"
Feb 13 15:18:03.619448 containerd[1939]: time="2025-02-13T15:18:03.619346745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jmt7q,Uid:a497b22b-1bf7-4622-8ef7-5b976f9bd1cf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc8e465fa26a29f6f6fc481e93d65542ab20d7cbfffb986a6ccf53ccb04a0177\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:18:03.619810 kubelet[3376]: E0213 15:18:03.619759    3376 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc8e465fa26a29f6f6fc481e93d65542ab20d7cbfffb986a6ccf53ccb04a0177\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 15:18:03.619927 kubelet[3376]: E0213 15:18:03.619842    3376 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc8e465fa26a29f6f6fc481e93d65542ab20d7cbfffb986a6ccf53ccb04a0177\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-jmt7q"
Feb 13 15:18:03.619927 kubelet[3376]: E0213 15:18:03.619887    3376 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc8e465fa26a29f6f6fc481e93d65542ab20d7cbfffb986a6ccf53ccb04a0177\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-jmt7q"
Feb 13 15:18:03.620041 kubelet[3376]: E0213 15:18:03.619964    3376 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-jmt7q_kube-system(a497b22b-1bf7-4622-8ef7-5b976f9bd1cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-jmt7q_kube-system(a497b22b-1bf7-4622-8ef7-5b976f9bd1cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc8e465fa26a29f6f6fc481e93d65542ab20d7cbfffb986a6ccf53ccb04a0177\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-jmt7q" podUID="a497b22b-1bf7-4622-8ef7-5b976f9bd1cf"
Feb 13 15:18:04.158719 containerd[1939]: time="2025-02-13T15:18:04.156954598Z" level=info msg="CreateContainer within sandbox \"1738c2bdb8855c88f6e2d1384f503e808881a98518b964b5f4252a3bf943a287\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Feb 13 15:18:04.194786 containerd[1939]: time="2025-02-13T15:18:04.194587021Z" level=info msg="CreateContainer within sandbox \"1738c2bdb8855c88f6e2d1384f503e808881a98518b964b5f4252a3bf943a287\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e2051bdb7771f523c4ca5814c2115ddbc4fe068ce2aae3fdd88ba032431109e9\""
Feb 13 15:18:04.195688 containerd[1939]: time="2025-02-13T15:18:04.195583868Z" level=info msg="StartContainer for \"e2051bdb7771f523c4ca5814c2115ddbc4fe068ce2aae3fdd88ba032431109e9\""
Feb 13 15:18:04.264490 systemd[1]: Started cri-containerd-e2051bdb7771f523c4ca5814c2115ddbc4fe068ce2aae3fdd88ba032431109e9.scope - libcontainer container e2051bdb7771f523c4ca5814c2115ddbc4fe068ce2aae3fdd88ba032431109e9.
Feb 13 15:18:04.336062 containerd[1939]: time="2025-02-13T15:18:04.335986307Z" level=info msg="StartContainer for \"e2051bdb7771f523c4ca5814c2115ddbc4fe068ce2aae3fdd88ba032431109e9\" returns successfully"
Feb 13 15:18:05.409692 (udev-worker)[3926]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:18:05.437802 systemd-networkd[1839]: flannel.1: Link UP
Feb 13 15:18:05.438496 systemd-networkd[1839]: flannel.1: Gained carrier
Feb 13 15:18:06.914307 systemd-networkd[1839]: flannel.1: Gained IPv6LL
Feb 13 15:18:09.406270 ntpd[1908]: Listen normally on 7 flannel.1 192.168.0.0:123
Feb 13 15:18:09.406427 ntpd[1908]: Listen normally on 8 flannel.1 [fe80::18d5:16ff:fe24:c7cb%4]:123
Feb 13 15:18:16.025235 containerd[1939]: time="2025-02-13T15:18:16.024723720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s7m99,Uid:495dbc89-dc63-424c-9419-f77a8bf63db6,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:16.062935 systemd-networkd[1839]: cni0: Link UP
Feb 13 15:18:16.062952 systemd-networkd[1839]: cni0: Gained carrier
Feb 13 15:18:16.074007 (udev-worker)[4064]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:18:16.074655 systemd-networkd[1839]: cni0: Lost carrier
Feb 13 15:18:16.078459 systemd-networkd[1839]: veth514b0cf5: Link UP
Feb 13 15:18:16.084596 kernel: cni0: port 1(veth514b0cf5) entered blocking state
Feb 13 15:18:16.084719 kernel: cni0: port 1(veth514b0cf5) entered disabled state
Feb 13 15:18:16.085845 kernel: veth514b0cf5: entered allmulticast mode
Feb 13 15:18:16.087158 kernel: veth514b0cf5: entered promiscuous mode
Feb 13 15:18:16.090222 kernel: cni0: port 1(veth514b0cf5) entered blocking state
Feb 13 15:18:16.090325 kernel: cni0: port 1(veth514b0cf5) entered forwarding state
Feb 13 15:18:16.092178 kernel: cni0: port 1(veth514b0cf5) entered disabled state
Feb 13 15:18:16.099318 (udev-worker)[4070]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:18:16.105750 kernel: cni0: port 1(veth514b0cf5) entered blocking state
Feb 13 15:18:16.105841 kernel: cni0: port 1(veth514b0cf5) entered forwarding state
Feb 13 15:18:16.105429 systemd-networkd[1839]: veth514b0cf5: Gained carrier
Feb 13 15:18:16.106042 systemd-networkd[1839]: cni0: Gained carrier
Feb 13 15:18:16.111250 containerd[1939]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014938), "name":"cbr0", "type":"bridge"}
Feb 13 15:18:16.111250 containerd[1939]: delegateAdd: netconf sent to delegate plugin:
Feb 13 15:18:16.152107 containerd[1939]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T15:18:16.151743275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:16.152107 containerd[1939]: time="2025-02-13T15:18:16.151855819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:16.152107 containerd[1939]: time="2025-02-13T15:18:16.151883313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:16.152462 containerd[1939]: time="2025-02-13T15:18:16.152044265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:16.196456 systemd[1]: Started cri-containerd-b90cd4c9208027fe6b759086117eddb90f63356d15d572a8a5b86c09daf838ce.scope - libcontainer container b90cd4c9208027fe6b759086117eddb90f63356d15d572a8a5b86c09daf838ce.
Feb 13 15:18:16.258487 containerd[1939]: time="2025-02-13T15:18:16.258396115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s7m99,Uid:495dbc89-dc63-424c-9419-f77a8bf63db6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b90cd4c9208027fe6b759086117eddb90f63356d15d572a8a5b86c09daf838ce\""
Feb 13 15:18:16.265410 containerd[1939]: time="2025-02-13T15:18:16.265068930Z" level=info msg="CreateContainer within sandbox \"b90cd4c9208027fe6b759086117eddb90f63356d15d572a8a5b86c09daf838ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:18:16.297102 containerd[1939]: time="2025-02-13T15:18:16.295802021Z" level=info msg="CreateContainer within sandbox \"b90cd4c9208027fe6b759086117eddb90f63356d15d572a8a5b86c09daf838ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f48c8b5f5a502f98c2edd252d861f64dc55a6a7961038db13974ccbf037f2a0\""
Feb 13 15:18:16.297601 containerd[1939]: time="2025-02-13T15:18:16.297272516Z" level=info msg="StartContainer for \"8f48c8b5f5a502f98c2edd252d861f64dc55a6a7961038db13974ccbf037f2a0\""
Feb 13 15:18:16.344460 systemd[1]: Started cri-containerd-8f48c8b5f5a502f98c2edd252d861f64dc55a6a7961038db13974ccbf037f2a0.scope - libcontainer container 8f48c8b5f5a502f98c2edd252d861f64dc55a6a7961038db13974ccbf037f2a0.
Feb 13 15:18:16.393726 containerd[1939]: time="2025-02-13T15:18:16.393634017Z" level=info msg="StartContainer for \"8f48c8b5f5a502f98c2edd252d861f64dc55a6a7961038db13974ccbf037f2a0\" returns successfully"
Feb 13 15:18:17.044924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2434743101.mount: Deactivated successfully.
Feb 13 15:18:17.205742 kubelet[3376]: I0213 15:18:17.204818    3376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-dx7pz" podStartSLOduration=15.148862824 podStartE2EDuration="22.204760079s" podCreationTimestamp="2025-02-13 15:17:55 +0000 UTC" firstStartedPulling="2025-02-13 15:17:55.873955843 +0000 UTC m=+12.186568056" lastFinishedPulling="2025-02-13 15:18:02.929853098 +0000 UTC m=+19.242465311" observedRunningTime="2025-02-13 15:18:05.177395307 +0000 UTC m=+21.490007544" watchObservedRunningTime="2025-02-13 15:18:17.204760079 +0000 UTC m=+33.517372293"
Feb 13 15:18:17.227579 kubelet[3376]: I0213 15:18:17.226395    3376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-s7m99" podStartSLOduration=22.226335546 podStartE2EDuration="22.226335546s" podCreationTimestamp="2025-02-13 15:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:17.207513608 +0000 UTC m=+33.520125821" watchObservedRunningTime="2025-02-13 15:18:17.226335546 +0000 UTC m=+33.538947771"
Feb 13 15:18:17.409469 systemd-networkd[1839]: veth514b0cf5: Gained IPv6LL
Feb 13 15:18:17.729449 systemd-networkd[1839]: cni0: Gained IPv6LL
Feb 13 15:18:18.026157 containerd[1939]: time="2025-02-13T15:18:18.024898626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jmt7q,Uid:a497b22b-1bf7-4622-8ef7-5b976f9bd1cf,Namespace:kube-system,Attempt:0,}"
Feb 13 15:18:18.070931 (udev-worker)[4080]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:18:18.078048 kernel: cni0: port 2(veth6abfe8a1) entered blocking state
Feb 13 15:18:18.078299 kernel: cni0: port 2(veth6abfe8a1) entered disabled state
Feb 13 15:18:18.076296 systemd-networkd[1839]: veth6abfe8a1: Link UP
Feb 13 15:18:18.080195 kernel: veth6abfe8a1: entered allmulticast mode
Feb 13 15:18:18.083304 kernel: veth6abfe8a1: entered promiscuous mode
Feb 13 15:18:18.097509 kernel: cni0: port 2(veth6abfe8a1) entered blocking state
Feb 13 15:18:18.097665 kernel: cni0: port 2(veth6abfe8a1) entered forwarding state
Feb 13 15:18:18.098054 systemd-networkd[1839]: veth6abfe8a1: Gained carrier
Feb 13 15:18:18.101082 containerd[1939]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x400001a938), "name":"cbr0", "type":"bridge"}
Feb 13 15:18:18.101082 containerd[1939]: delegateAdd: netconf sent to delegate plugin:
Feb 13 15:18:18.136111 containerd[1939]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}
Feb 13 15:18:18.136111 containerd[1939]: time="2025-02-13T15:18:18.135611679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:18:18.136111 containerd[1939]: time="2025-02-13T15:18:18.135719217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:18:18.136111 containerd[1939]: time="2025-02-13T15:18:18.135757384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:18.136111 containerd[1939]: time="2025-02-13T15:18:18.135938158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:18:18.183786 systemd[1]: Started cri-containerd-b95be9c7c46f66a0e1a848803bd77e91fc43090d48501b32d99bc5cecc3e0573.scope - libcontainer container b95be9c7c46f66a0e1a848803bd77e91fc43090d48501b32d99bc5cecc3e0573.
Feb 13 15:18:18.251212 containerd[1939]: time="2025-02-13T15:18:18.251033774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jmt7q,Uid:a497b22b-1bf7-4622-8ef7-5b976f9bd1cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"b95be9c7c46f66a0e1a848803bd77e91fc43090d48501b32d99bc5cecc3e0573\""
Feb 13 15:18:18.257725 containerd[1939]: time="2025-02-13T15:18:18.257641036Z" level=info msg="CreateContainer within sandbox \"b95be9c7c46f66a0e1a848803bd77e91fc43090d48501b32d99bc5cecc3e0573\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:18:18.287008 containerd[1939]: time="2025-02-13T15:18:18.286754218Z" level=info msg="CreateContainer within sandbox \"b95be9c7c46f66a0e1a848803bd77e91fc43090d48501b32d99bc5cecc3e0573\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56cfdcc1fcf201fb65f615a0e1c92fcb33a6535cf56f2def8a04c0248184fa5c\""
Feb 13 15:18:18.289929 containerd[1939]: time="2025-02-13T15:18:18.288349215Z" level=info msg="StartContainer for \"56cfdcc1fcf201fb65f615a0e1c92fcb33a6535cf56f2def8a04c0248184fa5c\""
Feb 13 15:18:18.332483 systemd[1]: Started cri-containerd-56cfdcc1fcf201fb65f615a0e1c92fcb33a6535cf56f2def8a04c0248184fa5c.scope - libcontainer container 56cfdcc1fcf201fb65f615a0e1c92fcb33a6535cf56f2def8a04c0248184fa5c.
Feb 13 15:18:18.384524 containerd[1939]: time="2025-02-13T15:18:18.384436367Z" level=info msg="StartContainer for \"56cfdcc1fcf201fb65f615a0e1c92fcb33a6535cf56f2def8a04c0248184fa5c\" returns successfully"
Feb 13 15:18:19.217475 kubelet[3376]: I0213 15:18:19.216406    3376 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-jmt7q" podStartSLOduration=24.21634951 podStartE2EDuration="24.21634951s" podCreationTimestamp="2025-02-13 15:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:18:19.215653438 +0000 UTC m=+35.528265651" watchObservedRunningTime="2025-02-13 15:18:19.21634951 +0000 UTC m=+35.528961711"
Feb 13 15:18:19.457389 systemd-networkd[1839]: veth6abfe8a1: Gained IPv6LL
Feb 13 15:18:22.406266 ntpd[1908]: Listen normally on 9 cni0 192.168.0.1:123
Feb 13 15:18:22.406417 ntpd[1908]: Listen normally on 10 cni0 [fe80::5814:ccff:fe0e:42b8%5]:123
Feb 13 15:18:22.406498 ntpd[1908]: Listen normally on 11 veth514b0cf5 [fe80::20fd:e4ff:fe65:6c63%6]:123
Feb 13 15:18:22.406572 ntpd[1908]: Listen normally on 12 veth6abfe8a1 [fe80::b077:71ff:feea:4883%7]:123
Feb 13 15:18:28.969714 systemd[1]: Started sshd@5-172.31.23.200:22-139.178.68.195:45934.service - OpenSSH per-connection server daemon (139.178.68.195:45934).
Feb 13 15:18:29.166861 sshd[4321]: Accepted publickey for core from 139.178.68.195 port 45934 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:18:29.169443 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:29.178103 systemd-logind[1918]: New session 6 of user core.
Feb 13 15:18:29.185431 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:18:29.450439 sshd[4323]: Connection closed by 139.178.68.195 port 45934
Feb 13 15:18:29.451448 sshd-session[4321]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:29.458775 systemd[1]: sshd@5-172.31.23.200:22-139.178.68.195:45934.service: Deactivated successfully.
Feb 13 15:18:29.463792 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:18:29.469459 systemd-logind[1918]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:18:29.472007 systemd-logind[1918]: Removed session 6.
Feb 13 15:18:34.495753 systemd[1]: Started sshd@6-172.31.23.200:22-139.178.68.195:45946.service - OpenSSH per-connection server daemon (139.178.68.195:45946).
Feb 13 15:18:34.686841 sshd[4356]: Accepted publickey for core from 139.178.68.195 port 45946 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:18:34.689831 sshd-session[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:34.700878 systemd-logind[1918]: New session 7 of user core.
Feb 13 15:18:34.710736 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:18:34.979222 sshd[4358]: Connection closed by 139.178.68.195 port 45946
Feb 13 15:18:34.980254 sshd-session[4356]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:34.988714 systemd[1]: sshd@6-172.31.23.200:22-139.178.68.195:45946.service: Deactivated successfully.
Feb 13 15:18:34.993972 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:18:34.996727 systemd-logind[1918]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:18:34.999509 systemd-logind[1918]: Removed session 7.
Feb 13 15:18:40.021694 systemd[1]: Started sshd@7-172.31.23.200:22-139.178.68.195:53306.service - OpenSSH per-connection server daemon (139.178.68.195:53306).
Feb 13 15:18:40.220052 sshd[4392]: Accepted publickey for core from 139.178.68.195 port 53306 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:18:40.223430 sshd-session[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:40.231782 systemd-logind[1918]: New session 8 of user core.
Feb 13 15:18:40.242486 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:18:40.489305 sshd[4394]: Connection closed by 139.178.68.195 port 53306
Feb 13 15:18:40.490293 sshd-session[4392]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:40.495263 systemd-logind[1918]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:18:40.496011 systemd[1]: sshd@7-172.31.23.200:22-139.178.68.195:53306.service: Deactivated successfully.
Feb 13 15:18:40.500890 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:18:40.505004 systemd-logind[1918]: Removed session 8.
Feb 13 15:18:40.530724 systemd[1]: Started sshd@8-172.31.23.200:22-139.178.68.195:53318.service - OpenSSH per-connection server daemon (139.178.68.195:53318).
Feb 13 15:18:40.717242 sshd[4406]: Accepted publickey for core from 139.178.68.195 port 53318 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:18:40.720480 sshd-session[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:40.728883 systemd-logind[1918]: New session 9 of user core.
Feb 13 15:18:40.735422 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:18:41.064013 sshd[4414]: Connection closed by 139.178.68.195 port 53318
Feb 13 15:18:41.066159 sshd-session[4406]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:41.076925 systemd[1]: sshd@8-172.31.23.200:22-139.178.68.195:53318.service: Deactivated successfully.
Feb 13 15:18:41.089066 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:18:41.093923 systemd-logind[1918]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:18:41.116954 systemd[1]: Started sshd@9-172.31.23.200:22-139.178.68.195:53334.service - OpenSSH per-connection server daemon (139.178.68.195:53334).
Feb 13 15:18:41.119657 systemd-logind[1918]: Removed session 9.
Feb 13 15:18:41.312657 sshd[4438]: Accepted publickey for core from 139.178.68.195 port 53334 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:18:41.315182 sshd-session[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:41.324545 systemd-logind[1918]: New session 10 of user core.
Feb 13 15:18:41.332440 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:18:41.588495 sshd[4440]: Connection closed by 139.178.68.195 port 53334
Feb 13 15:18:41.589526 sshd-session[4438]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:41.595348 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:18:41.599498 systemd[1]: sshd@9-172.31.23.200:22-139.178.68.195:53334.service: Deactivated successfully.
Feb 13 15:18:41.605850 systemd-logind[1918]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:18:41.607547 systemd-logind[1918]: Removed session 10.
Feb 13 15:18:46.633840 systemd[1]: Started sshd@10-172.31.23.200:22-139.178.68.195:49732.service - OpenSSH per-connection server daemon (139.178.68.195:49732).
Feb 13 15:18:46.834859 sshd[4475]: Accepted publickey for core from 139.178.68.195 port 49732 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:18:46.838928 sshd-session[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:46.850407 systemd-logind[1918]: New session 11 of user core.
Feb 13 15:18:46.861230 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 15:18:47.125830 sshd[4477]: Connection closed by 139.178.68.195 port 49732
Feb 13 15:18:47.126997 sshd-session[4475]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:47.133266 systemd[1]: sshd@10-172.31.23.200:22-139.178.68.195:49732.service: Deactivated successfully.
Feb 13 15:18:47.137681 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 15:18:47.139806 systemd-logind[1918]: Session 11 logged out. Waiting for processes to exit.
Feb 13 15:18:47.141982 systemd-logind[1918]: Removed session 11.
Feb 13 15:18:47.164675 systemd[1]: Started sshd@11-172.31.23.200:22-139.178.68.195:49740.service - OpenSSH per-connection server daemon (139.178.68.195:49740).
Feb 13 15:18:47.355600 sshd[4488]: Accepted publickey for core from 139.178.68.195 port 49740 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:18:47.358396 sshd-session[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:47.367038 systemd-logind[1918]: New session 12 of user core.
Feb 13 15:18:47.377447 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 15:18:47.687088 sshd[4490]: Connection closed by 139.178.68.195 port 49740
Feb 13 15:18:47.685336 sshd-session[4488]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:47.691095 systemd[1]: sshd@11-172.31.23.200:22-139.178.68.195:49740.service: Deactivated successfully.
Feb 13 15:18:47.691667 systemd-logind[1918]: Session 12 logged out. Waiting for processes to exit.
Feb 13 15:18:47.695704 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 15:18:47.701036 systemd-logind[1918]: Removed session 12.
Feb 13 15:18:47.723097 systemd[1]: Started sshd@12-172.31.23.200:22-139.178.68.195:49748.service - OpenSSH per-connection server daemon (139.178.68.195:49748).
Feb 13 15:18:47.913869 sshd[4499]: Accepted publickey for core from 139.178.68.195 port 49748 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:18:47.916544 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:47.927486 systemd-logind[1918]: New session 13 of user core.
Feb 13 15:18:47.934435 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 15:18:50.234414 sshd[4501]: Connection closed by 139.178.68.195 port 49748
Feb 13 15:18:50.235265 sshd-session[4499]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:50.247020 systemd[1]: sshd@12-172.31.23.200:22-139.178.68.195:49748.service: Deactivated successfully.
Feb 13 15:18:50.257659 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 15:18:50.269515 systemd-logind[1918]: Session 13 logged out. Waiting for processes to exit.
Feb 13 15:18:50.291772 systemd[1]: Started sshd@13-172.31.23.200:22-139.178.68.195:49754.service - OpenSSH per-connection server daemon (139.178.68.195:49754).
Feb 13 15:18:50.294158 systemd-logind[1918]: Removed session 13.
Feb 13 15:18:50.475189 sshd[4518]: Accepted publickey for core from 139.178.68.195 port 49754 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:18:50.478054 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:50.487289 systemd-logind[1918]: New session 14 of user core.
Feb 13 15:18:50.496528 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 15:18:50.996300 sshd[4520]: Connection closed by 139.178.68.195 port 49754
Feb 13 15:18:50.996872 sshd-session[4518]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:51.008092 systemd-logind[1918]: Session 14 logged out. Waiting for processes to exit.
Feb 13 15:18:51.008584 systemd[1]: sshd@13-172.31.23.200:22-139.178.68.195:49754.service: Deactivated successfully.
Feb 13 15:18:51.012561 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 15:18:51.017661 systemd-logind[1918]: Removed session 14.
Feb 13 15:18:51.037776 systemd[1]: Started sshd@14-172.31.23.200:22-139.178.68.195:49762.service - OpenSSH per-connection server daemon (139.178.68.195:49762).
Feb 13 15:18:51.230237 sshd[4550]: Accepted publickey for core from 139.178.68.195 port 49762 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:18:51.232681 sshd-session[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:51.240532 systemd-logind[1918]: New session 15 of user core.
Feb 13 15:18:51.247485 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 15:18:51.492563 sshd[4552]: Connection closed by 139.178.68.195 port 49762
Feb 13 15:18:51.492438 sshd-session[4550]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:51.498249 systemd[1]: sshd@14-172.31.23.200:22-139.178.68.195:49762.service: Deactivated successfully.
Feb 13 15:18:51.502923 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 15:18:51.506710 systemd-logind[1918]: Session 15 logged out. Waiting for processes to exit.
Feb 13 15:18:51.508984 systemd-logind[1918]: Removed session 15.
Feb 13 15:18:56.537657 systemd[1]: Started sshd@15-172.31.23.200:22-139.178.68.195:36154.service - OpenSSH per-connection server daemon (139.178.68.195:36154).
Feb 13 15:18:56.732018 sshd[4586]: Accepted publickey for core from 139.178.68.195 port 36154 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:18:56.734698 sshd-session[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:18:56.744407 systemd-logind[1918]: New session 16 of user core.
Feb 13 15:18:56.751480 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 15:18:56.998865 sshd[4588]: Connection closed by 139.178.68.195 port 36154
Feb 13 15:18:56.999788 sshd-session[4586]: pam_unix(sshd:session): session closed for user core
Feb 13 15:18:57.006767 systemd[1]: sshd@15-172.31.23.200:22-139.178.68.195:36154.service: Deactivated successfully.
Feb 13 15:18:57.013114 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 15:18:57.015314 systemd-logind[1918]: Session 16 logged out. Waiting for processes to exit.
Feb 13 15:18:57.017503 systemd-logind[1918]: Removed session 16.
Feb 13 15:19:02.041646 systemd[1]: Started sshd@16-172.31.23.200:22-139.178.68.195:36170.service - OpenSSH per-connection server daemon (139.178.68.195:36170).
Feb 13 15:19:02.227768 sshd[4624]: Accepted publickey for core from 139.178.68.195 port 36170 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:02.230497 sshd-session[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:02.238620 systemd-logind[1918]: New session 17 of user core.
Feb 13 15:19:02.252422 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:19:02.494476 sshd[4626]: Connection closed by 139.178.68.195 port 36170
Feb 13 15:19:02.495491 sshd-session[4624]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:02.503102 systemd[1]: sshd@16-172.31.23.200:22-139.178.68.195:36170.service: Deactivated successfully.
Feb 13 15:19:02.507571 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:19:02.509654 systemd-logind[1918]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:19:02.512468 systemd-logind[1918]: Removed session 17.
Feb 13 15:19:07.535759 systemd[1]: Started sshd@17-172.31.23.200:22-139.178.68.195:58336.service - OpenSSH per-connection server daemon (139.178.68.195:58336).
Feb 13 15:19:07.728179 sshd[4660]: Accepted publickey for core from 139.178.68.195 port 58336 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:07.731088 sshd-session[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:07.741055 systemd-logind[1918]: New session 18 of user core.
Feb 13 15:19:07.747652 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:19:07.996889 sshd[4662]: Connection closed by 139.178.68.195 port 58336
Feb 13 15:19:07.997471 sshd-session[4660]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:08.004663 systemd[1]: sshd@17-172.31.23.200:22-139.178.68.195:58336.service: Deactivated successfully.
Feb 13 15:19:08.008557 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:19:08.011913 systemd-logind[1918]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:19:08.014947 systemd-logind[1918]: Removed session 18.
Feb 13 15:19:13.046611 systemd[1]: Started sshd@18-172.31.23.200:22-139.178.68.195:58342.service - OpenSSH per-connection server daemon (139.178.68.195:58342).
Feb 13 15:19:13.226861 sshd[4694]: Accepted publickey for core from 139.178.68.195 port 58342 ssh2: RSA SHA256:ygX9pQgvMQ+9oqA0nZMZFpcKgy7v6tDhD4NsfWOkE5o
Feb 13 15:19:13.229531 sshd-session[4694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:19:13.237408 systemd-logind[1918]: New session 19 of user core.
Feb 13 15:19:13.246410 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:19:13.492306 sshd[4696]: Connection closed by 139.178.68.195 port 58342
Feb 13 15:19:13.493385 sshd-session[4694]: pam_unix(sshd:session): session closed for user core
Feb 13 15:19:13.500998 systemd[1]: sshd@18-172.31.23.200:22-139.178.68.195:58342.service: Deactivated successfully.
Feb 13 15:19:13.505115 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:19:13.508195 systemd-logind[1918]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:19:13.510319 systemd-logind[1918]: Removed session 19.
Feb 13 15:19:27.912611 systemd[1]: cri-containerd-b9de39691ff23da8a2c3a55b5cc59280a655e7fb2ca268223914174a183eba0d.scope: Deactivated successfully.
Feb 13 15:19:27.913105 systemd[1]: cri-containerd-b9de39691ff23da8a2c3a55b5cc59280a655e7fb2ca268223914174a183eba0d.scope: Consumed 3.454s CPU time, 22.3M memory peak, 0B memory swap peak.
Feb 13 15:19:27.959624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9de39691ff23da8a2c3a55b5cc59280a655e7fb2ca268223914174a183eba0d-rootfs.mount: Deactivated successfully.
Feb 13 15:19:27.968918 containerd[1939]: time="2025-02-13T15:19:27.968775592Z" level=info msg="shim disconnected" id=b9de39691ff23da8a2c3a55b5cc59280a655e7fb2ca268223914174a183eba0d namespace=k8s.io
Feb 13 15:19:27.968918 containerd[1939]: time="2025-02-13T15:19:27.968893108Z" level=warning msg="cleaning up after shim disconnected" id=b9de39691ff23da8a2c3a55b5cc59280a655e7fb2ca268223914174a183eba0d namespace=k8s.io
Feb 13 15:19:27.968918 containerd[1939]: time="2025-02-13T15:19:27.968915428Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:19:28.378699 kubelet[3376]: I0213 15:19:28.378525    3376 scope.go:117] "RemoveContainer" containerID="b9de39691ff23da8a2c3a55b5cc59280a655e7fb2ca268223914174a183eba0d"
Feb 13 15:19:28.384473 containerd[1939]: time="2025-02-13T15:19:28.384239306Z" level=info msg="CreateContainer within sandbox \"0b956dcd9701d1bfa2a8d97733f64b5ac6d93e3441fa6e80bff7e0b289ea76ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 15:19:28.415223 containerd[1939]: time="2025-02-13T15:19:28.415079523Z" level=info msg="CreateContainer within sandbox \"0b956dcd9701d1bfa2a8d97733f64b5ac6d93e3441fa6e80bff7e0b289ea76ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c968d0bef5111e705e51a32c5603d5fe1db7cac3589eb3a2e2467d69d5ef3486\""
Feb 13 15:19:28.416714 containerd[1939]: time="2025-02-13T15:19:28.416177463Z" level=info msg="StartContainer for \"c968d0bef5111e705e51a32c5603d5fe1db7cac3589eb3a2e2467d69d5ef3486\""
Feb 13 15:19:28.478666 systemd[1]: Started cri-containerd-c968d0bef5111e705e51a32c5603d5fe1db7cac3589eb3a2e2467d69d5ef3486.scope - libcontainer container c968d0bef5111e705e51a32c5603d5fe1db7cac3589eb3a2e2467d69d5ef3486.
Feb 13 15:19:28.560087 containerd[1939]: time="2025-02-13T15:19:28.559836591Z" level=info msg="StartContainer for \"c968d0bef5111e705e51a32c5603d5fe1db7cac3589eb3a2e2467d69d5ef3486\" returns successfully"
Feb 13 15:19:32.763198 systemd[1]: cri-containerd-75982614f5012cd8de048f26fa06f872ddbb94fd4f559166cdbe59e4cce3ff53.scope: Deactivated successfully.
Feb 13 15:19:32.764390 systemd[1]: cri-containerd-75982614f5012cd8de048f26fa06f872ddbb94fd4f559166cdbe59e4cce3ff53.scope: Consumed 1.898s CPU time, 13.4M memory peak, 0B memory swap peak.
Feb 13 15:19:32.813660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75982614f5012cd8de048f26fa06f872ddbb94fd4f559166cdbe59e4cce3ff53-rootfs.mount: Deactivated successfully.
Feb 13 15:19:32.827169 containerd[1939]: time="2025-02-13T15:19:32.827024781Z" level=info msg="shim disconnected" id=75982614f5012cd8de048f26fa06f872ddbb94fd4f559166cdbe59e4cce3ff53 namespace=k8s.io
Feb 13 15:19:32.827169 containerd[1939]: time="2025-02-13T15:19:32.827135265Z" level=warning msg="cleaning up after shim disconnected" id=75982614f5012cd8de048f26fa06f872ddbb94fd4f559166cdbe59e4cce3ff53 namespace=k8s.io
Feb 13 15:19:32.828301 containerd[1939]: time="2025-02-13T15:19:32.827188917Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:19:33.400128 kubelet[3376]: I0213 15:19:33.400059    3376 scope.go:117] "RemoveContainer" containerID="75982614f5012cd8de048f26fa06f872ddbb94fd4f559166cdbe59e4cce3ff53"
Feb 13 15:19:33.404464 containerd[1939]: time="2025-02-13T15:19:33.404288803Z" level=info msg="CreateContainer within sandbox \"299e6b625d046bfd25583d30498cefa323bc786446d798d484d1ca4b7e183506\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 15:19:33.441670 containerd[1939]: time="2025-02-13T15:19:33.441588428Z" level=info msg="CreateContainer within sandbox \"299e6b625d046bfd25583d30498cefa323bc786446d798d484d1ca4b7e183506\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8d9e2a5b8468d8fb57a64ef88f0f136af4fdb8465660922968f33d8e8c3246e8\""
Feb 13 15:19:33.442823 containerd[1939]: time="2025-02-13T15:19:33.442764044Z" level=info msg="StartContainer for \"8d9e2a5b8468d8fb57a64ef88f0f136af4fdb8465660922968f33d8e8c3246e8\""
Feb 13 15:19:33.500504 systemd[1]: Started cri-containerd-8d9e2a5b8468d8fb57a64ef88f0f136af4fdb8465660922968f33d8e8c3246e8.scope - libcontainer container 8d9e2a5b8468d8fb57a64ef88f0f136af4fdb8465660922968f33d8e8c3246e8.
Feb 13 15:19:33.573382 containerd[1939]: time="2025-02-13T15:19:33.573312752Z" level=info msg="StartContainer for \"8d9e2a5b8468d8fb57a64ef88f0f136af4fdb8465660922968f33d8e8c3246e8\" returns successfully"
Feb 13 15:19:35.838592 kubelet[3376]: E0213 15:19:35.838200    3376 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-200?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 15:19:45.839505 kubelet[3376]: E0213 15:19:45.839324    3376 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-200?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"