Feb 13 16:05:37.330906 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 16:05:37.330979 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:34:20 -00 2025
Feb 13 16:05:37.331009 kernel: KASLR disabled due to lack of seed
Feb 13 16:05:37.331028 kernel: efi: EFI v2.7 by EDK II
Feb 13 16:05:37.331046 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18 
Feb 13 16:05:37.331063 kernel: ACPI: Early table checksum verification disabled
Feb 13 16:05:37.331082 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 16:05:37.331099 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001      01000013)
Feb 13 16:05:37.331116 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 16:05:37.331133 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 16:05:37.331157 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 16:05:37.331174 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 16:05:37.331193 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 16:05:37.331211 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 16:05:37.331231 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 16:05:37.331255 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 16:05:37.331274 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 16:05:37.331291 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 16:05:37.331309 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 16:05:37.331326 kernel: printk: bootconsole [uart0] enabled
Feb 13 16:05:37.331343 kernel: NUMA: Failed to initialise from firmware
Feb 13 16:05:37.331362 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 16:05:37.331379 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 16:05:37.331397 kernel: Zone ranges:
Feb 13 16:05:37.331414 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 16:05:37.331432 kernel:   DMA32    empty
Feb 13 16:05:37.331455 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 16:05:37.331475 kernel: Movable zone start for each node
Feb 13 16:05:37.331493 kernel: Early memory node ranges
Feb 13 16:05:37.331512 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 16:05:37.331529 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 16:05:37.331546 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 16:05:37.331564 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 16:05:37.331581 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 16:05:37.331600 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 16:05:37.332729 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 16:05:37.332800 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 16:05:37.332822 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 16:05:37.332885 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 16:05:37.332914 kernel: psci: probing for conduit method from ACPI.
Feb 13 16:05:37.332946 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 16:05:37.332969 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 16:05:37.332989 kernel: psci: Trusted OS migration not required
Feb 13 16:05:37.333019 kernel: psci: SMC Calling Convention v1.1
Feb 13 16:05:37.333042 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 16:05:37.333063 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 16:05:37.333085 kernel: pcpu-alloc: [0] 0 [0] 1 
Feb 13 16:05:37.333105 kernel: Detected PIPT I-cache on CPU0
Feb 13 16:05:37.333126 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 16:05:37.333147 kernel: CPU features: detected: Spectre-v2
Feb 13 16:05:37.333167 kernel: CPU features: detected: Spectre-v3a
Feb 13 16:05:37.333187 kernel: CPU features: detected: Spectre-BHB
Feb 13 16:05:37.333206 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 16:05:37.333225 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 16:05:37.333256 kernel: alternatives: applying boot alternatives
Feb 13 16:05:37.333280 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=55866785c450f887021047c4ba00d104a5882975060a5fc692d64491b0d81886
Feb 13 16:05:37.333303 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 16:05:37.333324 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 16:05:37.333343 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 16:05:37.333360 kernel: Fallback order for Node 0: 0 
Feb 13 16:05:37.333378 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Feb 13 16:05:37.333396 kernel: Policy zone: Normal
Feb 13 16:05:37.333413 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 16:05:37.333431 kernel: software IO TLB: area num 2.
Feb 13 16:05:37.333449 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 16:05:37.333478 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Feb 13 16:05:37.333497 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 16:05:37.333514 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 16:05:37.333533 kernel: rcu:         RCU event tracing is enabled.
Feb 13 16:05:37.333552 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 16:05:37.333570 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 16:05:37.333588 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 16:05:37.333606 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 16:05:37.333667 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 16:05:37.333687 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 16:05:37.333705 kernel: GICv3: 96 SPIs implemented
Feb 13 16:05:37.333730 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 16:05:37.333749 kernel: Root IRQ handler: gic_handle_irq
Feb 13 16:05:37.333767 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 16:05:37.333785 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 16:05:37.333802 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 16:05:37.333821 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 16:05:37.333839 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 16:05:37.333857 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 16:05:37.333875 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 16:05:37.333893 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 16:05:37.333911 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 16:05:37.333929 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 16:05:37.333953 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 16:05:37.333972 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 16:05:37.333990 kernel: Console: colour dummy device 80x25
Feb 13 16:05:37.334009 kernel: printk: console [tty1] enabled
Feb 13 16:05:37.334027 kernel: ACPI: Core revision 20230628
Feb 13 16:05:37.334046 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 16:05:37.334064 kernel: pid_max: default: 32768 minimum: 301
Feb 13 16:05:37.334082 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 16:05:37.334100 kernel: landlock: Up and running.
Feb 13 16:05:37.334124 kernel: SELinux:  Initializing.
Feb 13 16:05:37.334143 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 16:05:37.334161 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 16:05:37.334179 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:05:37.334198 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:05:37.334216 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 16:05:37.334236 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 16:05:37.334254 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 16:05:37.334272 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 16:05:37.334296 kernel: Remapping and enabling EFI services.
Feb 13 16:05:37.334315 kernel: smp: Bringing up secondary CPUs ...
Feb 13 16:05:37.334332 kernel: Detected PIPT I-cache on CPU1
Feb 13 16:05:37.334350 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 16:05:37.334369 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 16:05:37.334387 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 16:05:37.334405 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 16:05:37.334423 kernel: SMP: Total of 2 processors activated.
Feb 13 16:05:37.334441 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 16:05:37.334465 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 16:05:37.334484 kernel: CPU features: detected: CRC32 instructions
Feb 13 16:05:37.334502 kernel: CPU: All CPU(s) started at EL1
Feb 13 16:05:37.334533 kernel: alternatives: applying system-wide alternatives
Feb 13 16:05:37.334557 kernel: devtmpfs: initialized
Feb 13 16:05:37.334578 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 16:05:37.334598 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 16:05:37.340794 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 16:05:37.340884 kernel: SMBIOS 3.0.0 present.
Feb 13 16:05:37.340914 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 16:05:37.340955 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 16:05:37.340979 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 16:05:37.340999 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 16:05:37.341019 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 16:05:37.341039 kernel: audit: initializing netlink subsys (disabled)
Feb 13 16:05:37.341059 kernel: audit: type=2000 audit(0.322:1): state=initialized audit_enabled=0 res=1
Feb 13 16:05:37.341080 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 16:05:37.341107 kernel: cpuidle: using governor menu
Feb 13 16:05:37.341127 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 16:05:37.341147 kernel: ASID allocator initialised with 65536 entries
Feb 13 16:05:37.341167 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 16:05:37.341187 kernel: Serial: AMBA PL011 UART driver
Feb 13 16:05:37.341206 kernel: Modules: 17520 pages in range for non-PLT usage
Feb 13 16:05:37.341225 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 16:05:37.341245 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 16:05:37.341264 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 16:05:37.341290 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 16:05:37.341310 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 16:05:37.341329 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 16:05:37.341348 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 16:05:37.341368 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 16:05:37.341387 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 16:05:37.341406 kernel: ACPI: Added _OSI(Module Device)
Feb 13 16:05:37.341425 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 16:05:37.341444 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 16:05:37.341470 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 16:05:37.341490 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 16:05:37.341509 kernel: ACPI: Interpreter enabled
Feb 13 16:05:37.341528 kernel: ACPI: Using GIC for interrupt routing
Feb 13 16:05:37.341548 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 16:05:37.341567 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 16:05:37.341967 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 16:05:37.342202 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 16:05:37.342465 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 16:05:37.342887 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 16:05:37.343229 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 16:05:37.343274 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io  0x0000-0xffff window]
Feb 13 16:05:37.343346 kernel: acpiphp: Slot [1] registered
Feb 13 16:05:37.343391 kernel: acpiphp: Slot [2] registered
Feb 13 16:05:37.343413 kernel: acpiphp: Slot [3] registered
Feb 13 16:05:37.343434 kernel: acpiphp: Slot [4] registered
Feb 13 16:05:37.343476 kernel: acpiphp: Slot [5] registered
Feb 13 16:05:37.343499 kernel: acpiphp: Slot [6] registered
Feb 13 16:05:37.343520 kernel: acpiphp: Slot [7] registered
Feb 13 16:05:37.343540 kernel: acpiphp: Slot [8] registered
Feb 13 16:05:37.343561 kernel: acpiphp: Slot [9] registered
Feb 13 16:05:37.343583 kernel: acpiphp: Slot [10] registered
Feb 13 16:05:37.343604 kernel: acpiphp: Slot [11] registered
Feb 13 16:05:37.345810 kernel: acpiphp: Slot [12] registered
Feb 13 16:05:37.345863 kernel: acpiphp: Slot [13] registered
Feb 13 16:05:37.345911 kernel: acpiphp: Slot [14] registered
Feb 13 16:05:37.345931 kernel: acpiphp: Slot [15] registered
Feb 13 16:05:37.345950 kernel: acpiphp: Slot [16] registered
Feb 13 16:05:37.345969 kernel: acpiphp: Slot [17] registered
Feb 13 16:05:37.345990 kernel: acpiphp: Slot [18] registered
Feb 13 16:05:37.346009 kernel: acpiphp: Slot [19] registered
Feb 13 16:05:37.346029 kernel: acpiphp: Slot [20] registered
Feb 13 16:05:37.346048 kernel: acpiphp: Slot [21] registered
Feb 13 16:05:37.346087 kernel: acpiphp: Slot [22] registered
Feb 13 16:05:37.346110 kernel: acpiphp: Slot [23] registered
Feb 13 16:05:37.346137 kernel: acpiphp: Slot [24] registered
Feb 13 16:05:37.346158 kernel: acpiphp: Slot [25] registered
Feb 13 16:05:37.346177 kernel: acpiphp: Slot [26] registered
Feb 13 16:05:37.346197 kernel: acpiphp: Slot [27] registered
Feb 13 16:05:37.346218 kernel: acpiphp: Slot [28] registered
Feb 13 16:05:37.346237 kernel: acpiphp: Slot [29] registered
Feb 13 16:05:37.346259 kernel: acpiphp: Slot [30] registered
Feb 13 16:05:37.346279 kernel: acpiphp: Slot [31] registered
Feb 13 16:05:37.346300 kernel: PCI host bridge to bus 0000:00
Feb 13 16:05:37.346798 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 16:05:37.347118 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Feb 13 16:05:37.347354 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 16:05:37.347559 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 16:05:37.347894 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 16:05:37.348218 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 16:05:37.348598 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 16:05:37.353225 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 16:05:37.353597 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 16:05:37.354426 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 16:05:37.354698 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 16:05:37.354922 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 16:05:37.355180 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 16:05:37.355551 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 16:05:37.356022 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 16:05:37.356329 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 16:05:37.356608 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 16:05:37.359760 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 16:05:37.360078 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 16:05:37.360404 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 16:05:37.360727 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 16:05:37.361020 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Feb 13 16:05:37.361287 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 16:05:37.361320 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 16:05:37.361340 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 16:05:37.361362 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 16:05:37.361382 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 16:05:37.361403 kernel: iommu: Default domain type: Translated
Feb 13 16:05:37.361438 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 16:05:37.361461 kernel: efivars: Registered efivars operations
Feb 13 16:05:37.361484 kernel: vgaarb: loaded
Feb 13 16:05:37.361504 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 16:05:37.361525 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 16:05:37.361547 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 16:05:37.361567 kernel: pnp: PnP ACPI init
Feb 13 16:05:37.361984 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 16:05:37.362042 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 16:05:37.362083 kernel: NET: Registered PF_INET protocol family
Feb 13 16:05:37.362108 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 16:05:37.362132 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 16:05:37.362153 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 16:05:37.362177 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 16:05:37.362199 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 16:05:37.362220 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 16:05:37.362241 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 16:05:37.362262 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 16:05:37.362295 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 16:05:37.362314 kernel: PCI: CLS 0 bytes, default 64
Feb 13 16:05:37.362333 kernel: kvm [1]: HYP mode not available
Feb 13 16:05:37.362352 kernel: Initialise system trusted keyrings
Feb 13 16:05:37.362371 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 16:05:37.362390 kernel: Key type asymmetric registered
Feb 13 16:05:37.362411 kernel: Asymmetric key parser 'x509' registered
Feb 13 16:05:37.362430 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 16:05:37.362449 kernel: io scheduler mq-deadline registered
Feb 13 16:05:37.362474 kernel: io scheduler kyber registered
Feb 13 16:05:37.362493 kernel: io scheduler bfq registered
Feb 13 16:05:37.362848 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 16:05:37.362895 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 16:05:37.362919 kernel: ACPI: button: Power Button [PWRB]
Feb 13 16:05:37.362940 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 16:05:37.362961 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 16:05:37.362983 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 16:05:37.363022 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 16:05:37.363332 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 16:05:37.363373 kernel: printk: console [ttyS0] disabled
Feb 13 16:05:37.363396 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 16:05:37.363416 kernel: printk: console [ttyS0] enabled
Feb 13 16:05:37.363437 kernel: printk: bootconsole [uart0] disabled
Feb 13 16:05:37.363458 kernel: thunder_xcv, ver 1.0
Feb 13 16:05:37.363478 kernel: thunder_bgx, ver 1.0
Feb 13 16:05:37.363498 kernel: nicpf, ver 1.0
Feb 13 16:05:37.363529 kernel: nicvf, ver 1.0
Feb 13 16:05:37.363924 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 16:05:37.364227 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T16:05:36 UTC (1739462736)
Feb 13 16:05:37.364262 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 16:05:37.364283 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 16:05:37.364304 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 16:05:37.364323 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 16:05:37.364342 kernel: NET: Registered PF_INET6 protocol family
Feb 13 16:05:37.364372 kernel: Segment Routing with IPv6
Feb 13 16:05:37.364392 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 16:05:37.364411 kernel: NET: Registered PF_PACKET protocol family
Feb 13 16:05:37.364430 kernel: Key type dns_resolver registered
Feb 13 16:05:37.364449 kernel: registered taskstats version 1
Feb 13 16:05:37.364469 kernel: Loading compiled-in X.509 certificates
Feb 13 16:05:37.364489 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: d3f151cc07005f6a29244b13ac54c8677429c8f5'
Feb 13 16:05:37.364509 kernel: Key type .fscrypt registered
Feb 13 16:05:37.364527 kernel: Key type fscrypt-provisioning registered
Feb 13 16:05:37.364552 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 16:05:37.364572 kernel: ima: Allocated hash algorithm: sha1
Feb 13 16:05:37.364592 kernel: ima: No architecture policies found
Feb 13 16:05:37.364651 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 16:05:37.364685 kernel: clk: Disabling unused clocks
Feb 13 16:05:37.364706 kernel: Freeing unused kernel memory: 39360K
Feb 13 16:05:37.364727 kernel: Run /init as init process
Feb 13 16:05:37.364747 kernel:   with arguments:
Feb 13 16:05:37.364768 kernel:     /init
Feb 13 16:05:37.364788 kernel:   with environment:
Feb 13 16:05:37.364821 kernel:     HOME=/
Feb 13 16:05:37.364869 kernel:     TERM=linux
Feb 13 16:05:37.364894 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 16:05:37.364921 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 16:05:37.364949 systemd[1]: Detected virtualization amazon.
Feb 13 16:05:37.364971 systemd[1]: Detected architecture arm64.
Feb 13 16:05:37.364993 systemd[1]: Running in initrd.
Feb 13 16:05:37.365026 systemd[1]: No hostname configured, using default hostname.
Feb 13 16:05:37.365048 systemd[1]: Hostname set to <localhost>.
Feb 13 16:05:37.365071 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 16:05:37.365093 systemd[1]: Queued start job for default target initrd.target.
Feb 13 16:05:37.365117 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 16:05:37.365140 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 16:05:37.365165 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 16:05:37.365189 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 16:05:37.365227 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 16:05:37.365253 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 16:05:37.365284 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 16:05:37.365309 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 16:05:37.365333 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 16:05:37.365358 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 16:05:37.365382 systemd[1]: Reached target paths.target - Path Units.
Feb 13 16:05:37.365426 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 16:05:37.365451 systemd[1]: Reached target swap.target - Swaps.
Feb 13 16:05:37.365478 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 16:05:37.365502 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 16:05:37.365527 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 16:05:37.365554 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 16:05:37.365581 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 16:05:37.365608 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 16:05:37.367288 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 16:05:37.367314 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 16:05:37.367336 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 16:05:37.367357 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 16:05:37.367378 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 16:05:37.367400 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 16:05:37.367422 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 16:05:37.367446 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 16:05:37.367470 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 16:05:37.367510 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:05:37.367536 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 16:05:37.367562 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 16:05:37.367586 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 16:05:37.367684 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 16:05:37.367844 systemd-journald[251]: Collecting audit messages is disabled.
Feb 13 16:05:37.367916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:05:37.367946 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:05:37.369834 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 16:05:37.369889 kernel: Bridge firewalling registered
Feb 13 16:05:37.369918 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 16:05:37.369945 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 16:05:37.369969 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 16:05:37.369996 systemd-journald[251]: Journal started
Feb 13 16:05:37.370061 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2135e89014fed660479131014c8c43) is 8.0M, max 75.3M, 67.3M free.
Feb 13 16:05:37.306203 systemd-modules-load[252]: Inserted module 'overlay'
Feb 13 16:05:37.346113 systemd-modules-load[252]: Inserted module 'br_netfilter'
Feb 13 16:05:37.389200 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 16:05:37.389287 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 16:05:37.408001 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 16:05:37.427266 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 16:05:37.454581 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:05:37.480914 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 16:05:37.487886 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 16:05:37.493979 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 16:05:37.522008 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 16:05:37.545645 dracut-cmdline[284]: dracut-dracut-053
Feb 13 16:05:37.561498 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=55866785c450f887021047c4ba00d104a5882975060a5fc692d64491b0d81886
Feb 13 16:05:37.619683 systemd-resolved[291]: Positive Trust Anchors:
Feb 13 16:05:37.619721 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 16:05:37.619786 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 16:05:37.757783 kernel: SCSI subsystem initialized
Feb 13 16:05:37.767686 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 16:05:37.782781 kernel: iscsi: registered transport (tcp)
Feb 13 16:05:37.809434 kernel: iscsi: registered transport (qla4xxx)
Feb 13 16:05:37.809550 kernel: QLogic iSCSI HBA Driver
Feb 13 16:05:37.870655 kernel: random: crng init done
Feb 13 16:05:37.869008 systemd-resolved[291]: Defaulting to hostname 'linux'.
Feb 13 16:05:37.873275 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 16:05:37.877882 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 16:05:37.912725 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 16:05:37.928005 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 16:05:37.968240 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 16:05:37.968334 kernel: device-mapper: uevent: version 1.0.3
Feb 13 16:05:37.970121 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 16:05:38.042727 kernel: raid6: neonx8   gen()  6487 MB/s
Feb 13 16:05:38.059703 kernel: raid6: neonx4   gen()  6333 MB/s
Feb 13 16:05:38.076686 kernel: raid6: neonx2   gen()  5296 MB/s
Feb 13 16:05:38.093714 kernel: raid6: neonx1   gen()  3867 MB/s
Feb 13 16:05:38.110671 kernel: raid6: int64x8  gen()  3722 MB/s
Feb 13 16:05:38.127707 kernel: raid6: int64x4  gen()  3621 MB/s
Feb 13 16:05:38.144689 kernel: raid6: int64x2  gen()  3498 MB/s
Feb 13 16:05:38.162516 kernel: raid6: int64x1  gen()  2736 MB/s
Feb 13 16:05:38.162589 kernel: raid6: using algorithm neonx8 gen() 6487 MB/s
Feb 13 16:05:38.180534 kernel: raid6: .... xor() 4795 MB/s, rmw enabled
Feb 13 16:05:38.180689 kernel: raid6: using neon recovery algorithm
Feb 13 16:05:38.191008 kernel: xor: measuring software checksum speed
Feb 13 16:05:38.191226 kernel:    8regs           : 11018 MB/sec
Feb 13 16:05:38.191265 kernel:    32regs          : 11902 MB/sec
Feb 13 16:05:38.193422 kernel:    arm64_neon      :  9556 MB/sec
Feb 13 16:05:38.193582 kernel: xor: using function: 32regs (11902 MB/sec)
Feb 13 16:05:38.288697 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 16:05:38.317094 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 16:05:38.329136 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 16:05:38.377951 systemd-udevd[469]: Using default interface naming scheme 'v255'.
Feb 13 16:05:38.388103 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 16:05:38.400506 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 16:05:38.440289 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Feb 13 16:05:38.513944 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 16:05:38.525229 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 16:05:38.664234 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 16:05:38.679892 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 16:05:38.750295 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 16:05:38.758325 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 16:05:38.762444 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 16:05:38.773810 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 16:05:38.795041 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 16:05:38.860006 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 16:05:38.942668 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 16:05:38.942779 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 16:05:38.960885 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 16:05:38.961188 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 16:05:38.961430 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:d9:4c:69:89:d5
Feb 13 16:05:38.956480 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 16:05:38.956869 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:05:38.965727 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:05:38.966283 (udev-worker)[518]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 16:05:38.971558 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 16:05:38.972484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:05:38.981286 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:05:38.999400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:05:39.029428 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 16:05:39.029507 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 16:05:39.039681 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 16:05:39.050674 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 16:05:39.050751 kernel: GPT:9289727 != 16777215
Feb 13 16:05:39.050778 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 16:05:39.050816 kernel: GPT:9289727 != 16777215
Feb 13 16:05:39.052318 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 16:05:39.055132 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:05:39.057338 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:05:39.070947 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:05:39.116381 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:05:39.190725 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (532)
Feb 13 16:05:39.204671 kernel: BTRFS: device fsid 39fc2625-8d65-490f-9a1f-39e365051e19 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (522)
Feb 13 16:05:39.232097 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 16:05:39.305119 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 16:05:39.349216 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 16:05:39.367355 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 16:05:39.370282 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 16:05:39.387937 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 16:05:39.410655 disk-uuid[659]: Primary Header is updated.
Feb 13 16:05:39.410655 disk-uuid[659]: Secondary Entries is updated.
Feb 13 16:05:39.410655 disk-uuid[659]: Secondary Header is updated.
Feb 13 16:05:39.419684 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:05:39.429698 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:05:39.439691 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:05:39.445657 kernel: block device autoloading is deprecated and will be removed.
Feb 13 16:05:40.440491 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 16:05:40.443712 disk-uuid[660]: The operation has completed successfully.
Feb 13 16:05:40.639989 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 16:05:40.642376 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 16:05:40.704969 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 16:05:40.714976 sh[1006]: Success
Feb 13 16:05:40.740688 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 16:05:40.860374 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 16:05:40.879892 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 16:05:40.892510 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 16:05:40.933274 kernel: BTRFS info (device dm-0): first mount of filesystem 39fc2625-8d65-490f-9a1f-39e365051e19
Feb 13 16:05:40.933366 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:05:40.933395 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 16:05:40.935034 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 16:05:40.936304 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 16:05:41.076668 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 16:05:41.102144 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 16:05:41.107999 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 16:05:41.119197 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 16:05:41.126904 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 16:05:41.167688 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:05:41.171267 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:05:41.171350 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 16:05:41.179662 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 16:05:41.196386 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 16:05:41.200666 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:05:41.210391 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 16:05:41.221073 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 16:05:41.329075 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 16:05:41.339945 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 16:05:41.401476 systemd-networkd[1198]: lo: Link UP
Feb 13 16:05:41.401739 systemd-networkd[1198]: lo: Gained carrier
Feb 13 16:05:41.405505 systemd-networkd[1198]: Enumeration completed
Feb 13 16:05:41.406473 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 16:05:41.406987 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 16:05:41.406994 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 16:05:41.412593 systemd-networkd[1198]: eth0: Link UP
Feb 13 16:05:41.412604 systemd-networkd[1198]: eth0: Gained carrier
Feb 13 16:05:41.413221 systemd-networkd[1198]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 16:05:41.414754 systemd[1]: Reached target network.target - Network.
Feb 13 16:05:41.445853 systemd-networkd[1198]: eth0: DHCPv4 address 172.31.25.253/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 16:05:41.636431 ignition[1109]: Ignition 2.19.0
Feb 13 16:05:41.637058 ignition[1109]: Stage: fetch-offline
Feb 13 16:05:41.637726 ignition[1109]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:41.637756 ignition[1109]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:41.638298 ignition[1109]: Ignition finished successfully
Feb 13 16:05:41.647853 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 16:05:41.658036 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 16:05:41.697059 ignition[1208]: Ignition 2.19.0
Feb 13 16:05:41.697679 ignition[1208]: Stage: fetch
Feb 13 16:05:41.698575 ignition[1208]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:41.698605 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:41.698807 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:41.715913 ignition[1208]: PUT result: OK
Feb 13 16:05:41.720044 ignition[1208]: parsed url from cmdline: ""
Feb 13 16:05:41.720130 ignition[1208]: no config URL provided
Feb 13 16:05:41.720235 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 16:05:41.720281 ignition[1208]: no config at "/usr/lib/ignition/user.ign"
Feb 13 16:05:41.720373 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:41.726374 ignition[1208]: PUT result: OK
Feb 13 16:05:41.728630 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 16:05:41.732871 ignition[1208]: GET result: OK
Feb 13 16:05:41.733076 ignition[1208]: parsing config with SHA512: d62853be3352beab11e49840a040284718c567899b5b8ec3dbc537d61e2a5f3019a611e8912202245b857774e43ba61c6dffd25b9e0e423dab06ed2e6656ef47
Feb 13 16:05:41.742391 unknown[1208]: fetched base config from "system"
Feb 13 16:05:41.742428 unknown[1208]: fetched base config from "system"
Feb 13 16:05:41.744153 ignition[1208]: fetch: fetch complete
Feb 13 16:05:41.742454 unknown[1208]: fetched user config from "aws"
Feb 13 16:05:41.744170 ignition[1208]: fetch: fetch passed
Feb 13 16:05:41.744313 ignition[1208]: Ignition finished successfully
Feb 13 16:05:41.755443 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 16:05:41.777382 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 16:05:41.815245 ignition[1214]: Ignition 2.19.0
Feb 13 16:05:41.815300 ignition[1214]: Stage: kargs
Feb 13 16:05:41.817246 ignition[1214]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:41.817277 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:41.817525 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:41.825346 ignition[1214]: PUT result: OK
Feb 13 16:05:41.833354 ignition[1214]: kargs: kargs passed
Feb 13 16:05:41.833519 ignition[1214]: Ignition finished successfully
Feb 13 16:05:41.838768 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 16:05:41.850051 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 16:05:41.891418 ignition[1220]: Ignition 2.19.0
Feb 13 16:05:41.892363 ignition[1220]: Stage: disks
Feb 13 16:05:41.893540 ignition[1220]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:41.893573 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:41.893810 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:41.899799 ignition[1220]: PUT result: OK
Feb 13 16:05:41.909910 ignition[1220]: disks: disks passed
Feb 13 16:05:41.910254 ignition[1220]: Ignition finished successfully
Feb 13 16:05:41.917742 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 16:05:41.920934 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 16:05:41.925337 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 16:05:41.932885 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 16:05:41.935050 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 16:05:41.939319 systemd[1]: Reached target basic.target - Basic System.
Feb 13 16:05:41.955066 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 16:05:42.004845 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 16:05:42.010451 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 16:05:42.021857 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 16:05:42.132659 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 1daf3470-d909-4a02-84d2-f6d9b0a5b55c r/w with ordered data mode. Quota mode: none.
Feb 13 16:05:42.134197 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 16:05:42.138914 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 16:05:42.161871 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 16:05:42.168896 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 16:05:42.174450 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 16:05:42.174772 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 16:05:42.174938 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 16:05:42.196669 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1247)
Feb 13 16:05:42.201209 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:05:42.201299 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:05:42.202777 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 16:05:42.205464 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 16:05:42.217989 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 16:05:42.228673 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 16:05:42.232335 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 16:05:42.693001 initrd-setup-root[1271]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 16:05:42.718470 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory
Feb 13 16:05:42.728443 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 16:05:42.738716 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 16:05:42.930086 systemd-networkd[1198]: eth0: Gained IPv6LL
Feb 13 16:05:43.080741 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 16:05:43.089903 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 16:05:43.095950 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 16:05:43.123839 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 16:05:43.126575 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:05:43.169737 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 16:05:43.179019 ignition[1360]: INFO     : Ignition 2.19.0
Feb 13 16:05:43.179019 ignition[1360]: INFO     : Stage: mount
Feb 13 16:05:43.182850 ignition[1360]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:43.182850 ignition[1360]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:43.182850 ignition[1360]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:43.190737 ignition[1360]: INFO     : PUT result: OK
Feb 13 16:05:43.196671 ignition[1360]: INFO     : mount: mount passed
Feb 13 16:05:43.198844 ignition[1360]: INFO     : Ignition finished successfully
Feb 13 16:05:43.201647 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 16:05:43.220023 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 16:05:43.244020 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 16:05:43.279674 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1372)
Feb 13 16:05:43.284903 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c8afbf79-805d-40d9-b4c9-cafa51441c41
Feb 13 16:05:43.285031 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 16:05:43.286368 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 16:05:43.293672 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 16:05:43.298031 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 16:05:43.348315 ignition[1389]: INFO     : Ignition 2.19.0
Feb 13 16:05:43.351649 ignition[1389]: INFO     : Stage: files
Feb 13 16:05:43.351649 ignition[1389]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:43.351649 ignition[1389]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:43.351649 ignition[1389]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:43.361763 ignition[1389]: INFO     : PUT result: OK
Feb 13 16:05:43.367041 ignition[1389]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 16:05:43.372651 ignition[1389]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 16:05:43.372651 ignition[1389]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 16:05:43.394786 ignition[1389]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 16:05:43.397893 ignition[1389]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 16:05:43.401182 unknown[1389]: wrote ssh authorized keys file for user: core
Feb 13 16:05:43.403793 ignition[1389]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 16:05:43.407521 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 16:05:43.411052 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 16:05:43.411052 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 16:05:43.411052 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 16:05:43.482186 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 16:05:43.655043 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 16:05:43.655043 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 16:05:43.663438 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 16:05:43.663438 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 13 16:05:43.663438 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 16:05:43.663438 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 16:05:43.663438 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 16:05:43.663438 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 16:05:43.663438 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 16:05:43.663438 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 16:05:43.663438 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 16:05:43.663438 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 16:05:43.663438 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 16:05:43.663438 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 16:05:43.663438 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Feb 13 16:05:44.171018 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 16:05:44.636173 ignition[1389]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 16:05:44.642797 ignition[1389]: INFO     : files: op(c): [started]  processing unit "containerd.service"
Feb 13 16:05:44.642797 ignition[1389]: INFO     : files: op(c): op(d): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 16:05:44.642797 ignition[1389]: INFO     : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 16:05:44.642797 ignition[1389]: INFO     : files: op(c): [finished] processing unit "containerd.service"
Feb 13 16:05:44.642797 ignition[1389]: INFO     : files: op(e): [started]  processing unit "prepare-helm.service"
Feb 13 16:05:44.642797 ignition[1389]: INFO     : files: op(e): op(f): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 16:05:44.642797 ignition[1389]: INFO     : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 16:05:44.642797 ignition[1389]: INFO     : files: op(e): [finished] processing unit "prepare-helm.service"
Feb 13 16:05:44.642797 ignition[1389]: INFO     : files: op(10): [started]  setting preset to enabled for "prepare-helm.service"
Feb 13 16:05:44.642797 ignition[1389]: INFO     : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 16:05:44.642797 ignition[1389]: INFO     : files: createResultFile: createFiles: op(11): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 16:05:44.642797 ignition[1389]: INFO     : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 16:05:44.642797 ignition[1389]: INFO     : files: files passed
Feb 13 16:05:44.642797 ignition[1389]: INFO     : Ignition finished successfully
Feb 13 16:05:44.687269 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 16:05:44.698105 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 16:05:44.713345 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 16:05:44.719178 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 16:05:44.719382 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 16:05:44.755253 initrd-setup-root-after-ignition[1418]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 16:05:44.755253 initrd-setup-root-after-ignition[1418]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 16:05:44.763814 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 16:05:44.772891 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 16:05:44.778913 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 16:05:44.793439 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 16:05:44.870119 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 16:05:44.870815 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 16:05:44.873960 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 16:05:44.874514 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 16:05:44.887869 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 16:05:44.898888 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 16:05:44.936683 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 16:05:44.951064 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 16:05:44.978418 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 16:05:44.984038 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 16:05:44.987277 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 16:05:44.990917 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 16:05:44.991180 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 16:05:44.997253 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 16:05:45.003413 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 16:05:45.005658 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 16:05:45.011469 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 16:05:45.013882 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 16:05:45.016769 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 16:05:45.025825 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 16:05:45.029060 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 16:05:45.035306 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 16:05:45.038309 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 16:05:45.042026 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 16:05:45.042360 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 16:05:45.049396 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 16:05:45.052758 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 16:05:45.059031 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 16:05:45.061825 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 16:05:45.064496 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 16:05:45.064783 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 16:05:45.069834 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 16:05:45.070139 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 16:05:45.074562 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 16:05:45.074822 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 16:05:45.091116 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 16:05:45.113050 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 16:05:45.117884 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 16:05:45.122834 ignition[1442]: INFO     : Ignition 2.19.0
Feb 13 16:05:45.122834 ignition[1442]: INFO     : Stage: umount
Feb 13 16:05:45.122834 ignition[1442]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:05:45.122834 ignition[1442]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 16:05:45.122834 ignition[1442]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 16:05:45.118554 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 16:05:45.156959 ignition[1442]: INFO     : PUT result: OK
Feb 13 16:05:45.126053 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 16:05:45.162879 ignition[1442]: INFO     : umount: umount passed
Feb 13 16:05:45.162879 ignition[1442]: INFO     : Ignition finished successfully
Feb 13 16:05:45.126493 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 16:05:45.164249 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 16:05:45.164590 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 16:05:45.181984 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 16:05:45.182336 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 16:05:45.191213 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 16:05:45.191586 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 16:05:45.194161 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 16:05:45.194280 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 16:05:45.195310 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 16:05:45.195399 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 16:05:45.195590 systemd[1]: Stopped target network.target - Network.
Feb 13 16:05:45.198102 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 16:05:45.198219 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 16:05:45.199076 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 16:05:45.199596 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 16:05:45.216042 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 16:05:45.223792 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 16:05:45.228478 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 16:05:45.243185 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 16:05:45.243323 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 16:05:45.248199 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 16:05:45.248368 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 16:05:45.253930 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 16:05:45.254073 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 16:05:45.257960 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 16:05:45.258073 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 16:05:45.263298 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 16:05:45.268688 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 16:05:45.272938 systemd-networkd[1198]: eth0: DHCPv6 lease lost
Feb 13 16:05:45.295691 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 16:05:45.298890 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 16:05:45.299402 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 16:05:45.306946 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 16:05:45.307492 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 16:05:45.314366 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 16:05:45.316607 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 16:05:45.323280 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 16:05:45.323390 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 16:05:45.329545 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 16:05:45.329834 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 16:05:45.349012 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 16:05:45.355633 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 16:05:45.355793 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 16:05:45.356276 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 16:05:45.356385 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 16:05:45.357347 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 16:05:45.357472 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 16:05:45.358369 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 16:05:45.358479 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 16:05:45.361803 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 16:05:45.405820 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 16:05:45.406317 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 16:05:45.413559 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 16:05:45.414194 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 16:05:45.421149 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 16:05:45.421316 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 16:05:45.428591 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 16:05:45.428772 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 16:05:45.431122 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 16:05:45.431272 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 16:05:45.434169 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 16:05:45.434293 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 16:05:45.448215 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 16:05:45.448401 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:05:45.470437 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 16:05:45.473698 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 16:05:45.473863 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 16:05:45.477186 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 16:05:45.477458 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 16:05:45.481255 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 16:05:45.481427 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 16:05:45.485352 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 16:05:45.485539 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:05:45.532260 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 16:05:45.532780 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 16:05:45.539793 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 16:05:45.562211 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 16:05:45.612433 systemd[1]: Switching root.
Feb 13 16:05:45.650480 systemd-journald[251]: Journal stopped
Feb 13 16:05:48.538927 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Feb 13 16:05:48.539083 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 16:05:48.539130 kernel: SELinux:  policy capability open_perms=1
Feb 13 16:05:48.539163 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 16:05:48.539195 kernel: SELinux:  policy capability always_check_network=0
Feb 13 16:05:48.539225 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 16:05:48.539257 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 16:05:48.539289 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 16:05:48.539328 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 16:05:48.539378 kernel: audit: type=1403 audit(1739462746.459:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 16:05:48.539426 systemd[1]: Successfully loaded SELinux policy in 63.453ms.
Feb 13 16:05:48.539495 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.689ms.
Feb 13 16:05:48.539533 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 16:05:48.539571 systemd[1]: Detected virtualization amazon.
Feb 13 16:05:48.539606 systemd[1]: Detected architecture arm64.
Feb 13 16:05:48.539687 systemd[1]: Detected first boot.
Feb 13 16:05:48.539725 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 16:05:48.539765 zram_generator::config[1501]: No configuration found.
Feb 13 16:05:48.539808 systemd[1]: Populated /etc with preset unit settings.
Feb 13 16:05:48.539849 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 16:05:48.539883 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 16:05:48.539935 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 16:05:48.539977 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 16:05:48.540022 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 16:05:48.540061 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 16:05:48.540114 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 16:05:48.540164 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 16:05:48.540202 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 16:05:48.540242 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 16:05:48.540282 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 16:05:48.540318 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 16:05:48.540356 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 16:05:48.540394 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 16:05:48.540459 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 16:05:48.540514 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 16:05:48.540554 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 16:05:48.540665 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 16:05:48.540720 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 16:05:48.540769 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 16:05:48.540827 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 16:05:48.540862 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 16:05:48.540894 systemd[1]: Reached target swap.target - Swaps.
Feb 13 16:05:48.540934 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 16:05:48.540970 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 16:05:48.541002 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 16:05:48.541034 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 16:05:48.541082 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 16:05:48.541122 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 16:05:48.541155 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 16:05:48.541187 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 16:05:48.541220 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 16:05:48.541259 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 16:05:48.541290 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 16:05:48.541322 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 16:05:48.541355 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 16:05:48.541386 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 16:05:48.541419 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 16:05:48.541449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 16:05:48.541479 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 16:05:48.541511 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 16:05:48.541548 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 16:05:48.541581 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 16:05:48.541661 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 16:05:48.541703 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 16:05:48.541742 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 16:05:48.541775 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 16:05:48.541809 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 13 16:05:48.541850 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Feb 13 16:05:48.541912 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 16:05:48.541951 kernel: loop: module loaded
Feb 13 16:05:48.541989 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 16:05:48.542021 kernel: fuse: init (API version 7.39)
Feb 13 16:05:48.542053 kernel: ACPI: bus type drm_connector registered
Feb 13 16:05:48.542085 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 16:05:48.542117 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 16:05:48.542165 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 16:05:48.542202 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 16:05:48.542254 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 16:05:48.542290 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 16:05:48.542503 systemd-journald[1608]: Collecting audit messages is disabled.
Feb 13 16:05:48.542710 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 16:05:48.542760 systemd-journald[1608]: Journal started
Feb 13 16:05:48.542816 systemd-journald[1608]: Runtime Journal (/run/log/journal/ec2135e89014fed660479131014c8c43) is 8.0M, max 75.3M, 67.3M free.
Feb 13 16:05:48.552066 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 16:05:48.555198 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 16:05:48.557740 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 16:05:48.566486 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 16:05:48.570954 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 16:05:48.574581 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 16:05:48.575020 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 16:05:48.578400 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 16:05:48.578879 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 16:05:48.582585 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 16:05:48.583040 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 16:05:48.586755 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 16:05:48.587162 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 16:05:48.590916 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 16:05:48.591370 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 16:05:48.595216 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 16:05:48.595722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 16:05:48.599972 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 16:05:48.604143 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 16:05:48.607562 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 16:05:48.635638 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 16:05:48.647880 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 16:05:48.654341 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 16:05:48.656846 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 16:05:48.674119 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 16:05:48.696712 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 16:05:48.701064 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 16:05:48.724178 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 16:05:48.726990 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 16:05:48.745463 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 16:05:48.762982 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 16:05:48.777904 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 16:05:48.783154 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 16:05:48.802737 systemd-journald[1608]: Time spent on flushing to /var/log/journal/ec2135e89014fed660479131014c8c43 is 110.696ms for 897 entries.
Feb 13 16:05:48.802737 systemd-journald[1608]: System Journal (/var/log/journal/ec2135e89014fed660479131014c8c43) is 8.0M, max 195.6M, 187.6M free.
Feb 13 16:05:48.941770 systemd-journald[1608]: Received client request to flush runtime journal.
Feb 13 16:05:48.826562 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 16:05:48.832700 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 16:05:48.888709 systemd-tmpfiles[1653]: ACLs are not supported, ignoring.
Feb 13 16:05:48.888735 systemd-tmpfiles[1653]: ACLs are not supported, ignoring.
Feb 13 16:05:48.905936 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 16:05:48.920730 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 16:05:48.936223 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 16:05:48.950703 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 16:05:48.962653 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 16:05:48.987010 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 16:05:49.022240 udevadm[1671]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 16:05:49.053601 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 16:05:49.069232 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 16:05:49.119173 systemd-tmpfiles[1675]: ACLs are not supported, ignoring.
Feb 13 16:05:49.119815 systemd-tmpfiles[1675]: ACLs are not supported, ignoring.
Feb 13 16:05:49.128046 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 16:05:49.949160 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 16:05:49.960239 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 16:05:50.035500 systemd-udevd[1681]: Using default interface naming scheme 'v255'.
Feb 13 16:05:50.099734 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 16:05:50.112026 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 16:05:50.163396 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 16:05:50.263451 (udev-worker)[1703]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 16:05:50.297928 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Feb 13 16:05:50.343286 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 16:05:50.517186 systemd-networkd[1684]: lo: Link UP
Feb 13 16:05:50.517206 systemd-networkd[1684]: lo: Gained carrier
Feb 13 16:05:50.521654 systemd-networkd[1684]: Enumeration completed
Feb 13 16:05:50.521900 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 16:05:50.524534 systemd-networkd[1684]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 16:05:50.524562 systemd-networkd[1684]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 16:05:50.527470 systemd-networkd[1684]: eth0: Link UP
Feb 13 16:05:50.527891 systemd-networkd[1684]: eth0: Gained carrier
Feb 13 16:05:50.527923 systemd-networkd[1684]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 16:05:50.535955 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 16:05:50.546832 systemd-networkd[1684]: eth0: DHCPv4 address 172.31.25.253/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 16:05:50.608655 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1692)
Feb 13 16:05:50.619169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:05:50.848081 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 16:05:50.851362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:05:50.898929 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 16:05:50.913915 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 16:05:50.941699 lvm[1810]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 16:05:50.985384 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 16:05:50.990450 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 16:05:51.004012 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 16:05:51.017322 lvm[1813]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 16:05:51.060586 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 16:05:51.064384 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 16:05:51.066960 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 16:05:51.067006 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 16:05:51.070225 systemd[1]: Reached target machines.target - Containers.
Feb 13 16:05:51.075680 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 16:05:51.086231 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 16:05:51.098506 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 16:05:51.101542 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 16:05:51.105151 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 16:05:51.120802 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 16:05:51.131218 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 16:05:51.138527 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 16:05:51.171317 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 16:05:51.174215 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 16:05:51.192539 kernel: loop0: detected capacity change from 0 to 114432
Feb 13 16:05:51.194800 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 16:05:51.318297 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 16:05:51.353672 kernel: loop1: detected capacity change from 0 to 114328
Feb 13 16:05:51.472683 kernel: loop2: detected capacity change from 0 to 52536
Feb 13 16:05:51.576839 kernel: loop3: detected capacity change from 0 to 194512
Feb 13 16:05:51.631659 kernel: loop4: detected capacity change from 0 to 114432
Feb 13 16:05:51.650684 kernel: loop5: detected capacity change from 0 to 114328
Feb 13 16:05:51.670679 kernel: loop6: detected capacity change from 0 to 52536
Feb 13 16:05:51.684669 kernel: loop7: detected capacity change from 0 to 194512
Feb 13 16:05:51.697870 systemd-networkd[1684]: eth0: Gained IPv6LL
Feb 13 16:05:51.705143 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 16:05:51.718999 (sd-merge)[1835]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 16:05:51.720168 (sd-merge)[1835]: Merged extensions into '/usr'.
Feb 13 16:05:51.730182 systemd[1]: Reloading requested from client PID 1821 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 16:05:51.730222 systemd[1]: Reloading...
Feb 13 16:05:51.877702 zram_generator::config[1860]: No configuration found.
Feb 13 16:05:52.227014 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 16:05:52.397780 systemd[1]: Reloading finished in 666 ms.
Feb 13 16:05:52.426721 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 16:05:52.442184 systemd[1]: Starting ensure-sysext.service...
Feb 13 16:05:52.449918 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 16:05:52.465962 systemd[1]: Reloading requested from client PID 1921 ('systemctl') (unit ensure-sysext.service)...
Feb 13 16:05:52.465996 systemd[1]: Reloading...
Feb 13 16:05:52.542735 systemd-tmpfiles[1922]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 16:05:52.543470 systemd-tmpfiles[1922]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 16:05:52.551889 systemd-tmpfiles[1922]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 16:05:52.553042 systemd-tmpfiles[1922]: ACLs are not supported, ignoring.
Feb 13 16:05:52.553922 systemd-tmpfiles[1922]: ACLs are not supported, ignoring.
Feb 13 16:05:52.563508 systemd-tmpfiles[1922]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 16:05:52.563906 systemd-tmpfiles[1922]: Skipping /boot
Feb 13 16:05:52.604131 systemd-tmpfiles[1922]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 16:05:52.604152 systemd-tmpfiles[1922]: Skipping /boot
Feb 13 16:05:52.651959 zram_generator::config[1952]: No configuration found.
Feb 13 16:05:52.755697 ldconfig[1817]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 16:05:52.966222 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 16:05:53.116989 systemd[1]: Reloading finished in 650 ms.
Feb 13 16:05:53.147855 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 16:05:53.160861 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 16:05:53.182371 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 16:05:53.192986 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 16:05:53.200960 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 16:05:53.217009 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 16:05:53.228082 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 16:05:53.256359 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 16:05:53.266948 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 16:05:53.287207 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 16:05:53.298281 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 16:05:53.300534 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 16:05:53.308981 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 16:05:53.309400 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 16:05:53.328407 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 16:05:53.337288 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 16:05:53.342936 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 16:05:53.344189 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 16:05:53.351058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 16:05:53.351451 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 16:05:53.357781 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 16:05:53.358193 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 16:05:53.367446 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 16:05:53.393541 systemd[1]: Finished ensure-sysext.service.
Feb 13 16:05:53.396198 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 16:05:53.399292 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 16:05:53.399753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 16:05:53.408592 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 16:05:53.416356 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 16:05:53.429144 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 16:05:53.448929 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 16:05:53.455233 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 16:05:53.478003 augenrules[2053]: No rules
Feb 13 16:05:53.485523 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 16:05:53.528643 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 16:05:53.535718 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 16:05:53.540732 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 16:05:53.570950 systemd-resolved[2020]: Positive Trust Anchors:
Feb 13 16:05:53.572036 systemd-resolved[2020]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 16:05:53.572112 systemd-resolved[2020]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 16:05:53.584039 systemd-resolved[2020]: Defaulting to hostname 'linux'.
Feb 13 16:05:53.588588 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 16:05:53.591192 systemd[1]: Reached target network.target - Network.
Feb 13 16:05:53.593295 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 16:05:53.595840 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 16:05:53.598298 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 16:05:53.601023 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 16:05:53.604291 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 16:05:53.607049 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 16:05:53.609336 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 16:05:53.611765 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 16:05:53.614236 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 16:05:53.614534 systemd[1]: Reached target paths.target - Path Units.
Feb 13 16:05:53.616523 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 16:05:53.620550 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 16:05:53.626394 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 16:05:53.631864 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 16:05:53.634848 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 16:05:53.637165 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 16:05:53.639167 systemd[1]: Reached target basic.target - Basic System.
Feb 13 16:05:53.641420 systemd[1]: System is tainted: cgroupsv1
Feb 13 16:05:53.641518 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 16:05:53.641571 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 16:05:53.650817 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 16:05:53.663215 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 16:05:53.674950 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 16:05:53.688877 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 16:05:53.714918 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 16:05:53.717855 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 16:05:53.737900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 16:05:53.751166 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 16:05:53.766013 jq[2070]: false
Feb 13 16:05:53.770255 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 16:05:53.807167 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 16:05:53.841847 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 16:05:53.857606 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 16:05:53.865677 extend-filesystems[2071]: Found loop4
Feb 13 16:05:53.865677 extend-filesystems[2071]: Found loop5
Feb 13 16:05:53.865677 extend-filesystems[2071]: Found loop6
Feb 13 16:05:53.865677 extend-filesystems[2071]: Found loop7
Feb 13 16:05:53.865677 extend-filesystems[2071]: Found nvme0n1
Feb 13 16:05:53.865677 extend-filesystems[2071]: Found nvme0n1p1
Feb 13 16:05:53.865677 extend-filesystems[2071]: Found nvme0n1p2
Feb 13 16:05:53.865677 extend-filesystems[2071]: Found nvme0n1p3
Feb 13 16:05:53.865677 extend-filesystems[2071]: Found usr
Feb 13 16:05:53.865677 extend-filesystems[2071]: Found nvme0n1p4
Feb 13 16:05:53.865677 extend-filesystems[2071]: Found nvme0n1p6
Feb 13 16:05:53.865677 extend-filesystems[2071]: Found nvme0n1p7
Feb 13 16:05:53.865677 extend-filesystems[2071]: Found nvme0n1p9
Feb 13 16:05:53.865677 extend-filesystems[2071]: Checking size of /dev/nvme0n1p9
Feb 13 16:05:53.868599 dbus-daemon[2069]: [system] SELinux support is enabled
Feb 13 16:05:53.895812 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 16:05:53.886986 dbus-daemon[2069]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1684 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 16:05:53.938290 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 16:05:53.990985 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 16:05:53.996578 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 16:05:54.017796 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 16:05:54.036946 extend-filesystems[2071]: Resized partition /dev/nvme0n1p9
Feb 13 16:05:54.040222 ntpd[2078]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:58:42 UTC 2025 (1): Starting
Feb 13 16:05:54.040304 ntpd[2078]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 16:05:54.040336 ntpd[2078]: ----------------------------------------------------
Feb 13 16:05:54.040386 ntpd[2078]: ntp-4 is maintained by Network Time Foundation,
Feb 13 16:05:54.040421 ntpd[2078]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 16:05:54.040447 ntpd[2078]: corporation.  Support and training for ntp-4 are
Feb 13 16:05:54.040480 ntpd[2078]: available at https://www.nwtime.org/support
Feb 13 16:05:54.040509 ntpd[2078]: ----------------------------------------------------
Feb 13 16:05:54.047511 ntpd[2078]: proto: precision = 0.096 usec (-23)
Feb 13 16:05:54.054110 ntpd[2078]: basedate set to 2025-02-01
Feb 13 16:05:54.054158 ntpd[2078]: gps base set to 2025-02-02 (week 2352)
Feb 13 16:05:54.058393 ntpd[2078]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 16:05:54.058504 ntpd[2078]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 16:05:54.060999 ntpd[2078]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 16:05:54.061088 ntpd[2078]: Listen normally on 3 eth0 172.31.25.253:123
Feb 13 16:05:54.061183 ntpd[2078]: Listen normally on 4 lo [::1]:123
Feb 13 16:05:54.061309 ntpd[2078]: Listen normally on 5 eth0 [fe80::4d9:4cff:fe69:89d5%2]:123
Feb 13 16:05:54.061437 ntpd[2078]: Listening on routing socket on fd #22 for interface updates
Feb 13 16:05:54.067215 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 16:05:54.079999 extend-filesystems[2110]: resize2fs 1.47.1 (20-May-2024)
Feb 13 16:05:54.076011 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 16:05:54.093344 coreos-metadata[2067]: Feb 13 16:05:54.091 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 16:05:54.093344 coreos-metadata[2067]: Feb 13 16:05:54.091 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 16:05:54.093344 coreos-metadata[2067]: Feb 13 16:05:54.091 INFO Fetch successful
Feb 13 16:05:54.093344 coreos-metadata[2067]: Feb 13 16:05:54.091 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 16:05:54.093344 coreos-metadata[2067]: Feb 13 16:05:54.091 INFO Fetch successful
Feb 13 16:05:54.093344 coreos-metadata[2067]: Feb 13 16:05:54.091 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 16:05:54.093344 coreos-metadata[2067]: Feb 13 16:05:54.091 INFO Fetch successful
Feb 13 16:05:54.093344 coreos-metadata[2067]: Feb 13 16:05:54.091 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 16:05:54.097500 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 16:05:54.102716 ntpd[2078]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 16:05:54.106052 coreos-metadata[2067]: Feb 13 16:05:54.097 INFO Fetch successful
Feb 13 16:05:54.106052 coreos-metadata[2067]: Feb 13 16:05:54.097 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 16:05:54.101067 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 16:05:54.102788 ntpd[2078]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 16:05:54.103916 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 16:05:54.107144 coreos-metadata[2067]: Feb 13 16:05:54.107 INFO Fetch failed with 404: resource not found
Feb 13 16:05:54.107144 coreos-metadata[2067]: Feb 13 16:05:54.107 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 16:05:54.112870 jq[2107]: true
Feb 13 16:05:54.113926 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 16:05:54.141768 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 16:05:54.141898 coreos-metadata[2067]: Feb 13 16:05:54.141 INFO Fetch successful
Feb 13 16:05:54.141898 coreos-metadata[2067]: Feb 13 16:05:54.141 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 13 16:05:54.142598 coreos-metadata[2067]: Feb 13 16:05:54.142 INFO Fetch successful
Feb 13 16:05:54.143449 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 16:05:54.149714 coreos-metadata[2067]: Feb 13 16:05:54.142 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 16:05:54.166799 coreos-metadata[2067]: Feb 13 16:05:54.166 INFO Fetch successful
Feb 13 16:05:54.166799 coreos-metadata[2067]: Feb 13 16:05:54.166 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 16:05:54.171896 coreos-metadata[2067]: Feb 13 16:05:54.171 INFO Fetch successful
Feb 13 16:05:54.171896 coreos-metadata[2067]: Feb 13 16:05:54.171 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 16:05:54.174565 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 16:05:54.175360 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 16:05:54.191103 coreos-metadata[2067]: Feb 13 16:05:54.188 INFO Fetch successful
Feb 13 16:05:54.278453 update_engine[2102]: I20250213 16:05:54.275907  2102 main.cc:92] Flatcar Update Engine starting
Feb 13 16:05:54.290119 update_engine[2102]: I20250213 16:05:54.286962  2102 update_check_scheduler.cc:74] Next update check in 4m35s
Feb 13 16:05:54.326907 jq[2121]: true
Feb 13 16:05:54.337387 (ntainerd)[2126]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 16:05:54.390755 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 16:05:54.421417 extend-filesystems[2110]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 16:05:54.421417 extend-filesystems[2110]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 16:05:54.421417 extend-filesystems[2110]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 16:05:54.436508 dbus-daemon[2069]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 16:05:54.445661 extend-filesystems[2071]: Resized filesystem in /dev/nvme0n1p9
Feb 13 16:05:54.447594 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 16:05:54.448244 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 16:05:54.466572 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 16:05:54.481610 tar[2117]: linux-arm64/helm
Feb 13 16:05:54.487309 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 16:05:54.510824 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 16:05:54.514072 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 16:05:54.514148 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 16:05:54.527992 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 16:05:54.531769 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 16:05:54.531843 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 16:05:54.547860 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 16:05:54.578529 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 16:05:54.583604 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 16:05:54.643189 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 16:05:54.716654 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2173)
Feb 13 16:05:54.779854 bash[2190]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 16:05:54.784378 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 16:05:54.809507 amazon-ssm-agent[2166]: Initializing new seelog logger
Feb 13 16:05:54.809507 amazon-ssm-agent[2166]: New Seelog Logger Creation Complete
Feb 13 16:05:54.809507 amazon-ssm-agent[2166]: 2025/02/13 16:05:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 16:05:54.809507 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 16:05:54.809507 amazon-ssm-agent[2166]: 2025/02/13 16:05:54 processing appconfig overrides
Feb 13 16:05:54.809507 amazon-ssm-agent[2166]: 2025/02/13 16:05:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 16:05:54.809507 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 16:05:54.809507 amazon-ssm-agent[2166]: 2025/02/13 16:05:54 processing appconfig overrides
Feb 13 16:05:54.809507 amazon-ssm-agent[2166]: 2025/02/13 16:05:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 16:05:54.809507 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 16:05:54.809507 amazon-ssm-agent[2166]: 2025/02/13 16:05:54 processing appconfig overrides
Feb 13 16:05:54.809507 amazon-ssm-agent[2166]: 2025-02-13 16:05:54 INFO Proxy environment variables:
Feb 13 16:05:54.867507 amazon-ssm-agent[2166]: 2025/02/13 16:05:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 16:05:54.867507 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 16:05:54.867507 amazon-ssm-agent[2166]: 2025/02/13 16:05:54 processing appconfig overrides
Feb 13 16:05:54.844577 systemd[1]: Starting sshkeys.service...
Feb 13 16:05:54.869087 systemd-logind[2100]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 16:05:54.869139 systemd-logind[2100]: Watching system buttons on /dev/input/event1 (Sleep Button)
Feb 13 16:05:54.869637 systemd-logind[2100]: New seat seat0.
Feb 13 16:05:54.879741 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 16:05:54.925311 amazon-ssm-agent[2166]: 2025-02-13 16:05:54 INFO http_proxy:
Feb 13 16:05:54.973695 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 16:05:55.017168 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 16:05:55.023143 amazon-ssm-agent[2166]: 2025-02-13 16:05:54 INFO no_proxy:
Feb 13 16:05:55.122313 amazon-ssm-agent[2166]: 2025-02-13 16:05:54 INFO https_proxy:
Feb 13 16:05:55.171677 containerd[2126]: time="2025-02-13T16:05:55.171476100Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Feb 13 16:05:55.224981 amazon-ssm-agent[2166]: 2025-02-13 16:05:54 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 16:05:55.293970 locksmithd[2174]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 16:05:55.328084 amazon-ssm-agent[2166]: 2025-02-13 16:05:54 INFO Checking if agent identity type EC2 can be assumed
Feb 13 16:05:55.431533 amazon-ssm-agent[2166]: 2025-02-13 16:05:55 INFO Agent will take identity from EC2
Feb 13 16:05:55.492325 coreos-metadata[2229]: Feb 13 16:05:55.492 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 16:05:55.495489 coreos-metadata[2229]: Feb 13 16:05:55.495 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Feb 13 16:05:55.497447 containerd[2126]: time="2025-02-13T16:05:55.497268878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 16:05:55.497818 coreos-metadata[2229]: Feb 13 16:05:55.497 INFO Fetch successful
Feb 13 16:05:55.497818 coreos-metadata[2229]: Feb 13 16:05:55.497 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 16:05:55.504888 coreos-metadata[2229]: Feb 13 16:05:55.504 INFO Fetch successful
Feb 13 16:05:55.513585 unknown[2229]: wrote ssh authorized keys file for user: core
Feb 13 16:05:55.515699 containerd[2126]: time="2025-02-13T16:05:55.514502894Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 16:05:55.515699 containerd[2126]: time="2025-02-13T16:05:55.514579058Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 16:05:55.515699 containerd[2126]: time="2025-02-13T16:05:55.514634678Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 16:05:55.515699 containerd[2126]: time="2025-02-13T16:05:55.514936430Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 16:05:55.515699 containerd[2126]: time="2025-02-13T16:05:55.514980374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 16:05:55.515699 containerd[2126]: time="2025-02-13T16:05:55.515107934Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 16:05:55.515699 containerd[2126]: time="2025-02-13T16:05:55.515138282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 16:05:55.528710 containerd[2126]: time="2025-02-13T16:05:55.525950426Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 16:05:55.528710 containerd[2126]: time="2025-02-13T16:05:55.526017434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 16:05:55.528710 containerd[2126]: time="2025-02-13T16:05:55.526054502Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 16:05:55.528710 containerd[2126]: time="2025-02-13T16:05:55.526079762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 16:05:55.528710 containerd[2126]: time="2025-02-13T16:05:55.526301402Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 16:05:55.528710 containerd[2126]: time="2025-02-13T16:05:55.526846982Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 16:05:55.529228 amazon-ssm-agent[2166]: 2025-02-13 16:05:55 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 16:05:55.533691 containerd[2126]: time="2025-02-13T16:05:55.531082202Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 16:05:55.535179 containerd[2126]: time="2025-02-13T16:05:55.534316034Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 16:05:55.535179 containerd[2126]: time="2025-02-13T16:05:55.534747206Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 16:05:55.535179 containerd[2126]: time="2025-02-13T16:05:55.534909050Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 16:05:55.549500 containerd[2126]: time="2025-02-13T16:05:55.547911050Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 16:05:55.549500 containerd[2126]: time="2025-02-13T16:05:55.548140262Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 16:05:55.549500 containerd[2126]: time="2025-02-13T16:05:55.548202074Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 16:05:55.549500 containerd[2126]: time="2025-02-13T16:05:55.548263934Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 16:05:55.549500 containerd[2126]: time="2025-02-13T16:05:55.548340482Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 16:05:55.574658 containerd[2126]: time="2025-02-13T16:05:55.560906546Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 16:05:55.583541 containerd[2126]: time="2025-02-13T16:05:55.579143438Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 16:05:55.583541 containerd[2126]: time="2025-02-13T16:05:55.579460778Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 16:05:55.583541 containerd[2126]: time="2025-02-13T16:05:55.579496046Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 16:05:55.583541 containerd[2126]: time="2025-02-13T16:05:55.579527102Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 16:05:55.583541 containerd[2126]: time="2025-02-13T16:05:55.579558962Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 16:05:55.583541 containerd[2126]: time="2025-02-13T16:05:55.579589514Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 16:05:55.583541 containerd[2126]: time="2025-02-13T16:05:55.581677658Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 16:05:55.583541 containerd[2126]: time="2025-02-13T16:05:55.581752142Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 16:05:55.583541 containerd[2126]: time="2025-02-13T16:05:55.581787194Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 16:05:55.583541 containerd[2126]: time="2025-02-13T16:05:55.581819726Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 16:05:55.583541 containerd[2126]: time="2025-02-13T16:05:55.581850770Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 16:05:55.583541 containerd[2126]: time="2025-02-13T16:05:55.581891270Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 16:05:55.583541 containerd[2126]: time="2025-02-13T16:05:55.581935754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.583541 containerd[2126]: time="2025-02-13T16:05:55.581970902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.584253 containerd[2126]: time="2025-02-13T16:05:55.582000698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.584253 containerd[2126]: time="2025-02-13T16:05:55.582031778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.584253 containerd[2126]: time="2025-02-13T16:05:55.582069602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.584253 containerd[2126]: time="2025-02-13T16:05:55.582101810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.584253 containerd[2126]: time="2025-02-13T16:05:55.582131414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.584253 containerd[2126]: time="2025-02-13T16:05:55.582161546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.584253 containerd[2126]: time="2025-02-13T16:05:55.582194198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.584253 containerd[2126]: time="2025-02-13T16:05:55.582231158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.584253 containerd[2126]: time="2025-02-13T16:05:55.582261230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.584253 containerd[2126]: time="2025-02-13T16:05:55.582290018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.584253 containerd[2126]: time="2025-02-13T16:05:55.582318686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.584253 containerd[2126]: time="2025-02-13T16:05:55.582353426Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 16:05:55.584253 containerd[2126]: time="2025-02-13T16:05:55.582400010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.584253 containerd[2126]: time="2025-02-13T16:05:55.582435806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.584253 containerd[2126]: time="2025-02-13T16:05:55.582484610Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 16:05:55.600648 containerd[2126]: time="2025-02-13T16:05:55.598709594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 16:05:55.600648 containerd[2126]: time="2025-02-13T16:05:55.598786574Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 16:05:55.600648 containerd[2126]: time="2025-02-13T16:05:55.598824050Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 16:05:55.600648 containerd[2126]: time="2025-02-13T16:05:55.598858082Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 16:05:55.600648 containerd[2126]: time="2025-02-13T16:05:55.598889462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.600648 containerd[2126]: time="2025-02-13T16:05:55.598922042Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 16:05:55.600648 containerd[2126]: time="2025-02-13T16:05:55.598965566Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 16:05:55.600648 containerd[2126]: time="2025-02-13T16:05:55.598997366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 16:05:55.601165 containerd[2126]: time="2025-02-13T16:05:55.599523974Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 16:05:55.601165 containerd[2126]: time="2025-02-13T16:05:55.599686334Z" level=info msg="Connect containerd service"
Feb 13 16:05:55.601165 containerd[2126]: time="2025-02-13T16:05:55.599761130Z" level=info msg="using legacy CRI server"
Feb 13 16:05:55.601165 containerd[2126]: time="2025-02-13T16:05:55.599780066Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 16:05:55.601165 containerd[2126]: time="2025-02-13T16:05:55.599931482Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 16:05:55.614588 containerd[2126]: time="2025-02-13T16:05:55.612485822Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 16:05:55.614588 containerd[2126]: time="2025-02-13T16:05:55.612817034Z" level=info msg="Start subscribing containerd event"
Feb 13 16:05:55.614588 containerd[2126]: time="2025-02-13T16:05:55.612999422Z" level=info msg="Start recovering state"
Feb 13 16:05:55.614588 containerd[2126]: time="2025-02-13T16:05:55.613267070Z" level=info msg="Start event monitor"
Feb 13 16:05:55.614588 containerd[2126]: time="2025-02-13T16:05:55.613319690Z" level=info msg="Start snapshots syncer"
Feb 13 16:05:55.614588 containerd[2126]: time="2025-02-13T16:05:55.613361030Z" level=info msg="Start cni network conf syncer for default"
Feb 13 16:05:55.614588 containerd[2126]: time="2025-02-13T16:05:55.613383182Z" level=info msg="Start streaming server"
Feb 13 16:05:55.621024 containerd[2126]: time="2025-02-13T16:05:55.617878394Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 16:05:55.621024 containerd[2126]: time="2025-02-13T16:05:55.618051350Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 16:05:55.621024 containerd[2126]: time="2025-02-13T16:05:55.618162110Z" level=info msg="containerd successfully booted in 0.449847s"
Feb 13 16:05:55.618343 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 16:05:55.638957 amazon-ssm-agent[2166]: 2025-02-13 16:05:55 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 16:05:55.635909 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 16:05:55.639227 update-ssh-keys[2294]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 16:05:55.652407 dbus-daemon[2069]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 16:05:55.653229 systemd[1]: Finished sshkeys.service.
Feb 13 16:05:55.659651 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 16:05:55.674564 dbus-daemon[2069]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2170 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 16:05:55.691159 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 16:05:55.734696 amazon-ssm-agent[2166]: 2025-02-13 16:05:55 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 16:05:55.747129 polkitd[2302]: Started polkitd version 121
Feb 13 16:05:55.763790 polkitd[2302]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 16:05:55.764064 polkitd[2302]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 16:05:55.770139 polkitd[2302]: Finished loading, compiling and executing 2 rules
Feb 13 16:05:55.772147 dbus-daemon[2069]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 16:05:55.772532 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 16:05:55.774971 polkitd[2302]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 16:05:55.836769 amazon-ssm-agent[2166]: 2025-02-13 16:05:55 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 16:05:55.845455 systemd-hostnamed[2170]: Hostname set to <ip-172-31-25-253> (transient)
Feb 13 16:05:55.847272 systemd-resolved[2020]: System hostname changed to 'ip-172-31-25-253'.
Feb 13 16:05:55.940987 amazon-ssm-agent[2166]: 2025-02-13 16:05:55 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Feb 13 16:05:56.042847 amazon-ssm-agent[2166]: 2025-02-13 16:05:55 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 16:05:56.139716 amazon-ssm-agent[2166]: 2025-02-13 16:05:55 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 16:05:56.240010 amazon-ssm-agent[2166]: 2025-02-13 16:05:55 INFO [Registrar] Starting registrar module
Feb 13 16:05:56.341008 amazon-ssm-agent[2166]: 2025-02-13 16:05:55 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 16:05:56.824780 tar[2117]: linux-arm64/LICENSE
Feb 13 16:05:56.826256 tar[2117]: linux-arm64/README.md
Feb 13 16:05:56.877595 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 16:05:56.943965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 16:05:56.961470 (kubelet)[2344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 16:05:57.095739 amazon-ssm-agent[2166]: 2025-02-13 16:05:57 INFO [EC2Identity] EC2 registration was successful.
Feb 13 16:05:57.144511 amazon-ssm-agent[2166]: 2025-02-13 16:05:57 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 16:05:57.144511 amazon-ssm-agent[2166]: 2025-02-13 16:05:57 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 16:05:57.144511 amazon-ssm-agent[2166]: 2025-02-13 16:05:57 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 16:05:57.196482 amazon-ssm-agent[2166]: 2025-02-13 16:05:57 INFO [CredentialRefresher] Next credential rotation will be in 30.283301629733334 minutes
Feb 13 16:05:58.071898 kubelet[2344]: E0213 16:05:58.071782    2344 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 16:05:58.080947 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 16:05:58.081358 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 16:05:58.186721 amazon-ssm-agent[2166]: 2025-02-13 16:05:58 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 16:05:58.288650 amazon-ssm-agent[2166]: 2025-02-13 16:05:58 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2355) started
Feb 13 16:05:58.388464 amazon-ssm-agent[2166]: 2025-02-13 16:05:58 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 16:05:58.481257 sshd_keygen[2125]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 16:05:58.537031 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 16:05:58.551443 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 16:05:58.588701 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 16:05:58.590040 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 16:05:58.604148 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 16:05:58.637275 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 16:05:58.652189 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 16:05:58.663649 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 16:05:58.667355 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 16:05:58.670208 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 16:05:58.673464 systemd[1]: Startup finished in 10.923s (kernel) + 12.275s (userspace) = 23.198s.
Feb 13 16:06:01.520079 systemd-resolved[2020]: Clock change detected. Flushing caches.
Feb 13 16:06:01.864384 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 16:06:01.870899 systemd[1]: Started sshd@0-172.31.25.253:22-139.178.68.195:59232.service - OpenSSH per-connection server daemon (139.178.68.195:59232).
Feb 13 16:06:02.135787 sshd[2388]: Accepted publickey for core from 139.178.68.195 port 59232 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:06:02.141770 sshd[2388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:06:02.171057 systemd-logind[2100]: New session 1 of user core.
Feb 13 16:06:02.174457 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 16:06:02.187846 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 16:06:02.217075 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 16:06:02.228103 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 16:06:02.252440 (systemd)[2394]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 16:06:02.479593 systemd[2394]: Queued start job for default target default.target.
Feb 13 16:06:02.480441 systemd[2394]: Created slice app.slice - User Application Slice.
Feb 13 16:06:02.480508 systemd[2394]: Reached target paths.target - Paths.
Feb 13 16:06:02.480542 systemd[2394]: Reached target timers.target - Timers.
Feb 13 16:06:02.491577 systemd[2394]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 16:06:02.511342 systemd[2394]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 16:06:02.511824 systemd[2394]: Reached target sockets.target - Sockets.
Feb 13 16:06:02.511872 systemd[2394]: Reached target basic.target - Basic System.
Feb 13 16:06:02.511977 systemd[2394]: Reached target default.target - Main User Target.
Feb 13 16:06:02.512049 systemd[2394]: Startup finished in 247ms.
Feb 13 16:06:02.512815 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 16:06:02.525732 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 16:06:02.682027 systemd[1]: Started sshd@1-172.31.25.253:22-139.178.68.195:59242.service - OpenSSH per-connection server daemon (139.178.68.195:59242).
Feb 13 16:06:02.859926 sshd[2406]: Accepted publickey for core from 139.178.68.195 port 59242 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:06:02.863646 sshd[2406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:06:02.874579 systemd-logind[2100]: New session 2 of user core.
Feb 13 16:06:02.887269 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 16:06:03.024793 sshd[2406]: pam_unix(sshd:session): session closed for user core
Feb 13 16:06:03.031187 systemd[1]: sshd@1-172.31.25.253:22-139.178.68.195:59242.service: Deactivated successfully.
Feb 13 16:06:03.042659 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 16:06:03.042859 systemd-logind[2100]: Session 2 logged out. Waiting for processes to exit.
Feb 13 16:06:03.055898 systemd[1]: Started sshd@2-172.31.25.253:22-139.178.68.195:59250.service - OpenSSH per-connection server daemon (139.178.68.195:59250).
Feb 13 16:06:03.058183 systemd-logind[2100]: Removed session 2.
Feb 13 16:06:03.246504 sshd[2414]: Accepted publickey for core from 139.178.68.195 port 59250 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:06:03.249348 sshd[2414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:06:03.260749 systemd-logind[2100]: New session 3 of user core.
Feb 13 16:06:03.273937 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 16:06:03.399820 sshd[2414]: pam_unix(sshd:session): session closed for user core
Feb 13 16:06:03.409098 systemd-logind[2100]: Session 3 logged out. Waiting for processes to exit.
Feb 13 16:06:03.410520 systemd[1]: sshd@2-172.31.25.253:22-139.178.68.195:59250.service: Deactivated successfully.
Feb 13 16:06:03.418014 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 16:06:03.419964 systemd-logind[2100]: Removed session 3.
Feb 13 16:06:03.436436 systemd[1]: Started sshd@3-172.31.25.253:22-139.178.68.195:59258.service - OpenSSH per-connection server daemon (139.178.68.195:59258).
Feb 13 16:06:03.622057 sshd[2422]: Accepted publickey for core from 139.178.68.195 port 59258 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:06:03.625210 sshd[2422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:06:03.635583 systemd-logind[2100]: New session 4 of user core.
Feb 13 16:06:03.643008 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 16:06:03.779797 sshd[2422]: pam_unix(sshd:session): session closed for user core
Feb 13 16:06:03.785581 systemd[1]: sshd@3-172.31.25.253:22-139.178.68.195:59258.service: Deactivated successfully.
Feb 13 16:06:03.793613 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 16:06:03.795744 systemd-logind[2100]: Session 4 logged out. Waiting for processes to exit.
Feb 13 16:06:03.797881 systemd-logind[2100]: Removed session 4.
Feb 13 16:06:03.812913 systemd[1]: Started sshd@4-172.31.25.253:22-139.178.68.195:59268.service - OpenSSH per-connection server daemon (139.178.68.195:59268).
Feb 13 16:06:03.990246 sshd[2430]: Accepted publickey for core from 139.178.68.195 port 59268 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:06:03.993124 sshd[2430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:06:04.004739 systemd-logind[2100]: New session 5 of user core.
Feb 13 16:06:04.014106 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 16:06:04.159740 sudo[2434]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 16:06:04.161279 sudo[2434]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 16:06:04.802968 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 16:06:04.815465 (dockerd)[2450]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 16:06:05.347813 dockerd[2450]: time="2025-02-13T16:06:05.347709783Z" level=info msg="Starting up"
Feb 13 16:06:05.528954 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1360609428-merged.mount: Deactivated successfully.
Feb 13 16:06:05.743511 dockerd[2450]: time="2025-02-13T16:06:05.743326024Z" level=info msg="Loading containers: start."
Feb 13 16:06:05.941397 kernel: Initializing XFRM netlink socket
Feb 13 16:06:06.028580 (udev-worker)[2472]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 16:06:06.134869 systemd-networkd[1684]: docker0: Link UP
Feb 13 16:06:06.160734 dockerd[2450]: time="2025-02-13T16:06:06.160663779Z" level=info msg="Loading containers: done."
Feb 13 16:06:06.189415 dockerd[2450]: time="2025-02-13T16:06:06.189071547Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 16:06:06.189415 dockerd[2450]: time="2025-02-13T16:06:06.189252219Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Feb 13 16:06:06.190281 dockerd[2450]: time="2025-02-13T16:06:06.189834183Z" level=info msg="Daemon has completed initialization"
Feb 13 16:06:06.253248 dockerd[2450]: time="2025-02-13T16:06:06.252978387Z" level=info msg="API listen on /run/docker.sock"
Feb 13 16:06:06.253554 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 16:06:06.522139 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3940648535-merged.mount: Deactivated successfully.
Feb 13 16:06:07.473821 containerd[2126]: time="2025-02-13T16:06:07.473420273Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\""
Feb 13 16:06:08.126028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2877936055.mount: Deactivated successfully.
Feb 13 16:06:08.771784 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 16:06:08.778761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 16:06:09.201350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 16:06:09.212849 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 16:06:09.328847 kubelet[2659]: E0213 16:06:09.328714    2659 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 16:06:09.344765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 16:06:09.346266 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 16:06:10.238254 containerd[2126]: time="2025-02-13T16:06:10.238036711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:10.241436 containerd[2126]: time="2025-02-13T16:06:10.241275727Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205861"
Feb 13 16:06:10.242346 containerd[2126]: time="2025-02-13T16:06:10.241828003Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:10.252969 containerd[2126]: time="2025-02-13T16:06:10.252780775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:10.262455 containerd[2126]: time="2025-02-13T16:06:10.262325239Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 2.788815794s"
Feb 13 16:06:10.263289 containerd[2126]: time="2025-02-13T16:06:10.262708159Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\""
Feb 13 16:06:10.310734 containerd[2126]: time="2025-02-13T16:06:10.310408591Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\""
Feb 13 16:06:12.191669 containerd[2126]: time="2025-02-13T16:06:12.191597085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:12.193971 containerd[2126]: time="2025-02-13T16:06:12.193450893Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383091"
Feb 13 16:06:12.195915 containerd[2126]: time="2025-02-13T16:06:12.195303177Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:12.202393 containerd[2126]: time="2025-02-13T16:06:12.202263513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:12.205174 containerd[2126]: time="2025-02-13T16:06:12.205104789Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 1.894627654s"
Feb 13 16:06:12.206614 containerd[2126]: time="2025-02-13T16:06:12.205431717Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\""
Feb 13 16:06:12.262223 containerd[2126]: time="2025-02-13T16:06:12.262154313Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\""
Feb 13 16:06:13.393073 containerd[2126]: time="2025-02-13T16:06:13.392644810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:13.395885 containerd[2126]: time="2025-02-13T16:06:13.395713198Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15766980"
Feb 13 16:06:13.396841 containerd[2126]: time="2025-02-13T16:06:13.396667246Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:13.407677 containerd[2126]: time="2025-02-13T16:06:13.407470139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:13.412847 containerd[2126]: time="2025-02-13T16:06:13.411632975Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 1.14939507s"
Feb 13 16:06:13.412847 containerd[2126]: time="2025-02-13T16:06:13.411779027Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\""
Feb 13 16:06:13.465863 containerd[2126]: time="2025-02-13T16:06:13.465767459Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\""
Feb 13 16:06:14.923210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2112344964.mount: Deactivated successfully.
Feb 13 16:06:15.497444 containerd[2126]: time="2025-02-13T16:06:15.496129093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:15.499726 containerd[2126]: time="2025-02-13T16:06:15.499631377Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273375"
Feb 13 16:06:15.502719 containerd[2126]: time="2025-02-13T16:06:15.502569613Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:15.507085 containerd[2126]: time="2025-02-13T16:06:15.506910529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:15.514258 containerd[2126]: time="2025-02-13T16:06:15.513585385Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 2.047697938s"
Feb 13 16:06:15.514258 containerd[2126]: time="2025-02-13T16:06:15.513758521Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\""
Feb 13 16:06:15.564270 containerd[2126]: time="2025-02-13T16:06:15.564209473Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 16:06:16.159844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2996836206.mount: Deactivated successfully.
Feb 13 16:06:17.906455 containerd[2126]: time="2025-02-13T16:06:17.906331157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:17.951440 containerd[2126]: time="2025-02-13T16:06:17.951335321Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Feb 13 16:06:17.993409 containerd[2126]: time="2025-02-13T16:06:17.993305537Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:18.036923 containerd[2126]: time="2025-02-13T16:06:18.036842270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:18.045441 containerd[2126]: time="2025-02-13T16:06:18.043941770Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.479660921s"
Feb 13 16:06:18.045441 containerd[2126]: time="2025-02-13T16:06:18.044055686Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 16:06:18.090703 containerd[2126]: time="2025-02-13T16:06:18.090632822Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 16:06:18.859160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2376069691.mount: Deactivated successfully.
Feb 13 16:06:18.873712 containerd[2126]: time="2025-02-13T16:06:18.872896338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:18.875470 containerd[2126]: time="2025-02-13T16:06:18.875340066Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Feb 13 16:06:18.878783 containerd[2126]: time="2025-02-13T16:06:18.878633778Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:18.887123 containerd[2126]: time="2025-02-13T16:06:18.886824606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:18.890442 containerd[2126]: time="2025-02-13T16:06:18.889079562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 798.350104ms"
Feb 13 16:06:18.890442 containerd[2126]: time="2025-02-13T16:06:18.889241298Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 13 16:06:18.931899 containerd[2126]: time="2025-02-13T16:06:18.931790538Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Feb 13 16:06:19.495057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 16:06:19.506001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 16:06:19.565022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1468701586.mount: Deactivated successfully.
Feb 13 16:06:19.962665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 16:06:19.982635 (kubelet)[2784]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 16:06:20.203674 kubelet[2784]: E0213 16:06:20.203220    2784 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 16:06:20.214953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 16:06:20.215574 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 16:06:22.053900 containerd[2126]: time="2025-02-13T16:06:22.053833337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:22.057223 containerd[2126]: time="2025-02-13T16:06:22.057123966Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786"
Feb 13 16:06:22.059994 containerd[2126]: time="2025-02-13T16:06:22.059911470Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:22.069058 containerd[2126]: time="2025-02-13T16:06:22.068960526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:06:22.071798 containerd[2126]: time="2025-02-13T16:06:22.071728182Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.139862752s"
Feb 13 16:06:22.072117 containerd[2126]: time="2025-02-13T16:06:22.071962782Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Feb 13 16:06:26.361218 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 16:06:30.271997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 13 16:06:30.282953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 16:06:30.647736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 16:06:30.665145 (kubelet)[2900]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 16:06:30.765351 kubelet[2900]: E0213 16:06:30.765274    2900 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 16:06:30.771687 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 16:06:30.772079 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 16:06:33.344738 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 16:06:33.358885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 16:06:33.422485 systemd[1]: Reloading requested from client PID 2918 ('systemctl') (unit session-5.scope)...
Feb 13 16:06:33.422785 systemd[1]: Reloading...
Feb 13 16:06:33.688455 zram_generator::config[2961]: No configuration found.
Feb 13 16:06:34.008853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 16:06:34.179015 systemd[1]: Reloading finished in 755 ms.
Feb 13 16:06:34.263544 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 16:06:34.264089 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 16:06:34.264952 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 16:06:34.280276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 16:06:34.585127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 16:06:34.618164 (kubelet)[3030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 16:06:34.715933 kubelet[3030]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 16:06:34.715933 kubelet[3030]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 16:06:34.718646 kubelet[3030]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 16:06:34.718646 kubelet[3030]: I0213 16:06:34.716595    3030 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 16:06:35.798112 kubelet[3030]: I0213 16:06:35.798027    3030 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 16:06:35.799056 kubelet[3030]: I0213 16:06:35.799021    3030 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 16:06:35.799711 kubelet[3030]: I0213 16:06:35.799658    3030 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 16:06:35.841013 kubelet[3030]: E0213 16:06:35.840952    3030 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.25.253:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:35.842299 kubelet[3030]: I0213 16:06:35.842184    3030 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 16:06:35.866432 kubelet[3030]: I0213 16:06:35.865711    3030 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 16:06:35.869138 kubelet[3030]: I0213 16:06:35.869044    3030 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 16:06:35.869601 kubelet[3030]: I0213 16:06:35.869517    3030 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 16:06:35.869601 kubelet[3030]: I0213 16:06:35.869597    3030 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 16:06:35.869914 kubelet[3030]: I0213 16:06:35.869622    3030 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 16:06:35.872175 kubelet[3030]: I0213 16:06:35.872089    3030 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 16:06:35.877270 kubelet[3030]: I0213 16:06:35.877094    3030 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 16:06:35.877270 kubelet[3030]: I0213 16:06:35.877156    3030 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 16:06:35.877270 kubelet[3030]: I0213 16:06:35.877202    3030 kubelet.go:312] "Adding apiserver pod source"
Feb 13 16:06:35.877270 kubelet[3030]: I0213 16:06:35.877228    3030 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 16:06:35.881759 kubelet[3030]: W0213 16:06:35.880950    3030 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.25.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-253&limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:35.881759 kubelet[3030]: E0213 16:06:35.881043    3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.25.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-253&limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:35.882695 kubelet[3030]: W0213 16:06:35.882637    3030 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.25.253:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:35.882910 kubelet[3030]: E0213 16:06:35.882881    3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.25.253:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:35.883225 kubelet[3030]: I0213 16:06:35.883191    3030 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 16:06:35.883949 kubelet[3030]: I0213 16:06:35.883905    3030 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 16:06:35.886434 kubelet[3030]: W0213 16:06:35.885301    3030 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 16:06:35.886826 kubelet[3030]: I0213 16:06:35.886789    3030 server.go:1256] "Started kubelet"
Feb 13 16:06:35.890525 kubelet[3030]: I0213 16:06:35.890475    3030 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 16:06:35.892174 kubelet[3030]: I0213 16:06:35.892076    3030 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 16:06:35.893573 kubelet[3030]: I0213 16:06:35.893527    3030 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 16:06:35.894127 kubelet[3030]: I0213 16:06:35.894085    3030 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 16:06:35.898223 kubelet[3030]: I0213 16:06:35.898120    3030 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 16:06:35.900167 kubelet[3030]: E0213 16:06:35.899637    3030 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.25.253:6443/api/v1/namespaces/default/events\": dial tcp 172.31.25.253:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-25-253.1823d03260d65676  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-253,UID:ip-172-31-25-253,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-253,},FirstTimestamp:2025-02-13 16:06:35.886745206 +0000 UTC m=+1.260091207,LastTimestamp:2025-02-13 16:06:35.886745206 +0000 UTC m=+1.260091207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-253,}"
Feb 13 16:06:35.908458 kubelet[3030]: E0213 16:06:35.907074    3030 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 16:06:35.908458 kubelet[3030]: E0213 16:06:35.907192    3030 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-25-253\" not found"
Feb 13 16:06:35.908458 kubelet[3030]: I0213 16:06:35.907260    3030 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 16:06:35.908458 kubelet[3030]: I0213 16:06:35.907521    3030 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 16:06:35.908458 kubelet[3030]: I0213 16:06:35.907641    3030 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 16:06:35.908458 kubelet[3030]: W0213 16:06:35.908245    3030 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.25.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:35.908458 kubelet[3030]: E0213 16:06:35.908344    3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.25.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:35.909938 kubelet[3030]: E0213 16:06:35.909897    3030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-253?timeout=10s\": dial tcp 172.31.25.253:6443: connect: connection refused" interval="200ms"
Feb 13 16:06:35.917299 kubelet[3030]: I0213 16:06:35.917242    3030 factory.go:221] Registration of the containerd container factory successfully
Feb 13 16:06:35.918091 kubelet[3030]: I0213 16:06:35.917896    3030 factory.go:221] Registration of the systemd container factory successfully
Feb 13 16:06:35.918296 kubelet[3030]: I0213 16:06:35.918243    3030 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 16:06:35.958667 kubelet[3030]: I0213 16:06:35.958620    3030 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 16:06:35.972826 kubelet[3030]: I0213 16:06:35.972773    3030 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 16:06:35.973031 kubelet[3030]: I0213 16:06:35.973005    3030 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 16:06:35.973158 kubelet[3030]: I0213 16:06:35.973136    3030 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 16:06:35.973711 kubelet[3030]: E0213 16:06:35.973599    3030 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 16:06:35.979434 kubelet[3030]: W0213 16:06:35.979251    3030 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.25.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:35.979434 kubelet[3030]: E0213 16:06:35.979398    3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.25.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:36.014040 kubelet[3030]: I0213 16:06:36.013994    3030 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-253"
Feb 13 16:06:36.015141 kubelet[3030]: I0213 16:06:36.015100    3030 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 16:06:36.015818 kubelet[3030]: I0213 16:06:36.015776    3030 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 16:06:36.016030 kubelet[3030]: E0213 16:06:36.015646    3030 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.25.253:6443/api/v1/nodes\": dial tcp 172.31.25.253:6443: connect: connection refused" node="ip-172-31-25-253"
Feb 13 16:06:36.016160 kubelet[3030]: I0213 16:06:36.016136    3030 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 16:06:36.019446 kubelet[3030]: I0213 16:06:36.019189    3030 policy_none.go:49] "None policy: Start"
Feb 13 16:06:36.021334 kubelet[3030]: I0213 16:06:36.020716    3030 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 16:06:36.021334 kubelet[3030]: I0213 16:06:36.020797    3030 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 16:06:36.032078 kubelet[3030]: I0213 16:06:36.032028    3030 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 16:06:36.032798 kubelet[3030]: I0213 16:06:36.032762    3030 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 16:06:36.040576 kubelet[3030]: E0213 16:06:36.040540    3030 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-253\" not found"
Feb 13 16:06:36.075335 kubelet[3030]: I0213 16:06:36.075076    3030 topology_manager.go:215] "Topology Admit Handler" podUID="bb636e1ad9b66465858680f8041a5e00" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-25-253"
Feb 13 16:06:36.081739 kubelet[3030]: I0213 16:06:36.081434    3030 topology_manager.go:215] "Topology Admit Handler" podUID="7d2b5db342d29102f96dbf4a03339947" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-25-253"
Feb 13 16:06:36.088450 kubelet[3030]: I0213 16:06:36.086681    3030 topology_manager.go:215] "Topology Admit Handler" podUID="a1fee8877fb4534fcc854345da75b4d8" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-25-253"
Feb 13 16:06:36.108423 kubelet[3030]: I0213 16:06:36.108346    3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1fee8877fb4534fcc854345da75b4d8-ca-certs\") pod \"kube-apiserver-ip-172-31-25-253\" (UID: \"a1fee8877fb4534fcc854345da75b4d8\") " pod="kube-system/kube-apiserver-ip-172-31-25-253"
Feb 13 16:06:36.108555 kubelet[3030]: I0213 16:06:36.108496    3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1fee8877fb4534fcc854345da75b4d8-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-253\" (UID: \"a1fee8877fb4534fcc854345da75b4d8\") " pod="kube-system/kube-apiserver-ip-172-31-25-253"
Feb 13 16:06:36.108640 kubelet[3030]: I0213 16:06:36.108553    3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1fee8877fb4534fcc854345da75b4d8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-253\" (UID: \"a1fee8877fb4534fcc854345da75b4d8\") " pod="kube-system/kube-apiserver-ip-172-31-25-253"
Feb 13 16:06:36.108640 kubelet[3030]: I0213 16:06:36.108600    3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb636e1ad9b66465858680f8041a5e00-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-253\" (UID: \"bb636e1ad9b66465858680f8041a5e00\") " pod="kube-system/kube-controller-manager-ip-172-31-25-253"
Feb 13 16:06:36.108772 kubelet[3030]: I0213 16:06:36.108650    3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb636e1ad9b66465858680f8041a5e00-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-253\" (UID: \"bb636e1ad9b66465858680f8041a5e00\") " pod="kube-system/kube-controller-manager-ip-172-31-25-253"
Feb 13 16:06:36.108772 kubelet[3030]: I0213 16:06:36.108697    3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d2b5db342d29102f96dbf4a03339947-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-253\" (UID: \"7d2b5db342d29102f96dbf4a03339947\") " pod="kube-system/kube-scheduler-ip-172-31-25-253"
Feb 13 16:06:36.108772 kubelet[3030]: I0213 16:06:36.108741    3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bb636e1ad9b66465858680f8041a5e00-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-253\" (UID: \"bb636e1ad9b66465858680f8041a5e00\") " pod="kube-system/kube-controller-manager-ip-172-31-25-253"
Feb 13 16:06:36.108925 kubelet[3030]: I0213 16:06:36.108799    3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bb636e1ad9b66465858680f8041a5e00-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-253\" (UID: \"bb636e1ad9b66465858680f8041a5e00\") " pod="kube-system/kube-controller-manager-ip-172-31-25-253"
Feb 13 16:06:36.108925 kubelet[3030]: I0213 16:06:36.108851    3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb636e1ad9b66465858680f8041a5e00-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-253\" (UID: \"bb636e1ad9b66465858680f8041a5e00\") " pod="kube-system/kube-controller-manager-ip-172-31-25-253"
Feb 13 16:06:36.111049 kubelet[3030]: E0213 16:06:36.111006    3030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-253?timeout=10s\": dial tcp 172.31.25.253:6443: connect: connection refused" interval="400ms"
Feb 13 16:06:36.218530 kubelet[3030]: I0213 16:06:36.218479    3030 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-253"
Feb 13 16:06:36.219832 kubelet[3030]: E0213 16:06:36.219781    3030 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.25.253:6443/api/v1/nodes\": dial tcp 172.31.25.253:6443: connect: connection refused" node="ip-172-31-25-253"
Feb 13 16:06:36.396501 containerd[2126]: time="2025-02-13T16:06:36.396277041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-253,Uid:bb636e1ad9b66465858680f8041a5e00,Namespace:kube-system,Attempt:0,}"
Feb 13 16:06:36.403820 containerd[2126]: time="2025-02-13T16:06:36.403629189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-253,Uid:a1fee8877fb4534fcc854345da75b4d8,Namespace:kube-system,Attempt:0,}"
Feb 13 16:06:36.412008 containerd[2126]: time="2025-02-13T16:06:36.411921429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-253,Uid:7d2b5db342d29102f96dbf4a03339947,Namespace:kube-system,Attempt:0,}"
Feb 13 16:06:36.512100 kubelet[3030]: E0213 16:06:36.512062    3030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-253?timeout=10s\": dial tcp 172.31.25.253:6443: connect: connection refused" interval="800ms"
Feb 13 16:06:36.625270 kubelet[3030]: I0213 16:06:36.624588    3030 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-253"
Feb 13 16:06:36.625270 kubelet[3030]: E0213 16:06:36.625119    3030 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.25.253:6443/api/v1/nodes\": dial tcp 172.31.25.253:6443: connect: connection refused" node="ip-172-31-25-253"
Feb 13 16:06:36.836616 kubelet[3030]: W0213 16:06:36.836533    3030 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.25.253:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:36.837255 kubelet[3030]: E0213 16:06:36.836631    3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.25.253:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:36.931873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3789097495.mount: Deactivated successfully.
Feb 13 16:06:36.949063 containerd[2126]: time="2025-02-13T16:06:36.948955595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 16:06:36.951625 containerd[2126]: time="2025-02-13T16:06:36.951546671Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 16:06:36.953762 containerd[2126]: time="2025-02-13T16:06:36.953645628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Feb 13 16:06:36.956245 containerd[2126]: time="2025-02-13T16:06:36.956087328Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 16:06:36.958430 containerd[2126]: time="2025-02-13T16:06:36.958200180Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 16:06:36.961251 containerd[2126]: time="2025-02-13T16:06:36.961046796Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 16:06:36.962657 containerd[2126]: time="2025-02-13T16:06:36.962528736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 16:06:36.967226 containerd[2126]: time="2025-02-13T16:06:36.967167516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 16:06:36.971955 containerd[2126]: time="2025-02-13T16:06:36.971549976Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.796251ms"
Feb 13 16:06:36.976895 containerd[2126]: time="2025-02-13T16:06:36.976476816Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.038755ms"
Feb 13 16:06:37.009417 containerd[2126]: time="2025-02-13T16:06:37.007931840Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 595.873491ms"
Feb 13 16:06:37.225668 containerd[2126]: time="2025-02-13T16:06:37.224798253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:06:37.225668 containerd[2126]: time="2025-02-13T16:06:37.224897961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:06:37.225668 containerd[2126]: time="2025-02-13T16:06:37.224924349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:06:37.228269 containerd[2126]: time="2025-02-13T16:06:37.227689701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:06:37.228269 containerd[2126]: time="2025-02-13T16:06:37.227996505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:06:37.228269 containerd[2126]: time="2025-02-13T16:06:37.228028617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:06:37.228713 containerd[2126]: time="2025-02-13T16:06:37.227703885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:06:37.228713 containerd[2126]: time="2025-02-13T16:06:37.227864169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:06:37.228713 containerd[2126]: time="2025-02-13T16:06:37.228022437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:06:37.230202 containerd[2126]: time="2025-02-13T16:06:37.230030085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:06:37.230653 containerd[2126]: time="2025-02-13T16:06:37.230062509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:06:37.230844 containerd[2126]: time="2025-02-13T16:06:37.230433045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:06:37.261854 kubelet[3030]: W0213 16:06:37.261681    3030 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.25.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:37.261854 kubelet[3030]: E0213 16:06:37.261785    3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.25.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:37.314154 kubelet[3030]: E0213 16:06:37.313901    3030 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-253?timeout=10s\": dial tcp 172.31.25.253:6443: connect: connection refused" interval="1.6s"
Feb 13 16:06:37.336685 kubelet[3030]: W0213 16:06:37.336116    3030 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.25.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:37.336685 kubelet[3030]: E0213 16:06:37.336190    3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.25.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:37.402264 kubelet[3030]: W0213 16:06:37.401819    3030 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.25.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-253&limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:37.402264 kubelet[3030]: E0213 16:06:37.401927    3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.25.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-253&limit=500&resourceVersion=0": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:37.406724 containerd[2126]: time="2025-02-13T16:06:37.406480870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-253,Uid:7d2b5db342d29102f96dbf4a03339947,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b7dc8259cff81e81ff1c2951571c22c15a65ddc6f1dd04bbb9d0a5edf188d75\""
Feb 13 16:06:37.425259 containerd[2126]: time="2025-02-13T16:06:37.424981714Z" level=info msg="CreateContainer within sandbox \"9b7dc8259cff81e81ff1c2951571c22c15a65ddc6f1dd04bbb9d0a5edf188d75\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 16:06:37.434778 kubelet[3030]: I0213 16:06:37.434683    3030 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-253"
Feb 13 16:06:37.437176 kubelet[3030]: E0213 16:06:37.437122    3030 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.25.253:6443/api/v1/nodes\": dial tcp 172.31.25.253:6443: connect: connection refused" node="ip-172-31-25-253"
Feb 13 16:06:37.442747 containerd[2126]: time="2025-02-13T16:06:37.442636498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-253,Uid:a1fee8877fb4534fcc854345da75b4d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"02d69207a7450dfd542587e49604f937d93e5e0b671ba6d72b70e8bd83e7da5a\""
Feb 13 16:06:37.453417 containerd[2126]: time="2025-02-13T16:06:37.453323530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-253,Uid:bb636e1ad9b66465858680f8041a5e00,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c52abd2f9891874e7b856e89145212298bf0ba8162c9fe9403ae57e2b383d18\""
Feb 13 16:06:37.456713 containerd[2126]: time="2025-02-13T16:06:37.456514186Z" level=info msg="CreateContainer within sandbox \"02d69207a7450dfd542587e49604f937d93e5e0b671ba6d72b70e8bd83e7da5a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 16:06:37.465080 containerd[2126]: time="2025-02-13T16:06:37.465013462Z" level=info msg="CreateContainer within sandbox \"8c52abd2f9891874e7b856e89145212298bf0ba8162c9fe9403ae57e2b383d18\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 16:06:37.490833 containerd[2126]: time="2025-02-13T16:06:37.490464262Z" level=info msg="CreateContainer within sandbox \"9b7dc8259cff81e81ff1c2951571c22c15a65ddc6f1dd04bbb9d0a5edf188d75\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b0bb23f7381717e645224b8623e8a7167b97f80aa1676bfec53b549b99c5c023\""
Feb 13 16:06:37.493547 containerd[2126]: time="2025-02-13T16:06:37.492800494Z" level=info msg="StartContainer for \"b0bb23f7381717e645224b8623e8a7167b97f80aa1676bfec53b549b99c5c023\""
Feb 13 16:06:37.512250 containerd[2126]: time="2025-02-13T16:06:37.512154574Z" level=info msg="CreateContainer within sandbox \"02d69207a7450dfd542587e49604f937d93e5e0b671ba6d72b70e8bd83e7da5a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eb61d65958e2b63f4606e814b86d9fc9ecd23570ea091a69a0a23cda4e279e5e\""
Feb 13 16:06:37.514265 containerd[2126]: time="2025-02-13T16:06:37.514185946Z" level=info msg="StartContainer for \"eb61d65958e2b63f4606e814b86d9fc9ecd23570ea091a69a0a23cda4e279e5e\""
Feb 13 16:06:37.532406 containerd[2126]: time="2025-02-13T16:06:37.532269490Z" level=info msg="CreateContainer within sandbox \"8c52abd2f9891874e7b856e89145212298bf0ba8162c9fe9403ae57e2b383d18\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5abb7e7bd9949ad3f422e4ed369bb59728b6476c2c7816e564439fad74757e36\""
Feb 13 16:06:37.533527 containerd[2126]: time="2025-02-13T16:06:37.533445370Z" level=info msg="StartContainer for \"5abb7e7bd9949ad3f422e4ed369bb59728b6476c2c7816e564439fad74757e36\""
Feb 13 16:06:37.712176 containerd[2126]: time="2025-02-13T16:06:37.712109699Z" level=info msg="StartContainer for \"b0bb23f7381717e645224b8623e8a7167b97f80aa1676bfec53b549b99c5c023\" returns successfully"
Feb 13 16:06:37.775429 containerd[2126]: time="2025-02-13T16:06:37.773435868Z" level=info msg="StartContainer for \"eb61d65958e2b63f4606e814b86d9fc9ecd23570ea091a69a0a23cda4e279e5e\" returns successfully"
Feb 13 16:06:37.869999 containerd[2126]: time="2025-02-13T16:06:37.869921880Z" level=info msg="StartContainer for \"5abb7e7bd9949ad3f422e4ed369bb59728b6476c2c7816e564439fad74757e36\" returns successfully"
Feb 13 16:06:37.920607 kubelet[3030]: E0213 16:06:37.920453    3030 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.25.253:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.25.253:6443: connect: connection refused
Feb 13 16:06:39.041875 kubelet[3030]: I0213 16:06:39.041841    3030 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-253"
Feb 13 16:06:39.966580 update_engine[2102]: I20250213 16:06:39.966415  2102 update_attempter.cc:509] Updating boot flags...
Feb 13 16:06:40.291681 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3315)
Feb 13 16:06:41.068455 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3306)
Feb 13 16:06:41.797631 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3306)
Feb 13 16:06:42.888410 kubelet[3030]: I0213 16:06:42.886936    3030 apiserver.go:52] "Watching apiserver"
Feb 13 16:06:43.008683 kubelet[3030]: I0213 16:06:43.008602    3030 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 16:06:43.031718 kubelet[3030]: E0213 16:06:43.031632    3030 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-253\" not found" node="ip-172-31-25-253"
Feb 13 16:06:43.046153 kubelet[3030]: I0213 16:06:43.045949    3030 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-25-253"
Feb 13 16:06:43.161554 kubelet[3030]: E0213 16:06:43.159795    3030 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-25-253.1823d03260d65676  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-253,UID:ip-172-31-25-253,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-25-253,},FirstTimestamp:2025-02-13 16:06:35.886745206 +0000 UTC m=+1.260091207,LastTimestamp:2025-02-13 16:06:35.886745206 +0000 UTC m=+1.260091207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-253,}"
Feb 13 16:06:43.258291 kubelet[3030]: E0213 16:06:43.257991    3030 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-25-253.1823d032620c082a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-25-253,UID:ip-172-31-25-253,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-25-253,},FirstTimestamp:2025-02-13 16:06:35.907041322 +0000 UTC m=+1.280387359,LastTimestamp:2025-02-13 16:06:35.907041322 +0000 UTC m=+1.280387359,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-25-253,}"
Feb 13 16:06:46.004034 kubelet[3030]: I0213 16:06:46.003826    3030 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-253" podStartSLOduration=2.003759688 podStartE2EDuration="2.003759688s" podCreationTimestamp="2025-02-13 16:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:46.003452704 +0000 UTC m=+11.376798741" watchObservedRunningTime="2025-02-13 16:06:46.003759688 +0000 UTC m=+11.377105701"
Feb 13 16:06:46.321693 systemd[1]: Reloading requested from client PID 3570 ('systemctl') (unit session-5.scope)...
Feb 13 16:06:46.321743 systemd[1]: Reloading...
Feb 13 16:06:46.486486 zram_generator::config[3607]: No configuration found.
Feb 13 16:06:46.815170 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 16:06:47.007113 systemd[1]: Reloading finished in 684 ms.
Feb 13 16:06:47.081219 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 16:06:47.097245 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 16:06:47.097901 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 16:06:47.111046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 16:06:47.442722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 16:06:47.461502 (kubelet)[3680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 16:06:47.584019 kubelet[3680]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 16:06:47.584019 kubelet[3680]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 16:06:47.584019 kubelet[3680]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 16:06:47.584928 kubelet[3680]: I0213 16:06:47.584160    3680 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 16:06:47.593937 kubelet[3680]: I0213 16:06:47.593872    3680 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 16:06:47.593937 kubelet[3680]: I0213 16:06:47.593933    3680 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 16:06:47.594359 kubelet[3680]: I0213 16:06:47.594308    3680 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 16:06:47.597945 kubelet[3680]: I0213 16:06:47.597889    3680 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 16:06:47.602946 kubelet[3680]: I0213 16:06:47.602864    3680 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 16:06:47.629737 kubelet[3680]: I0213 16:06:47.629644    3680 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 16:06:47.631849 kubelet[3680]: I0213 16:06:47.631789    3680 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 16:06:47.632165 kubelet[3680]: I0213 16:06:47.632114    3680 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 16:06:47.632368 kubelet[3680]: I0213 16:06:47.632174    3680 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 16:06:47.632368 kubelet[3680]: I0213 16:06:47.632196    3680 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 16:06:47.632368 kubelet[3680]: I0213 16:06:47.632255    3680 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 16:06:47.632774 kubelet[3680]: I0213 16:06:47.632685    3680 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 16:06:47.632774 kubelet[3680]: I0213 16:06:47.632768    3680 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 16:06:47.632926 kubelet[3680]: I0213 16:06:47.632862    3680 kubelet.go:312] "Adding apiserver pod source"
Feb 13 16:06:47.632926 kubelet[3680]: I0213 16:06:47.632914    3680 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 16:06:47.675433 kubelet[3680]: I0213 16:06:47.673020    3680 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 16:06:47.675433 kubelet[3680]: I0213 16:06:47.673552    3680 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 16:06:47.675433 kubelet[3680]: I0213 16:06:47.674798    3680 server.go:1256] "Started kubelet"
Feb 13 16:06:47.691409 kubelet[3680]: I0213 16:06:47.687830    3680 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 16:06:47.700587 kubelet[3680]: I0213 16:06:47.700355    3680 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 16:06:47.702517 kubelet[3680]: I0213 16:06:47.702482    3680 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 16:06:47.704829 kubelet[3680]: I0213 16:06:47.704748    3680 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 16:06:47.705699 kubelet[3680]: I0213 16:06:47.705649    3680 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 16:06:47.715565 kubelet[3680]: I0213 16:06:47.708584    3680 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 16:06:47.716154 kubelet[3680]: I0213 16:06:47.716121    3680 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 16:06:47.716952 kubelet[3680]: I0213 16:06:47.708900    3680 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 16:06:47.725403 kubelet[3680]: I0213 16:06:47.725323    3680 factory.go:221] Registration of the systemd container factory successfully
Feb 13 16:06:47.725549 kubelet[3680]: I0213 16:06:47.725511    3680 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 16:06:47.735408 kubelet[3680]: E0213 16:06:47.735349    3680 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 16:06:47.739078 kubelet[3680]: I0213 16:06:47.738886    3680 factory.go:221] Registration of the containerd container factory successfully
Feb 13 16:06:47.757714 kubelet[3680]: I0213 16:06:47.756677    3680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 16:06:47.760666 kubelet[3680]: I0213 16:06:47.759641    3680 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 16:06:47.760666 kubelet[3680]: I0213 16:06:47.759683    3680 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 16:06:47.760666 kubelet[3680]: I0213 16:06:47.759718    3680 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 16:06:47.760666 kubelet[3680]: E0213 16:06:47.759799    3680 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 16:06:47.847806 kubelet[3680]: I0213 16:06:47.847754    3680 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-25-253"
Feb 13 16:06:47.861301 kubelet[3680]: E0213 16:06:47.861243    3680 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 16:06:47.869416 kubelet[3680]: I0213 16:06:47.868511    3680 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-25-253"
Feb 13 16:06:47.871856 kubelet[3680]: I0213 16:06:47.870020    3680 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-25-253"
Feb 13 16:06:48.049016 kubelet[3680]: I0213 16:06:48.048980    3680 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 16:06:48.050413 kubelet[3680]: I0213 16:06:48.049203    3680 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 16:06:48.050413 kubelet[3680]: I0213 16:06:48.049273    3680 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 16:06:48.052066 kubelet[3680]: I0213 16:06:48.050902    3680 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 16:06:48.052066 kubelet[3680]: I0213 16:06:48.050955    3680 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 16:06:48.052066 kubelet[3680]: I0213 16:06:48.050975    3680 policy_none.go:49] "None policy: Start"
Feb 13 16:06:48.052639 kubelet[3680]: I0213 16:06:48.052595    3680 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 16:06:48.052730 kubelet[3680]: I0213 16:06:48.052655    3680 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 16:06:48.053011 kubelet[3680]: I0213 16:06:48.052966    3680 state_mem.go:75] "Updated machine memory state"
Feb 13 16:06:48.060142 kubelet[3680]: I0213 16:06:48.056033    3680 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 16:06:48.060142 kubelet[3680]: I0213 16:06:48.058119    3680 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 16:06:48.062719 kubelet[3680]: I0213 16:06:48.062648    3680 topology_manager.go:215] "Topology Admit Handler" podUID="bb636e1ad9b66465858680f8041a5e00" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-25-253"
Feb 13 16:06:48.063850 kubelet[3680]: I0213 16:06:48.063706    3680 topology_manager.go:215] "Topology Admit Handler" podUID="7d2b5db342d29102f96dbf4a03339947" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-25-253"
Feb 13 16:06:48.064290 kubelet[3680]: I0213 16:06:48.064252    3680 topology_manager.go:215] "Topology Admit Handler" podUID="a1fee8877fb4534fcc854345da75b4d8" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-25-253"
Feb 13 16:06:48.088651 kubelet[3680]: E0213 16:06:48.088608    3680 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-25-253\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-25-253"
Feb 13 16:06:48.126018 kubelet[3680]: I0213 16:06:48.125958    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bb636e1ad9b66465858680f8041a5e00-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-253\" (UID: \"bb636e1ad9b66465858680f8041a5e00\") " pod="kube-system/kube-controller-manager-ip-172-31-25-253"
Feb 13 16:06:48.126466 kubelet[3680]: I0213 16:06:48.126039    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d2b5db342d29102f96dbf4a03339947-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-253\" (UID: \"7d2b5db342d29102f96dbf4a03339947\") " pod="kube-system/kube-scheduler-ip-172-31-25-253"
Feb 13 16:06:48.126466 kubelet[3680]: I0213 16:06:48.126090    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bb636e1ad9b66465858680f8041a5e00-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-253\" (UID: \"bb636e1ad9b66465858680f8041a5e00\") " pod="kube-system/kube-controller-manager-ip-172-31-25-253"
Feb 13 16:06:48.126466 kubelet[3680]: I0213 16:06:48.126140    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bb636e1ad9b66465858680f8041a5e00-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-253\" (UID: \"bb636e1ad9b66465858680f8041a5e00\") " pod="kube-system/kube-controller-manager-ip-172-31-25-253"
Feb 13 16:06:48.126466 kubelet[3680]: I0213 16:06:48.126194    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bb636e1ad9b66465858680f8041a5e00-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-253\" (UID: \"bb636e1ad9b66465858680f8041a5e00\") " pod="kube-system/kube-controller-manager-ip-172-31-25-253"
Feb 13 16:06:48.126466 kubelet[3680]: I0213 16:06:48.126287    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1fee8877fb4534fcc854345da75b4d8-ca-certs\") pod \"kube-apiserver-ip-172-31-25-253\" (UID: \"a1fee8877fb4534fcc854345da75b4d8\") " pod="kube-system/kube-apiserver-ip-172-31-25-253"
Feb 13 16:06:48.126841 kubelet[3680]: I0213 16:06:48.126367    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1fee8877fb4534fcc854345da75b4d8-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-253\" (UID: \"a1fee8877fb4534fcc854345da75b4d8\") " pod="kube-system/kube-apiserver-ip-172-31-25-253"
Feb 13 16:06:48.126841 kubelet[3680]: I0213 16:06:48.126477    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1fee8877fb4534fcc854345da75b4d8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-253\" (UID: \"a1fee8877fb4534fcc854345da75b4d8\") " pod="kube-system/kube-apiserver-ip-172-31-25-253"
Feb 13 16:06:48.126841 kubelet[3680]: I0213 16:06:48.126531    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bb636e1ad9b66465858680f8041a5e00-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-253\" (UID: \"bb636e1ad9b66465858680f8041a5e00\") " pod="kube-system/kube-controller-manager-ip-172-31-25-253"
Feb 13 16:06:48.634037 kubelet[3680]: I0213 16:06:48.633925    3680 apiserver.go:52] "Watching apiserver"
Feb 13 16:06:48.716291 kubelet[3680]: I0213 16:06:48.716053    3680 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 16:06:48.927364 kubelet[3680]: I0213 16:06:48.927175    3680 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-253" podStartSLOduration=0.926864243 podStartE2EDuration="926.864243ms" podCreationTimestamp="2025-02-13 16:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:48.898235963 +0000 UTC m=+1.426182920" watchObservedRunningTime="2025-02-13 16:06:48.926864243 +0000 UTC m=+1.454811188"
Feb 13 16:06:48.977546 kubelet[3680]: I0213 16:06:48.977408    3680 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-253" podStartSLOduration=0.977326907 podStartE2EDuration="977.326907ms" podCreationTimestamp="2025-02-13 16:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:06:48.952354751 +0000 UTC m=+1.480301708" watchObservedRunningTime="2025-02-13 16:06:48.977326907 +0000 UTC m=+1.505273840"
Feb 13 16:06:49.558779 sudo[2434]: pam_unix(sudo:session): session closed for user root
Feb 13 16:06:49.583663 sshd[2430]: pam_unix(sshd:session): session closed for user core
Feb 13 16:06:49.592342 systemd[1]: sshd@4-172.31.25.253:22-139.178.68.195:59268.service: Deactivated successfully.
Feb 13 16:06:49.602855 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 16:06:49.603893 systemd-logind[2100]: Session 5 logged out. Waiting for processes to exit.
Feb 13 16:06:49.610111 systemd-logind[2100]: Removed session 5.
Feb 13 16:07:00.923589 kubelet[3680]: I0213 16:07:00.923532    3680 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 16:07:00.925208 kubelet[3680]: I0213 16:07:00.924619    3680 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 16:07:00.925320 containerd[2126]: time="2025-02-13T16:07:00.924251651Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 16:07:01.152626 kubelet[3680]: I0213 16:07:01.149480    3680 topology_manager.go:215] "Topology Admit Handler" podUID="fa0b6a91-d5bb-4d0a-9a48-b738fd8852b6" podNamespace="kube-system" podName="kube-proxy-l7g4x"
Feb 13 16:07:01.192068 kubelet[3680]: I0213 16:07:01.191008    3680 topology_manager.go:215] "Topology Admit Handler" podUID="1c9ec636-00d7-41a7-a9ce-a5c64d262d74" podNamespace="kube-flannel" podName="kube-flannel-ds-zw9nw"
Feb 13 16:07:01.217486 kubelet[3680]: I0213 16:07:01.214117    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1c9ec636-00d7-41a7-a9ce-a5c64d262d74-run\") pod \"kube-flannel-ds-zw9nw\" (UID: \"1c9ec636-00d7-41a7-a9ce-a5c64d262d74\") " pod="kube-flannel/kube-flannel-ds-zw9nw"
Feb 13 16:07:01.222410 kubelet[3680]: I0213 16:07:01.220287    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/1c9ec636-00d7-41a7-a9ce-a5c64d262d74-flannel-cfg\") pod \"kube-flannel-ds-zw9nw\" (UID: \"1c9ec636-00d7-41a7-a9ce-a5c64d262d74\") " pod="kube-flannel/kube-flannel-ds-zw9nw"
Feb 13 16:07:01.222856 kubelet[3680]: I0213 16:07:01.222707    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c9ec636-00d7-41a7-a9ce-a5c64d262d74-xtables-lock\") pod \"kube-flannel-ds-zw9nw\" (UID: \"1c9ec636-00d7-41a7-a9ce-a5c64d262d74\") " pod="kube-flannel/kube-flannel-ds-zw9nw"
Feb 13 16:07:01.222856 kubelet[3680]: I0213 16:07:01.222798    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdbtd\" (UniqueName: \"kubernetes.io/projected/1c9ec636-00d7-41a7-a9ce-a5c64d262d74-kube-api-access-sdbtd\") pod \"kube-flannel-ds-zw9nw\" (UID: \"1c9ec636-00d7-41a7-a9ce-a5c64d262d74\") " pod="kube-flannel/kube-flannel-ds-zw9nw"
Feb 13 16:07:01.224412 kubelet[3680]: I0213 16:07:01.223226    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa0b6a91-d5bb-4d0a-9a48-b738fd8852b6-lib-modules\") pod \"kube-proxy-l7g4x\" (UID: \"fa0b6a91-d5bb-4d0a-9a48-b738fd8852b6\") " pod="kube-system/kube-proxy-l7g4x"
Feb 13 16:07:01.224412 kubelet[3680]: I0213 16:07:01.223354    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/1c9ec636-00d7-41a7-a9ce-a5c64d262d74-cni\") pod \"kube-flannel-ds-zw9nw\" (UID: \"1c9ec636-00d7-41a7-a9ce-a5c64d262d74\") " pod="kube-flannel/kube-flannel-ds-zw9nw"
Feb 13 16:07:01.227119 kubelet[3680]: I0213 16:07:01.226087    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa0b6a91-d5bb-4d0a-9a48-b738fd8852b6-kube-proxy\") pod \"kube-proxy-l7g4x\" (UID: \"fa0b6a91-d5bb-4d0a-9a48-b738fd8852b6\") " pod="kube-system/kube-proxy-l7g4x"
Feb 13 16:07:01.229739 kubelet[3680]: I0213 16:07:01.227702    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlgsx\" (UniqueName: \"kubernetes.io/projected/fa0b6a91-d5bb-4d0a-9a48-b738fd8852b6-kube-api-access-mlgsx\") pod \"kube-proxy-l7g4x\" (UID: \"fa0b6a91-d5bb-4d0a-9a48-b738fd8852b6\") " pod="kube-system/kube-proxy-l7g4x"
Feb 13 16:07:01.229739 kubelet[3680]: I0213 16:07:01.227862    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/1c9ec636-00d7-41a7-a9ce-a5c64d262d74-cni-plugin\") pod \"kube-flannel-ds-zw9nw\" (UID: \"1c9ec636-00d7-41a7-a9ce-a5c64d262d74\") " pod="kube-flannel/kube-flannel-ds-zw9nw"
Feb 13 16:07:01.229739 kubelet[3680]: I0213 16:07:01.227921    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa0b6a91-d5bb-4d0a-9a48-b738fd8852b6-xtables-lock\") pod \"kube-proxy-l7g4x\" (UID: \"fa0b6a91-d5bb-4d0a-9a48-b738fd8852b6\") " pod="kube-system/kube-proxy-l7g4x"
Feb 13 16:07:01.347345 kubelet[3680]: E0213 16:07:01.347234    3680 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb 13 16:07:01.348172 kubelet[3680]: E0213 16:07:01.348078    3680 projected.go:200] Error preparing data for projected volume kube-api-access-sdbtd for pod kube-flannel/kube-flannel-ds-zw9nw: configmap "kube-root-ca.crt" not found
Feb 13 16:07:01.349426 kubelet[3680]: E0213 16:07:01.348978    3680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c9ec636-00d7-41a7-a9ce-a5c64d262d74-kube-api-access-sdbtd podName:1c9ec636-00d7-41a7-a9ce-a5c64d262d74 nodeName:}" failed. No retries permitted until 2025-02-13 16:07:01.848934857 +0000 UTC m=+14.376881790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sdbtd" (UniqueName: "kubernetes.io/projected/1c9ec636-00d7-41a7-a9ce-a5c64d262d74-kube-api-access-sdbtd") pod "kube-flannel-ds-zw9nw" (UID: "1c9ec636-00d7-41a7-a9ce-a5c64d262d74") : configmap "kube-root-ca.crt" not found
Feb 13 16:07:01.349426 kubelet[3680]: E0213 16:07:01.349204    3680 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb 13 16:07:01.349426 kubelet[3680]: E0213 16:07:01.349252    3680 projected.go:200] Error preparing data for projected volume kube-api-access-mlgsx for pod kube-system/kube-proxy-l7g4x: configmap "kube-root-ca.crt" not found
Feb 13 16:07:01.349426 kubelet[3680]: E0213 16:07:01.349326    3680 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa0b6a91-d5bb-4d0a-9a48-b738fd8852b6-kube-api-access-mlgsx podName:fa0b6a91-d5bb-4d0a-9a48-b738fd8852b6 nodeName:}" failed. No retries permitted until 2025-02-13 16:07:01.849301025 +0000 UTC m=+14.377247946 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mlgsx" (UniqueName: "kubernetes.io/projected/fa0b6a91-d5bb-4d0a-9a48-b738fd8852b6-kube-api-access-mlgsx") pod "kube-proxy-l7g4x" (UID: "fa0b6a91-d5bb-4d0a-9a48-b738fd8852b6") : configmap "kube-root-ca.crt" not found
Feb 13 16:07:02.111008 containerd[2126]: time="2025-02-13T16:07:02.110904116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7g4x,Uid:fa0b6a91-d5bb-4d0a-9a48-b738fd8852b6,Namespace:kube-system,Attempt:0,}"
Feb 13 16:07:02.138587 containerd[2126]: time="2025-02-13T16:07:02.138489465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zw9nw,Uid:1c9ec636-00d7-41a7-a9ce-a5c64d262d74,Namespace:kube-flannel,Attempt:0,}"
Feb 13 16:07:02.187952 containerd[2126]: time="2025-02-13T16:07:02.187452321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:07:02.187952 containerd[2126]: time="2025-02-13T16:07:02.187575525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:07:02.187952 containerd[2126]: time="2025-02-13T16:07:02.187644345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:02.187952 containerd[2126]: time="2025-02-13T16:07:02.187856985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:02.219522 containerd[2126]: time="2025-02-13T16:07:02.219324909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:07:02.220621 containerd[2126]: time="2025-02-13T16:07:02.219953673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:07:02.223070 containerd[2126]: time="2025-02-13T16:07:02.222281301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:02.225921 containerd[2126]: time="2025-02-13T16:07:02.225849393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:02.287929 containerd[2126]: time="2025-02-13T16:07:02.287846661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l7g4x,Uid:fa0b6a91-d5bb-4d0a-9a48-b738fd8852b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"351d1a65726261c1ae775a487f1094dc927821f25898ed22f28d6065d339abc2\""
Feb 13 16:07:02.304085 containerd[2126]: time="2025-02-13T16:07:02.303590901Z" level=info msg="CreateContainer within sandbox \"351d1a65726261c1ae775a487f1094dc927821f25898ed22f28d6065d339abc2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 16:07:02.344464 containerd[2126]: time="2025-02-13T16:07:02.344134066Z" level=info msg="CreateContainer within sandbox \"351d1a65726261c1ae775a487f1094dc927821f25898ed22f28d6065d339abc2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"36841f13c4c973435c4ec69b66c815c902286d206bd10ca8456de068719ec6a8\""
Feb 13 16:07:02.353360 containerd[2126]: time="2025-02-13T16:07:02.352706290Z" level=info msg="StartContainer for \"36841f13c4c973435c4ec69b66c815c902286d206bd10ca8456de068719ec6a8\""
Feb 13 16:07:02.367998 containerd[2126]: time="2025-02-13T16:07:02.367848226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-zw9nw,Uid:1c9ec636-00d7-41a7-a9ce-a5c64d262d74,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"e2fc9f13ef880baf66c844780bea6904bbc163b1625c95f96a9b7850e8332b93\""
Feb 13 16:07:02.376104 containerd[2126]: time="2025-02-13T16:07:02.375673966Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 16:07:02.479208 containerd[2126]: time="2025-02-13T16:07:02.479047798Z" level=info msg="StartContainer for \"36841f13c4c973435c4ec69b66c815c902286d206bd10ca8456de068719ec6a8\" returns successfully"
Feb 13 16:07:04.756745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3795499192.mount: Deactivated successfully.
Feb 13 16:07:04.846158 containerd[2126]: time="2025-02-13T16:07:04.846021314Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:04.849021 containerd[2126]: time="2025-02-13T16:07:04.848935082Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3673531"
Feb 13 16:07:04.851264 containerd[2126]: time="2025-02-13T16:07:04.851080526Z" level=info msg="ImageCreate event name:\"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:04.858654 containerd[2126]: time="2025-02-13T16:07:04.858554594Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:04.864991 containerd[2126]: time="2025-02-13T16:07:04.864657854Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3662650\" in 2.48890104s"
Feb 13 16:07:04.864991 containerd[2126]: time="2025-02-13T16:07:04.864763910Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\""
Feb 13 16:07:04.874367 containerd[2126]: time="2025-02-13T16:07:04.873753614Z" level=info msg="CreateContainer within sandbox \"e2fc9f13ef880baf66c844780bea6904bbc163b1625c95f96a9b7850e8332b93\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Feb 13 16:07:04.910636 containerd[2126]: time="2025-02-13T16:07:04.910510418Z" level=info msg="CreateContainer within sandbox \"e2fc9f13ef880baf66c844780bea6904bbc163b1625c95f96a9b7850e8332b93\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"821e81b3eda569e1eaa6e076d7f2905b115ee353c1b332ec7945898444445609\""
Feb 13 16:07:04.911022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3836919463.mount: Deactivated successfully.
Feb 13 16:07:04.914857 containerd[2126]: time="2025-02-13T16:07:04.913476986Z" level=info msg="StartContainer for \"821e81b3eda569e1eaa6e076d7f2905b115ee353c1b332ec7945898444445609\""
Feb 13 16:07:05.018075 containerd[2126]: time="2025-02-13T16:07:05.017772767Z" level=info msg="StartContainer for \"821e81b3eda569e1eaa6e076d7f2905b115ee353c1b332ec7945898444445609\" returns successfully"
Feb 13 16:07:05.086407 containerd[2126]: time="2025-02-13T16:07:05.086181407Z" level=info msg="shim disconnected" id=821e81b3eda569e1eaa6e076d7f2905b115ee353c1b332ec7945898444445609 namespace=k8s.io
Feb 13 16:07:05.086407 containerd[2126]: time="2025-02-13T16:07:05.086254343Z" level=warning msg="cleaning up after shim disconnected" id=821e81b3eda569e1eaa6e076d7f2905b115ee353c1b332ec7945898444445609 namespace=k8s.io
Feb 13 16:07:05.086407 containerd[2126]: time="2025-02-13T16:07:05.086274239Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:07:05.590585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-821e81b3eda569e1eaa6e076d7f2905b115ee353c1b332ec7945898444445609-rootfs.mount: Deactivated successfully.
Feb 13 16:07:05.993686 containerd[2126]: time="2025-02-13T16:07:05.992544352Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 16:07:06.013931 kubelet[3680]: I0213 16:07:06.013641    3680 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-l7g4x" podStartSLOduration=5.013530156 podStartE2EDuration="5.013530156s" podCreationTimestamp="2025-02-13 16:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:07:02.997816129 +0000 UTC m=+15.525763062" watchObservedRunningTime="2025-02-13 16:07:06.013530156 +0000 UTC m=+18.541477101"
Feb 13 16:07:08.279025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3707459812.mount: Deactivated successfully.
Feb 13 16:07:09.690622 containerd[2126]: time="2025-02-13T16:07:09.690502566Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:09.693492 containerd[2126]: time="2025-02-13T16:07:09.693368778Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26874261"
Feb 13 16:07:09.697077 containerd[2126]: time="2025-02-13T16:07:09.696156714Z" level=info msg="ImageCreate event name:\"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:09.705066 containerd[2126]: time="2025-02-13T16:07:09.705007650Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:07:09.708589 containerd[2126]: time="2025-02-13T16:07:09.708437694Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26863435\" in 3.715808802s"
Feb 13 16:07:09.708589 containerd[2126]: time="2025-02-13T16:07:09.708573138Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\""
Feb 13 16:07:09.716520 containerd[2126]: time="2025-02-13T16:07:09.716453118Z" level=info msg="CreateContainer within sandbox \"e2fc9f13ef880baf66c844780bea6904bbc163b1625c95f96a9b7850e8332b93\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 16:07:09.751154 containerd[2126]: time="2025-02-13T16:07:09.750932094Z" level=info msg="CreateContainer within sandbox \"e2fc9f13ef880baf66c844780bea6904bbc163b1625c95f96a9b7850e8332b93\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"390fbbe0f373878ddec9397fc2628e2eb42fad7af626abec01a8587b6c08e9c0\""
Feb 13 16:07:09.752534 containerd[2126]: time="2025-02-13T16:07:09.752201526Z" level=info msg="StartContainer for \"390fbbe0f373878ddec9397fc2628e2eb42fad7af626abec01a8587b6c08e9c0\""
Feb 13 16:07:09.916964 containerd[2126]: time="2025-02-13T16:07:09.916598539Z" level=info msg="StartContainer for \"390fbbe0f373878ddec9397fc2628e2eb42fad7af626abec01a8587b6c08e9c0\" returns successfully"
Feb 13 16:07:10.016993 kubelet[3680]: I0213 16:07:10.016841    3680 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 16:07:10.070152 kubelet[3680]: I0213 16:07:10.070045    3680 topology_manager.go:215] "Topology Admit Handler" podUID="7f3ab109-24ac-4916-979a-96b029e98884" podNamespace="kube-system" podName="coredns-76f75df574-bqwnh"
Feb 13 16:07:10.092147 kubelet[3680]: I0213 16:07:10.091630    3680 topology_manager.go:215] "Topology Admit Handler" podUID="29e8725a-ea58-40f3-8528-bfdf123b8ba7" podNamespace="kube-system" podName="coredns-76f75df574-2hzsh"
Feb 13 16:07:10.099030 kubelet[3680]: I0213 16:07:10.098620    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f3ab109-24ac-4916-979a-96b029e98884-config-volume\") pod \"coredns-76f75df574-bqwnh\" (UID: \"7f3ab109-24ac-4916-979a-96b029e98884\") " pod="kube-system/coredns-76f75df574-bqwnh"
Feb 13 16:07:10.099721 kubelet[3680]: I0213 16:07:10.098991    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh4vk\" (UniqueName: \"kubernetes.io/projected/7f3ab109-24ac-4916-979a-96b029e98884-kube-api-access-kh4vk\") pod \"coredns-76f75df574-bqwnh\" (UID: \"7f3ab109-24ac-4916-979a-96b029e98884\") " pod="kube-system/coredns-76f75df574-bqwnh"
Feb 13 16:07:10.155559 containerd[2126]: time="2025-02-13T16:07:10.155280604Z" level=info msg="shim disconnected" id=390fbbe0f373878ddec9397fc2628e2eb42fad7af626abec01a8587b6c08e9c0 namespace=k8s.io
Feb 13 16:07:10.155559 containerd[2126]: time="2025-02-13T16:07:10.155453428Z" level=warning msg="cleaning up after shim disconnected" id=390fbbe0f373878ddec9397fc2628e2eb42fad7af626abec01a8587b6c08e9c0 namespace=k8s.io
Feb 13 16:07:10.155559 containerd[2126]: time="2025-02-13T16:07:10.155485204Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:07:10.202529 kubelet[3680]: I0213 16:07:10.200339    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcz8x\" (UniqueName: \"kubernetes.io/projected/29e8725a-ea58-40f3-8528-bfdf123b8ba7-kube-api-access-vcz8x\") pod \"coredns-76f75df574-2hzsh\" (UID: \"29e8725a-ea58-40f3-8528-bfdf123b8ba7\") " pod="kube-system/coredns-76f75df574-2hzsh"
Feb 13 16:07:10.202529 kubelet[3680]: I0213 16:07:10.200471    3680 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29e8725a-ea58-40f3-8528-bfdf123b8ba7-config-volume\") pod \"coredns-76f75df574-2hzsh\" (UID: \"29e8725a-ea58-40f3-8528-bfdf123b8ba7\") " pod="kube-system/coredns-76f75df574-2hzsh"
Feb 13 16:07:10.402736 containerd[2126]: time="2025-02-13T16:07:10.402624690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bqwnh,Uid:7f3ab109-24ac-4916-979a-96b029e98884,Namespace:kube-system,Attempt:0,}"
Feb 13 16:07:10.426075 containerd[2126]: time="2025-02-13T16:07:10.425975814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2hzsh,Uid:29e8725a-ea58-40f3-8528-bfdf123b8ba7,Namespace:kube-system,Attempt:0,}"
Feb 13 16:07:10.489751 containerd[2126]: time="2025-02-13T16:07:10.489587874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bqwnh,Uid:7f3ab109-24ac-4916-979a-96b029e98884,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b402fbc7051a2b8859ef4a435da2becd6c015311147ec329caf2cfeef6788071\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 16:07:10.493012 kubelet[3680]: E0213 16:07:10.491275    3680 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b402fbc7051a2b8859ef4a435da2becd6c015311147ec329caf2cfeef6788071\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 16:07:10.493012 kubelet[3680]: E0213 16:07:10.491522    3680 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b402fbc7051a2b8859ef4a435da2becd6c015311147ec329caf2cfeef6788071\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-bqwnh"
Feb 13 16:07:10.493012 kubelet[3680]: E0213 16:07:10.491597    3680 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b402fbc7051a2b8859ef4a435da2becd6c015311147ec329caf2cfeef6788071\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-bqwnh"
Feb 13 16:07:10.493012 kubelet[3680]: E0213 16:07:10.491777    3680 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-bqwnh_kube-system(7f3ab109-24ac-4916-979a-96b029e98884)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-bqwnh_kube-system(7f3ab109-24ac-4916-979a-96b029e98884)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b402fbc7051a2b8859ef4a435da2becd6c015311147ec329caf2cfeef6788071\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-bqwnh" podUID="7f3ab109-24ac-4916-979a-96b029e98884"
Feb 13 16:07:10.504000 containerd[2126]: time="2025-02-13T16:07:10.503356914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2hzsh,Uid:29e8725a-ea58-40f3-8528-bfdf123b8ba7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b90b1ddcb07eeed0faf23ea4f7ba6123bb30875f99a9a48e49645431153bc2d4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 16:07:10.504216 kubelet[3680]: E0213 16:07:10.504065    3680 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90b1ddcb07eeed0faf23ea4f7ba6123bb30875f99a9a48e49645431153bc2d4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 16:07:10.504216 kubelet[3680]: E0213 16:07:10.504154    3680 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90b1ddcb07eeed0faf23ea4f7ba6123bb30875f99a9a48e49645431153bc2d4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-2hzsh"
Feb 13 16:07:10.504216 kubelet[3680]: E0213 16:07:10.504193    3680 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90b1ddcb07eeed0faf23ea4f7ba6123bb30875f99a9a48e49645431153bc2d4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-2hzsh"
Feb 13 16:07:10.504504 kubelet[3680]: E0213 16:07:10.504293    3680 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-2hzsh_kube-system(29e8725a-ea58-40f3-8528-bfdf123b8ba7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-2hzsh_kube-system(29e8725a-ea58-40f3-8528-bfdf123b8ba7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b90b1ddcb07eeed0faf23ea4f7ba6123bb30875f99a9a48e49645431153bc2d4\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-2hzsh" podUID="29e8725a-ea58-40f3-8528-bfdf123b8ba7"
Feb 13 16:07:10.738996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-390fbbe0f373878ddec9397fc2628e2eb42fad7af626abec01a8587b6c08e9c0-rootfs.mount: Deactivated successfully.
Feb 13 16:07:11.046516 containerd[2126]: time="2025-02-13T16:07:11.043158857Z" level=info msg="CreateContainer within sandbox \"e2fc9f13ef880baf66c844780bea6904bbc163b1625c95f96a9b7850e8332b93\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Feb 13 16:07:11.126569 containerd[2126]: time="2025-02-13T16:07:11.119184869Z" level=info msg="CreateContainer within sandbox \"e2fc9f13ef880baf66c844780bea6904bbc163b1625c95f96a9b7850e8332b93\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"972669d6118b1ea6c67319e0b43a5a9d9005f25e1fc0fc5878e6a78bb855fdd3\""
Feb 13 16:07:11.126569 containerd[2126]: time="2025-02-13T16:07:11.122771861Z" level=info msg="StartContainer for \"972669d6118b1ea6c67319e0b43a5a9d9005f25e1fc0fc5878e6a78bb855fdd3\""
Feb 13 16:07:11.125559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3607733348.mount: Deactivated successfully.
Feb 13 16:07:11.278023 containerd[2126]: time="2025-02-13T16:07:11.277955238Z" level=info msg="StartContainer for \"972669d6118b1ea6c67319e0b43a5a9d9005f25e1fc0fc5878e6a78bb855fdd3\" returns successfully"
Feb 13 16:07:12.062473 kubelet[3680]: I0213 16:07:12.062360    3680 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-zw9nw" podStartSLOduration=3.723726626 podStartE2EDuration="11.062294718s" podCreationTimestamp="2025-02-13 16:07:01 +0000 UTC" firstStartedPulling="2025-02-13 16:07:02.370762666 +0000 UTC m=+14.898709599" lastFinishedPulling="2025-02-13 16:07:09.709330758 +0000 UTC m=+22.237277691" observedRunningTime="2025-02-13 16:07:12.058741698 +0000 UTC m=+24.586688631" watchObservedRunningTime="2025-02-13 16:07:12.062294718 +0000 UTC m=+24.590241687"
Feb 13 16:07:12.353903 (udev-worker)[4217]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 16:07:12.378647 systemd-networkd[1684]: flannel.1: Link UP
Feb 13 16:07:12.378672 systemd-networkd[1684]: flannel.1: Gained carrier
Feb 13 16:07:13.968721 systemd-networkd[1684]: flannel.1: Gained IPv6LL
Feb 13 16:07:16.519932 ntpd[2078]: Listen normally on 6 flannel.1 192.168.0.0:123
Feb 13 16:07:16.520096 ntpd[2078]: Listen normally on 7 flannel.1 [fe80::4037:8dff:fea4:72b%4]:123
Feb 13 16:07:16.520860 ntpd[2078]: 13 Feb 16:07:16 ntpd[2078]: Listen normally on 6 flannel.1 192.168.0.0:123
Feb 13 16:07:16.520860 ntpd[2078]: 13 Feb 16:07:16 ntpd[2078]: Listen normally on 7 flannel.1 [fe80::4037:8dff:fea4:72b%4]:123
Feb 13 16:07:24.763835 containerd[2126]: time="2025-02-13T16:07:24.762642501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2hzsh,Uid:29e8725a-ea58-40f3-8528-bfdf123b8ba7,Namespace:kube-system,Attempt:0,}"
Feb 13 16:07:24.765118 containerd[2126]: time="2025-02-13T16:07:24.763878705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bqwnh,Uid:7f3ab109-24ac-4916-979a-96b029e98884,Namespace:kube-system,Attempt:0,}"
Feb 13 16:07:24.844198 systemd-networkd[1684]: cni0: Link UP
Feb 13 16:07:24.844234 systemd-networkd[1684]: cni0: Gained carrier
Feb 13 16:07:24.856131 (udev-worker)[4370]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 16:07:24.860216 systemd-networkd[1684]: cni0: Lost carrier
Feb 13 16:07:24.868471 systemd-networkd[1684]: veth7a8ea2b2: Link UP
Feb 13 16:07:24.871466 systemd-networkd[1684]: vethda05a576: Link UP
Feb 13 16:07:24.874341 kernel: cni0: port 1(veth7a8ea2b2) entered blocking state
Feb 13 16:07:24.874518 kernel: cni0: port 1(veth7a8ea2b2) entered disabled state
Feb 13 16:07:24.874556 kernel: veth7a8ea2b2: entered allmulticast mode
Feb 13 16:07:24.877195 kernel: veth7a8ea2b2: entered promiscuous mode
Feb 13 16:07:24.878531 kernel: cni0: port 1(veth7a8ea2b2) entered blocking state
Feb 13 16:07:24.880638 kernel: cni0: port 1(veth7a8ea2b2) entered forwarding state
Feb 13 16:07:24.882548 (udev-worker)[4373]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 16:07:24.888950 kernel: cni0: port 1(veth7a8ea2b2) entered disabled state
Feb 13 16:07:24.891865 kernel: cni0: port 2(vethda05a576) entered blocking state
Feb 13 16:07:24.892064 kernel: cni0: port 2(vethda05a576) entered disabled state
Feb 13 16:07:24.895108 kernel: vethda05a576: entered allmulticast mode
Feb 13 16:07:24.897421 kernel: vethda05a576: entered promiscuous mode
Feb 13 16:07:24.901274 (udev-worker)[4372]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 16:07:24.917016 kernel: cni0: port 1(veth7a8ea2b2) entered blocking state
Feb 13 16:07:24.917145 kernel: cni0: port 1(veth7a8ea2b2) entered forwarding state
Feb 13 16:07:24.916789 systemd-networkd[1684]: veth7a8ea2b2: Gained carrier
Feb 13 16:07:24.919287 systemd-networkd[1684]: cni0: Gained carrier
Feb 13 16:07:24.936483 containerd[2126]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000016938), "name":"cbr0", "type":"bridge"}
Feb 13 16:07:24.936483 containerd[2126]: delegateAdd: netconf sent to delegate plugin:
Feb 13 16:07:24.943050 kernel: cni0: port 2(vethda05a576) entered blocking state
Feb 13 16:07:24.943168 kernel: cni0: port 2(vethda05a576) entered forwarding state
Feb 13 16:07:24.941906 systemd-networkd[1684]: vethda05a576: Gained carrier
Feb 13 16:07:24.961245 containerd[2126]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}
Feb 13 16:07:24.961245 containerd[2126]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"}
Feb 13 16:07:24.961245 containerd[2126]: delegateAdd: netconf sent to delegate plugin:
Feb 13 16:07:25.010135 containerd[2126]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}
Feb 13 16:07:25.010135 containerd[2126]: time="2025-02-13T16:07:25.009997194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:07:25.010708 containerd[2126]: time="2025-02-13T16:07:25.010564698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:07:25.013714 containerd[2126]: time="2025-02-13T16:07:25.013538346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:25.015411 containerd[2126]: time="2025-02-13T16:07:25.014858394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:25.041693 containerd[2126]: time="2025-02-13T16:07:25.040980414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:07:25.041693 containerd[2126]: time="2025-02-13T16:07:25.041100222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:07:25.041693 containerd[2126]: time="2025-02-13T16:07:25.041161254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:25.043100 containerd[2126]: time="2025-02-13T16:07:25.041714394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:07:25.238453 containerd[2126]: time="2025-02-13T16:07:25.238280467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bqwnh,Uid:7f3ab109-24ac-4916-979a-96b029e98884,Namespace:kube-system,Attempt:0,} returns sandbox id \"a67f143df3a602091ecf23f08efbf25124f8217a6bae099257da2044c3f519e4\""
Feb 13 16:07:25.251272 containerd[2126]: time="2025-02-13T16:07:25.251055871Z" level=info msg="CreateContainer within sandbox \"a67f143df3a602091ecf23f08efbf25124f8217a6bae099257da2044c3f519e4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 16:07:25.260833 containerd[2126]: time="2025-02-13T16:07:25.260735623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2hzsh,Uid:29e8725a-ea58-40f3-8528-bfdf123b8ba7,Namespace:kube-system,Attempt:0,} returns sandbox id \"995d2518ce352961a433132143ad30b1fffb8783d3a389713a89cc3e3be192ee\""
Feb 13 16:07:25.272956 containerd[2126]: time="2025-02-13T16:07:25.270274795Z" level=info msg="CreateContainer within sandbox \"995d2518ce352961a433132143ad30b1fffb8783d3a389713a89cc3e3be192ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 16:07:25.296867 containerd[2126]: time="2025-02-13T16:07:25.296796068Z" level=info msg="CreateContainer within sandbox \"a67f143df3a602091ecf23f08efbf25124f8217a6bae099257da2044c3f519e4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d89aa8ac5950b88516911dd9555126f3803877c429c9899ff8751d949592d6ed\""
Feb 13 16:07:25.298341 containerd[2126]: time="2025-02-13T16:07:25.298232528Z" level=info msg="StartContainer for \"d89aa8ac5950b88516911dd9555126f3803877c429c9899ff8751d949592d6ed\""
Feb 13 16:07:25.304193 containerd[2126]: time="2025-02-13T16:07:25.304082420Z" level=info msg="CreateContainer within sandbox \"995d2518ce352961a433132143ad30b1fffb8783d3a389713a89cc3e3be192ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7461bc05138c268ddfa2c677e6efcaaba1f7740020a6d6dc86d1bc0f6f0c21c0\""
Feb 13 16:07:25.307268 containerd[2126]: time="2025-02-13T16:07:25.305727704Z" level=info msg="StartContainer for \"7461bc05138c268ddfa2c677e6efcaaba1f7740020a6d6dc86d1bc0f6f0c21c0\""
Feb 13 16:07:25.482496 containerd[2126]: time="2025-02-13T16:07:25.481678965Z" level=info msg="StartContainer for \"7461bc05138c268ddfa2c677e6efcaaba1f7740020a6d6dc86d1bc0f6f0c21c0\" returns successfully"
Feb 13 16:07:25.500657 containerd[2126]: time="2025-02-13T16:07:25.498935145Z" level=info msg="StartContainer for \"d89aa8ac5950b88516911dd9555126f3803877c429c9899ff8751d949592d6ed\" returns successfully"
Feb 13 16:07:26.144716 kubelet[3680]: I0213 16:07:26.144639    3680 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-bqwnh" podStartSLOduration=24.144565736 podStartE2EDuration="24.144565736s" podCreationTimestamp="2025-02-13 16:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 16:07:26.140629652 +0000 UTC m=+38.668576765" watchObservedRunningTime="2025-02-13 16:07:26.144565736 +0000 UTC m=+38.672512765"
Feb 13 16:07:26.321153 systemd-networkd[1684]: cni0: Gained IPv6LL
Feb 13 16:07:26.512842 systemd-networkd[1684]: vethda05a576: Gained IPv6LL
Feb 13 16:07:26.960829 systemd-networkd[1684]: veth7a8ea2b2: Gained IPv6LL
Feb 13 16:07:28.250166 systemd[1]: Started sshd@5-172.31.25.253:22-139.178.68.195:42288.service - OpenSSH per-connection server daemon (139.178.68.195:42288).
Feb 13 16:07:28.463333 sshd[4586]: Accepted publickey for core from 139.178.68.195 port 42288 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:28.467892 sshd[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:28.480571 systemd-logind[2100]: New session 6 of user core.
Feb 13 16:07:28.491483 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 16:07:28.774749 sshd[4586]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:28.785153 systemd[1]: sshd@5-172.31.25.253:22-139.178.68.195:42288.service: Deactivated successfully.
Feb 13 16:07:28.795313 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 16:07:28.799099 systemd-logind[2100]: Session 6 logged out. Waiting for processes to exit.
Feb 13 16:07:28.804412 systemd-logind[2100]: Removed session 6.
Feb 13 16:07:29.519993 ntpd[2078]: Listen normally on 8 cni0 192.168.0.1:123
Feb 13 16:07:29.520180 ntpd[2078]: Listen normally on 9 cni0 [fe80::a025:2cff:fe9e:d0f3%5]:123
Feb 13 16:07:29.520303 ntpd[2078]: Listen normally on 10 veth7a8ea2b2 [fe80::8c6a:c3ff:fe94:6cac%6]:123
Feb 13 16:07:29.520463 ntpd[2078]: Listen normally on 11 vethda05a576 [fe80::c4b1:a9ff:fec4:d019%7]:123
Feb 13 16:07:33.810064 systemd[1]: Started sshd@6-172.31.25.253:22-139.178.68.195:42302.service - OpenSSH per-connection server daemon (139.178.68.195:42302).
Feb 13 16:07:34.007168 sshd[4624]: Accepted publickey for core from 139.178.68.195 port 42302 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:34.010139 sshd[4624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:34.023701 systemd-logind[2100]: New session 7 of user core.
Feb 13 16:07:34.029263 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 16:07:34.315729 sshd[4624]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:34.325892 systemd[1]: sshd@6-172.31.25.253:22-139.178.68.195:42302.service: Deactivated successfully.
Feb 13 16:07:34.336693 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 16:07:34.339580 systemd-logind[2100]: Session 7 logged out. Waiting for processes to exit.
Feb 13 16:07:34.344235 systemd-logind[2100]: Removed session 7.
Feb 13 16:07:39.350321 systemd[1]: Started sshd@7-172.31.25.253:22-139.178.68.195:58942.service - OpenSSH per-connection server daemon (139.178.68.195:58942).
Feb 13 16:07:39.542608 sshd[4659]: Accepted publickey for core from 139.178.68.195 port 58942 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:39.547176 sshd[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:39.558496 systemd-logind[2100]: New session 8 of user core.
Feb 13 16:07:39.568908 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 16:07:39.841997 sshd[4659]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:39.851331 systemd[1]: sshd@7-172.31.25.253:22-139.178.68.195:58942.service: Deactivated successfully.
Feb 13 16:07:39.866485 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 16:07:39.869942 systemd-logind[2100]: Session 8 logged out. Waiting for processes to exit.
Feb 13 16:07:39.880873 systemd[1]: Started sshd@8-172.31.25.253:22-139.178.68.195:58950.service - OpenSSH per-connection server daemon (139.178.68.195:58950).
Feb 13 16:07:39.883602 systemd-logind[2100]: Removed session 8.
Feb 13 16:07:40.071874 sshd[4675]: Accepted publickey for core from 139.178.68.195 port 58950 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:40.075260 sshd[4675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:40.088142 systemd-logind[2100]: New session 9 of user core.
Feb 13 16:07:40.097097 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 16:07:40.450813 sshd[4675]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:40.473722 systemd-logind[2100]: Session 9 logged out. Waiting for processes to exit.
Feb 13 16:07:40.475894 systemd[1]: sshd@8-172.31.25.253:22-139.178.68.195:58950.service: Deactivated successfully.
Feb 13 16:07:40.492758 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 16:07:40.511927 systemd[1]: Started sshd@9-172.31.25.253:22-139.178.68.195:58954.service - OpenSSH per-connection server daemon (139.178.68.195:58954).
Feb 13 16:07:40.515864 systemd-logind[2100]: Removed session 9.
Feb 13 16:07:40.705438 sshd[4686]: Accepted publickey for core from 139.178.68.195 port 58954 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:40.709269 sshd[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:40.721704 systemd-logind[2100]: New session 10 of user core.
Feb 13 16:07:40.736473 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 16:07:41.057968 sshd[4686]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:41.071191 systemd[1]: sshd@9-172.31.25.253:22-139.178.68.195:58954.service: Deactivated successfully.
Feb 13 16:07:41.078462 systemd-logind[2100]: Session 10 logged out. Waiting for processes to exit.
Feb 13 16:07:41.079954 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 16:07:41.084562 systemd-logind[2100]: Removed session 10.
Feb 13 16:07:46.089142 systemd[1]: Started sshd@10-172.31.25.253:22-139.178.68.195:58960.service - OpenSSH per-connection server daemon (139.178.68.195:58960).
Feb 13 16:07:46.283991 sshd[4721]: Accepted publickey for core from 139.178.68.195 port 58960 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:46.287150 sshd[4721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:46.297732 systemd-logind[2100]: New session 11 of user core.
Feb 13 16:07:46.307080 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 16:07:46.595046 sshd[4721]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:46.604744 systemd[1]: sshd@10-172.31.25.253:22-139.178.68.195:58960.service: Deactivated successfully.
Feb 13 16:07:46.618063 systemd-logind[2100]: Session 11 logged out. Waiting for processes to exit.
Feb 13 16:07:46.624943 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 16:07:46.634926 systemd[1]: Started sshd@11-172.31.25.253:22-139.178.68.195:50134.service - OpenSSH per-connection server daemon (139.178.68.195:50134).
Feb 13 16:07:46.636485 systemd-logind[2100]: Removed session 11.
Feb 13 16:07:46.818359 sshd[4735]: Accepted publickey for core from 139.178.68.195 port 50134 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:46.822295 sshd[4735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:46.834955 systemd-logind[2100]: New session 12 of user core.
Feb 13 16:07:46.842044 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 16:07:47.199141 sshd[4735]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:47.211302 systemd-logind[2100]: Session 12 logged out. Waiting for processes to exit.
Feb 13 16:07:47.212458 systemd[1]: sshd@11-172.31.25.253:22-139.178.68.195:50134.service: Deactivated successfully.
Feb 13 16:07:47.224165 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 16:07:47.240029 systemd[1]: Started sshd@12-172.31.25.253:22-139.178.68.195:50150.service - OpenSSH per-connection server daemon (139.178.68.195:50150).
Feb 13 16:07:47.241657 systemd-logind[2100]: Removed session 12.
Feb 13 16:07:47.425228 sshd[4747]: Accepted publickey for core from 139.178.68.195 port 50150 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:47.428738 sshd[4747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:47.441108 systemd-logind[2100]: New session 13 of user core.
Feb 13 16:07:47.450433 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 16:07:50.023699 sshd[4747]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:50.040119 systemd-logind[2100]: Session 13 logged out. Waiting for processes to exit.
Feb 13 16:07:50.045019 systemd[1]: sshd@12-172.31.25.253:22-139.178.68.195:50150.service: Deactivated successfully.
Feb 13 16:07:50.065119 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 16:07:50.077346 systemd-logind[2100]: Removed session 13.
Feb 13 16:07:50.087143 systemd[1]: Started sshd@13-172.31.25.253:22-139.178.68.195:50158.service - OpenSSH per-connection server daemon (139.178.68.195:50158).
Feb 13 16:07:50.292158 sshd[4789]: Accepted publickey for core from 139.178.68.195 port 50158 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:50.296089 sshd[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:50.307910 systemd-logind[2100]: New session 14 of user core.
Feb 13 16:07:50.320908 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 16:07:50.880608 sshd[4789]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:50.894350 systemd[1]: sshd@13-172.31.25.253:22-139.178.68.195:50158.service: Deactivated successfully.
Feb 13 16:07:50.902265 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 16:07:50.904525 systemd-logind[2100]: Session 14 logged out. Waiting for processes to exit.
Feb 13 16:07:50.915147 systemd[1]: Started sshd@14-172.31.25.253:22-139.178.68.195:50166.service - OpenSSH per-connection server daemon (139.178.68.195:50166).
Feb 13 16:07:50.918699 systemd-logind[2100]: Removed session 14.
Feb 13 16:07:51.101829 sshd[4801]: Accepted publickey for core from 139.178.68.195 port 50166 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:51.106825 sshd[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:51.115891 systemd-logind[2100]: New session 15 of user core.
Feb 13 16:07:51.124973 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 16:07:51.378484 sshd[4801]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:51.383738 systemd[1]: sshd@14-172.31.25.253:22-139.178.68.195:50166.service: Deactivated successfully.
Feb 13 16:07:51.391476 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 16:07:51.394905 systemd-logind[2100]: Session 15 logged out. Waiting for processes to exit.
Feb 13 16:07:51.397032 systemd-logind[2100]: Removed session 15.
Feb 13 16:07:56.408021 systemd[1]: Started sshd@15-172.31.25.253:22-139.178.68.195:50174.service - OpenSSH per-connection server daemon (139.178.68.195:50174).
Feb 13 16:07:56.585316 sshd[4836]: Accepted publickey for core from 139.178.68.195 port 50174 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:07:56.588735 sshd[4836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:07:56.598290 systemd-logind[2100]: New session 16 of user core.
Feb 13 16:07:56.607021 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 16:07:56.860767 sshd[4836]: pam_unix(sshd:session): session closed for user core
Feb 13 16:07:56.868310 systemd[1]: sshd@15-172.31.25.253:22-139.178.68.195:50174.service: Deactivated successfully.
Feb 13 16:07:56.877829 systemd-logind[2100]: Session 16 logged out. Waiting for processes to exit.
Feb 13 16:07:56.879306 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 16:07:56.883470 systemd-logind[2100]: Removed session 16.
Feb 13 16:08:01.893999 systemd[1]: Started sshd@16-172.31.25.253:22-139.178.68.195:42054.service - OpenSSH per-connection server daemon (139.178.68.195:42054).
Feb 13 16:08:02.080663 sshd[4876]: Accepted publickey for core from 139.178.68.195 port 42054 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:08:02.083980 sshd[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:08:02.094745 systemd-logind[2100]: New session 17 of user core.
Feb 13 16:08:02.103981 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 16:08:02.373201 sshd[4876]: pam_unix(sshd:session): session closed for user core
Feb 13 16:08:02.379348 systemd[1]: sshd@16-172.31.25.253:22-139.178.68.195:42054.service: Deactivated successfully.
Feb 13 16:08:02.387286 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 16:08:02.391336 systemd-logind[2100]: Session 17 logged out. Waiting for processes to exit.
Feb 13 16:08:02.393854 systemd-logind[2100]: Removed session 17.
Feb 13 16:08:07.410495 systemd[1]: Started sshd@17-172.31.25.253:22-139.178.68.195:49570.service - OpenSSH per-connection server daemon (139.178.68.195:49570).
Feb 13 16:08:07.605524 sshd[4912]: Accepted publickey for core from 139.178.68.195 port 49570 ssh2: RSA SHA256:ucMx2cSvTkGUIEkBWIRjoHjrp2OD2GS2ULysK2Q5fkU
Feb 13 16:08:07.608897 sshd[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 16:08:07.620272 systemd-logind[2100]: New session 18 of user core.
Feb 13 16:08:07.626086 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 16:08:07.898767 sshd[4912]: pam_unix(sshd:session): session closed for user core
Feb 13 16:08:07.908994 systemd[1]: sshd@17-172.31.25.253:22-139.178.68.195:49570.service: Deactivated successfully.
Feb 13 16:08:07.918974 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 16:08:07.921289 systemd-logind[2100]: Session 18 logged out. Waiting for processes to exit.
Feb 13 16:08:07.926341 systemd-logind[2100]: Removed session 18.
Feb 13 16:08:22.277973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5abb7e7bd9949ad3f422e4ed369bb59728b6476c2c7816e564439fad74757e36-rootfs.mount: Deactivated successfully.
Feb 13 16:08:22.283024 containerd[2126]: time="2025-02-13T16:08:22.281758323Z" level=info msg="shim disconnected" id=5abb7e7bd9949ad3f422e4ed369bb59728b6476c2c7816e564439fad74757e36 namespace=k8s.io
Feb 13 16:08:22.283024 containerd[2126]: time="2025-02-13T16:08:22.281967351Z" level=warning msg="cleaning up after shim disconnected" id=5abb7e7bd9949ad3f422e4ed369bb59728b6476c2c7816e564439fad74757e36 namespace=k8s.io
Feb 13 16:08:22.283024 containerd[2126]: time="2025-02-13T16:08:22.281993523Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:08:23.300257 kubelet[3680]: I0213 16:08:23.299334    3680 scope.go:117] "RemoveContainer" containerID="5abb7e7bd9949ad3f422e4ed369bb59728b6476c2c7816e564439fad74757e36"
Feb 13 16:08:23.307798 containerd[2126]: time="2025-02-13T16:08:23.307703836Z" level=info msg="CreateContainer within sandbox \"8c52abd2f9891874e7b856e89145212298bf0ba8162c9fe9403ae57e2b383d18\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 16:08:23.341865 containerd[2126]: time="2025-02-13T16:08:23.341571040Z" level=info msg="CreateContainer within sandbox \"8c52abd2f9891874e7b856e89145212298bf0ba8162c9fe9403ae57e2b383d18\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"44485a1c05b6cccfcef4448e1c96734bd1448d6d83a7cabe3476a8e2e20abd48\""
Feb 13 16:08:23.342659 containerd[2126]: time="2025-02-13T16:08:23.342468208Z" level=info msg="StartContainer for \"44485a1c05b6cccfcef4448e1c96734bd1448d6d83a7cabe3476a8e2e20abd48\""
Feb 13 16:08:23.480306 containerd[2126]: time="2025-02-13T16:08:23.480194669Z" level=info msg="StartContainer for \"44485a1c05b6cccfcef4448e1c96734bd1448d6d83a7cabe3476a8e2e20abd48\" returns successfully"
Feb 13 16:08:26.324021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0bb23f7381717e645224b8623e8a7167b97f80aa1676bfec53b549b99c5c023-rootfs.mount: Deactivated successfully.
Feb 13 16:08:26.341912 containerd[2126]: time="2025-02-13T16:08:26.341600695Z" level=info msg="shim disconnected" id=b0bb23f7381717e645224b8623e8a7167b97f80aa1676bfec53b549b99c5c023 namespace=k8s.io
Feb 13 16:08:26.345938 containerd[2126]: time="2025-02-13T16:08:26.342457963Z" level=warning msg="cleaning up after shim disconnected" id=b0bb23f7381717e645224b8623e8a7167b97f80aa1676bfec53b549b99c5c023 namespace=k8s.io
Feb 13 16:08:26.345938 containerd[2126]: time="2025-02-13T16:08:26.342489067Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 16:08:27.319174 kubelet[3680]: I0213 16:08:27.318570    3680 scope.go:117] "RemoveContainer" containerID="b0bb23f7381717e645224b8623e8a7167b97f80aa1676bfec53b549b99c5c023"
Feb 13 16:08:27.322651 containerd[2126]: time="2025-02-13T16:08:27.322522796Z" level=info msg="CreateContainer within sandbox \"9b7dc8259cff81e81ff1c2951571c22c15a65ddc6f1dd04bbb9d0a5edf188d75\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 16:08:27.356408 containerd[2126]: time="2025-02-13T16:08:27.356314904Z" level=info msg="CreateContainer within sandbox \"9b7dc8259cff81e81ff1c2951571c22c15a65ddc6f1dd04bbb9d0a5edf188d75\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"492629ff95536d43703154aa73f6311909f8669f783c15aa22237a756f8ff185\""
Feb 13 16:08:27.357590 containerd[2126]: time="2025-02-13T16:08:27.357435668Z" level=info msg="StartContainer for \"492629ff95536d43703154aa73f6311909f8669f783c15aa22237a756f8ff185\""
Feb 13 16:08:27.524423 containerd[2126]: time="2025-02-13T16:08:27.524163801Z" level=info msg="StartContainer for \"492629ff95536d43703154aa73f6311909f8669f783c15aa22237a756f8ff185\" returns successfully"
Feb 13 16:08:29.491441 kubelet[3680]: E0213 16:08:29.489762    3680 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-253?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 16:08:39.490881 kubelet[3680]: E0213 16:08:39.490556    3680 controller.go:195] "Failed to update lease" err="Put \"https://172.31.25.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-253?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"