Feb 9 19:14:35.970232 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Feb 9 19:14:35.970268 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024 Feb 9 19:14:35.970291 kernel: efi: EFI v2.70 by EDK II Feb 9 19:14:35.970307 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71a8cf98 Feb 9 19:14:35.970320 kernel: ACPI: Early table checksum verification disabled Feb 9 19:14:35.970335 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Feb 9 19:14:35.970351 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Feb 9 19:14:35.970365 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 9 19:14:35.970379 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Feb 9 19:14:35.970392 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 9 19:14:35.970411 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Feb 9 19:14:35.970425 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Feb 9 19:14:35.970439 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Feb 9 19:14:35.970453 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 9 19:14:35.970469 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Feb 9 19:14:35.970488 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Feb 9 19:14:35.970503 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Feb 9 19:14:35.970518 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Feb 9 19:14:35.970532 kernel: printk: bootconsole [uart0] enabled Feb 9 19:14:35.970546 kernel: NUMA: Failed to initialise from firmware Feb 9 19:14:35.970561 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Feb 9 19:14:35.970576 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff] Feb 9 19:14:35.970590 kernel: Zone ranges: Feb 9 19:14:35.970605 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Feb 9 19:14:35.970619 kernel: DMA32 empty Feb 9 19:14:35.970660 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Feb 9 19:14:35.970683 kernel: Movable zone start for each node Feb 9 19:14:35.970699 kernel: Early memory node ranges Feb 9 19:14:35.970714 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff] Feb 9 19:14:35.970728 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Feb 9 19:14:35.970742 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Feb 9 19:14:35.970757 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Feb 9 19:14:35.970771 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Feb 9 19:14:35.970786 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Feb 9 19:14:35.970800 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Feb 9 19:14:35.970814 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Feb 9 19:14:35.970829 kernel: psci: probing for conduit method from ACPI. Feb 9 19:14:35.970843 kernel: psci: PSCIv1.0 detected in firmware. 
Feb 9 19:14:35.970862 kernel: psci: Using standard PSCI v0.2 function IDs Feb 9 19:14:35.970895 kernel: psci: Trusted OS migration not required Feb 9 19:14:35.970919 kernel: psci: SMC Calling Convention v1.1 Feb 9 19:14:35.970935 kernel: ACPI: SRAT not present Feb 9 19:14:35.970950 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Feb 9 19:14:35.970970 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Feb 9 19:14:35.970985 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 9 19:14:35.971000 kernel: Detected PIPT I-cache on CPU0 Feb 9 19:14:35.971015 kernel: CPU features: detected: GIC system register CPU interface Feb 9 19:14:35.971030 kernel: CPU features: detected: Spectre-v2 Feb 9 19:14:35.971045 kernel: CPU features: detected: Spectre-v3a Feb 9 19:14:35.971060 kernel: CPU features: detected: Spectre-BHB Feb 9 19:14:35.971075 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 9 19:14:35.971091 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 9 19:14:35.971106 kernel: CPU features: detected: ARM erratum 1742098 Feb 9 19:14:35.971121 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Feb 9 19:14:35.971140 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Feb 9 19:14:35.971155 kernel: Policy zone: Normal Feb 9 19:14:35.971173 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4 Feb 9 19:14:35.971189 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 19:14:35.971204 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 19:14:35.971220 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 19:14:35.971235 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 19:14:35.971250 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Feb 9 19:14:35.971266 kernel: Memory: 3826316K/4030464K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 204148K reserved, 0K cma-reserved) Feb 9 19:14:35.971282 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 9 19:14:35.971301 kernel: trace event string verifier disabled Feb 9 19:14:35.971316 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 9 19:14:35.971332 kernel: rcu: RCU event tracing is enabled. Feb 9 19:14:35.971348 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 9 19:14:35.971363 kernel: Trampoline variant of Tasks RCU enabled. Feb 9 19:14:35.971379 kernel: Tracing variant of Tasks RCU enabled. Feb 9 19:14:35.971394 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
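The kernel command line logged above configures Flatcar's dm-verity protected /usr (mount.usr=/dev/mapper/usr with verity.usrhash=...) and selects the root filesystem by label (root=LABEL=ROOT). A minimal sketch, assuming standard util-linux and cryptsetup/veritysetup tools on the booted system, of how one might confirm those settings after boot; the mapping name "usr" is taken from the mount.usr= parameter above:

  # Show the command line the running kernel was actually booted with
  cat /proc/cmdline

  # List block devices with the labels and partition UUIDs referenced on the command line
  lsblk -o NAME,LABEL,PARTUUID,FSTYPE,MOUNTPOINT

  # Inspect the dm-verity mapping backing /usr (mapping name "usr" from mount.usr=/dev/mapper/usr)
  sudo veritysetup status usr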
Feb 9 19:14:35.971409 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 9 19:14:35.971424 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 9 19:14:35.971439 kernel: GICv3: 96 SPIs implemented Feb 9 19:14:35.971454 kernel: GICv3: 0 Extended SPIs implemented Feb 9 19:14:35.971469 kernel: GICv3: Distributor has no Range Selector support Feb 9 19:14:35.971489 kernel: Root IRQ handler: gic_handle_irq Feb 9 19:14:35.971503 kernel: GICv3: 16 PPIs implemented Feb 9 19:14:35.971518 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Feb 9 19:14:35.971533 kernel: ACPI: SRAT not present Feb 9 19:14:35.971548 kernel: ITS [mem 0x10080000-0x1009ffff] Feb 9 19:14:35.971563 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1) Feb 9 19:14:35.971578 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1) Feb 9 19:14:35.971593 kernel: GICv3: using LPI property table @0x00000004000c0000 Feb 9 19:14:35.971608 kernel: ITS: Using hypervisor restricted LPI range [128] Feb 9 19:14:35.971624 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000 Feb 9 19:14:35.972141 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Feb 9 19:14:35.972165 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Feb 9 19:14:35.972182 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Feb 9 19:14:35.972198 kernel: Console: colour dummy device 80x25 Feb 9 19:14:35.972214 kernel: printk: console [tty1] enabled Feb 9 19:14:35.972229 kernel: ACPI: Core revision 20210730 Feb 9 19:14:35.972245 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Feb 9 19:14:35.972261 kernel: pid_max: default: 32768 minimum: 301 Feb 9 19:14:35.972277 kernel: LSM: Security Framework initializing Feb 9 19:14:35.972292 kernel: SELinux: Initializing. Feb 9 19:14:35.972308 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 19:14:35.972328 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 19:14:35.972344 kernel: rcu: Hierarchical SRCU implementation. Feb 9 19:14:35.972359 kernel: Platform MSI: ITS@0x10080000 domain created Feb 9 19:14:35.972375 kernel: PCI/MSI: ITS@0x10080000 domain created Feb 9 19:14:35.972390 kernel: Remapping and enabling EFI services. Feb 9 19:14:35.972406 kernel: smp: Bringing up secondary CPUs ... Feb 9 19:14:35.972421 kernel: Detected PIPT I-cache on CPU1 Feb 9 19:14:35.972437 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Feb 9 19:14:35.972453 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000 Feb 9 19:14:35.972473 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Feb 9 19:14:35.972488 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 19:14:35.972504 kernel: SMP: Total of 2 processors activated. 
Feb 9 19:14:35.972520 kernel: CPU features: detected: 32-bit EL0 Support Feb 9 19:14:35.972535 kernel: CPU features: detected: 32-bit EL1 Support Feb 9 19:14:35.972551 kernel: CPU features: detected: CRC32 instructions Feb 9 19:14:35.972567 kernel: CPU: All CPU(s) started at EL1 Feb 9 19:14:35.972582 kernel: alternatives: patching kernel code Feb 9 19:14:35.972597 kernel: devtmpfs: initialized Feb 9 19:14:35.972617 kernel: KASLR disabled due to lack of seed Feb 9 19:14:35.972652 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 19:14:35.972671 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 9 19:14:35.972698 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 19:14:35.972718 kernel: SMBIOS 3.0.0 present. Feb 9 19:14:35.972735 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Feb 9 19:14:35.972751 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 19:14:35.972767 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 9 19:14:35.972783 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 9 19:14:35.972800 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 9 19:14:35.972816 kernel: audit: initializing netlink subsys (disabled) Feb 9 19:14:35.972832 kernel: audit: type=2000 audit(0.257:1): state=initialized audit_enabled=0 res=1 Feb 9 19:14:35.972853 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 19:14:35.972869 kernel: cpuidle: using governor menu Feb 9 19:14:35.972886 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 9 19:14:35.972902 kernel: ASID allocator initialised with 32768 entries Feb 9 19:14:35.972922 kernel: ACPI: bus type PCI registered Feb 9 19:14:35.972938 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 19:14:35.972955 kernel: Serial: AMBA PL011 UART driver Feb 9 19:14:35.972971 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 19:14:35.972988 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Feb 9 19:14:35.973004 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 19:14:35.973020 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Feb 9 19:14:35.973036 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:14:35.973052 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 9 19:14:35.973069 kernel: ACPI: Added _OSI(Module Device) Feb 9 19:14:35.973089 kernel: ACPI: Added _OSI(Processor Device) Feb 9 19:14:35.973105 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 19:14:35.973121 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 19:14:35.973138 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 19:14:35.973154 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 19:14:35.973170 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 19:14:35.973187 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 19:14:35.973203 kernel: ACPI: Interpreter enabled Feb 9 19:14:35.973219 kernel: ACPI: Using GIC for interrupt routing Feb 9 19:14:35.973239 kernel: ACPI: MCFG table detected, 1 entries Feb 9 19:14:35.973255 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Feb 9 19:14:35.973541 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 9 19:14:35.973767 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 9 19:14:35.973965 kernel: 
acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 9 19:14:35.974157 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Feb 9 19:14:35.974351 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Feb 9 19:14:35.974379 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Feb 9 19:14:35.974396 kernel: acpiphp: Slot [1] registered Feb 9 19:14:35.974413 kernel: acpiphp: Slot [2] registered Feb 9 19:14:35.974429 kernel: acpiphp: Slot [3] registered Feb 9 19:14:35.974445 kernel: acpiphp: Slot [4] registered Feb 9 19:14:35.974461 kernel: acpiphp: Slot [5] registered Feb 9 19:14:35.974477 kernel: acpiphp: Slot [6] registered Feb 9 19:14:35.974494 kernel: acpiphp: Slot [7] registered Feb 9 19:14:35.974510 kernel: acpiphp: Slot [8] registered Feb 9 19:14:35.974530 kernel: acpiphp: Slot [9] registered Feb 9 19:14:35.974547 kernel: acpiphp: Slot [10] registered Feb 9 19:14:35.974563 kernel: acpiphp: Slot [11] registered Feb 9 19:14:35.974579 kernel: acpiphp: Slot [12] registered Feb 9 19:14:35.974595 kernel: acpiphp: Slot [13] registered Feb 9 19:14:35.974611 kernel: acpiphp: Slot [14] registered Feb 9 19:14:35.974643 kernel: acpiphp: Slot [15] registered Feb 9 19:14:35.978737 kernel: acpiphp: Slot [16] registered Feb 9 19:14:35.978758 kernel: acpiphp: Slot [17] registered Feb 9 19:14:35.978775 kernel: acpiphp: Slot [18] registered Feb 9 19:14:35.978802 kernel: acpiphp: Slot [19] registered Feb 9 19:14:35.978818 kernel: acpiphp: Slot [20] registered Feb 9 19:14:35.978834 kernel: acpiphp: Slot [21] registered Feb 9 19:14:35.978851 kernel: acpiphp: Slot [22] registered Feb 9 19:14:35.978867 kernel: acpiphp: Slot [23] registered Feb 9 19:14:35.978906 kernel: acpiphp: Slot [24] registered Feb 9 19:14:35.978923 kernel: acpiphp: Slot [25] registered Feb 9 19:14:35.978940 kernel: acpiphp: Slot [26] registered Feb 9 19:14:35.978956 kernel: acpiphp: Slot [27] registered Feb 9 19:14:35.978977 kernel: acpiphp: Slot [28] registered Feb 9 19:14:35.978994 kernel: acpiphp: Slot [29] registered Feb 9 19:14:35.979011 kernel: acpiphp: Slot [30] registered Feb 9 19:14:35.979027 kernel: acpiphp: Slot [31] registered Feb 9 19:14:35.979044 kernel: PCI host bridge to bus 0000:00 Feb 9 19:14:35.979329 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Feb 9 19:14:35.979526 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 9 19:14:35.979760 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Feb 9 19:14:35.979953 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Feb 9 19:14:35.980190 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Feb 9 19:14:35.980411 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Feb 9 19:14:35.980617 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Feb 9 19:14:35.981968 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 9 19:14:35.982192 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Feb 9 19:14:35.982401 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 19:14:35.985682 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 9 19:14:35.985950 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Feb 9 19:14:35.986153 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Feb 9 19:14:35.986351 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Feb 9 19:14:35.986550 kernel: 
pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 9 19:14:35.986775 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Feb 9 19:14:35.987008 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Feb 9 19:14:35.987214 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Feb 9 19:14:35.987414 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Feb 9 19:14:35.987619 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Feb 9 19:14:35.987835 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Feb 9 19:14:35.988013 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 9 19:14:35.988196 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Feb 9 19:14:35.988225 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 9 19:14:35.988242 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 9 19:14:35.988259 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 9 19:14:35.988276 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 9 19:14:35.988292 kernel: iommu: Default domain type: Translated Feb 9 19:14:35.988309 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 9 19:14:35.988325 kernel: vgaarb: loaded Feb 9 19:14:35.988341 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 19:14:35.988358 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 19:14:35.988378 kernel: PTP clock support registered Feb 9 19:14:35.988394 kernel: Registered efivars operations Feb 9 19:14:35.988411 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 9 19:14:35.988427 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 19:14:35.988444 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 19:14:35.988460 kernel: pnp: PnP ACPI init Feb 9 19:14:35.988689 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Feb 9 19:14:35.988716 kernel: pnp: PnP ACPI: found 1 devices Feb 9 19:14:35.988733 kernel: NET: Registered PF_INET protocol family Feb 9 19:14:35.988756 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 19:14:35.988773 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 9 19:14:35.988790 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 19:14:35.988806 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 19:14:35.988823 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 9 19:14:35.988840 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 9 19:14:35.988856 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 19:14:35.988873 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 19:14:35.988893 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 19:14:35.988910 kernel: PCI: CLS 0 bytes, default 64 Feb 9 19:14:35.988926 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Feb 9 19:14:35.988943 kernel: kvm [1]: HYP mode not available Feb 9 19:14:35.988959 kernel: Initialise system trusted keyrings Feb 9 19:14:35.988976 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 9 19:14:35.988992 kernel: Key type asymmetric registered Feb 9 19:14:35.989009 kernel: Asymmetric key parser 'x509' registered Feb 9 19:14:35.989025 kernel: Block 
layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 19:14:35.989045 kernel: io scheduler mq-deadline registered Feb 9 19:14:35.989062 kernel: io scheduler kyber registered Feb 9 19:14:35.989078 kernel: io scheduler bfq registered Feb 9 19:14:35.989297 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Feb 9 19:14:35.989322 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 9 19:14:35.989339 kernel: ACPI: button: Power Button [PWRB] Feb 9 19:14:35.989356 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 19:14:35.989373 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Feb 9 19:14:35.989573 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Feb 9 19:14:35.989600 kernel: printk: console [ttyS0] disabled Feb 9 19:14:35.989617 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Feb 9 19:14:35.989651 kernel: printk: console [ttyS0] enabled Feb 9 19:14:35.989670 kernel: printk: bootconsole [uart0] disabled Feb 9 19:14:35.989686 kernel: thunder_xcv, ver 1.0 Feb 9 19:14:35.989703 kernel: thunder_bgx, ver 1.0 Feb 9 19:14:35.989719 kernel: nicpf, ver 1.0 Feb 9 19:14:35.989735 kernel: nicvf, ver 1.0 Feb 9 19:14:35.989950 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 9 19:14:35.990143 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T19:14:35 UTC (1707506075) Feb 9 19:14:35.990166 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 19:14:35.990182 kernel: NET: Registered PF_INET6 protocol family Feb 9 19:14:35.990198 kernel: Segment Routing with IPv6 Feb 9 19:14:35.990215 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 19:14:35.990231 kernel: NET: Registered PF_PACKET protocol family Feb 9 19:14:35.990247 kernel: Key type dns_resolver registered Feb 9 19:14:35.990263 kernel: registered taskstats version 1 Feb 9 19:14:35.990284 kernel: Loading compiled-in X.509 certificates Feb 9 19:14:35.990301 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9' Feb 9 19:14:35.990317 kernel: Key type .fscrypt registered Feb 9 19:14:35.990333 kernel: Key type fscrypt-provisioning registered Feb 9 19:14:35.990349 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 9 19:14:35.990366 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:14:35.990382 kernel: ima: No architecture policies found Feb 9 19:14:35.990398 kernel: Freeing unused kernel memory: 34688K Feb 9 19:14:35.990414 kernel: Run /init as init process Feb 9 19:14:35.990434 kernel: with arguments: Feb 9 19:14:35.990451 kernel: /init Feb 9 19:14:35.990467 kernel: with environment: Feb 9 19:14:35.990483 kernel: HOME=/ Feb 9 19:14:35.990499 kernel: TERM=linux Feb 9 19:14:35.990515 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:14:35.990536 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:14:35.990556 systemd[1]: Detected virtualization amazon. Feb 9 19:14:35.990579 systemd[1]: Detected architecture arm64. Feb 9 19:14:35.990596 systemd[1]: Running in initrd. Feb 9 19:14:35.990614 systemd[1]: No hostname configured, using default hostname. Feb 9 19:14:35.990647 systemd[1]: Hostname set to . 
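Before leaving the initrd, systemd logs the virtualization type (amazon) and architecture (arm64) it detected. A small sketch, assuming the same systemd tooling is available in the booted system, for reproducing those checks by hand:

  # Print the virtualization technology systemd detected ("amazon" on EC2 instances such as this one)
  systemd-detect-virt

  # Confirm the machine architecture reported above
  uname -m        # expected: aarch64

  # Show the systemd version and compile-time feature flags, matching the "+PAM +AUDIT ..." banner
  systemctl --version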
Feb 9 19:14:35.990670 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:14:35.990687 systemd[1]: Queued start job for default target initrd.target. Feb 9 19:14:35.990705 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:14:35.990722 systemd[1]: Reached target cryptsetup.target. Feb 9 19:14:35.990744 systemd[1]: Reached target paths.target. Feb 9 19:14:35.990762 systemd[1]: Reached target slices.target. Feb 9 19:14:35.990779 systemd[1]: Reached target swap.target. Feb 9 19:14:35.990796 systemd[1]: Reached target timers.target. Feb 9 19:14:35.990815 systemd[1]: Listening on iscsid.socket. Feb 9 19:14:35.990832 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:14:35.990850 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:14:35.990867 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:14:35.990906 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:14:35.990925 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:14:35.990942 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:14:35.990960 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:14:35.990977 systemd[1]: Reached target sockets.target. Feb 9 19:14:35.990995 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:14:35.991012 systemd[1]: Finished network-cleanup.service. Feb 9 19:14:35.991030 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:14:35.991047 systemd[1]: Starting systemd-journald.service... Feb 9 19:14:35.991069 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:14:35.991087 systemd[1]: Starting systemd-resolved.service... Feb 9 19:14:35.991104 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:14:35.991122 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:14:35.991142 systemd-journald[308]: Journal started Feb 9 19:14:35.991237 systemd-journald[308]: Runtime Journal (/run/log/journal/ec23c850af0334e4c20ff6cccba83a55) is 8.0M, max 75.4M, 67.4M free. Feb 9 19:14:35.974694 systemd-modules-load[309]: Inserted module 'overlay' Feb 9 19:14:36.005809 kernel: audit: type=1130 audit(1707506075.995:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.005851 systemd[1]: Started systemd-journald.service. Feb 9 19:14:35.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.013330 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:14:36.044819 kernel: audit: type=1130 audit(1707506076.011:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.044857 kernel: audit: type=1130 audit(1707506076.022:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.044892 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Feb 9 19:14:36.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.024512 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 19:14:36.056090 kernel: audit: type=1130 audit(1707506076.034:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.037555 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:14:36.065372 systemd-modules-load[309]: Inserted module 'br_netfilter' Feb 9 19:14:36.065654 kernel: Bridge firewalling registered Feb 9 19:14:36.074313 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:14:36.090696 kernel: SCSI subsystem initialized Feb 9 19:14:36.097989 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:14:36.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.117508 kernel: audit: type=1130 audit(1707506076.096:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.117577 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:14:36.119492 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:14:36.123688 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:14:36.128767 systemd-modules-load[309]: Inserted module 'dm_multipath' Feb 9 19:14:36.131961 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:14:36.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.140093 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:14:36.155685 kernel: audit: type=1130 audit(1707506076.130:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.158798 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:14:36.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.175580 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:14:36.178373 kernel: audit: type=1130 audit(1707506076.161:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.193537 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:14:36.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:14:36.205705 kernel: audit: type=1130 audit(1707506076.194:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.216218 dracut-cmdline[331]: dracut-dracut-053 Feb 9 19:14:36.218114 systemd-resolved[310]: Positive Trust Anchors: Feb 9 19:14:36.218131 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:14:36.218185 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:14:36.258049 dracut-cmdline[331]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4 Feb 9 19:14:36.390666 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:14:36.405670 kernel: iscsi: registered transport (tcp) Feb 9 19:14:36.432146 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:14:36.432228 kernel: QLogic iSCSI HBA Driver Feb 9 19:14:36.612673 kernel: random: crng init done Feb 9 19:14:36.613069 systemd-resolved[310]: Defaulting to hostname 'linux'. Feb 9 19:14:36.617105 systemd[1]: Started systemd-resolved.service. Feb 9 19:14:36.630728 kernel: audit: type=1130 audit(1707506076.617:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.619215 systemd[1]: Reached target nss-lookup.target. Feb 9 19:14:36.645926 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:14:36.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:36.649919 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 19:14:36.721678 kernel: raid6: neonx8 gen() 6335 MB/s Feb 9 19:14:36.739688 kernel: raid6: neonx8 xor() 4622 MB/s Feb 9 19:14:36.757691 kernel: raid6: neonx4 gen() 6391 MB/s Feb 9 19:14:36.775693 kernel: raid6: neonx4 xor() 4791 MB/s Feb 9 19:14:36.793691 kernel: raid6: neonx2 gen() 5622 MB/s Feb 9 19:14:36.811673 kernel: raid6: neonx2 xor() 4368 MB/s Feb 9 19:14:36.829688 kernel: raid6: neonx1 gen() 4406 MB/s Feb 9 19:14:36.847685 kernel: raid6: neonx1 xor() 3597 MB/s Feb 9 19:14:36.865689 kernel: raid6: int64x8 gen() 3380 MB/s Feb 9 19:14:36.883685 kernel: raid6: int64x8 xor() 2068 MB/s Feb 9 19:14:36.901692 kernel: raid6: int64x4 gen() 3741 MB/s Feb 9 19:14:36.919691 kernel: raid6: int64x4 xor() 2165 MB/s Feb 9 19:14:36.937688 kernel: raid6: int64x2 gen() 3514 MB/s Feb 9 19:14:36.955688 kernel: raid6: int64x2 xor() 1921 MB/s Feb 9 19:14:36.973693 kernel: raid6: int64x1 gen() 2742 MB/s Feb 9 19:14:36.993232 kernel: raid6: int64x1 xor() 1437 MB/s Feb 9 19:14:36.993301 kernel: raid6: using algorithm neonx4 gen() 6391 MB/s Feb 9 19:14:36.993327 kernel: raid6: .... xor() 4791 MB/s, rmw enabled Feb 9 19:14:36.995069 kernel: raid6: using neon recovery algorithm Feb 9 19:14:37.015693 kernel: xor: measuring software checksum speed Feb 9 19:14:37.017685 kernel: 8regs : 9412 MB/sec Feb 9 19:14:37.020690 kernel: 32regs : 11155 MB/sec Feb 9 19:14:37.024712 kernel: arm64_neon : 9636 MB/sec Feb 9 19:14:37.024794 kernel: xor: using function: 32regs (11155 MB/sec) Feb 9 19:14:37.121699 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Feb 9 19:14:37.143037 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:14:37.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:37.145000 audit: BPF prog-id=7 op=LOAD Feb 9 19:14:37.145000 audit: BPF prog-id=8 op=LOAD Feb 9 19:14:37.147917 systemd[1]: Starting systemd-udevd.service... Feb 9 19:14:37.179256 systemd-udevd[508]: Using default interface naming scheme 'v252'. Feb 9 19:14:37.191296 systemd[1]: Started systemd-udevd.service. Feb 9 19:14:37.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:37.200448 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:14:37.231011 dracut-pre-trigger[518]: rd.md=0: removing MD RAID activation Feb 9 19:14:37.302288 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:14:37.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:37.306892 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:14:37.420796 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:14:37.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:37.551899 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 9 19:14:37.551970 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Feb 9 19:14:37.566923 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 9 19:14:37.567269 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 9 19:14:37.572324 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 9 19:14:37.572390 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 9 19:14:37.582678 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 9 19:14:37.590594 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 19:14:37.590692 kernel: GPT:9289727 != 16777215 Feb 9 19:14:37.590718 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:14:37.590741 kernel: GPT:9289727 != 16777215 Feb 9 19:14:37.590772 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:14:37.590794 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:14:37.602678 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:35:f0:10:a5:bd Feb 9 19:14:37.608218 (udev-worker)[561]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:14:37.683707 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (567) Feb 9 19:14:37.712069 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:14:37.780714 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:14:37.804971 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:14:37.809228 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:14:37.829571 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:14:37.852005 systemd[1]: Starting disk-uuid.service... Feb 9 19:14:37.869676 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:14:37.870895 disk-uuid[665]: Primary Header is updated. Feb 9 19:14:37.870895 disk-uuid[665]: Secondary Entries is updated. Feb 9 19:14:37.870895 disk-uuid[665]: Secondary Header is updated. Feb 9 19:14:37.889681 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:14:37.897672 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:14:38.906318 disk-uuid[666]: The operation has completed successfully. Feb 9 19:14:38.909103 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:14:39.075959 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:14:39.076235 systemd[1]: Finished disk-uuid.service. Feb 9 19:14:39.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:39.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:39.091759 systemd[1]: Starting verity-setup.service... Feb 9 19:14:39.127675 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 19:14:39.206901 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:14:39.212014 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:14:39.219946 systemd[1]: Finished verity-setup.service. Feb 9 19:14:39.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:39.302680 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:14:39.303419 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:14:39.303957 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:14:39.309899 systemd[1]: Starting ignition-setup.service... Feb 9 19:14:39.324382 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:14:39.336100 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 9 19:14:39.336170 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 19:14:39.338687 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 19:14:39.345694 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 19:14:39.364974 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:14:39.387500 systemd[1]: Finished ignition-setup.service. Feb 9 19:14:39.393062 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:14:39.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:39.497327 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:14:39.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:39.501000 audit: BPF prog-id=9 op=LOAD Feb 9 19:14:39.504225 systemd[1]: Starting systemd-networkd.service... Feb 9 19:14:39.552534 systemd-networkd[1178]: lo: Link UP Feb 9 19:14:39.552561 systemd-networkd[1178]: lo: Gained carrier Feb 9 19:14:39.555324 systemd-networkd[1178]: Enumeration completed Feb 9 19:14:39.555504 systemd[1]: Started systemd-networkd.service. Feb 9 19:14:39.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:39.559732 systemd[1]: Reached target network.target. Feb 9 19:14:39.561805 systemd-networkd[1178]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:14:39.584904 systemd-networkd[1178]: eth0: Link UP Feb 9 19:14:39.584912 systemd-networkd[1178]: eth0: Gained carrier Feb 9 19:14:39.586044 systemd[1]: Starting iscsiuio.service... Feb 9 19:14:39.601797 systemd[1]: Started iscsiuio.service. Feb 9 19:14:39.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:39.605316 systemd[1]: Starting iscsid.service... Feb 9 19:14:39.606797 systemd-networkd[1178]: eth0: DHCPv4 address 172.31.21.34/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 19:14:39.617728 iscsid[1183]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:14:39.617728 iscsid[1183]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. 
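The iscsid warning above (iscsid's own example continues just below) asks for an /etc/iscsi/initiatorname.iscsi file containing an InitiatorName in IQN format. A hedged sketch of one common way to create it, assuming the open-iscsi iscsi-iname helper is installed; the generated IQN value is random and illustrative only:

  # Generate a random IQN and write it in the format iscsid expects
  echo "InitiatorName=$(iscsi-iname)" | sudo tee /etc/iscsi/initiatorname.iscsi

  # Restart iscsid so it picks up the new initiator name
  sudo systemctl restart iscsid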
Feb 9 19:14:39.617728 iscsid[1183]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:14:39.617728 iscsid[1183]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:14:39.637098 iscsid[1183]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:14:39.637098 iscsid[1183]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:14:39.631740 systemd[1]: Started iscsid.service. Feb 9 19:14:39.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:39.648124 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:14:39.677895 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:14:39.683754 kernel: kauditd_printk_skb: 16 callbacks suppressed Feb 9 19:14:39.683809 kernel: audit: type=1130 audit(1707506079.680:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:39.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:39.681490 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:14:39.693913 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:14:39.695871 systemd[1]: Reached target remote-fs.target. Feb 9 19:14:39.702915 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:14:39.723408 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:14:39.733823 kernel: audit: type=1130 audit(1707506079.721:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:39.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:39.983922 ignition[1096]: Ignition 2.14.0 Feb 9 19:14:39.985758 ignition[1096]: Stage: fetch-offline Feb 9 19:14:39.987646 ignition[1096]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:14:39.990881 ignition[1096]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:14:40.007231 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:14:40.008289 ignition[1096]: Ignition finished successfully Feb 9 19:14:40.013401 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:14:40.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:40.024295 systemd[1]: Starting ignition-fetch.service... Feb 9 19:14:40.033655 kernel: audit: type=1130 audit(1707506080.012:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:40.040487 ignition[1202]: Ignition 2.14.0 Feb 9 19:14:40.040515 ignition[1202]: Stage: fetch Feb 9 19:14:40.040850 ignition[1202]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:14:40.040910 ignition[1202]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:14:40.054622 ignition[1202]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:14:40.057137 ignition[1202]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:14:40.073577 ignition[1202]: INFO : PUT result: OK Feb 9 19:14:40.077607 ignition[1202]: DEBUG : parsed url from cmdline: "" Feb 9 19:14:40.077607 ignition[1202]: INFO : no config URL provided Feb 9 19:14:40.077607 ignition[1202]: INFO : reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:14:40.083894 ignition[1202]: INFO : no config at "/usr/lib/ignition/user.ign" Feb 9 19:14:40.083894 ignition[1202]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:14:40.083894 ignition[1202]: INFO : PUT result: OK Feb 9 19:14:40.083894 ignition[1202]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 9 19:14:40.093668 ignition[1202]: INFO : GET result: OK Feb 9 19:14:40.095317 ignition[1202]: DEBUG : parsing config with SHA512: e638160bacb682f5da6ae2caf3d1d12471b1a453bcf7a9da8ddb3b8f08e61377626e0eab7882ea4a56b6f79d2afa7c9150958bd7f6fd45cd6a171b61fc4cdb4a Feb 9 19:14:40.167138 unknown[1202]: fetched base config from "system" Feb 9 19:14:40.167184 unknown[1202]: fetched base config from "system" Feb 9 19:14:40.167201 unknown[1202]: fetched user config from "aws" Feb 9 19:14:40.173112 ignition[1202]: fetch: fetch complete Feb 9 19:14:40.173151 ignition[1202]: fetch: fetch passed Feb 9 19:14:40.173276 ignition[1202]: Ignition finished successfully Feb 9 19:14:40.179404 systemd[1]: Finished ignition-fetch.service. Feb 9 19:14:40.183024 systemd[1]: Starting ignition-kargs.service... Feb 9 19:14:40.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:40.200181 kernel: audit: type=1130 audit(1707506080.179:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:40.212915 ignition[1208]: Ignition 2.14.0 Feb 9 19:14:40.212951 ignition[1208]: Stage: kargs Feb 9 19:14:40.213277 ignition[1208]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:14:40.213332 ignition[1208]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:14:40.229176 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:14:40.231686 ignition[1208]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:14:40.234975 ignition[1208]: INFO : PUT result: OK Feb 9 19:14:40.240978 ignition[1208]: kargs: kargs passed Feb 9 19:14:40.241130 ignition[1208]: Ignition finished successfully Feb 9 19:14:40.245419 systemd[1]: Finished ignition-kargs.service. Feb 9 19:14:40.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:14:40.257729 kernel: audit: type=1130 audit(1707506080.246:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:40.250305 systemd[1]: Starting ignition-disks.service... Feb 9 19:14:40.266727 ignition[1214]: Ignition 2.14.0 Feb 9 19:14:40.266758 ignition[1214]: Stage: disks Feb 9 19:14:40.267116 ignition[1214]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:14:40.267179 ignition[1214]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:14:40.281719 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:14:40.284065 ignition[1214]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:14:40.287102 ignition[1214]: INFO : PUT result: OK Feb 9 19:14:40.292398 ignition[1214]: disks: disks passed Feb 9 19:14:40.292532 ignition[1214]: Ignition finished successfully Feb 9 19:14:40.296823 systemd[1]: Finished ignition-disks.service. Feb 9 19:14:40.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:40.300208 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:14:40.313059 kernel: audit: type=1130 audit(1707506080.298:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:40.311400 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:14:40.316865 systemd[1]: Reached target local-fs.target. Feb 9 19:14:40.318540 systemd[1]: Reached target sysinit.target. Feb 9 19:14:40.320214 systemd[1]: Reached target basic.target. Feb 9 19:14:40.336493 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:14:40.382248 systemd-fsck[1222]: ROOT: clean, 602/553520 files, 56013/553472 blocks Feb 9 19:14:40.389343 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:14:40.393911 systemd[1]: Mounting sysroot.mount... Feb 9 19:14:40.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:40.406542 kernel: audit: type=1130 audit(1707506080.390:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:40.418673 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:14:40.420389 systemd[1]: Mounted sysroot.mount. Feb 9 19:14:40.420690 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:14:40.433949 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:14:40.436295 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:14:40.436396 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:14:40.436459 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:14:40.454869 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:14:40.471084 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
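In the Ignition fetch stage above, each request first obtains an IMDSv2 session token with a PUT to http://169.254.169.254/latest/api/token and then GETs the user data with that token. A minimal sketch of the same exchange with curl; the token TTL is an arbitrary example value:

  # Request an IMDSv2 session token (TTL in seconds is an arbitrary example)
  TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

  # Fetch the instance user data with the token, as Ignition does above
  curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    "http://169.254.169.254/2019-10-01/user-data"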
Feb 9 19:14:40.476004 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:14:40.494690 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1239) Feb 9 19:14:40.495042 initrd-setup-root[1244]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:14:40.504077 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 9 19:14:40.504153 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 19:14:40.504179 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 19:14:40.512681 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 19:14:40.517268 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:14:40.526426 initrd-setup-root[1270]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:14:40.529964 initrd-setup-root[1278]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:14:40.536956 initrd-setup-root[1286]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:14:40.738416 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:14:40.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:40.742921 systemd[1]: Starting ignition-mount.service... Feb 9 19:14:40.752668 kernel: audit: type=1130 audit(1707506080.740:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:40.753143 systemd[1]: Starting sysroot-boot.service... Feb 9 19:14:40.764358 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:14:40.764536 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:14:40.799207 systemd[1]: Finished sysroot-boot.service. Feb 9 19:14:40.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:40.809816 ignition[1306]: INFO : Ignition 2.14.0 Feb 9 19:14:40.809816 ignition[1306]: INFO : Stage: mount Feb 9 19:14:40.809816 ignition[1306]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:14:40.809816 ignition[1306]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:14:40.820458 kernel: audit: type=1130 audit(1707506080.801:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:40.827011 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:14:40.830194 ignition[1306]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:14:40.832951 ignition[1306]: INFO : PUT result: OK Feb 9 19:14:40.838558 ignition[1306]: INFO : mount: mount passed Feb 9 19:14:40.840239 ignition[1306]: INFO : Ignition finished successfully Feb 9 19:14:40.843510 systemd[1]: Finished ignition-mount.service. Feb 9 19:14:40.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:40.854697 kernel: audit: type=1130 audit(1707506080.844:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:40.855325 systemd[1]: Starting ignition-files.service... Feb 9 19:14:40.871771 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:14:40.889674 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1314) Feb 9 19:14:40.895768 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 9 19:14:40.895840 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 19:14:40.895864 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 19:14:40.904669 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 19:14:40.909235 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:14:40.928750 ignition[1333]: INFO : Ignition 2.14.0 Feb 9 19:14:40.928750 ignition[1333]: INFO : Stage: files Feb 9 19:14:40.932086 ignition[1333]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:14:40.932086 ignition[1333]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:14:40.949341 ignition[1333]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:14:40.952324 ignition[1333]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:14:40.955773 ignition[1333]: INFO : PUT result: OK Feb 9 19:14:40.960917 ignition[1333]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:14:40.967450 ignition[1333]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:14:40.971035 ignition[1333]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:14:41.007212 ignition[1333]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:14:41.010420 ignition[1333]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:14:41.014259 unknown[1333]: wrote ssh authorized keys file for user: core Feb 9 19:14:41.016465 ignition[1333]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:14:41.025362 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:14:41.028962 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 19:14:41.028962 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 19:14:41.028962 ignition[1333]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 19:14:41.121429 ignition[1333]: INFO : GET result: OK Feb 9 19:14:41.226681 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 19:14:41.230812 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 19:14:41.234808 ignition[1333]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 9 
19:14:41.340934 systemd-networkd[1178]: eth0: Gained IPv6LL Feb 9 19:14:41.703856 ignition[1333]: INFO : GET result: OK Feb 9 19:14:42.042798 ignition[1333]: DEBUG : file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 9 19:14:42.047933 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 19:14:42.047933 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 19:14:42.047933 ignition[1333]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 9 19:14:42.428320 ignition[1333]: INFO : GET result: OK Feb 9 19:14:42.878745 ignition[1333]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 9 19:14:42.883570 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 19:14:42.883570 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:14:42.883570 ignition[1333]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 9 19:14:43.019272 ignition[1333]: INFO : GET result: OK Feb 9 19:14:44.734126 ignition[1333]: DEBUG : file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 9 19:14:44.739380 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:14:44.739380 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 19:14:44.739380 ignition[1333]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:14:44.760190 ignition[1333]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2823476593" Feb 9 19:14:44.767098 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1336) Feb 9 19:14:44.767140 ignition[1333]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2823476593": device or resource busy Feb 9 19:14:44.767140 ignition[1333]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2823476593", trying btrfs: device or resource busy Feb 9 19:14:44.767140 ignition[1333]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2823476593" Feb 9 19:14:44.787057 ignition[1333]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2823476593" Feb 9 19:14:44.802372 ignition[1333]: INFO : op(3): [started] unmounting "/mnt/oem2823476593" Feb 9 19:14:44.807193 systemd[1]: mnt-oem2823476593.mount: Deactivated successfully. 
Feb 9 19:14:44.810509 ignition[1333]: INFO : op(3): [finished] unmounting "/mnt/oem2823476593" Feb 9 19:14:44.812878 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 19:14:44.812878 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:14:44.819882 ignition[1333]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 19:14:44.899695 ignition[1333]: INFO : GET result: OK Feb 9 19:14:45.541736 ignition[1333]: DEBUG : file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 19:14:45.546860 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:14:45.546860 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:14:45.546860 ignition[1333]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 9 19:14:45.609657 ignition[1333]: INFO : GET result: OK Feb 9 19:14:46.245830 ignition[1333]: DEBUG : file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 9 19:14:46.252181 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:14:46.252181 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:14:46.252181 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:14:46.252181 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:14:46.252181 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:14:46.252181 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:14:46.252181 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:14:46.252181 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:14:46.252181 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:14:46.252181 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:14:46.252181 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:14:46.252181 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:14:46.252181 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:14:46.252181 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing 
file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:14:46.252181 ignition[1333]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:14:46.312885 ignition[1333]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem545276890" Feb 9 19:14:46.312885 ignition[1333]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem545276890": device or resource busy Feb 9 19:14:46.312885 ignition[1333]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem545276890", trying btrfs: device or resource busy Feb 9 19:14:46.312885 ignition[1333]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem545276890" Feb 9 19:14:46.325968 ignition[1333]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem545276890" Feb 9 19:14:46.325968 ignition[1333]: INFO : op(6): [started] unmounting "/mnt/oem545276890" Feb 9 19:14:46.331418 ignition[1333]: INFO : op(6): [finished] unmounting "/mnt/oem545276890" Feb 9 19:14:46.335041 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:14:46.335041 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 19:14:46.335041 ignition[1333]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:14:46.359034 ignition[1333]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4004178876" Feb 9 19:14:46.359034 ignition[1333]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4004178876": device or resource busy Feb 9 19:14:46.359034 ignition[1333]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4004178876", trying btrfs: device or resource busy Feb 9 19:14:46.359034 ignition[1333]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4004178876" Feb 9 19:14:46.359034 ignition[1333]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4004178876" Feb 9 19:14:46.359034 ignition[1333]: INFO : op(9): [started] unmounting "/mnt/oem4004178876" Feb 9 19:14:46.377911 ignition[1333]: INFO : op(9): [finished] unmounting "/mnt/oem4004178876" Feb 9 19:14:46.377911 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 19:14:46.377911 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 19:14:46.377911 ignition[1333]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:14:46.406683 ignition[1333]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3615141101" Feb 9 19:14:46.406683 ignition[1333]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3615141101": device or resource busy Feb 9 19:14:46.406683 ignition[1333]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3615141101", trying btrfs: device or resource busy Feb 9 19:14:46.406683 ignition[1333]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3615141101" Feb 9 19:14:46.406683 ignition[1333]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3615141101" Feb 9 19:14:46.425101 ignition[1333]: INFO : op(c): [started] unmounting "/mnt/oem3615141101" Feb 9 19:14:46.427530 
ignition[1333]: INFO : op(c): [finished] unmounting "/mnt/oem3615141101" Feb 9 19:14:46.430086 ignition[1333]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 19:14:46.433857 ignition[1333]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:14:46.433857 ignition[1333]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:14:46.433857 ignition[1333]: INFO : files: op(15): [started] processing unit "amazon-ssm-agent.service" Feb 9 19:14:46.433857 ignition[1333]: INFO : files: op(15): op(16): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 19:14:46.433857 ignition[1333]: INFO : files: op(15): op(16): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 19:14:46.433857 ignition[1333]: INFO : files: op(15): [finished] processing unit "amazon-ssm-agent.service" Feb 9 19:14:46.433857 ignition[1333]: INFO : files: op(17): [started] processing unit "nvidia.service" Feb 9 19:14:46.433857 ignition[1333]: INFO : files: op(17): [finished] processing unit "nvidia.service" Feb 9 19:14:46.433857 ignition[1333]: INFO : files: op(18): [started] processing unit "containerd.service" Feb 9 19:14:46.433857 ignition[1333]: INFO : files: op(18): op(19): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(18): op(19): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(18): [finished] processing unit "containerd.service" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(1c): [started] processing unit "prepare-critools.service" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(1e): [started] processing unit "prepare-helm.service" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: 
op(1e): [finished] processing unit "prepare-helm.service" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(20): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(20): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(21): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(21): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 19:14:46.465514 ignition[1333]: INFO : files: op(22): [started] setting preset to enabled for "nvidia.service" Feb 9 19:14:46.536556 ignition[1333]: INFO : files: op(22): [finished] setting preset to enabled for "nvidia.service" Feb 9 19:14:46.536556 ignition[1333]: INFO : files: op(23): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:14:46.536556 ignition[1333]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:14:46.536556 ignition[1333]: INFO : files: op(24): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:14:46.536556 ignition[1333]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:14:46.536556 ignition[1333]: INFO : files: op(25): [started] setting preset to enabled for "prepare-helm.service" Feb 9 19:14:46.536556 ignition[1333]: INFO : files: op(25): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 19:14:46.557235 systemd[1]: mnt-oem3615141101.mount: Deactivated successfully. Feb 9 19:14:46.569446 ignition[1333]: INFO : files: createResultFile: createFiles: op(26): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:14:46.574784 ignition[1333]: INFO : files: createResultFile: createFiles: op(26): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:14:46.574784 ignition[1333]: INFO : files: files passed Feb 9 19:14:46.574784 ignition[1333]: INFO : Ignition finished successfully Feb 9 19:14:46.578487 systemd[1]: Finished ignition-files.service. Feb 9 19:14:46.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.593694 kernel: audit: type=1130 audit(1707506086.584:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.605131 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:14:46.613935 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:14:46.631729 systemd[1]: Starting ignition-quench.service... Feb 9 19:14:46.642387 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:14:46.660858 kernel: audit: type=1130 audit(1707506086.641:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.660912 kernel: audit: type=1131 audit(1707506086.641:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:46.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.661058 initrd-setup-root-after-ignition[1358]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:14:46.642685 systemd[1]: Finished ignition-quench.service. Feb 9 19:14:46.677879 kernel: audit: type=1130 audit(1707506086.663:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.663665 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:14:46.666738 systemd[1]: Reached target ignition-complete.target. Feb 9 19:14:46.678020 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:14:46.712585 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:14:46.731550 kernel: audit: type=1130 audit(1707506086.713:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.732156 kernel: audit: type=1131 audit(1707506086.713:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.712843 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:14:46.714963 systemd[1]: Reached target initrd-fs.target. Feb 9 19:14:46.734645 systemd[1]: Reached target initrd.target. Feb 9 19:14:46.739743 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:14:46.744146 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:14:46.771531 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:14:46.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.776763 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:14:46.785797 kernel: audit: type=1130 audit(1707506086.773:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.800861 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:14:46.804673 systemd[1]: Stopped target remote-cryptsetup.target. 
Feb 9 19:14:46.808818 systemd[1]: Stopped target timers.target. Feb 9 19:14:46.812320 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:14:46.814546 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:14:46.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.818178 systemd[1]: Stopped target initrd.target. Feb 9 19:14:46.832954 kernel: audit: type=1131 audit(1707506086.816:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.827296 systemd[1]: Stopped target basic.target. Feb 9 19:14:46.829186 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:14:46.831409 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:14:46.835139 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:14:46.837845 systemd[1]: Stopped target remote-fs.target. Feb 9 19:14:46.841318 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:14:46.843962 systemd[1]: Stopped target sysinit.target. Feb 9 19:14:46.846965 systemd[1]: Stopped target local-fs.target. Feb 9 19:14:46.849955 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:14:46.853549 systemd[1]: Stopped target swap.target. Feb 9 19:14:46.864151 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:14:46.865420 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:14:46.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.877418 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:14:46.883024 kernel: audit: type=1131 audit(1707506086.865:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.883020 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:14:46.885095 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:14:46.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.888897 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:14:46.905105 kernel: audit: type=1131 audit(1707506086.887:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.889199 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:14:46.898377 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:14:46.898693 systemd[1]: Stopped ignition-files.service. Feb 9 19:14:46.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:46.922469 iscsid[1183]: iscsid shutting down. Feb 9 19:14:46.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.912998 systemd[1]: Stopping ignition-mount.service... Feb 9 19:14:46.915223 systemd[1]: Stopping iscsid.service... Feb 9 19:14:46.916721 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:14:46.917019 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:14:46.927396 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:14:46.933878 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:14:46.934256 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:14:46.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.950518 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:14:46.950852 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:14:46.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.963146 ignition[1371]: INFO : Ignition 2.14.0 Feb 9 19:14:46.963146 ignition[1371]: INFO : Stage: umount Feb 9 19:14:46.963146 ignition[1371]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:14:46.963146 ignition[1371]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:14:46.962275 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:14:46.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.964435 systemd[1]: Stopped iscsid.service. Feb 9 19:14:46.989135 systemd[1]: Stopping iscsiuio.service... Feb 9 19:14:46.995263 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:14:46.996059 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:14:47.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.004440 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:14:47.005490 systemd[1]: Stopped iscsiuio.service. Feb 9 19:14:47.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.013842 ignition[1371]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:14:47.013842 ignition[1371]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:14:47.018939 ignition[1371]: INFO : PUT result: OK Feb 9 19:14:47.022216 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Feb 9 19:14:47.023213 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:14:47.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.030887 ignition[1371]: INFO : umount: umount passed Feb 9 19:14:47.032837 ignition[1371]: INFO : Ignition finished successfully Feb 9 19:14:47.036359 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:14:47.036961 systemd[1]: Stopped ignition-mount.service. Feb 9 19:14:47.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.040340 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:14:47.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.040448 systemd[1]: Stopped ignition-disks.service. Feb 9 19:14:47.043779 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:14:47.043887 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:14:47.045732 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:14:47.045843 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:14:47.047754 systemd[1]: Stopped target network.target. Feb 9 19:14:47.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.049446 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:14:47.049566 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:14:47.053233 systemd[1]: Stopped target paths.target. Feb 9 19:14:47.054804 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:14:47.058766 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:14:47.065935 systemd[1]: Stopped target slices.target. Feb 9 19:14:47.069377 systemd[1]: Stopped target sockets.target. Feb 9 19:14:47.072576 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:14:47.072692 systemd[1]: Closed iscsid.socket. Feb 9 19:14:47.074246 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:14:47.074331 systemd[1]: Closed iscsiuio.socket. Feb 9 19:14:47.075896 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:14:47.076017 systemd[1]: Stopped ignition-setup.service. 
Feb 9 19:14:47.078411 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:14:47.078527 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:14:47.081518 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:14:47.084518 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:14:47.087477 systemd-networkd[1178]: eth0: DHCPv6 lease lost Feb 9 19:14:47.118621 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:14:47.119303 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:14:47.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.123000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:14:47.124034 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:14:47.124142 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:14:47.131055 systemd[1]: Stopping network-cleanup.service... Feb 9 19:14:47.138489 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:14:47.139213 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:14:47.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.144850 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:14:47.145161 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:14:47.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.155150 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:14:47.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.155275 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:14:47.157961 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:14:47.164776 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:14:47.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.165025 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:14:47.175000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:14:47.181023 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:14:47.181624 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:14:47.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.187190 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:14:47.189221 systemd[1]: Stopped network-cleanup.service. Feb 9 19:14:47.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.191514 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:14:47.191606 systemd[1]: Closed systemd-udevd-control.socket. 
Feb 9 19:14:47.196472 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:14:47.196597 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:14:47.202972 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:14:47.203107 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:14:47.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.208501 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:14:47.210413 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:14:47.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.214056 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:14:47.214198 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:14:47.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.221405 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:14:47.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.239047 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:14:47.239179 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:14:47.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:47.241824 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:14:47.242032 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:14:47.250323 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:14:47.259921 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:14:47.285534 systemd[1]: Switching root. Feb 9 19:14:47.287000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:14:47.287000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:14:47.287000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:14:47.291000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:14:47.291000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:14:47.315465 systemd-journald[308]: Journal stopped Feb 9 19:14:53.209348 systemd-journald[308]: Received SIGTERM from PID 1 (systemd). Feb 9 19:14:53.209503 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:14:53.209560 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 19:14:53.209595 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:14:53.209672 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:14:53.209711 kernel: SELinux: policy capability open_perms=1 Feb 9 19:14:53.209744 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:14:53.209784 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:14:53.209813 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:14:53.209845 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:14:53.209886 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:14:53.209921 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:14:53.209965 systemd[1]: Successfully loaded SELinux policy in 133.836ms. Feb 9 19:14:53.210020 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.618ms. Feb 9 19:14:53.210060 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:14:53.210095 systemd[1]: Detected virtualization amazon. Feb 9 19:14:53.210133 systemd[1]: Detected architecture arm64. Feb 9 19:14:53.210167 systemd[1]: Detected first boot. Feb 9 19:14:53.210207 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:14:53.210241 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:14:53.210272 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:14:53.210306 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:14:53.210345 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:14:53.210387 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:14:53.210422 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:14:53.210455 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:14:53.210490 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:14:53.210523 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 19:14:53.210557 systemd[1]: Created slice system-getty.slice. Feb 9 19:14:53.210587 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:14:53.210621 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:14:53.210688 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:14:53.210729 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:14:53.210772 systemd[1]: Created slice user.slice. Feb 9 19:14:53.210803 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:14:53.210855 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:14:53.210894 systemd[1]: Set up automount boot.automount. Feb 9 19:14:53.210927 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:14:53.210958 systemd[1]: Reached target integritysetup.target. Feb 9 19:14:53.210990 systemd[1]: Reached target remote-cryptsetup.target. 
Feb 9 19:14:53.211024 systemd[1]: Reached target remote-fs.target. Feb 9 19:14:53.211061 systemd[1]: Reached target slices.target. Feb 9 19:14:53.211095 systemd[1]: Reached target swap.target. Feb 9 19:14:53.211130 systemd[1]: Reached target torcx.target. Feb 9 19:14:53.211162 systemd[1]: Reached target veritysetup.target. Feb 9 19:14:53.211192 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:14:53.211223 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:14:53.211256 kernel: kauditd_printk_skb: 48 callbacks suppressed Feb 9 19:14:53.211290 kernel: audit: type=1400 audit(1707506092.783:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:14:53.211326 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:14:53.211360 kernel: audit: type=1335 audit(1707506092.784:89): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 19:14:53.211390 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:14:53.211420 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:14:53.211452 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:14:53.211489 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:14:53.211519 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:14:53.211553 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:14:53.211585 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:14:53.211621 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:14:53.218405 systemd[1]: Mounting media.mount... Feb 9 19:14:53.218455 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:14:53.218561 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:14:53.218604 systemd[1]: Mounting tmp.mount... Feb 9 19:14:53.218944 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:14:53.218995 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:14:53.219029 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:14:53.219060 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:14:53.219098 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:14:53.219132 systemd[1]: Starting modprobe@drm.service... Feb 9 19:14:53.219163 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:14:53.219193 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:14:53.219224 systemd[1]: Starting modprobe@loop.service... Feb 9 19:14:53.219256 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:14:53.219287 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 19:14:53.219323 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 19:14:53.219359 systemd[1]: Starting systemd-journald.service... Feb 9 19:14:53.219395 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:14:53.219425 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:14:53.219463 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:14:53.219497 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:14:53.219530 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:14:53.219561 systemd[1]: Mounted dev-mqueue.mount. 
Feb 9 19:14:53.219593 systemd[1]: Mounted media.mount. Feb 9 19:14:53.219623 kernel: loop: module loaded Feb 9 19:14:53.219930 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:14:53.219990 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:14:53.220025 systemd[1]: Mounted tmp.mount. Feb 9 19:14:53.220056 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:14:53.220088 kernel: audit: type=1130 audit(1707506093.080:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.220120 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:14:53.220154 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:14:53.220187 kernel: audit: type=1130 audit(1707506093.098:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.220235 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:14:53.220273 kernel: audit: type=1131 audit(1707506093.098:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.220314 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:14:53.220346 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:14:53.220377 systemd[1]: Finished modprobe@drm.service. Feb 9 19:14:53.220414 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:14:53.220446 kernel: audit: type=1130 audit(1707506093.128:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.220480 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:14:53.220513 kernel: audit: type=1131 audit(1707506093.128:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.220544 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:14:53.220577 systemd[1]: Finished modprobe@loop.service. Feb 9 19:14:53.220608 kernel: fuse: init (API version 7.34) Feb 9 19:14:53.222783 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:14:53.222896 kernel: audit: type=1130 audit(1707506093.138:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.222935 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:14:53.222967 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:14:53.222998 kernel: audit: type=1131 audit(1707506093.138:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.223028 systemd[1]: Finished modprobe@fuse.service. 
Feb 9 19:14:53.223059 kernel: audit: type=1130 audit(1707506093.163:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.223104 systemd-journald[1523]: Journal started Feb 9 19:14:53.223230 systemd-journald[1523]: Runtime Journal (/run/log/journal/ec23c850af0334e4c20ff6cccba83a55) is 8.0M, max 75.4M, 67.4M free. Feb 9 19:14:52.784000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 19:14:53.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:53.190000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:14:53.190000 audit[1523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffe6fe2820 a2=4000 a3=1 items=0 ppid=1 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:14:53.190000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:14:53.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.233956 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:14:53.243056 systemd[1]: Started systemd-journald.service. Feb 9 19:14:53.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.241238 systemd[1]: Reached target network-pre.target. Feb 9 19:14:53.245757 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:14:53.250574 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:14:53.252511 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:14:53.259738 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:14:53.268904 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:14:53.270921 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:14:53.278067 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:14:53.283945 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:14:53.286859 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:14:53.292250 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:14:53.299104 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:14:53.335191 systemd-journald[1523]: Time spent on flushing to /var/log/journal/ec23c850af0334e4c20ff6cccba83a55 is 81.227ms for 1110 entries. Feb 9 19:14:53.335191 systemd-journald[1523]: System Journal (/var/log/journal/ec23c850af0334e4c20ff6cccba83a55) is 8.0M, max 195.6M, 187.6M free. Feb 9 19:14:53.431132 systemd-journald[1523]: Received client request to flush runtime journal. 
Feb 9 19:14:53.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.354913 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:14:53.357285 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:14:53.393166 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:14:53.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.432991 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:14:53.435746 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:14:53.441350 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:14:53.477618 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:14:53.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.482605 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:14:53.501731 udevadm[1578]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 19:14:53.584100 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:14:53.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:53.588805 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:14:53.677818 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:14:53.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.318348 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:14:54.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.322928 systemd[1]: Starting systemd-udevd.service... Feb 9 19:14:54.367033 systemd-udevd[1584]: Using default interface naming scheme 'v252'. Feb 9 19:14:54.414074 systemd[1]: Started systemd-udevd.service. Feb 9 19:14:54.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.419352 systemd[1]: Starting systemd-networkd.service... 
Feb 9 19:14:54.430065 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:14:54.539608 systemd[1]: Found device dev-ttyS0.device. Feb 9 19:14:54.554127 systemd[1]: Started systemd-userdbd.service. Feb 9 19:14:54.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.580106 (udev-worker)[1596]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:14:54.718181 systemd-networkd[1588]: lo: Link UP Feb 9 19:14:54.718219 systemd-networkd[1588]: lo: Gained carrier Feb 9 19:14:54.719210 systemd-networkd[1588]: Enumeration completed Feb 9 19:14:54.719424 systemd[1]: Started systemd-networkd.service. Feb 9 19:14:54.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.723977 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:14:54.727864 systemd-networkd[1588]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:14:54.733681 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:14:54.733964 systemd-networkd[1588]: eth0: Link UP Feb 9 19:14:54.734269 systemd-networkd[1588]: eth0: Gained carrier Feb 9 19:14:54.745002 systemd-networkd[1588]: eth0: DHCPv4 address 172.31.21.34/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 19:14:54.804673 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1589) Feb 9 19:14:54.973573 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:14:54.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:54.987138 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 9 19:14:54.989892 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:14:55.055780 lvm[1704]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:14:55.093676 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:14:55.095969 systemd[1]: Reached target cryptsetup.target. Feb 9 19:14:55.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:55.101050 systemd[1]: Starting lvm2-activation.service... Feb 9 19:14:55.112678 lvm[1706]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:14:55.150856 systemd[1]: Finished lvm2-activation.service. Feb 9 19:14:55.152978 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:14:55.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:55.154932 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:14:55.154982 systemd[1]: Reached target local-fs.target. 
Feb 9 19:14:55.156848 systemd[1]: Reached target machines.target. Feb 9 19:14:55.161423 systemd[1]: Starting ldconfig.service... Feb 9 19:14:55.169072 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:14:55.169261 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:14:55.173098 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:14:55.179386 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:14:55.185567 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:14:55.189192 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:14:55.189361 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:14:55.192475 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:14:55.209700 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1709 (bootctl) Feb 9 19:14:55.212945 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:14:55.243066 systemd-tmpfiles[1712]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:14:55.246852 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:14:55.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:55.249685 systemd-tmpfiles[1712]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:14:55.253670 systemd-tmpfiles[1712]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:14:55.319577 systemd-fsck[1718]: fsck.fat 4.2 (2021-01-31) Feb 9 19:14:55.319577 systemd-fsck[1718]: /dev/nvme0n1p1: 236 files, 113719/258078 clusters Feb 9 19:14:55.323150 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:14:55.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:55.328910 systemd[1]: Mounting boot.mount... Feb 9 19:14:55.364821 systemd[1]: Mounted boot.mount. Feb 9 19:14:55.402323 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:14:55.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:55.659390 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:14:55.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:55.665247 systemd[1]: Starting audit-rules.service... Feb 9 19:14:55.670022 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:14:55.675265 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:14:55.689181 systemd[1]: Starting systemd-resolved.service... 
Feb 9 19:14:55.697119 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:14:55.709832 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:14:55.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:55.715213 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:14:55.718565 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:14:55.764000 audit[1744]: SYSTEM_BOOT pid=1744 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:14:55.772608 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:14:55.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:55.804126 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:14:55.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:55.885000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:14:55.885000 audit[1760]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff13334c0 a2=420 a3=0 items=0 ppid=1736 pid=1760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:14:55.885000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:14:55.887336 augenrules[1760]: No rules Feb 9 19:14:55.888377 systemd[1]: Finished audit-rules.service. Feb 9 19:14:55.911386 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:14:55.913540 systemd[1]: Reached target time-set.target. Feb 9 19:14:55.965302 systemd-resolved[1740]: Positive Trust Anchors: Feb 9 19:14:55.965845 systemd-resolved[1740]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:14:55.965996 systemd-resolved[1740]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:14:56.005605 systemd-resolved[1740]: Defaulting to hostname 'linux'. Feb 9 19:14:56.009031 systemd[1]: Started systemd-resolved.service. Feb 9 19:14:56.011185 systemd[1]: Reached target network.target. Feb 9 19:14:56.012853 systemd[1]: Reached target nss-lookup.target. Feb 9 19:14:56.074425 systemd-timesyncd[1741]: Contacted time server 73.193.62.54:123 (0.flatcar.pool.ntp.org). 
Feb 9 19:14:56.074733 systemd-timesyncd[1741]: Initial clock synchronization to Fri 2024-02-09 19:14:56.203093 UTC. Feb 9 19:14:56.136905 ldconfig[1708]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:14:56.157533 systemd[1]: Finished ldconfig.service. Feb 9 19:14:56.163473 systemd[1]: Starting systemd-update-done.service... Feb 9 19:14:56.182387 systemd[1]: Finished systemd-update-done.service. Feb 9 19:14:56.184555 systemd[1]: Reached target sysinit.target. Feb 9 19:14:56.186528 systemd[1]: Started motdgen.path. Feb 9 19:14:56.188241 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:14:56.190759 systemd[1]: Started logrotate.timer. Feb 9 19:14:56.192472 systemd[1]: Started mdadm.timer. Feb 9 19:14:56.193958 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:14:56.195856 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:14:56.195922 systemd[1]: Reached target paths.target. Feb 9 19:14:56.197547 systemd[1]: Reached target timers.target. Feb 9 19:14:56.199795 systemd[1]: Listening on dbus.socket. Feb 9 19:14:56.204046 systemd[1]: Starting docker.socket... Feb 9 19:14:56.208574 systemd[1]: Listening on sshd.socket. Feb 9 19:14:56.210556 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:14:56.211615 systemd[1]: Listening on docker.socket. Feb 9 19:14:56.215855 systemd[1]: Reached target sockets.target. Feb 9 19:14:56.217799 systemd[1]: Reached target basic.target. Feb 9 19:14:56.220152 systemd[1]: System is tainted: cgroupsv1 Feb 9 19:14:56.220547 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:14:56.220959 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:14:56.223947 systemd[1]: Starting containerd.service... Feb 9 19:14:56.228178 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:14:56.233108 systemd[1]: Starting dbus.service... Feb 9 19:14:56.243584 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:14:56.249032 systemd[1]: Starting extend-filesystems.service... Feb 9 19:14:56.250863 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:14:56.255695 systemd[1]: Starting motdgen.service... Feb 9 19:14:56.260132 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:14:56.271364 systemd[1]: Starting prepare-critools.service... Feb 9 19:14:56.278365 jq[1776]: false Feb 9 19:14:56.279741 systemd[1]: Starting prepare-helm.service... Feb 9 19:14:56.286845 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:14:56.294460 systemd[1]: Starting sshd-keygen.service... Feb 9 19:14:56.321757 systemd[1]: Starting systemd-logind.service... Feb 9 19:14:56.326332 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:14:56.326558 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:14:56.330155 systemd[1]: Starting update-engine.service... 
Feb 9 19:14:56.338512 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:14:56.352664 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:14:56.353427 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:14:56.368415 jq[1797]: true Feb 9 19:14:56.390053 tar[1800]: ./ Feb 9 19:14:56.390053 tar[1800]: ./macvlan Feb 9 19:14:56.396534 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:14:56.403103 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:14:56.420770 tar[1801]: crictl Feb 9 19:14:56.436757 tar[1802]: linux-arm64/helm Feb 9 19:14:56.466614 jq[1804]: true Feb 9 19:14:56.480579 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:14:56.481204 systemd[1]: Finished motdgen.service. Feb 9 19:14:56.500546 dbus-daemon[1775]: [system] SELinux support is enabled Feb 9 19:14:56.516131 systemd[1]: Started dbus.service. Feb 9 19:14:56.523419 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:14:56.523491 systemd[1]: Reached target system-config.target. Feb 9 19:14:56.526050 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:14:56.526115 systemd[1]: Reached target user-config.target. Feb 9 19:14:56.547096 extend-filesystems[1777]: Found nvme0n1 Feb 9 19:14:56.551010 dbus-daemon[1775]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1588 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 19:14:56.553845 extend-filesystems[1777]: Found nvme0n1p1 Feb 9 19:14:56.555819 extend-filesystems[1777]: Found nvme0n1p2 Feb 9 19:14:56.557748 extend-filesystems[1777]: Found nvme0n1p3 Feb 9 19:14:56.561480 extend-filesystems[1777]: Found usr Feb 9 19:14:56.562965 update_engine[1795]: I0209 19:14:56.562460 1795 main.cc:92] Flatcar Update Engine starting Feb 9 19:14:56.565834 extend-filesystems[1777]: Found nvme0n1p4 Feb 9 19:14:56.567543 extend-filesystems[1777]: Found nvme0n1p6 Feb 9 19:14:56.572808 extend-filesystems[1777]: Found nvme0n1p7 Feb 9 19:14:56.578085 extend-filesystems[1777]: Found nvme0n1p9 Feb 9 19:14:56.583805 extend-filesystems[1777]: Checking size of /dev/nvme0n1p9 Feb 9 19:14:56.587288 dbus-daemon[1775]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 19:14:56.597176 systemd[1]: Starting systemd-hostnamed.service... Feb 9 19:14:56.606521 systemd[1]: Started update-engine.service. Feb 9 19:14:56.612042 systemd[1]: Started locksmithd.service. Feb 9 19:14:56.616087 update_engine[1795]: I0209 19:14:56.613833 1795 update_check_scheduler.cc:74] Next update check in 9m21s Feb 9 19:14:56.683375 extend-filesystems[1777]: Resized partition /dev/nvme0n1p9 Feb 9 19:14:56.711874 extend-filesystems[1845]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:14:56.739527 env[1808]: time="2024-02-09T19:14:56.738913733Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:14:56.754793 bash[1842]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:14:56.757074 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Feb 9 19:14:56.764940 systemd-networkd[1588]: eth0: Gained IPv6LL Feb 9 19:14:56.773338 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:14:56.775757 systemd[1]: Reached target network-online.target. Feb 9 19:14:56.780975 systemd[1]: Started amazon-ssm-agent.service. Feb 9 19:14:56.787363 systemd[1]: Started nvidia.service. Feb 9 19:14:56.799113 systemd-logind[1793]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 19:14:56.799531 systemd-logind[1793]: New seat seat0. Feb 9 19:14:56.807397 systemd[1]: Started systemd-logind.service. Feb 9 19:14:56.962673 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 9 19:14:57.062682 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 9 19:14:57.126991 extend-filesystems[1845]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 9 19:14:57.126991 extend-filesystems[1845]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 19:14:57.126991 extend-filesystems[1845]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 9 19:14:57.135630 extend-filesystems[1777]: Resized filesystem in /dev/nvme0n1p9 Feb 9 19:14:57.151331 tar[1800]: ./static Feb 9 19:14:57.154379 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:14:57.155029 systemd[1]: Finished extend-filesystems.service. Feb 9 19:14:57.243297 env[1808]: time="2024-02-09T19:14:57.242598553Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:14:57.244730 env[1808]: time="2024-02-09T19:14:57.244678986Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:14:57.271660 amazon-ssm-agent[1856]: 2024/02/09 19:14:57 Failed to load instance info from vault. RegistrationKey does not exist. Feb 9 19:14:57.281867 env[1808]: time="2024-02-09T19:14:57.281790171Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:14:57.284615 env[1808]: time="2024-02-09T19:14:57.284543186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:14:57.285499 env[1808]: time="2024-02-09T19:14:57.285454245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:14:57.286641 env[1808]: time="2024-02-09T19:14:57.286596794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:14:57.286837 env[1808]: time="2024-02-09T19:14:57.286801471Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:14:57.290045 env[1808]: time="2024-02-09T19:14:57.289969900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:14:57.290676 env[1808]: time="2024-02-09T19:14:57.290615035Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:14:57.292490 env[1808]: time="2024-02-09T19:14:57.292427337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:14:57.293914 env[1808]: time="2024-02-09T19:14:57.293841357Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:14:57.294482 amazon-ssm-agent[1856]: Initializing new seelog logger Feb 9 19:14:57.294727 env[1808]: time="2024-02-09T19:14:57.294688267Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:14:57.295170 amazon-ssm-agent[1856]: New Seelog Logger Creation Complete Feb 9 19:14:57.295563 env[1808]: time="2024-02-09T19:14:57.295507535Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:14:57.296791 env[1808]: time="2024-02-09T19:14:57.296680848Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:14:57.297148 amazon-ssm-agent[1856]: 2024/02/09 19:14:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:14:57.297719 amazon-ssm-agent[1856]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:14:57.298226 amazon-ssm-agent[1856]: 2024/02/09 19:14:57 processing appconfig overrides Feb 9 19:14:57.359007 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 19:14:57.387522 env[1808]: time="2024-02-09T19:14:57.385349643Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:14:57.387522 env[1808]: time="2024-02-09T19:14:57.385461785Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:14:57.387522 env[1808]: time="2024-02-09T19:14:57.385549381Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:14:57.387522 env[1808]: time="2024-02-09T19:14:57.385701408Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:14:57.387522 env[1808]: time="2024-02-09T19:14:57.385854409Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:14:57.387522 env[1808]: time="2024-02-09T19:14:57.385892111Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:14:57.387522 env[1808]: time="2024-02-09T19:14:57.385949297Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:14:57.387522 env[1808]: time="2024-02-09T19:14:57.386681334Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:14:57.387522 env[1808]: time="2024-02-09T19:14:57.386753360Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:14:57.387522 env[1808]: time="2024-02-09T19:14:57.386815083Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:14:57.387522 env[1808]: time="2024-02-09T19:14:57.386852357Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 9 19:14:57.387522 env[1808]: time="2024-02-09T19:14:57.386909434Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:14:57.387522 env[1808]: time="2024-02-09T19:14:57.387255468Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:14:57.390464 env[1808]: time="2024-02-09T19:14:57.389801344Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:14:57.391062 env[1808]: time="2024-02-09T19:14:57.390984081Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:14:57.392001 env[1808]: time="2024-02-09T19:14:57.391934391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:14:57.392302 env[1808]: time="2024-02-09T19:14:57.392270828Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:14:57.392743 env[1808]: time="2024-02-09T19:14:57.392564821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:14:57.393492 env[1808]: time="2024-02-09T19:14:57.393442116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:14:57.393826 env[1808]: time="2024-02-09T19:14:57.393794881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:14:57.393999 env[1808]: time="2024-02-09T19:14:57.393969550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:14:57.394263 env[1808]: time="2024-02-09T19:14:57.394233291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:14:57.394533 env[1808]: time="2024-02-09T19:14:57.394500653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:14:57.394853 env[1808]: time="2024-02-09T19:14:57.394808949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:14:57.394999 env[1808]: time="2024-02-09T19:14:57.394969510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:14:57.395315 env[1808]: time="2024-02-09T19:14:57.395284304Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:14:57.395961 env[1808]: time="2024-02-09T19:14:57.395901212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:14:57.396157 env[1808]: time="2024-02-09T19:14:57.396124167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:14:57.396348 env[1808]: time="2024-02-09T19:14:57.396302433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:14:57.396494 env[1808]: time="2024-02-09T19:14:57.396462799Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:14:57.396725 env[1808]: time="2024-02-09T19:14:57.396690228Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:14:57.397004 env[1808]: time="2024-02-09T19:14:57.396970821Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:14:57.397298 env[1808]: time="2024-02-09T19:14:57.397265045Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:14:57.400002 env[1808]: time="2024-02-09T19:14:57.399935975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 19:14:57.400795 env[1808]: time="2024-02-09T19:14:57.400668341Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:14:57.401969 env[1808]: time="2024-02-09T19:14:57.401313562Z" level=info msg="Connect containerd service" Feb 9 19:14:57.401969 env[1808]: time="2024-02-09T19:14:57.401471636Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:14:57.413529 env[1808]: time="2024-02-09T19:14:57.413440810Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:14:57.417075 env[1808]: time="2024-02-09T19:14:57.416996925Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 9 19:14:57.420305 env[1808]: time="2024-02-09T19:14:57.420229283Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:14:57.421945 env[1808]: time="2024-02-09T19:14:57.421485608Z" level=info msg="Start subscribing containerd event" Feb 9 19:14:57.421945 env[1808]: time="2024-02-09T19:14:57.421579508Z" level=info msg="Start recovering state" Feb 9 19:14:57.421570 systemd[1]: Started containerd.service. Feb 9 19:14:57.423896 env[1808]: time="2024-02-09T19:14:57.421734071Z" level=info msg="Start event monitor" Feb 9 19:14:57.423896 env[1808]: time="2024-02-09T19:14:57.423188451Z" level=info msg="Start snapshots syncer" Feb 9 19:14:57.423896 env[1808]: time="2024-02-09T19:14:57.423234212Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:14:57.423896 env[1808]: time="2024-02-09T19:14:57.423255087Z" level=info msg="Start streaming server" Feb 9 19:14:57.425716 tar[1800]: ./vlan Feb 9 19:14:57.426515 env[1808]: time="2024-02-09T19:14:57.426453012Z" level=info msg="containerd successfully booted in 0.700715s" Feb 9 19:14:57.485785 coreos-metadata[1773]: Feb 09 19:14:57.485 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 19:14:57.489904 coreos-metadata[1773]: Feb 09 19:14:57.489 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 9 19:14:57.497773 coreos-metadata[1773]: Feb 09 19:14:57.497 INFO Fetch successful Feb 9 19:14:57.497773 coreos-metadata[1773]: Feb 09 19:14:57.497 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 19:14:57.506986 coreos-metadata[1773]: Feb 09 19:14:57.506 INFO Fetch successful Feb 9 19:14:57.521102 unknown[1773]: wrote ssh authorized keys file for user: core Feb 9 19:14:57.626363 dbus-daemon[1775]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 19:14:57.626678 systemd[1]: Started systemd-hostnamed.service. Feb 9 19:14:57.629486 dbus-daemon[1775]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1829 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 19:14:57.649979 systemd[1]: Starting polkit.service... Feb 9 19:14:57.735248 polkitd[1943]: Started polkitd version 121 Feb 9 19:14:57.739820 tar[1800]: ./portmap Feb 9 19:14:57.778448 update-ssh-keys[1916]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:14:57.779997 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 19:14:57.801310 polkitd[1943]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 19:14:57.801704 polkitd[1943]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 19:14:57.810020 polkitd[1943]: Finished loading, compiling and executing 2 rules Feb 9 19:14:57.813669 dbus-daemon[1775]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 19:14:57.813956 systemd[1]: Started polkit.service. Feb 9 19:14:57.817976 polkitd[1943]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 19:14:57.874245 systemd-hostnamed[1829]: Hostname set to (transient) Feb 9 19:14:57.874430 systemd-resolved[1740]: System hostname changed to 'ip-172-31-21-34'. Feb 9 19:14:57.948378 tar[1800]: ./host-local Feb 9 19:14:57.973241 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:14:57.974801 systemd[1]: Finished systemd-machine-id-commit.service. 
Feb 9 19:14:58.120913 tar[1800]: ./vrf Feb 9 19:14:58.299881 tar[1800]: ./bridge Feb 9 19:14:58.331866 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO Create new startup processor Feb 9 19:14:58.350435 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [LongRunningPluginsManager] registered plugins: {} Feb 9 19:14:58.350679 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO Initializing bookkeeping folders Feb 9 19:14:58.350679 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO removing the completed state files Feb 9 19:14:58.350679 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO Initializing bookkeeping folders for long running plugins Feb 9 19:14:58.350679 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 9 19:14:58.350679 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO Initializing healthcheck folders for long running plugins Feb 9 19:14:58.350979 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO Initializing locations for inventory plugin Feb 9 19:14:58.350979 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO Initializing default location for custom inventory Feb 9 19:14:58.350979 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO Initializing default location for file inventory Feb 9 19:14:58.350979 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO Initializing default location for role inventory Feb 9 19:14:58.350979 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO Init the cloudwatchlogs publisher Feb 9 19:14:58.350979 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [instanceID=i-09bbe2a7d18c0ccbd] Successfully loaded platform independent plugin aws:runDocument Feb 9 19:14:58.350979 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [instanceID=i-09bbe2a7d18c0ccbd] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 9 19:14:58.350979 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [instanceID=i-09bbe2a7d18c0ccbd] Successfully loaded platform independent plugin aws:configureDocker Feb 9 19:14:58.350979 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [instanceID=i-09bbe2a7d18c0ccbd] Successfully loaded platform independent plugin aws:runDockerAction Feb 9 19:14:58.351479 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [instanceID=i-09bbe2a7d18c0ccbd] Successfully loaded platform independent plugin aws:downloadContent Feb 9 19:14:58.351479 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [instanceID=i-09bbe2a7d18c0ccbd] Successfully loaded platform independent plugin aws:softwareInventory Feb 9 19:14:58.351479 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [instanceID=i-09bbe2a7d18c0ccbd] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 9 19:14:58.351479 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [instanceID=i-09bbe2a7d18c0ccbd] Successfully loaded platform independent plugin aws:refreshAssociation Feb 9 19:14:58.351479 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [instanceID=i-09bbe2a7d18c0ccbd] Successfully loaded platform independent plugin aws:configurePackage Feb 9 19:14:58.351479 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [instanceID=i-09bbe2a7d18c0ccbd] Successfully loaded platform dependent plugin aws:runShellScript Feb 9 19:14:58.351479 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 9 19:14:58.351479 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO OS: linux, Arch: arm64 Feb 9 19:14:58.367273 amazon-ssm-agent[1856]: datastore file 
/var/lib/amazon/ssm/i-09bbe2a7d18c0ccbd/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 9 19:14:58.448623 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessagingDeliveryService] Starting document processing engine... Feb 9 19:14:58.484354 tar[1800]: ./tuning Feb 9 19:14:58.543673 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 9 19:14:58.603389 tar[1800]: ./firewall Feb 9 19:14:58.638075 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 9 19:14:58.732654 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessagingDeliveryService] Starting message polling Feb 9 19:14:58.754877 tar[1800]: ./host-device Feb 9 19:14:58.827423 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 9 19:14:58.866418 tar[1800]: ./sbr Feb 9 19:14:58.922342 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [instanceID=i-09bbe2a7d18c0ccbd] Starting association polling Feb 9 19:14:58.975435 tar[1800]: ./loopback Feb 9 19:14:59.017492 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 9 19:14:59.054933 tar[1802]: linux-arm64/LICENSE Feb 9 19:14:59.055537 tar[1802]: linux-arm64/README.md Feb 9 19:14:59.072471 systemd[1]: Finished prepare-helm.service. Feb 9 19:14:59.093208 tar[1800]: ./dhcp Feb 9 19:14:59.112796 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 9 19:14:59.170419 systemd[1]: Finished prepare-critools.service. Feb 9 19:14:59.208331 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 9 19:14:59.266800 tar[1800]: ./ptp Feb 9 19:14:59.304035 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 9 19:14:59.317911 tar[1800]: ./ipvlan Feb 9 19:14:59.368299 tar[1800]: ./bandwidth Feb 9 19:14:59.399929 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 9 19:14:59.450018 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:14:59.495513 locksmithd[1834]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:14:59.496514 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 19:14:59.592876 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [OfflineService] Starting document processing engine... Feb 9 19:14:59.690534 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [OfflineService] [EngineProcessor] Starting Feb 9 19:14:59.787315 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [OfflineService] [EngineProcessor] Initial processing Feb 9 19:14:59.884404 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [OfflineService] Starting message polling Feb 9 19:14:59.981604 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [OfflineService] Starting send replies to MDS Feb 9 19:15:00.049900 sshd_keygen[1820]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:15:00.078923 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessageGatewayService] Starting session document processing engine... Feb 9 19:15:00.089884 systemd[1]: Finished sshd-keygen.service. Feb 9 19:15:00.095129 systemd[1]: Starting issuegen.service... 
Feb 9 19:15:00.107584 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:15:00.108150 systemd[1]: Finished issuegen.service. Feb 9 19:15:00.113717 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:15:00.131009 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:15:00.137581 systemd[1]: Started getty@tty1.service. Feb 9 19:15:00.143660 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:15:00.147224 systemd[1]: Reached target getty.target. Feb 9 19:15:00.149873 systemd[1]: Reached target multi-user.target. Feb 9 19:15:00.156602 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:15:00.176531 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 9 19:15:00.178859 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:15:00.179686 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:15:00.185110 systemd[1]: Startup finished in 13.686s (kernel) + 12.101s (userspace) = 25.787s. Feb 9 19:15:00.274527 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 9 19:15:00.372646 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-09bbe2a7d18c0ccbd, requestId: 27b422bf-3bee-4142-a370-b501cb841867 Feb 9 19:15:00.471297 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessageGatewayService] listening reply. Feb 9 19:15:00.570582 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 9 19:15:00.669232 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [StartupProcessor] Executing startup processor tasks Feb 9 19:15:00.768539 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 9 19:15:00.868440 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 9 19:15:00.968569 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 9 19:15:01.068785 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 9 19:15:01.169148 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 9 19:15:01.269127 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-09bbe2a7d18c0ccbd?role=subscribe&stream=input Feb 9 19:15:01.369853 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-09bbe2a7d18c0ccbd?role=subscribe&stream=input Feb 9 19:15:01.470350 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessageGatewayService] Starting receiving message from control channel Feb 9 19:15:01.571593 amazon-ssm-agent[1856]: 2024-02-09 19:14:58 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 9 19:15:06.116092 systemd[1]: Created slice system-sshd.slice. Feb 9 19:15:06.118973 systemd[1]: Started sshd@0-172.31.21.34:22-147.75.109.163:58244.service. 
Feb 9 19:15:06.312242 sshd[2025]: Accepted publickey for core from 147.75.109.163 port 58244 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:15:06.317368 sshd[2025]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:15:06.337802 systemd[1]: Created slice user-500.slice. Feb 9 19:15:06.339908 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:15:06.345458 systemd-logind[1793]: New session 1 of user core. Feb 9 19:15:06.360575 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:15:06.364622 systemd[1]: Starting user@500.service... Feb 9 19:15:06.376096 (systemd)[2030]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:15:06.571863 systemd[2030]: Queued start job for default target default.target. Feb 9 19:15:06.573763 systemd[2030]: Reached target paths.target. Feb 9 19:15:06.574022 systemd[2030]: Reached target sockets.target. Feb 9 19:15:06.574185 systemd[2030]: Reached target timers.target. Feb 9 19:15:06.574341 systemd[2030]: Reached target basic.target. Feb 9 19:15:06.574578 systemd[2030]: Reached target default.target. Feb 9 19:15:06.574748 systemd[1]: Started user@500.service. Feb 9 19:15:06.575680 systemd[2030]: Startup finished in 186ms. Feb 9 19:15:06.576807 systemd[1]: Started session-1.scope. Feb 9 19:15:06.727818 systemd[1]: Started sshd@1-172.31.21.34:22-147.75.109.163:58260.service. Feb 9 19:15:06.907182 sshd[2039]: Accepted publickey for core from 147.75.109.163 port 58260 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:15:06.910260 sshd[2039]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:15:06.917956 systemd-logind[1793]: New session 2 of user core. Feb 9 19:15:06.919137 systemd[1]: Started session-2.scope. Feb 9 19:15:07.056903 sshd[2039]: pam_unix(sshd:session): session closed for user core Feb 9 19:15:07.062532 systemd-logind[1793]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:15:07.063760 systemd[1]: sshd@1-172.31.21.34:22-147.75.109.163:58260.service: Deactivated successfully. Feb 9 19:15:07.065297 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:15:07.066440 systemd-logind[1793]: Removed session 2. Feb 9 19:15:07.083011 systemd[1]: Started sshd@2-172.31.21.34:22-147.75.109.163:58266.service. Feb 9 19:15:07.263497 sshd[2046]: Accepted publickey for core from 147.75.109.163 port 58266 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:15:07.266696 sshd[2046]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:15:07.276008 systemd[1]: Started session-3.scope. Feb 9 19:15:07.276780 systemd-logind[1793]: New session 3 of user core. Feb 9 19:15:07.404924 sshd[2046]: pam_unix(sshd:session): session closed for user core Feb 9 19:15:07.410484 systemd[1]: sshd@2-172.31.21.34:22-147.75.109.163:58266.service: Deactivated successfully. Feb 9 19:15:07.412139 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:15:07.415671 systemd-logind[1793]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:15:07.419684 systemd-logind[1793]: Removed session 3. Feb 9 19:15:07.430760 systemd[1]: Started sshd@3-172.31.21.34:22-147.75.109.163:58278.service. 
Feb 9 19:15:07.606922 sshd[2053]: Accepted publickey for core from 147.75.109.163 port 58278 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:15:07.609374 sshd[2053]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:15:07.618203 systemd[1]: Started session-4.scope. Feb 9 19:15:07.618605 systemd-logind[1793]: New session 4 of user core. Feb 9 19:15:07.754488 sshd[2053]: pam_unix(sshd:session): session closed for user core Feb 9 19:15:07.759676 systemd-logind[1793]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:15:07.761140 systemd[1]: sshd@3-172.31.21.34:22-147.75.109.163:58278.service: Deactivated successfully. Feb 9 19:15:07.762808 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:15:07.763936 systemd-logind[1793]: Removed session 4. Feb 9 19:15:07.780253 systemd[1]: Started sshd@4-172.31.21.34:22-147.75.109.163:58294.service. Feb 9 19:15:07.957901 sshd[2060]: Accepted publickey for core from 147.75.109.163 port 58294 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:15:07.961231 sshd[2060]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:15:07.970850 systemd[1]: Started session-5.scope. Feb 9 19:15:07.972434 systemd-logind[1793]: New session 5 of user core. Feb 9 19:15:08.092671 sudo[2064]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:15:08.093188 sudo[2064]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:15:08.804172 systemd[1]: Starting docker.service... Feb 9 19:15:08.882076 env[2079]: time="2024-02-09T19:15:08.881985121Z" level=info msg="Starting up" Feb 9 19:15:08.884690 env[2079]: time="2024-02-09T19:15:08.884625130Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:15:08.884931 env[2079]: time="2024-02-09T19:15:08.884902551Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:15:08.885071 env[2079]: time="2024-02-09T19:15:08.885038786Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:15:08.885180 env[2079]: time="2024-02-09T19:15:08.885152607Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:15:08.889558 env[2079]: time="2024-02-09T19:15:08.889506981Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:15:08.889792 env[2079]: time="2024-02-09T19:15:08.889763083Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:15:08.889926 env[2079]: time="2024-02-09T19:15:08.889893453Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:15:08.890034 env[2079]: time="2024-02-09T19:15:08.890007129Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:15:08.901232 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3824782603-merged.mount: Deactivated successfully. Feb 9 19:15:09.253897 env[2079]: time="2024-02-09T19:15:09.253847839Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 19:15:09.254204 env[2079]: time="2024-02-09T19:15:09.254175707Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 19:15:09.254615 env[2079]: time="2024-02-09T19:15:09.254549263Z" level=info msg="Loading containers: start." 
Feb 9 19:15:09.442681 kernel: Initializing XFRM netlink socket Feb 9 19:15:09.483838 env[2079]: time="2024-02-09T19:15:09.483793727Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 19:15:09.487579 (udev-worker)[2089]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:15:09.586896 systemd-networkd[1588]: docker0: Link UP Feb 9 19:15:09.605553 env[2079]: time="2024-02-09T19:15:09.605506867Z" level=info msg="Loading containers: done." Feb 9 19:15:09.632955 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck387875308-merged.mount: Deactivated successfully. Feb 9 19:15:09.637038 env[2079]: time="2024-02-09T19:15:09.636987788Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:15:09.637602 env[2079]: time="2024-02-09T19:15:09.637548309Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:15:09.638011 env[2079]: time="2024-02-09T19:15:09.637963688Z" level=info msg="Daemon has completed initialization" Feb 9 19:15:09.667002 systemd[1]: Started docker.service. Feb 9 19:15:09.673154 env[2079]: time="2024-02-09T19:15:09.673041012Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:15:09.706191 systemd[1]: Reloading. Feb 9 19:15:09.808962 /usr/lib/systemd/system-generators/torcx-generator[2218]: time="2024-02-09T19:15:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:15:09.809043 /usr/lib/systemd/system-generators/torcx-generator[2218]: time="2024-02-09T19:15:09Z" level=info msg="torcx already run" Feb 9 19:15:10.015954 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:15:10.016529 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:15:10.061009 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:15:10.265204 systemd[1]: Started kubelet.service. Feb 9 19:15:10.422454 kubelet[2277]: E0209 19:15:10.422329 2277 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:15:10.427432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:15:10.427916 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:15:10.873296 env[1808]: time="2024-02-09T19:15:10.873110473Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:15:11.520766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2612131351.mount: Deactivated successfully. 
Feb 9 19:15:13.975839 env[1808]: time="2024-02-09T19:15:13.975758715Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:13.983388 env[1808]: time="2024-02-09T19:15:13.980825781Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:13.987778 env[1808]: time="2024-02-09T19:15:13.987709556Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:13.991165 env[1808]: time="2024-02-09T19:15:13.991102866Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:13.994077 env[1808]: time="2024-02-09T19:15:13.993974144Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 19:15:14.015271 env[1808]: time="2024-02-09T19:15:14.015082418Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 19:15:16.478023 env[1808]: time="2024-02-09T19:15:16.477925393Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:16.481916 env[1808]: time="2024-02-09T19:15:16.481830482Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:16.486134 env[1808]: time="2024-02-09T19:15:16.486058723Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:16.490057 env[1808]: time="2024-02-09T19:15:16.489977005Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:16.492210 env[1808]: time="2024-02-09T19:15:16.492139757Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 19:15:16.513321 env[1808]: time="2024-02-09T19:15:16.513236658Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:15:18.224240 env[1808]: time="2024-02-09T19:15:18.224153485Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:18.227832 env[1808]: time="2024-02-09T19:15:18.227757717Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:18.231910 env[1808]: 
time="2024-02-09T19:15:18.231849613Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:18.236058 env[1808]: time="2024-02-09T19:15:18.235987670Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:18.238072 env[1808]: time="2024-02-09T19:15:18.238012387Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 19:15:18.267043 env[1808]: time="2024-02-09T19:15:18.266988032Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:15:18.887171 amazon-ssm-agent[1856]: 2024-02-09 19:15:18 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Feb 9 19:15:19.727549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1288156717.mount: Deactivated successfully. Feb 9 19:15:20.508020 env[1808]: time="2024-02-09T19:15:20.507952893Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:20.511693 env[1808]: time="2024-02-09T19:15:20.511596593Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:20.514732 env[1808]: time="2024-02-09T19:15:20.514621002Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:20.518533 env[1808]: time="2024-02-09T19:15:20.518471363Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:20.520278 env[1808]: time="2024-02-09T19:15:20.520196679Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 19:15:20.539230 env[1808]: time="2024-02-09T19:15:20.539153378Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:15:20.600354 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:15:20.600804 systemd[1]: Stopped kubelet.service. Feb 9 19:15:20.604179 systemd[1]: Started kubelet.service. Feb 9 19:15:20.711498 kubelet[2316]: E0209 19:15:20.711379 2316 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:15:20.719914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:15:20.720348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:15:21.070279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311196431.mount: Deactivated successfully. 
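[annotation] Each pull above is recorded twice, once under its tag (for example registry.k8s.io/kube-scheduler:v1.26.13) and once under a sha256 digest. A short illustrative Python sketch of how such a reference splits into registry, repository, tag, and digest; this follows the usual registry/repository[:tag][@digest] convention and deliberately ignores corner cases such as a port in the registry host:

# Illustrative only: splits an OCI-style image reference of the form
# registry/repository[:tag][@sha256:digest] into its parts.
def split_image_ref(ref: str) -> dict:
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    registry, _, rest = ref.partition("/")
    repo, _, tag = rest.partition(":")
    return {"registry": registry, "repository": repo, "tag": tag or None, "digest": digest}

# Example taken from the pull records above:
print(split_image_ref("registry.k8s.io/kube-scheduler:v1.26.13"))
# {'registry': 'registry.k8s.io', 'repository': 'kube-scheduler', 'tag': 'v1.26.13', 'digest': None}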
Feb 9 19:15:21.077241 env[1808]: time="2024-02-09T19:15:21.077158315Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:21.080225 env[1808]: time="2024-02-09T19:15:21.080144474Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:21.083864 env[1808]: time="2024-02-09T19:15:21.083806709Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:21.087431 env[1808]: time="2024-02-09T19:15:21.087336174Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:21.089247 env[1808]: time="2024-02-09T19:15:21.089156527Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 19:15:21.107553 env[1808]: time="2024-02-09T19:15:21.107470491Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 19:15:22.276721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1899971824.mount: Deactivated successfully. Feb 9 19:15:25.593665 env[1808]: time="2024-02-09T19:15:25.593574575Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:25.616213 env[1808]: time="2024-02-09T19:15:25.616131105Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:25.627353 env[1808]: time="2024-02-09T19:15:25.627282691Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:25.641592 env[1808]: time="2024-02-09T19:15:25.641511355Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:25.643692 env[1808]: time="2024-02-09T19:15:25.643605509Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 19:15:25.664381 env[1808]: time="2024-02-09T19:15:25.664312839Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 19:15:26.446156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2443197498.mount: Deactivated successfully. 
Feb 9 19:15:27.439950 env[1808]: time="2024-02-09T19:15:27.439877626Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:27.443501 env[1808]: time="2024-02-09T19:15:27.443431195Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:27.446880 env[1808]: time="2024-02-09T19:15:27.446816092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:27.449938 env[1808]: time="2024-02-09T19:15:27.449870351Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:27.451342 env[1808]: time="2024-02-09T19:15:27.451274019Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 19:15:27.908794 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 9 19:15:30.850309 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:15:30.850695 systemd[1]: Stopped kubelet.service. Feb 9 19:15:30.855462 systemd[1]: Started kubelet.service. Feb 9 19:15:30.964415 kubelet[2391]: E0209 19:15:30.964304 2391 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:15:30.968210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:15:30.968701 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:15:33.745707 systemd[1]: Stopped kubelet.service. Feb 9 19:15:33.783810 systemd[1]: Reloading. Feb 9 19:15:33.961859 /usr/lib/systemd/system-generators/torcx-generator[2424]: time="2024-02-09T19:15:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:15:33.961930 /usr/lib/systemd/system-generators/torcx-generator[2424]: time="2024-02-09T19:15:33Z" level=info msg="torcx already run" Feb 9 19:15:34.118816 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:15:34.118867 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:15:34.161764 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:15:34.373572 systemd[1]: Started kubelet.service. Feb 9 19:15:34.472763 kubelet[2483]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 19:15:34.472763 kubelet[2483]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:15:34.473374 kubelet[2483]: I0209 19:15:34.472873 2483 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:15:34.475293 kubelet[2483]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:15:34.475293 kubelet[2483]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:15:36.494879 kubelet[2483]: I0209 19:15:36.494819 2483 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:15:36.494879 kubelet[2483]: I0209 19:15:36.494867 2483 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:15:36.495598 kubelet[2483]: I0209 19:15:36.495250 2483 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:15:36.502161 kubelet[2483]: E0209 19:15:36.502115 2483 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:36.502443 kubelet[2483]: I0209 19:15:36.502404 2483 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:15:36.509303 kubelet[2483]: W0209 19:15:36.509238 2483 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 19:15:36.512752 kubelet[2483]: I0209 19:15:36.512693 2483 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:15:36.513804 kubelet[2483]: I0209 19:15:36.513733 2483 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:15:36.513969 kubelet[2483]: I0209 19:15:36.513946 2483 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:15:36.514188 kubelet[2483]: I0209 19:15:36.514020 2483 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:15:36.514188 kubelet[2483]: I0209 19:15:36.514052 2483 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:15:36.514331 kubelet[2483]: I0209 19:15:36.514275 2483 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:15:36.521661 kubelet[2483]: I0209 19:15:36.521605 2483 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:15:36.521661 kubelet[2483]: I0209 19:15:36.521661 2483 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:15:36.521894 kubelet[2483]: I0209 19:15:36.521753 2483 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:15:36.521894 kubelet[2483]: I0209 19:15:36.521777 2483 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:15:36.523482 kubelet[2483]: W0209 19:15:36.523371 2483 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.21.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:36.523482 kubelet[2483]: E0209 19:15:36.523481 2483 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:36.523784 kubelet[2483]: W0209 19:15:36.523625 2483 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.21.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-34&limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:36.523784 kubelet[2483]: E0209 19:15:36.523745 2483 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-34&limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:36.524045 kubelet[2483]: I0209 19:15:36.523992 2483 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:15:36.524817 kubelet[2483]: W0209 19:15:36.524767 2483 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:15:36.525580 kubelet[2483]: I0209 19:15:36.525530 2483 server.go:1186] "Started kubelet" Feb 9 19:15:36.532047 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 19:15:36.532881 kubelet[2483]: E0209 19:15:36.532834 2483 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:15:36.533128 kubelet[2483]: E0209 19:15:36.533099 2483 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:15:36.533249 kubelet[2483]: I0209 19:15:36.533117 2483 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:15:36.534431 kubelet[2483]: I0209 19:15:36.534390 2483 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:15:36.536561 kubelet[2483]: E0209 19:15:36.536387 2483 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-34.17b247c8cf66e0f7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-34", UID:"ip-172-31-21-34", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-34"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 525496567, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 525496567, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.21.34:6443/api/v1/namespaces/default/events": dial tcp 172.31.21.34:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:15:36.536915 kubelet[2483]: I0209 19:15:36.533040 2483 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:15:36.542650 kubelet[2483]: I0209 19:15:36.542397 2483 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:15:36.542650 kubelet[2483]: I0209 19:15:36.542581 2483 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:15:36.543321 kubelet[2483]: W0209 19:15:36.543238 2483 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: 
failed to list *v1.CSIDriver: Get "https://172.31.21.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:36.543321 kubelet[2483]: E0209 19:15:36.543324 2483 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:36.544010 kubelet[2483]: E0209 19:15:36.543965 2483 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.21.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-34?timeout=10s": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:36.645816 kubelet[2483]: I0209 19:15:36.645768 2483 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-34" Feb 9 19:15:36.646357 kubelet[2483]: E0209 19:15:36.646296 2483 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.21.34:6443/api/v1/nodes\": dial tcp 172.31.21.34:6443: connect: connection refused" node="ip-172-31-21-34" Feb 9 19:15:36.650788 kubelet[2483]: I0209 19:15:36.650731 2483 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:15:36.667986 kubelet[2483]: I0209 19:15:36.667939 2483 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:15:36.668199 kubelet[2483]: I0209 19:15:36.668178 2483 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:15:36.668329 kubelet[2483]: I0209 19:15:36.668309 2483 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:15:36.671159 kubelet[2483]: I0209 19:15:36.671117 2483 policy_none.go:49] "None policy: Start" Feb 9 19:15:36.672414 kubelet[2483]: I0209 19:15:36.672385 2483 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:15:36.672775 kubelet[2483]: I0209 19:15:36.672753 2483 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:15:36.682324 kubelet[2483]: I0209 19:15:36.682273 2483 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:15:36.682693 kubelet[2483]: I0209 19:15:36.682662 2483 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:15:36.685147 kubelet[2483]: E0209 19:15:36.685100 2483 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-34\" not found" Feb 9 19:15:36.725453 kubelet[2483]: I0209 19:15:36.725405 2483 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:15:36.725453 kubelet[2483]: I0209 19:15:36.725454 2483 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:15:36.725725 kubelet[2483]: I0209 19:15:36.725497 2483 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:15:36.725725 kubelet[2483]: E0209 19:15:36.725586 2483 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:15:36.727683 kubelet[2483]: W0209 19:15:36.727613 2483 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.21.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:36.727891 kubelet[2483]: E0209 19:15:36.727708 2483 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:36.745976 kubelet[2483]: E0209 19:15:36.745196 2483 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.21.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-34?timeout=10s": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:36.826691 kubelet[2483]: I0209 19:15:36.826600 2483 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:15:36.828753 kubelet[2483]: I0209 19:15:36.828711 2483 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:15:36.833462 kubelet[2483]: I0209 19:15:36.833426 2483 status_manager.go:698] "Failed to get status for pod" podUID=24dbd899bcff81f76d9248fafb9f6bae pod="kube-system/kube-apiserver-ip-172-31-21-34" err="Get \"https://172.31.21.34:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-21-34\": dial tcp 172.31.21.34:6443: connect: connection refused" Feb 9 19:15:36.838066 kubelet[2483]: I0209 19:15:36.838010 2483 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:15:36.849470 kubelet[2483]: I0209 19:15:36.849414 2483 status_manager.go:698] "Failed to get status for pod" podUID=3386ac0d48035e68a9494046a3ff8aab pod="kube-system/kube-controller-manager-ip-172-31-21-34" err="Get \"https://172.31.21.34:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-31-21-34\": dial tcp 172.31.21.34:6443: connect: connection refused" Feb 9 19:15:36.849816 kubelet[2483]: I0209 19:15:36.849721 2483 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-34" Feb 9 19:15:36.854345 kubelet[2483]: E0209 19:15:36.854314 2483 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.21.34:6443/api/v1/nodes\": dial tcp 172.31.21.34:6443: connect: connection refused" node="ip-172-31-21-34" Feb 9 19:15:36.854688 kubelet[2483]: I0209 19:15:36.854623 2483 status_manager.go:698] "Failed to get status for pod" podUID=2b179e587eef79879d3ae96d9ef9dd43 pod="kube-system/kube-scheduler-ip-172-31-21-34" err="Get \"https://172.31.21.34:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-31-21-34\": dial tcp 172.31.21.34:6443: connect: connection refused" Feb 9 19:15:36.943942 kubelet[2483]: I0209 19:15:36.943888 2483 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/3386ac0d48035e68a9494046a3ff8aab-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-34\" (UID: \"3386ac0d48035e68a9494046a3ff8aab\") " pod="kube-system/kube-controller-manager-ip-172-31-21-34" Feb 9 19:15:36.944123 kubelet[2483]: I0209 19:15:36.943970 2483 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3386ac0d48035e68a9494046a3ff8aab-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-34\" (UID: \"3386ac0d48035e68a9494046a3ff8aab\") " pod="kube-system/kube-controller-manager-ip-172-31-21-34" Feb 9 19:15:36.944123 kubelet[2483]: I0209 19:15:36.944018 2483 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3386ac0d48035e68a9494046a3ff8aab-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-34\" (UID: \"3386ac0d48035e68a9494046a3ff8aab\") " pod="kube-system/kube-controller-manager-ip-172-31-21-34" Feb 9 19:15:36.944123 kubelet[2483]: I0209 19:15:36.944067 2483 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3386ac0d48035e68a9494046a3ff8aab-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-34\" (UID: \"3386ac0d48035e68a9494046a3ff8aab\") " pod="kube-system/kube-controller-manager-ip-172-31-21-34" Feb 9 19:15:36.944123 kubelet[2483]: I0209 19:15:36.944113 2483 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b179e587eef79879d3ae96d9ef9dd43-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-34\" (UID: \"2b179e587eef79879d3ae96d9ef9dd43\") " pod="kube-system/kube-scheduler-ip-172-31-21-34" Feb 9 19:15:36.944386 kubelet[2483]: I0209 19:15:36.944155 2483 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24dbd899bcff81f76d9248fafb9f6bae-ca-certs\") pod \"kube-apiserver-ip-172-31-21-34\" (UID: \"24dbd899bcff81f76d9248fafb9f6bae\") " pod="kube-system/kube-apiserver-ip-172-31-21-34" Feb 9 19:15:36.944386 kubelet[2483]: I0209 19:15:36.944196 2483 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24dbd899bcff81f76d9248fafb9f6bae-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-34\" (UID: \"24dbd899bcff81f76d9248fafb9f6bae\") " pod="kube-system/kube-apiserver-ip-172-31-21-34" Feb 9 19:15:36.944386 kubelet[2483]: I0209 19:15:36.944241 2483 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24dbd899bcff81f76d9248fafb9f6bae-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-34\" (UID: \"24dbd899bcff81f76d9248fafb9f6bae\") " pod="kube-system/kube-apiserver-ip-172-31-21-34" Feb 9 19:15:36.944386 kubelet[2483]: I0209 19:15:36.944313 2483 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3386ac0d48035e68a9494046a3ff8aab-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-34\" (UID: \"3386ac0d48035e68a9494046a3ff8aab\") " pod="kube-system/kube-controller-manager-ip-172-31-21-34" Feb 9 19:15:37.140176 env[1808]: 
time="2024-02-09T19:15:37.140092828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-34,Uid:24dbd899bcff81f76d9248fafb9f6bae,Namespace:kube-system,Attempt:0,}" Feb 9 19:15:37.145902 kubelet[2483]: E0209 19:15:37.145853 2483 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.21.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-34?timeout=10s": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:37.150804 env[1808]: time="2024-02-09T19:15:37.150269953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-34,Uid:3386ac0d48035e68a9494046a3ff8aab,Namespace:kube-system,Attempt:0,}" Feb 9 19:15:37.157394 env[1808]: time="2024-02-09T19:15:37.157334608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-34,Uid:2b179e587eef79879d3ae96d9ef9dd43,Namespace:kube-system,Attempt:0,}" Feb 9 19:15:37.258062 kubelet[2483]: I0209 19:15:37.257870 2483 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-34" Feb 9 19:15:37.258610 kubelet[2483]: E0209 19:15:37.258576 2483 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.21.34:6443/api/v1/nodes\": dial tcp 172.31.21.34:6443: connect: connection refused" node="ip-172-31-21-34" Feb 9 19:15:37.340282 kubelet[2483]: W0209 19:15:37.340166 2483 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.21.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:37.340282 kubelet[2483]: E0209 19:15:37.340249 2483 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:37.640440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount132344401.mount: Deactivated successfully. 
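[annotation] While 172.31.21.34:6443 refuses connections, the node-lease controller's retry interval doubles on each failure: 200ms and 400ms above, 800ms here, 1.6s further down. A minimal Python sketch of that doubling schedule; the 7-second cap is an assumed illustration value, not something stated in this log:

# Doubling-backoff schedule matching the retry intervals logged around this
# point (200ms -> 400ms -> 800ms -> 1.6s). The cap is an assumed example value.
def backoff_schedule(initial_s: float = 0.2, factor: float = 2.0,
                     cap_s: float = 7.0, attempts: int = 8):
    delay = initial_s
    for _ in range(attempts):
        yield delay
        delay = min(delay * factor, cap_s)

print([round(d, 1) for d in backoff_schedule()])
# [0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 7.0, 7.0]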
Feb 9 19:15:37.644749 kubelet[2483]: W0209 19:15:37.644572 2483 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.21.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:37.644749 kubelet[2483]: E0209 19:15:37.644714 2483 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:37.652833 env[1808]: time="2024-02-09T19:15:37.652755242Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:37.654587 env[1808]: time="2024-02-09T19:15:37.654505498Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:37.659362 env[1808]: time="2024-02-09T19:15:37.659284905Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:37.661577 env[1808]: time="2024-02-09T19:15:37.661511808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:37.664194 env[1808]: time="2024-02-09T19:15:37.664118164Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:37.670321 env[1808]: time="2024-02-09T19:15:37.670229440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:37.676404 env[1808]: time="2024-02-09T19:15:37.676341294Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:37.681786 env[1808]: time="2024-02-09T19:15:37.681719732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:37.685089 env[1808]: time="2024-02-09T19:15:37.685008774Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:37.687568 env[1808]: time="2024-02-09T19:15:37.687499762Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:37.692761 env[1808]: time="2024-02-09T19:15:37.692704389Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:37.694517 env[1808]: time="2024-02-09T19:15:37.694465997Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:37.758175 env[1808]: time="2024-02-09T19:15:37.757971033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:15:37.758455 env[1808]: time="2024-02-09T19:15:37.758155198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:15:37.758455 env[1808]: time="2024-02-09T19:15:37.758187512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:15:37.758884 env[1808]: time="2024-02-09T19:15:37.758763983Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e246c71c205f0226b38c03a7267263d4881aff80a779cbe46c10059f29d2caf6 pid=2564 runtime=io.containerd.runc.v2 Feb 9 19:15:37.763496 env[1808]: time="2024-02-09T19:15:37.762394952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:15:37.763496 env[1808]: time="2024-02-09T19:15:37.763027822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:15:37.763496 env[1808]: time="2024-02-09T19:15:37.763072869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:15:37.763908 env[1808]: time="2024-02-09T19:15:37.763563955Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f21a8b42c3e35c2b0e29c0a63247060cb16f84859139be22785228375c9e445 pid=2566 runtime=io.containerd.runc.v2 Feb 9 19:15:37.778411 env[1808]: time="2024-02-09T19:15:37.778262481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:15:37.778796 env[1808]: time="2024-02-09T19:15:37.778720124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:15:37.779038 env[1808]: time="2024-02-09T19:15:37.778970225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:15:37.789308 env[1808]: time="2024-02-09T19:15:37.780397608Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43e0d0b2596ae918c6fc3a90657be3f1c04bc7ca2944384b9dfce9374ed8ce60 pid=2594 runtime=io.containerd.runc.v2 Feb 9 19:15:37.948875 kubelet[2483]: E0209 19:15:37.947520 2483 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.21.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-34?timeout=10s": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:37.960858 env[1808]: time="2024-02-09T19:15:37.960766295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-34,Uid:3386ac0d48035e68a9494046a3ff8aab,Namespace:kube-system,Attempt:0,} returns sandbox id \"e246c71c205f0226b38c03a7267263d4881aff80a779cbe46c10059f29d2caf6\"" Feb 9 19:15:37.970971 env[1808]: time="2024-02-09T19:15:37.970891525Z" level=info msg="CreateContainer within sandbox \"e246c71c205f0226b38c03a7267263d4881aff80a779cbe46c10059f29d2caf6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:15:37.986301 env[1808]: time="2024-02-09T19:15:37.986236135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-34,Uid:2b179e587eef79879d3ae96d9ef9dd43,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f21a8b42c3e35c2b0e29c0a63247060cb16f84859139be22785228375c9e445\"" Feb 9 19:15:37.996928 env[1808]: time="2024-02-09T19:15:37.996858217Z" level=info msg="CreateContainer within sandbox \"9f21a8b42c3e35c2b0e29c0a63247060cb16f84859139be22785228375c9e445\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:15:38.004397 env[1808]: time="2024-02-09T19:15:38.004325839Z" level=info msg="CreateContainer within sandbox \"e246c71c205f0226b38c03a7267263d4881aff80a779cbe46c10059f29d2caf6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ca207294877e98ee4c7684a3be16a2f89053cfecac45606837fda8caa03461a\"" Feb 9 19:15:38.007079 env[1808]: time="2024-02-09T19:15:38.007025699Z" level=info msg="StartContainer for \"3ca207294877e98ee4c7684a3be16a2f89053cfecac45606837fda8caa03461a\"" Feb 9 19:15:38.013534 env[1808]: time="2024-02-09T19:15:38.013476222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-34,Uid:24dbd899bcff81f76d9248fafb9f6bae,Namespace:kube-system,Attempt:0,} returns sandbox id \"43e0d0b2596ae918c6fc3a90657be3f1c04bc7ca2944384b9dfce9374ed8ce60\"" Feb 9 19:15:38.020522 env[1808]: time="2024-02-09T19:15:38.020398454Z" level=info msg="CreateContainer within sandbox \"43e0d0b2596ae918c6fc3a90657be3f1c04bc7ca2944384b9dfce9374ed8ce60\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:15:38.040651 env[1808]: time="2024-02-09T19:15:38.040464409Z" level=info msg="CreateContainer within sandbox \"9f21a8b42c3e35c2b0e29c0a63247060cb16f84859139be22785228375c9e445\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ead4546fe1ba38fe6a85819586c0d101a6cea19e5ccc4f1e95387b9945ab88b4\"" Feb 9 19:15:38.042649 env[1808]: time="2024-02-09T19:15:38.041778802Z" level=info msg="StartContainer for \"ead4546fe1ba38fe6a85819586c0d101a6cea19e5ccc4f1e95387b9945ab88b4\"" Feb 9 19:15:38.057706 env[1808]: time="2024-02-09T19:15:38.055048771Z" level=info msg="CreateContainer within sandbox 
\"43e0d0b2596ae918c6fc3a90657be3f1c04bc7ca2944384b9dfce9374ed8ce60\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f75bf2f63de6924df22ef4ca36bf4eca1c8753962cbe442dfdbd9d3b6a430c20\"" Feb 9 19:15:38.070161 kubelet[2483]: I0209 19:15:38.070057 2483 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-34" Feb 9 19:15:38.070852 kubelet[2483]: E0209 19:15:38.070764 2483 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.21.34:6443/api/v1/nodes\": dial tcp 172.31.21.34:6443: connect: connection refused" node="ip-172-31-21-34" Feb 9 19:15:38.082466 env[1808]: time="2024-02-09T19:15:38.082370156Z" level=info msg="StartContainer for \"f75bf2f63de6924df22ef4ca36bf4eca1c8753962cbe442dfdbd9d3b6a430c20\"" Feb 9 19:15:38.101883 kubelet[2483]: W0209 19:15:38.101574 2483 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.21.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-34&limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:38.101883 kubelet[2483]: E0209 19:15:38.101804 2483 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-34&limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:38.164507 kubelet[2483]: W0209 19:15:38.164450 2483 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.21.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:38.164816 kubelet[2483]: E0209 19:15:38.164774 2483 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.34:6443: connect: connection refused Feb 9 19:15:38.196990 env[1808]: time="2024-02-09T19:15:38.196888008Z" level=info msg="StartContainer for \"3ca207294877e98ee4c7684a3be16a2f89053cfecac45606837fda8caa03461a\" returns successfully" Feb 9 19:15:38.374070 env[1808]: time="2024-02-09T19:15:38.373993089Z" level=info msg="StartContainer for \"ead4546fe1ba38fe6a85819586c0d101a6cea19e5ccc4f1e95387b9945ab88b4\" returns successfully" Feb 9 19:15:38.377955 env[1808]: time="2024-02-09T19:15:38.377868557Z" level=info msg="StartContainer for \"f75bf2f63de6924df22ef4ca36bf4eca1c8753962cbe442dfdbd9d3b6a430c20\" returns successfully" Feb 9 19:15:39.673467 kubelet[2483]: I0209 19:15:39.673427 2483 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-34" Feb 9 19:15:41.412589 update_engine[1795]: I0209 19:15:41.411710 1795 update_attempter.cc:509] Updating boot flags... 
Feb 9 19:15:43.535293 kubelet[2483]: I0209 19:15:43.535228 2483 apiserver.go:52] "Watching apiserver" Feb 9 19:15:43.643239 kubelet[2483]: I0209 19:15:43.643177 2483 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:15:43.653918 kubelet[2483]: E0209 19:15:43.653866 2483 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-34\" not found" node="ip-172-31-21-34" Feb 9 19:15:43.708256 kubelet[2483]: I0209 19:15:43.708194 2483 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-21-34" Feb 9 19:15:43.715138 kubelet[2483]: I0209 19:15:43.715081 2483 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:15:43.771231 kubelet[2483]: E0209 19:15:43.771068 2483 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-34.17b247c8cf66e0f7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-34", UID:"ip-172-31-21-34", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-34"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 525496567, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 525496567, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:15:43.870304 kubelet[2483]: E0209 19:15:43.870100 2483 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-34.17b247c8cfda90dc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-34", UID:"ip-172-31-21-34", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-34"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 533078236, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 533078236, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:15:43.955170 kubelet[2483]: E0209 19:15:43.955007 2483 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-34.17b247c8d6912b94", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-34", UID:"ip-172-31-21-34", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ip-172-31-21-34 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-34"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 645708692, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 645708692, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:15:44.028506 kubelet[2483]: E0209 19:15:44.028309 2483 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-34.17b247c8d6914b28", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-34", UID:"ip-172-31-21-34", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ip-172-31-21-34 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-34"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 645716776, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 645716776, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:15:44.088094 kubelet[2483]: E0209 19:15:44.087899 2483 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-34.17b247c8d6916738", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-34", UID:"ip-172-31-21-34", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ip-172-31-21-34 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-34"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 645723960, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 645723960, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:15:44.150022 kubelet[2483]: E0209 19:15:44.149756 2483 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-34.17b247c8d6912b94", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-34", UID:"ip-172-31-21-34", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ip-172-31-21-34 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-34"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 645708692, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 666621459, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:15:44.211010 kubelet[2483]: E0209 19:15:44.210837 2483 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-34.17b247c8d6914b28", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-34", UID:"ip-172-31-21-34", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ip-172-31-21-34 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-34"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 645716776, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 666671638, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:15:44.275975 kubelet[2483]: E0209 19:15:44.275829 2483 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-34.17b247c8d6916738", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-34", UID:"ip-172-31-21-34", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ip-172-31-21-34 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-34"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 645723960, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 666678918, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:15:44.335867 kubelet[2483]: E0209 19:15:44.335668 2483 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-34.17b247c8d8d7b55d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-34", UID:"ip-172-31-21-34", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-34"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 683885917, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 683885917, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:15:44.437999 kubelet[2483]: E0209 19:15:44.437715 2483 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-34.17b247c8d6912b94", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-34", UID:"ip-172-31-21-34", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ip-172-31-21-34 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-34"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 645708692, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 828583241, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:15:44.830071 kubelet[2483]: E0209 19:15:44.829769 2483 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-34.17b247c8d6914b28", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-34", UID:"ip-172-31-21-34", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ip-172-31-21-34 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-34"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 645716776, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 828590990, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:15:45.230471 kubelet[2483]: E0209 19:15:45.230314 2483 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-21-34.17b247c8d6916738", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-21-34", UID:"ip-172-31-21-34", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ip-172-31-21-34 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-21-34"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 645723960, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 36, 828596155, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:15:46.570530 systemd[1]: Reloading. 
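The burst of rejected events above is the kubelet's event recorder posting node-status events into the "default" namespace before cluster bootstrap has created that namespace; the API server answers with a NotFound error and the recorder drops the event, which is why each line ends with "(will not retry!)". A minimal client-go sketch of the same call path follows; the kubeconfig path and event names are illustrative assumptions, not taken from this host.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig location; the kubelet itself uses its rotated client certificate.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	ev := &corev1.Event{
    		ObjectMeta:     metav1.ObjectMeta{Name: "ip-172-31-21-34.example", Namespace: "default"},
    		InvolvedObject: corev1.ObjectReference{Kind: "Node", Name: "ip-172-31-21-34", UID: "ip-172-31-21-34"},
    		Reason:         "NodeHasSufficientPID",
    		Message:        "Node ip-172-31-21-34 status is now: NodeHasSufficientPID",
    		Source:         corev1.EventSource{Component: "kubelet", Host: "ip-172-31-21-34"},
    		Type:           corev1.EventTypeNormal,
    	}
    	// Until the "default" namespace exists, the create call returns the same error as in
    	// the log: 'namespaces "default" not found'. The kubelet logs it and does not retry.
    	if _, err := cs.CoreV1().Events("default").Create(context.TODO(), ev, metav1.CreateOptions{}); apierrors.IsNotFound(err) {
    		fmt.Println("event rejected:", err)
    	}
    }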
Feb 9 19:15:46.831032 kubelet[2483]: I0209 19:15:46.830867 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-34" podStartSLOduration=1.8307877879999999 pod.CreationTimestamp="2024-02-09 19:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:15:46.826828056 +0000 UTC m=+12.445512738" watchObservedRunningTime="2024-02-09 19:15:46.830787788 +0000 UTC m=+12.449472434" Feb 9 19:15:46.905881 /usr/lib/systemd/system-generators/torcx-generator[2996]: time="2024-02-09T19:15:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:15:46.906651 /usr/lib/systemd/system-generators/torcx-generator[2996]: time="2024-02-09T19:15:46Z" level=info msg="torcx already run" Feb 9 19:15:47.223486 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:15:47.223526 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:15:47.265964 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:15:47.552314 systemd[1]: Stopping kubelet.service... Feb 9 19:15:47.566846 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:15:47.568141 systemd[1]: Stopped kubelet.service. Feb 9 19:15:47.575493 systemd[1]: Started kubelet.service. Feb 9 19:15:47.744134 kubelet[3056]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:15:47.744134 kubelet[3056]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:15:47.745085 kubelet[3056]: I0209 19:15:47.744220 3056 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:15:47.746961 kubelet[3056]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:15:47.746961 kubelet[3056]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 19:15:47.753707 kubelet[3056]: I0209 19:15:47.753619 3056 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:15:47.753911 kubelet[3056]: I0209 19:15:47.753887 3056 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:15:47.754414 kubelet[3056]: I0209 19:15:47.754387 3056 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:15:47.758040 kubelet[3056]: I0209 19:15:47.757995 3056 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:15:47.760025 kubelet[3056]: I0209 19:15:47.759850 3056 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:15:47.765225 kubelet[3056]: W0209 19:15:47.765168 3056 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 19:15:47.766899 kubelet[3056]: I0209 19:15:47.766819 3056 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:15:47.768002 kubelet[3056]: I0209 19:15:47.767928 3056 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:15:47.768193 kubelet[3056]: I0209 19:15:47.768132 3056 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:15:47.768193 kubelet[3056]: I0209 19:15:47.768195 3056 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:15:47.768516 kubelet[3056]: I0209 19:15:47.768225 3056 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:15:47.768516 kubelet[3056]: I0209 19:15:47.768310 3056 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:15:47.778517 kubelet[3056]: I0209 19:15:47.778443 3056 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:15:47.778517 kubelet[3056]: I0209 19:15:47.778518 3056 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:15:47.778876 kubelet[3056]: I0209 19:15:47.778576 3056 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:15:47.778876 kubelet[3056]: I0209 19:15:47.778601 3056 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:15:47.789925 kubelet[3056]: 
I0209 19:15:47.789871 3056 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:15:47.790717 kubelet[3056]: I0209 19:15:47.790660 3056 server.go:1186] "Started kubelet" Feb 9 19:15:47.795908 kubelet[3056]: I0209 19:15:47.795850 3056 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:15:47.798377 kubelet[3056]: I0209 19:15:47.798315 3056 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:15:47.800758 kubelet[3056]: I0209 19:15:47.800684 3056 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:15:47.837008 kubelet[3056]: I0209 19:15:47.835297 3056 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:15:47.840406 kubelet[3056]: I0209 19:15:47.840362 3056 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:15:47.883488 kubelet[3056]: E0209 19:15:47.883420 3056 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:15:47.883939 kubelet[3056]: E0209 19:15:47.883904 3056 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:15:47.962878 kubelet[3056]: E0209 19:15:47.962797 3056 container_manager_linux.go:945] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Feb 9 19:15:47.987421 kubelet[3056]: I0209 19:15:47.987293 3056 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-21-34" Feb 9 19:15:48.031338 kubelet[3056]: I0209 19:15:48.031245 3056 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-21-34" Feb 9 19:15:48.031854 kubelet[3056]: I0209 19:15:48.031810 3056 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-21-34" Feb 9 19:15:48.060390 kubelet[3056]: I0209 19:15:48.060333 3056 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:15:48.191230 kubelet[3056]: I0209 19:15:48.191172 3056 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:15:48.191752 kubelet[3056]: I0209 19:15:48.191615 3056 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:15:48.195697 kubelet[3056]: I0209 19:15:48.195583 3056 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:15:48.204778 kubelet[3056]: E0209 19:15:48.204710 3056 kubelet.go:2137] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 9 19:15:48.295488 kubelet[3056]: I0209 19:15:48.295365 3056 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:15:48.295488 kubelet[3056]: I0209 19:15:48.295439 3056 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:15:48.295488 kubelet[3056]: I0209 19:15:48.295488 3056 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:15:48.296040 kubelet[3056]: I0209 19:15:48.295934 3056 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:15:48.296040 kubelet[3056]: I0209 19:15:48.295981 3056 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 19:15:48.296040 kubelet[3056]: I0209 19:15:48.296001 3056 policy_none.go:49] "None policy: Start" Feb 9 19:15:48.299856 kubelet[3056]: I0209 19:15:48.299766 3056 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:15:48.300273 kubelet[3056]: I0209 19:15:48.300220 3056 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:15:48.301143 kubelet[3056]: I0209 19:15:48.301058 3056 state_mem.go:75] "Updated machine memory state" Feb 9 19:15:48.312926 kubelet[3056]: E0209 19:15:48.312030 3056 kubelet.go:2137] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 9 19:15:48.332086 kubelet[3056]: I0209 19:15:48.332016 3056 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:15:48.334080 kubelet[3056]: I0209 19:15:48.333417 3056 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:15:48.513059 kubelet[3056]: I0209 19:15:48.512872 3056 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:15:48.513059 kubelet[3056]: I0209 19:15:48.513046 3056 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:15:48.513310 kubelet[3056]: I0209 19:15:48.513137 3056 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:15:48.532941 kubelet[3056]: E0209 19:15:48.532877 3056 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-21-34\" already exists" pod="kube-system/kube-scheduler-ip-172-31-21-34" Feb 9 19:15:48.535957 kubelet[3056]: E0209 19:15:48.535776 3056 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-21-34\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-34" Feb 9 19:15:48.557123 kubelet[3056]: I0209 19:15:48.557043 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b179e587eef79879d3ae96d9ef9dd43-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-34\" (UID: \"2b179e587eef79879d3ae96d9ef9dd43\") " pod="kube-system/kube-scheduler-ip-172-31-21-34" Feb 9 19:15:48.557678 kubelet[3056]: I0209 19:15:48.557600 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24dbd899bcff81f76d9248fafb9f6bae-ca-certs\") pod \"kube-apiserver-ip-172-31-21-34\" (UID: \"24dbd899bcff81f76d9248fafb9f6bae\") " 
pod="kube-system/kube-apiserver-ip-172-31-21-34" Feb 9 19:15:48.558035 kubelet[3056]: I0209 19:15:48.558000 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24dbd899bcff81f76d9248fafb9f6bae-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-34\" (UID: \"24dbd899bcff81f76d9248fafb9f6bae\") " pod="kube-system/kube-apiserver-ip-172-31-21-34" Feb 9 19:15:48.558384 kubelet[3056]: I0209 19:15:48.558330 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3386ac0d48035e68a9494046a3ff8aab-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-34\" (UID: \"3386ac0d48035e68a9494046a3ff8aab\") " pod="kube-system/kube-controller-manager-ip-172-31-21-34" Feb 9 19:15:48.558882 kubelet[3056]: I0209 19:15:48.558762 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3386ac0d48035e68a9494046a3ff8aab-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-34\" (UID: \"3386ac0d48035e68a9494046a3ff8aab\") " pod="kube-system/kube-controller-manager-ip-172-31-21-34" Feb 9 19:15:48.559330 kubelet[3056]: I0209 19:15:48.559273 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24dbd899bcff81f76d9248fafb9f6bae-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-34\" (UID: \"24dbd899bcff81f76d9248fafb9f6bae\") " pod="kube-system/kube-apiserver-ip-172-31-21-34" Feb 9 19:15:48.559720 kubelet[3056]: I0209 19:15:48.559690 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3386ac0d48035e68a9494046a3ff8aab-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-34\" (UID: \"3386ac0d48035e68a9494046a3ff8aab\") " pod="kube-system/kube-controller-manager-ip-172-31-21-34" Feb 9 19:15:48.560060 kubelet[3056]: I0209 19:15:48.560009 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3386ac0d48035e68a9494046a3ff8aab-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-34\" (UID: \"3386ac0d48035e68a9494046a3ff8aab\") " pod="kube-system/kube-controller-manager-ip-172-31-21-34" Feb 9 19:15:48.560476 kubelet[3056]: I0209 19:15:48.560424 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3386ac0d48035e68a9494046a3ff8aab-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-34\" (UID: \"3386ac0d48035e68a9494046a3ff8aab\") " pod="kube-system/kube-controller-manager-ip-172-31-21-34" Feb 9 19:15:48.791588 kubelet[3056]: I0209 19:15:48.791443 3056 apiserver.go:52] "Watching apiserver" Feb 9 19:15:48.840979 kubelet[3056]: I0209 19:15:48.840937 3056 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:15:48.864311 kubelet[3056]: I0209 19:15:48.864267 3056 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:15:48.916732 amazon-ssm-agent[1856]: 2024-02-09 19:15:48 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 9 19:15:49.595479 
kubelet[3056]: I0209 19:15:49.595403 3056 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-34" podStartSLOduration=1.595286464 pod.CreationTimestamp="2024-02-09 19:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:15:49.593281552 +0000 UTC m=+1.997159277" watchObservedRunningTime="2024-02-09 19:15:49.595286464 +0000 UTC m=+1.999164189" Feb 9 19:15:50.279283 sudo[2064]: pam_unix(sudo:session): session closed for user root Feb 9 19:15:50.304893 sshd[2060]: pam_unix(sshd:session): session closed for user core Feb 9 19:15:50.315113 systemd[1]: sshd@4-172.31.21.34:22-147.75.109.163:58294.service: Deactivated successfully. Feb 9 19:15:50.317057 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:15:50.322014 systemd-logind[1793]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:15:50.325313 systemd-logind[1793]: Removed session 5. Feb 9 19:15:54.352024 amazon-ssm-agent[1856]: 2024-02-09 19:15:54 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 19:16:00.933479 kubelet[3056]: I0209 19:16:00.933397 3056 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:16:00.934423 env[1808]: time="2024-02-09T19:16:00.934103787Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:16:00.935126 kubelet[3056]: I0209 19:16:00.934549 3056 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:16:01.008843 kubelet[3056]: I0209 19:16:01.008764 3056 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:16:01.032078 kubelet[3056]: I0209 19:16:01.031977 3056 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:16:01.050941 kubelet[3056]: I0209 19:16:01.050890 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nzbc\" (UniqueName: \"kubernetes.io/projected/7ec9ea3c-8b83-4afd-bf84-48eaf1364b7e-kube-api-access-5nzbc\") pod \"kube-proxy-8hgnk\" (UID: \"7ec9ea3c-8b83-4afd-bf84-48eaf1364b7e\") " pod="kube-system/kube-proxy-8hgnk" Feb 9 19:16:01.051324 kubelet[3056]: I0209 19:16:01.051280 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7ec9ea3c-8b83-4afd-bf84-48eaf1364b7e-kube-proxy\") pod \"kube-proxy-8hgnk\" (UID: \"7ec9ea3c-8b83-4afd-bf84-48eaf1364b7e\") " pod="kube-system/kube-proxy-8hgnk" Feb 9 19:16:01.051598 kubelet[3056]: I0209 19:16:01.051564 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ec9ea3c-8b83-4afd-bf84-48eaf1364b7e-xtables-lock\") pod \"kube-proxy-8hgnk\" (UID: \"7ec9ea3c-8b83-4afd-bf84-48eaf1364b7e\") " pod="kube-system/kube-proxy-8hgnk" Feb 9 19:16:01.051851 kubelet[3056]: I0209 19:16:01.051827 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ec9ea3c-8b83-4afd-bf84-48eaf1364b7e-lib-modules\") pod \"kube-proxy-8hgnk\" (UID: \"7ec9ea3c-8b83-4afd-bf84-48eaf1364b7e\") " pod="kube-system/kube-proxy-8hgnk" Feb 9 19:16:01.059675 kubelet[3056]: W0209 19:16:01.053295 3056 reflector.go:424] object-"kube-flannel"/"kube-flannel-cfg": failed to list 
*v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ip-172-31-21-34" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-21-34' and this object Feb 9 19:16:01.059993 kubelet[3056]: E0209 19:16:01.059959 3056 reflector.go:140] object-"kube-flannel"/"kube-flannel-cfg": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ip-172-31-21-34" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-21-34' and this object Feb 9 19:16:01.060157 kubelet[3056]: W0209 19:16:01.053898 3056 reflector.go:424] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-21-34" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-21-34' and this object Feb 9 19:16:01.060311 kubelet[3056]: E0209 19:16:01.060285 3056 reflector.go:140] object-"kube-flannel"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-21-34" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ip-172-31-21-34' and this object Feb 9 19:16:01.152225 kubelet[3056]: I0209 19:16:01.152147 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/e204cea9-6be5-443c-859b-7e39a3412f87-cni\") pod \"kube-flannel-ds-5twz6\" (UID: \"e204cea9-6be5-443c-859b-7e39a3412f87\") " pod="kube-flannel/kube-flannel-ds-5twz6" Feb 9 19:16:01.152721 kubelet[3056]: I0209 19:16:01.152669 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e204cea9-6be5-443c-859b-7e39a3412f87-xtables-lock\") pod \"kube-flannel-ds-5twz6\" (UID: \"e204cea9-6be5-443c-859b-7e39a3412f87\") " pod="kube-flannel/kube-flannel-ds-5twz6" Feb 9 19:16:01.153119 kubelet[3056]: I0209 19:16:01.153078 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e204cea9-6be5-443c-859b-7e39a3412f87-run\") pod \"kube-flannel-ds-5twz6\" (UID: \"e204cea9-6be5-443c-859b-7e39a3412f87\") " pod="kube-flannel/kube-flannel-ds-5twz6" Feb 9 19:16:01.153444 kubelet[3056]: I0209 19:16:01.153404 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/e204cea9-6be5-443c-859b-7e39a3412f87-cni-plugin\") pod \"kube-flannel-ds-5twz6\" (UID: \"e204cea9-6be5-443c-859b-7e39a3412f87\") " pod="kube-flannel/kube-flannel-ds-5twz6" Feb 9 19:16:01.153778 kubelet[3056]: I0209 19:16:01.153723 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/e204cea9-6be5-443c-859b-7e39a3412f87-flannel-cfg\") pod \"kube-flannel-ds-5twz6\" (UID: \"e204cea9-6be5-443c-859b-7e39a3412f87\") " pod="kube-flannel/kube-flannel-ds-5twz6" Feb 9 19:16:01.154121 kubelet[3056]: I0209 19:16:01.154081 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmn2w\" 
(UniqueName: \"kubernetes.io/projected/e204cea9-6be5-443c-859b-7e39a3412f87-kube-api-access-lmn2w\") pod \"kube-flannel-ds-5twz6\" (UID: \"e204cea9-6be5-443c-859b-7e39a3412f87\") " pod="kube-flannel/kube-flannel-ds-5twz6" Feb 9 19:16:01.329422 env[1808]: time="2024-02-09T19:16:01.328663129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8hgnk,Uid:7ec9ea3c-8b83-4afd-bf84-48eaf1364b7e,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:01.367828 env[1808]: time="2024-02-09T19:16:01.367320700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:01.367828 env[1808]: time="2024-02-09T19:16:01.367395931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:01.367828 env[1808]: time="2024-02-09T19:16:01.367423001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:01.368588 env[1808]: time="2024-02-09T19:16:01.368403567Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/94f796325f81d9defabc55ef56ff5b75c912fededcc41440b7f01ea26a2744fa pid=3148 runtime=io.containerd.runc.v2 Feb 9 19:16:01.480433 env[1808]: time="2024-02-09T19:16:01.480352348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8hgnk,Uid:7ec9ea3c-8b83-4afd-bf84-48eaf1364b7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"94f796325f81d9defabc55ef56ff5b75c912fededcc41440b7f01ea26a2744fa\"" Feb 9 19:16:01.488856 env[1808]: time="2024-02-09T19:16:01.488782489Z" level=info msg="CreateContainer within sandbox \"94f796325f81d9defabc55ef56ff5b75c912fededcc41440b7f01ea26a2744fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:16:01.531062 env[1808]: time="2024-02-09T19:16:01.530983802Z" level=info msg="CreateContainer within sandbox \"94f796325f81d9defabc55ef56ff5b75c912fededcc41440b7f01ea26a2744fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"44bbd02687b58917b45b0d12207fcc67373ffa04518898b374901f92a220dd16\"" Feb 9 19:16:01.534195 env[1808]: time="2024-02-09T19:16:01.533126237Z" level=info msg="StartContainer for \"44bbd02687b58917b45b0d12207fcc67373ffa04518898b374901f92a220dd16\"" Feb 9 19:16:01.655676 env[1808]: time="2024-02-09T19:16:01.655554847Z" level=info msg="StartContainer for \"44bbd02687b58917b45b0d12207fcc67373ffa04518898b374901f92a220dd16\" returns successfully" Feb 9 19:16:02.271333 kubelet[3056]: E0209 19:16:02.271290 3056 projected.go:292] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:16:02.288804 kubelet[3056]: E0209 19:16:02.288749 3056 projected.go:198] Error preparing data for projected volume kube-api-access-lmn2w for pod kube-flannel/kube-flannel-ds-5twz6: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:16:02.289203 kubelet[3056]: E0209 19:16:02.289163 3056 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e204cea9-6be5-443c-859b-7e39a3412f87-kube-api-access-lmn2w podName:e204cea9-6be5-443c-859b-7e39a3412f87 nodeName:}" failed. No retries permitted until 2024-02-09 19:16:02.789121908 +0000 UTC m=+15.192999609 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lmn2w" (UniqueName: "kubernetes.io/projected/e204cea9-6be5-443c-859b-7e39a3412f87-kube-api-access-lmn2w") pod "kube-flannel-ds-5twz6" (UID: "e204cea9-6be5-443c-859b-7e39a3412f87") : failed to sync configmap cache: timed out waiting for the condition Feb 9 19:16:03.146356 env[1808]: time="2024-02-09T19:16:03.146278780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-5twz6,Uid:e204cea9-6be5-443c-859b-7e39a3412f87,Namespace:kube-flannel,Attempt:0,}" Feb 9 19:16:03.189513 env[1808]: time="2024-02-09T19:16:03.189054583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:03.189513 env[1808]: time="2024-02-09T19:16:03.189141097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:03.189513 env[1808]: time="2024-02-09T19:16:03.189168310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:03.189955 env[1808]: time="2024-02-09T19:16:03.189601791Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3931728fd4a2651e154afd049dad75c6f265d3ac6aa134cf615638cf9248cf7b pid=3333 runtime=io.containerd.runc.v2 Feb 9 19:16:03.310957 env[1808]: time="2024-02-09T19:16:03.310847512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-5twz6,Uid:e204cea9-6be5-443c-859b-7e39a3412f87,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"3931728fd4a2651e154afd049dad75c6f265d3ac6aa134cf615638cf9248cf7b\"" Feb 9 19:16:03.318223 env[1808]: time="2024-02-09T19:16:03.317240534Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\"" Feb 9 19:16:05.212763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3357617577.mount: Deactivated successfully. 
Feb 9 19:16:05.319386 env[1808]: time="2024-02-09T19:16:05.319284603Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:05.324303 env[1808]: time="2024-02-09T19:16:05.324212917Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b04a1a4152e14ddc6c26adc946baca3226718fa1acce540c015ac593e50218a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:05.330605 env[1808]: time="2024-02-09T19:16:05.330528629Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:05.334588 env[1808]: time="2024-02-09T19:16:05.334517652Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:05.336028 env[1808]: time="2024-02-09T19:16:05.335960698Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\" returns image reference \"sha256:b04a1a4152e14ddc6c26adc946baca3226718fa1acce540c015ac593e50218a9\"" Feb 9 19:16:05.344198 env[1808]: time="2024-02-09T19:16:05.344114421Z" level=info msg="CreateContainer within sandbox \"3931728fd4a2651e154afd049dad75c6f265d3ac6aa134cf615638cf9248cf7b\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 9 19:16:05.369146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4277178175.mount: Deactivated successfully. Feb 9 19:16:05.387270 env[1808]: time="2024-02-09T19:16:05.387187034Z" level=info msg="CreateContainer within sandbox \"3931728fd4a2651e154afd049dad75c6f265d3ac6aa134cf615638cf9248cf7b\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"627289eb66ba009f571af3a40982f3dd8a1e94dad88f6558904e787031057e53\"" Feb 9 19:16:05.389062 env[1808]: time="2024-02-09T19:16:05.388982683Z" level=info msg="StartContainer for \"627289eb66ba009f571af3a40982f3dd8a1e94dad88f6558904e787031057e53\"" Feb 9 19:16:05.507287 env[1808]: time="2024-02-09T19:16:05.507053211Z" level=info msg="StartContainer for \"627289eb66ba009f571af3a40982f3dd8a1e94dad88f6558904e787031057e53\" returns successfully" Feb 9 19:16:05.598324 env[1808]: time="2024-02-09T19:16:05.598233539Z" level=info msg="shim disconnected" id=627289eb66ba009f571af3a40982f3dd8a1e94dad88f6558904e787031057e53 Feb 9 19:16:05.598709 env[1808]: time="2024-02-09T19:16:05.598319799Z" level=warning msg="cleaning up after shim disconnected" id=627289eb66ba009f571af3a40982f3dd8a1e94dad88f6558904e787031057e53 namespace=k8s.io Feb 9 19:16:05.598709 env[1808]: time="2024-02-09T19:16:05.598348176Z" level=info msg="cleaning up dead shim" Feb 9 19:16:05.615431 env[1808]: time="2024-02-09T19:16:05.615351822Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:16:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3412 runtime=io.containerd.runc.v2\n" Feb 9 19:16:06.005644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount887988155.mount: Deactivated successfully. 
Feb 9 19:16:06.291415 env[1808]: time="2024-02-09T19:16:06.290182277Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\"" Feb 9 19:16:06.312399 kubelet[3056]: I0209 19:16:06.312330 3056 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8hgnk" podStartSLOduration=6.312155108 pod.CreationTimestamp="2024-02-09 19:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:02.298736219 +0000 UTC m=+14.702613968" watchObservedRunningTime="2024-02-09 19:16:06.312155108 +0000 UTC m=+18.716032845" Feb 9 19:16:08.411108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1434179975.mount: Deactivated successfully. Feb 9 19:16:09.718079 env[1808]: time="2024-02-09T19:16:09.718001662Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:09.722260 env[1808]: time="2024-02-09T19:16:09.722192358Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:37c457685cef0c53d8641973794ca8ca8b89902c01fd7b52bc718f9b434da459,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:09.726150 env[1808]: time="2024-02-09T19:16:09.726091286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:09.729606 env[1808]: time="2024-02-09T19:16:09.729542123Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel@sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:09.731458 env[1808]: time="2024-02-09T19:16:09.731378105Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\" returns image reference \"sha256:37c457685cef0c53d8641973794ca8ca8b89902c01fd7b52bc718f9b434da459\"" Feb 9 19:16:09.736972 env[1808]: time="2024-02-09T19:16:09.736899414Z" level=info msg="CreateContainer within sandbox \"3931728fd4a2651e154afd049dad75c6f265d3ac6aa134cf615638cf9248cf7b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 19:16:09.762178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1917907691.mount: Deactivated successfully. 
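The ImageCreate/ImageUpdate events around the PullImage lines are containerd resolving and unpacking docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2 into its "k8s.io" namespace on the kubelet's behalf. A rough equivalent with the containerd Go client is sketched below; the socket path is the common default and this is not the kubelet's own code path.

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// CRI-managed images live in containerd's "k8s.io" namespace
    	// (the same namespace that appears in the task paths logged above).
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	img, err := client.Pull(ctx, "docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2",
    		containerd.WithPullUnpack)
    	if err != nil {
    		panic(err)
    	}
    	// Prints the ref and the resolved digest for the pulled image.
    	fmt.Println(img.Name(), img.Target().Digest)
    }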
Feb 9 19:16:09.774721 env[1808]: time="2024-02-09T19:16:09.774598889Z" level=info msg="CreateContainer within sandbox \"3931728fd4a2651e154afd049dad75c6f265d3ac6aa134cf615638cf9248cf7b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1f8eb9068a709e2ff94cb1e7d6b0645bb52559eb9bf57333b15894596246e5ca\"" Feb 9 19:16:09.777797 env[1808]: time="2024-02-09T19:16:09.776075475Z" level=info msg="StartContainer for \"1f8eb9068a709e2ff94cb1e7d6b0645bb52559eb9bf57333b15894596246e5ca\"" Feb 9 19:16:09.899509 env[1808]: time="2024-02-09T19:16:09.899428596Z" level=info msg="StartContainer for \"1f8eb9068a709e2ff94cb1e7d6b0645bb52559eb9bf57333b15894596246e5ca\" returns successfully" Feb 9 19:16:09.926881 kubelet[3056]: I0209 19:16:09.925247 3056 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:16:09.976100 kubelet[3056]: I0209 19:16:09.975932 3056 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:16:09.985769 kubelet[3056]: I0209 19:16:09.984836 3056 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:16:10.149770 kubelet[3056]: I0209 19:16:10.149701 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmc4x\" (UniqueName: \"kubernetes.io/projected/f9c87703-c7e2-417f-a38d-fa87ccb3f051-kube-api-access-vmc4x\") pod \"coredns-787d4945fb-54l62\" (UID: \"f9c87703-c7e2-417f-a38d-fa87ccb3f051\") " pod="kube-system/coredns-787d4945fb-54l62" Feb 9 19:16:10.153879 kubelet[3056]: I0209 19:16:10.150061 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmszs\" (UniqueName: \"kubernetes.io/projected/332a4f26-9dff-4ac5-af21-6999e93f7aff-kube-api-access-pmszs\") pod \"coredns-787d4945fb-z95rx\" (UID: \"332a4f26-9dff-4ac5-af21-6999e93f7aff\") " pod="kube-system/coredns-787d4945fb-z95rx" Feb 9 19:16:10.153879 kubelet[3056]: I0209 19:16:10.150252 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9c87703-c7e2-417f-a38d-fa87ccb3f051-config-volume\") pod \"coredns-787d4945fb-54l62\" (UID: \"f9c87703-c7e2-417f-a38d-fa87ccb3f051\") " pod="kube-system/coredns-787d4945fb-54l62" Feb 9 19:16:10.153879 kubelet[3056]: I0209 19:16:10.150412 3056 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/332a4f26-9dff-4ac5-af21-6999e93f7aff-config-volume\") pod \"coredns-787d4945fb-z95rx\" (UID: \"332a4f26-9dff-4ac5-af21-6999e93f7aff\") " pod="kube-system/coredns-787d4945fb-z95rx" Feb 9 19:16:10.162037 env[1808]: time="2024-02-09T19:16:10.161949106Z" level=info msg="shim disconnected" id=1f8eb9068a709e2ff94cb1e7d6b0645bb52559eb9bf57333b15894596246e5ca Feb 9 19:16:10.162037 env[1808]: time="2024-02-09T19:16:10.162041388Z" level=warning msg="cleaning up after shim disconnected" id=1f8eb9068a709e2ff94cb1e7d6b0645bb52559eb9bf57333b15894596246e5ca namespace=k8s.io Feb 9 19:16:10.162343 env[1808]: time="2024-02-09T19:16:10.162063943Z" level=info msg="cleaning up dead shim" Feb 9 19:16:10.181550 env[1808]: time="2024-02-09T19:16:10.181462587Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:16:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3468 runtime=io.containerd.runc.v2\n" Feb 9 19:16:10.292465 env[1808]: time="2024-02-09T19:16:10.291895516Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-787d4945fb-54l62,Uid:f9c87703-c7e2-417f-a38d-fa87ccb3f051,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:10.309447 env[1808]: time="2024-02-09T19:16:10.309385666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-z95rx,Uid:332a4f26-9dff-4ac5-af21-6999e93f7aff,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:10.327305 env[1808]: time="2024-02-09T19:16:10.319782723Z" level=info msg="CreateContainer within sandbox \"3931728fd4a2651e154afd049dad75c6f265d3ac6aa134cf615638cf9248cf7b\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 9 19:16:10.363193 env[1808]: time="2024-02-09T19:16:10.363095132Z" level=info msg="CreateContainer within sandbox \"3931728fd4a2651e154afd049dad75c6f265d3ac6aa134cf615638cf9248cf7b\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"4eb3664c1dc1dd70c8876a5a07e08fb5c1c7ccc44fdd0dccd035865b4d270bf9\"" Feb 9 19:16:10.368443 env[1808]: time="2024-02-09T19:16:10.368354427Z" level=info msg="StartContainer for \"4eb3664c1dc1dd70c8876a5a07e08fb5c1c7ccc44fdd0dccd035865b4d270bf9\"" Feb 9 19:16:10.384767 env[1808]: time="2024-02-09T19:16:10.384672096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-54l62,Uid:f9c87703-c7e2-417f-a38d-fa87ccb3f051,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90985534f38fbb5e2bf454202f2fcbbc0e387c13bf69fd21aeaad3b92cea82e3\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 19:16:10.385539 kubelet[3056]: E0209 19:16:10.385491 3056 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90985534f38fbb5e2bf454202f2fcbbc0e387c13bf69fd21aeaad3b92cea82e3\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 19:16:10.385814 kubelet[3056]: E0209 19:16:10.385614 3056 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90985534f38fbb5e2bf454202f2fcbbc0e387c13bf69fd21aeaad3b92cea82e3\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-54l62" Feb 9 19:16:10.385814 kubelet[3056]: E0209 19:16:10.385706 3056 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90985534f38fbb5e2bf454202f2fcbbc0e387c13bf69fd21aeaad3b92cea82e3\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-54l62" Feb 9 19:16:10.385994 kubelet[3056]: E0209 19:16:10.385883 3056 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-54l62_kube-system(f9c87703-c7e2-417f-a38d-fa87ccb3f051)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-54l62_kube-system(f9c87703-c7e2-417f-a38d-fa87ccb3f051)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90985534f38fbb5e2bf454202f2fcbbc0e387c13bf69fd21aeaad3b92cea82e3\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-54l62" podUID=f9c87703-c7e2-417f-a38d-fa87ccb3f051 Feb 9 19:16:10.410079 env[1808]: time="2024-02-09T19:16:10.409970778Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-z95rx,Uid:332a4f26-9dff-4ac5-af21-6999e93f7aff,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fedb0219bd2c52f31df93bf5e0c21d6a37da824c034d49ff6e3431a154bf626d\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 19:16:10.410992 kubelet[3056]: E0209 19:16:10.410872 3056 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fedb0219bd2c52f31df93bf5e0c21d6a37da824c034d49ff6e3431a154bf626d\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 9 19:16:10.411191 kubelet[3056]: E0209 19:16:10.411056 3056 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fedb0219bd2c52f31df93bf5e0c21d6a37da824c034d49ff6e3431a154bf626d\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-z95rx" Feb 9 19:16:10.411191 kubelet[3056]: E0209 19:16:10.411163 3056 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fedb0219bd2c52f31df93bf5e0c21d6a37da824c034d49ff6e3431a154bf626d\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-z95rx" Feb 9 19:16:10.411419 kubelet[3056]: E0209 19:16:10.411323 3056 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-z95rx_kube-system(332a4f26-9dff-4ac5-af21-6999e93f7aff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-z95rx_kube-system(332a4f26-9dff-4ac5-af21-6999e93f7aff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fedb0219bd2c52f31df93bf5e0c21d6a37da824c034d49ff6e3431a154bf626d\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-z95rx" podUID=332a4f26-9dff-4ac5-af21-6999e93f7aff Feb 9 19:16:10.493774 env[1808]: time="2024-02-09T19:16:10.493712124Z" level=info msg="StartContainer for \"4eb3664c1dc1dd70c8876a5a07e08fb5c1c7ccc44fdd0dccd035865b4d270bf9\" returns successfully" Feb 9 19:16:10.761610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f8eb9068a709e2ff94cb1e7d6b0645bb52559eb9bf57333b15894596246e5ca-rootfs.mount: Deactivated successfully. Feb 9 19:16:11.336534 kubelet[3056]: I0209 19:16:11.336454 3056 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-5twz6" podStartSLOduration=-9.223372026518381e+09 pod.CreationTimestamp="2024-02-09 19:16:01 +0000 UTC" firstStartedPulling="2024-02-09 19:16:03.313804655 +0000 UTC m=+15.717682368" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:11.336046125 +0000 UTC m=+23.739923838" watchObservedRunningTime="2024-02-09 19:16:11.336394365 +0000 UTC m=+23.740272078" Feb 9 19:16:11.933284 (udev-worker)[3569]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 19:16:11.947566 systemd-networkd[1588]: flannel.1: Link UP Feb 9 19:16:11.947582 systemd-networkd[1588]: flannel.1: Gained carrier Feb 9 19:16:13.052820 systemd-networkd[1588]: flannel.1: Gained IPv6LL Feb 9 19:16:21.206817 env[1808]: time="2024-02-09T19:16:21.206756947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-z95rx,Uid:332a4f26-9dff-4ac5-af21-6999e93f7aff,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:21.252659 systemd-networkd[1588]: cni0: Link UP Feb 9 19:16:21.252675 systemd-networkd[1588]: cni0: Gained carrier Feb 9 19:16:21.255057 systemd-networkd[1588]: cni0: Lost carrier Feb 9 19:16:21.256218 (udev-worker)[3682]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:16:21.266440 systemd-networkd[1588]: veth88ee14f6: Link UP Feb 9 19:16:21.273812 kernel: cni0: port 1(veth88ee14f6) entered blocking state Feb 9 19:16:21.273995 kernel: cni0: port 1(veth88ee14f6) entered disabled state Feb 9 19:16:21.276556 kernel: device veth88ee14f6 entered promiscuous mode Feb 9 19:16:21.276967 kernel: cni0: port 1(veth88ee14f6) entered blocking state Feb 9 19:16:21.278679 kernel: cni0: port 1(veth88ee14f6) entered forwarding state Feb 9 19:16:21.291378 (udev-worker)[3687]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:16:21.292026 kernel: cni0: port 1(veth88ee14f6) entered disabled state Feb 9 19:16:21.299252 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth88ee14f6: link becomes ready Feb 9 19:16:21.299432 kernel: cni0: port 1(veth88ee14f6) entered blocking state Feb 9 19:16:21.299499 kernel: cni0: port 1(veth88ee14f6) entered forwarding state Feb 9 19:16:21.301624 systemd-networkd[1588]: veth88ee14f6: Gained carrier Feb 9 19:16:21.303672 systemd-networkd[1588]: cni0: Gained carrier Feb 9 19:16:21.309322 env[1808]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x4000014928), "name":"cbr0", "type":"bridge"} Feb 9 19:16:21.349483 env[1808]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2024-02-09T19:16:21.349319838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:21.349483 env[1808]: time="2024-02-09T19:16:21.349410039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:21.349483 env[1808]: time="2024-02-09T19:16:21.349438390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:21.350427 env[1808]: time="2024-02-09T19:16:21.350301102Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a0dad1aa92e5a6832655d922206d2533eff891e82bc1a1561ee9c3ac5d04240 pid=3708 runtime=io.containerd.runc.v2 Feb 9 19:16:21.478912 env[1808]: time="2024-02-09T19:16:21.478731037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-z95rx,Uid:332a4f26-9dff-4ac5-af21-6999e93f7aff,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a0dad1aa92e5a6832655d922206d2533eff891e82bc1a1561ee9c3ac5d04240\"" Feb 9 19:16:21.488139 env[1808]: time="2024-02-09T19:16:21.488061943Z" level=info msg="CreateContainer within sandbox \"9a0dad1aa92e5a6832655d922206d2533eff891e82bc1a1561ee9c3ac5d04240\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:16:21.514845 env[1808]: time="2024-02-09T19:16:21.514764767Z" level=info msg="CreateContainer within sandbox \"9a0dad1aa92e5a6832655d922206d2533eff891e82bc1a1561ee9c3ac5d04240\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"64072ee6f832090fa50be3f1c7a795bcea23f115b29503d41f935b833f096bb3\"" Feb 9 19:16:21.518377 env[1808]: time="2024-02-09T19:16:21.516867908Z" level=info msg="StartContainer for \"64072ee6f832090fa50be3f1c7a795bcea23f115b29503d41f935b833f096bb3\"" Feb 9 19:16:21.631999 env[1808]: time="2024-02-09T19:16:21.631911917Z" level=info msg="StartContainer for \"64072ee6f832090fa50be3f1c7a795bcea23f115b29503d41f935b833f096bb3\" returns successfully" Feb 9 19:16:22.226405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount15354735.mount: Deactivated successfully. Feb 9 19:16:22.367882 kubelet[3056]: I0209 19:16:22.367826 3056 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-z95rx" podStartSLOduration=22.367740371 pod.CreationTimestamp="2024-02-09 19:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:22.366429995 +0000 UTC m=+34.770307720" watchObservedRunningTime="2024-02-09 19:16:22.367740371 +0000 UTC m=+34.771618084" Feb 9 19:16:22.652950 systemd-networkd[1588]: cni0: Gained IPv6LL Feb 9 19:16:23.100839 systemd-networkd[1588]: veth88ee14f6: Gained IPv6LL Feb 9 19:16:24.206885 env[1808]: time="2024-02-09T19:16:24.206824819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-54l62,Uid:f9c87703-c7e2-417f-a38d-fa87ccb3f051,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:24.263225 (udev-worker)[3699]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 19:16:24.266068 systemd-networkd[1588]: veth05718aec: Link UP Feb 9 19:16:24.275067 kernel: cni0: port 2(veth05718aec) entered blocking state Feb 9 19:16:24.275250 kernel: cni0: port 2(veth05718aec) entered disabled state Feb 9 19:16:24.275326 kernel: device veth05718aec entered promiscuous mode Feb 9 19:16:24.297666 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:16:24.297918 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth05718aec: link becomes ready Feb 9 19:16:24.297986 kernel: cni0: port 2(veth05718aec) entered blocking state Feb 9 19:16:24.299985 kernel: cni0: port 2(veth05718aec) entered forwarding state Feb 9 19:16:24.302387 systemd-networkd[1588]: veth05718aec: Gained carrier Feb 9 19:16:24.307124 env[1808]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000988e8), "name":"cbr0", "type":"bridge"} Feb 9 19:16:24.328686 env[1808]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2024-02-09T19:16:24.328474630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:24.328686 env[1808]: time="2024-02-09T19:16:24.328580242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:24.329036 env[1808]: time="2024-02-09T19:16:24.328961915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:24.329847 env[1808]: time="2024-02-09T19:16:24.329667084Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e431ee838ed97959b4408a343d7003877a843aa7b637fa926849ada03dfbc308 pid=3885 runtime=io.containerd.runc.v2 Feb 9 19:16:24.461890 env[1808]: time="2024-02-09T19:16:24.461732963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-54l62,Uid:f9c87703-c7e2-417f-a38d-fa87ccb3f051,Namespace:kube-system,Attempt:0,} returns sandbox id \"e431ee838ed97959b4408a343d7003877a843aa7b637fa926849ada03dfbc308\"" Feb 9 19:16:24.470313 env[1808]: time="2024-02-09T19:16:24.470215580Z" level=info msg="CreateContainer within sandbox \"e431ee838ed97959b4408a343d7003877a843aa7b637fa926849ada03dfbc308\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:16:24.491137 env[1808]: time="2024-02-09T19:16:24.491042051Z" level=info msg="CreateContainer within sandbox \"e431ee838ed97959b4408a343d7003877a843aa7b637fa926849ada03dfbc308\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fd90c1563fc6d8214905de8df37995163fbf8e25b92947060b4aa119df444425\"" Feb 9 19:16:24.494351 env[1808]: time="2024-02-09T19:16:24.492520001Z" level=info msg="StartContainer for \"fd90c1563fc6d8214905de8df37995163fbf8e25b92947060b4aa119df444425\"" Feb 9 19:16:24.619769 env[1808]: time="2024-02-09T19:16:24.619686195Z" level=info msg="StartContainer for \"fd90c1563fc6d8214905de8df37995163fbf8e25b92947060b4aa119df444425\" returns successfully" Feb 9 19:16:25.399897 kubelet[3056]: I0209 19:16:25.399800 3056 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-54l62" podStartSLOduration=25.399708187 pod.CreationTimestamp="2024-02-09 19:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:25.376881985 +0000 UTC m=+37.780759710" watchObservedRunningTime="2024-02-09 19:16:25.399708187 +0000 UTC m=+37.803585900" Feb 9 19:16:25.468955 systemd-networkd[1588]: veth05718aec: Gained IPv6LL Feb 9 19:16:47.193035 systemd[1]: Started sshd@5-172.31.21.34:22-147.75.109.163:42484.service. Feb 9 19:16:47.377538 sshd[4083]: Accepted publickey for core from 147.75.109.163 port 42484 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:47.380387 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:47.391065 systemd-logind[1793]: New session 6 of user core. Feb 9 19:16:47.391700 systemd[1]: Started session-6.scope. Feb 9 19:16:47.709051 sshd[4083]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:47.715052 systemd[1]: sshd@5-172.31.21.34:22-147.75.109.163:42484.service: Deactivated successfully. Feb 9 19:16:47.717686 systemd-logind[1793]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:16:47.718095 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:16:47.722509 systemd-logind[1793]: Removed session 6. Feb 9 19:16:52.740259 systemd[1]: Started sshd@6-172.31.21.34:22-147.75.109.163:42488.service. 
Feb 9 19:16:52.924043 sshd[4118]: Accepted publickey for core from 147.75.109.163 port 42488 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:52.927803 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:52.938335 systemd-logind[1793]: New session 7 of user core. Feb 9 19:16:52.938524 systemd[1]: Started session-7.scope. Feb 9 19:16:53.210116 sshd[4118]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:53.215798 systemd[1]: sshd@6-172.31.21.34:22-147.75.109.163:42488.service: Deactivated successfully. Feb 9 19:16:53.218348 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:16:53.218418 systemd-logind[1793]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:16:53.222725 systemd-logind[1793]: Removed session 7. Feb 9 19:16:58.236683 systemd[1]: Started sshd@7-172.31.21.34:22-147.75.109.163:53378.service. Feb 9 19:16:58.414709 sshd[4149]: Accepted publickey for core from 147.75.109.163 port 53378 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:58.418396 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:58.435244 systemd[1]: Started session-8.scope. Feb 9 19:16:58.438064 systemd-logind[1793]: New session 8 of user core. Feb 9 19:16:58.724540 sshd[4149]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:58.731800 systemd[1]: sshd@7-172.31.21.34:22-147.75.109.163:53378.service: Deactivated successfully. Feb 9 19:16:58.733578 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:16:58.735571 systemd-logind[1793]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:16:58.739567 systemd-logind[1793]: Removed session 8. Feb 9 19:17:03.753581 systemd[1]: Started sshd@8-172.31.21.34:22-147.75.109.163:53392.service. Feb 9 19:17:03.932429 sshd[4182]: Accepted publickey for core from 147.75.109.163 port 53392 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:03.935558 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:03.944875 systemd-logind[1793]: New session 9 of user core. Feb 9 19:17:03.945250 systemd[1]: Started session-9.scope. Feb 9 19:17:04.212101 sshd[4182]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:04.218570 systemd[1]: sshd@8-172.31.21.34:22-147.75.109.163:53392.service: Deactivated successfully. Feb 9 19:17:04.220117 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:17:04.222995 systemd-logind[1793]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:17:04.227319 systemd-logind[1793]: Removed session 9. Feb 9 19:17:09.244485 systemd[1]: Started sshd@9-172.31.21.34:22-147.75.109.163:38670.service. Feb 9 19:17:09.432180 sshd[4214]: Accepted publickey for core from 147.75.109.163 port 38670 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:09.435214 sshd[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:09.445459 systemd-logind[1793]: New session 10 of user core. Feb 9 19:17:09.446747 systemd[1]: Started session-10.scope. Feb 9 19:17:09.712211 sshd[4214]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:09.717498 systemd[1]: sshd@9-172.31.21.34:22-147.75.109.163:38670.service: Deactivated successfully. Feb 9 19:17:09.721377 systemd-logind[1793]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:17:09.721796 systemd[1]: session-10.scope: Deactivated successfully. 
Feb 9 19:17:09.725818 systemd-logind[1793]: Removed session 10. Feb 9 19:17:09.739263 systemd[1]: Started sshd@10-172.31.21.34:22-147.75.109.163:38678.service. Feb 9 19:17:09.923767 sshd[4234]: Accepted publickey for core from 147.75.109.163 port 38678 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:09.927666 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:09.937952 systemd-logind[1793]: New session 11 of user core. Feb 9 19:17:09.939686 systemd[1]: Started session-11.scope. Feb 9 19:17:10.473158 sshd[4234]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:10.478939 systemd[1]: sshd@10-172.31.21.34:22-147.75.109.163:38678.service: Deactivated successfully. Feb 9 19:17:10.482017 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:17:10.482976 systemd-logind[1793]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:17:10.487730 systemd-logind[1793]: Removed session 11. Feb 9 19:17:10.504953 systemd[1]: Started sshd@11-172.31.21.34:22-147.75.109.163:38692.service. Feb 9 19:17:10.687839 sshd[4245]: Accepted publickey for core from 147.75.109.163 port 38692 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:10.691296 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:10.701885 systemd-logind[1793]: New session 12 of user core. Feb 9 19:17:10.703206 systemd[1]: Started session-12.scope. Feb 9 19:17:10.977290 sshd[4245]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:10.983883 systemd-logind[1793]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:17:10.985372 systemd[1]: sshd@11-172.31.21.34:22-147.75.109.163:38692.service: Deactivated successfully. Feb 9 19:17:10.988152 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:17:10.990147 systemd-logind[1793]: Removed session 12. Feb 9 19:17:16.005533 systemd[1]: Started sshd@12-172.31.21.34:22-147.75.109.163:57790.service. Feb 9 19:17:16.183678 sshd[4276]: Accepted publickey for core from 147.75.109.163 port 57790 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:16.187164 sshd[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:16.197718 systemd[1]: Started session-13.scope. Feb 9 19:17:16.198530 systemd-logind[1793]: New session 13 of user core. Feb 9 19:17:16.475027 sshd[4276]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:16.480818 systemd-logind[1793]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:17:16.481440 systemd[1]: sshd@12-172.31.21.34:22-147.75.109.163:57790.service: Deactivated successfully. Feb 9 19:17:16.483867 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:17:16.486318 systemd-logind[1793]: Removed session 13. Feb 9 19:17:21.503399 systemd[1]: Started sshd@13-172.31.21.34:22-147.75.109.163:57798.service. Feb 9 19:17:21.684695 sshd[4307]: Accepted publickey for core from 147.75.109.163 port 57798 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:21.688141 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:21.697956 systemd[1]: Started session-14.scope. Feb 9 19:17:21.699009 systemd-logind[1793]: New session 14 of user core. Feb 9 19:17:21.979127 sshd[4307]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:21.984763 systemd[1]: sshd@13-172.31.21.34:22-147.75.109.163:57798.service: Deactivated successfully. 
Feb 9 19:17:21.987893 systemd-logind[1793]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:17:21.989665 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:17:21.992430 systemd-logind[1793]: Removed session 14. Feb 9 19:17:22.009316 systemd[1]: Started sshd@14-172.31.21.34:22-147.75.109.163:57808.service. Feb 9 19:17:22.189047 sshd[4320]: Accepted publickey for core from 147.75.109.163 port 57808 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:22.192694 sshd[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:22.203097 systemd[1]: Started session-15.scope. Feb 9 19:17:22.208268 systemd-logind[1793]: New session 15 of user core. Feb 9 19:17:22.529100 sshd[4320]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:22.535738 systemd-logind[1793]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:17:22.537585 systemd[1]: sshd@14-172.31.21.34:22-147.75.109.163:57808.service: Deactivated successfully. Feb 9 19:17:22.539774 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:17:22.546425 systemd-logind[1793]: Removed session 15. Feb 9 19:17:22.554611 systemd[1]: Started sshd@15-172.31.21.34:22-147.75.109.163:57822.service. Feb 9 19:17:22.741815 sshd[4343]: Accepted publickey for core from 147.75.109.163 port 57822 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:22.744186 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:22.753015 systemd-logind[1793]: New session 16 of user core. Feb 9 19:17:22.754048 systemd[1]: Started session-16.scope. Feb 9 19:17:24.330295 sshd[4343]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:24.340770 systemd[1]: sshd@15-172.31.21.34:22-147.75.109.163:57822.service: Deactivated successfully. Feb 9 19:17:24.345210 systemd-logind[1793]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:17:24.345215 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:17:24.367853 systemd-logind[1793]: Removed session 16. Feb 9 19:17:24.380899 systemd[1]: Started sshd@16-172.31.21.34:22-147.75.109.163:57828.service. Feb 9 19:17:24.578909 sshd[4368]: Accepted publickey for core from 147.75.109.163 port 57828 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:24.581950 sshd[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:24.596093 systemd-logind[1793]: New session 17 of user core. Feb 9 19:17:24.599881 systemd[1]: Started session-17.scope. Feb 9 19:17:25.054358 sshd[4368]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:25.061168 systemd-logind[1793]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:17:25.061545 systemd[1]: sshd@16-172.31.21.34:22-147.75.109.163:57828.service: Deactivated successfully. Feb 9 19:17:25.064115 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:17:25.067258 systemd-logind[1793]: Removed session 17. Feb 9 19:17:25.082368 systemd[1]: Started sshd@17-172.31.21.34:22-147.75.109.163:38296.service. Feb 9 19:17:25.265875 sshd[4426]: Accepted publickey for core from 147.75.109.163 port 38296 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:25.268741 sshd[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:25.280459 systemd[1]: Started session-18.scope. Feb 9 19:17:25.281992 systemd-logind[1793]: New session 18 of user core. 
Feb 9 19:17:25.546904 sshd[4426]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:25.552844 systemd-logind[1793]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:17:25.555684 systemd[1]: sshd@17-172.31.21.34:22-147.75.109.163:38296.service: Deactivated successfully. Feb 9 19:17:25.558577 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:17:25.563350 systemd-logind[1793]: Removed session 18. Feb 9 19:17:30.575677 systemd[1]: Started sshd@18-172.31.21.34:22-147.75.109.163:38304.service. Feb 9 19:17:30.757514 sshd[4457]: Accepted publickey for core from 147.75.109.163 port 38304 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:30.761476 sshd[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:30.773389 systemd[1]: Started session-19.scope. Feb 9 19:17:30.773954 systemd-logind[1793]: New session 19 of user core. Feb 9 19:17:31.036821 sshd[4457]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:31.042686 systemd[1]: sshd@18-172.31.21.34:22-147.75.109.163:38304.service: Deactivated successfully. Feb 9 19:17:31.044624 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:17:31.052950 systemd-logind[1793]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:17:31.056738 systemd-logind[1793]: Removed session 19. Feb 9 19:17:36.064318 systemd[1]: Started sshd@19-172.31.21.34:22-147.75.109.163:47772.service. Feb 9 19:17:36.244078 sshd[4517]: Accepted publickey for core from 147.75.109.163 port 47772 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:36.246670 sshd[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:36.256959 systemd[1]: Started session-20.scope. Feb 9 19:17:36.259484 systemd-logind[1793]: New session 20 of user core. Feb 9 19:17:36.504941 sshd[4517]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:36.510070 systemd[1]: sshd@19-172.31.21.34:22-147.75.109.163:47772.service: Deactivated successfully. Feb 9 19:17:36.512484 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:17:36.512765 systemd-logind[1793]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:17:36.515230 systemd-logind[1793]: Removed session 20. Feb 9 19:17:41.534964 systemd[1]: Started sshd@20-172.31.21.34:22-147.75.109.163:47780.service. Feb 9 19:17:41.716798 sshd[4549]: Accepted publickey for core from 147.75.109.163 port 47780 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:41.719598 sshd[4549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:41.730517 systemd[1]: Started session-21.scope. Feb 9 19:17:41.731733 systemd-logind[1793]: New session 21 of user core. Feb 9 19:17:41.986956 sshd[4549]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:41.992049 systemd-logind[1793]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:17:41.992759 systemd[1]: sshd@20-172.31.21.34:22-147.75.109.163:47780.service: Deactivated successfully. Feb 9 19:17:41.994255 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:17:41.998336 systemd-logind[1793]: Removed session 21. Feb 9 19:17:47.016070 systemd[1]: Started sshd@21-172.31.21.34:22-147.75.109.163:38840.service. 
Feb 9 19:17:47.198084 sshd[4580]: Accepted publickey for core from 147.75.109.163 port 38840 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:47.201674 sshd[4580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:47.213799 systemd[1]: Started session-22.scope. Feb 9 19:17:47.214330 systemd-logind[1793]: New session 22 of user core. Feb 9 19:17:47.481054 sshd[4580]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:47.486259 systemd[1]: sshd@21-172.31.21.34:22-147.75.109.163:38840.service: Deactivated successfully. Feb 9 19:17:47.491271 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:17:47.493567 systemd-logind[1793]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:17:47.497712 systemd-logind[1793]: Removed session 22. Feb 9 19:18:01.492477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ca207294877e98ee4c7684a3be16a2f89053cfecac45606837fda8caa03461a-rootfs.mount: Deactivated successfully. Feb 9 19:18:01.513584 env[1808]: time="2024-02-09T19:18:01.513493028Z" level=info msg="shim disconnected" id=3ca207294877e98ee4c7684a3be16a2f89053cfecac45606837fda8caa03461a Feb 9 19:18:01.513584 env[1808]: time="2024-02-09T19:18:01.513572228Z" level=warning msg="cleaning up after shim disconnected" id=3ca207294877e98ee4c7684a3be16a2f89053cfecac45606837fda8caa03461a namespace=k8s.io Feb 9 19:18:01.514374 env[1808]: time="2024-02-09T19:18:01.513595095Z" level=info msg="cleaning up dead shim" Feb 9 19:18:01.529381 env[1808]: time="2024-02-09T19:18:01.529289739Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4663 runtime=io.containerd.runc.v2\n" Feb 9 19:18:01.633583 kubelet[3056]: I0209 19:18:01.633141 3056 scope.go:115] "RemoveContainer" containerID="3ca207294877e98ee4c7684a3be16a2f89053cfecac45606837fda8caa03461a" Feb 9 19:18:01.639730 env[1808]: time="2024-02-09T19:18:01.639666446Z" level=info msg="CreateContainer within sandbox \"e246c71c205f0226b38c03a7267263d4881aff80a779cbe46c10059f29d2caf6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 9 19:18:01.664352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2437161262.mount: Deactivated successfully. Feb 9 19:18:01.680197 env[1808]: time="2024-02-09T19:18:01.680108866Z" level=info msg="CreateContainer within sandbox \"e246c71c205f0226b38c03a7267263d4881aff80a779cbe46c10059f29d2caf6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"24348bb2316bb519e88c51abf1c4792812f0fb1b00eab1990daaa9c162f3e760\"" Feb 9 19:18:01.681721 env[1808]: time="2024-02-09T19:18:01.681157355Z" level=info msg="StartContainer for \"24348bb2316bb519e88c51abf1c4792812f0fb1b00eab1990daaa9c162f3e760\"" Feb 9 19:18:01.824339 env[1808]: time="2024-02-09T19:18:01.823505765Z" level=info msg="StartContainer for \"24348bb2316bb519e88c51abf1c4792812f0fb1b00eab1990daaa9c162f3e760\" returns successfully" Feb 9 19:18:07.796893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ead4546fe1ba38fe6a85819586c0d101a6cea19e5ccc4f1e95387b9945ab88b4-rootfs.mount: Deactivated successfully. 
Feb 9 19:18:07.809976 env[1808]: time="2024-02-09T19:18:07.809909294Z" level=info msg="shim disconnected" id=ead4546fe1ba38fe6a85819586c0d101a6cea19e5ccc4f1e95387b9945ab88b4 Feb 9 19:18:07.811063 env[1808]: time="2024-02-09T19:18:07.811007435Z" level=warning msg="cleaning up after shim disconnected" id=ead4546fe1ba38fe6a85819586c0d101a6cea19e5ccc4f1e95387b9945ab88b4 namespace=k8s.io Feb 9 19:18:07.811306 env[1808]: time="2024-02-09T19:18:07.811261981Z" level=info msg="cleaning up dead shim" Feb 9 19:18:07.828072 env[1808]: time="2024-02-09T19:18:07.828015062Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4742 runtime=io.containerd.runc.v2\n" Feb 9 19:18:08.657983 kubelet[3056]: I0209 19:18:08.657926 3056 scope.go:115] "RemoveContainer" containerID="ead4546fe1ba38fe6a85819586c0d101a6cea19e5ccc4f1e95387b9945ab88b4" Feb 9 19:18:08.661450 env[1808]: time="2024-02-09T19:18:08.661370703Z" level=info msg="CreateContainer within sandbox \"9f21a8b42c3e35c2b0e29c0a63247060cb16f84859139be22785228375c9e445\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 9 19:18:08.684907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1306029865.mount: Deactivated successfully. Feb 9 19:18:08.699355 env[1808]: time="2024-02-09T19:18:08.699290792Z" level=info msg="CreateContainer within sandbox \"9f21a8b42c3e35c2b0e29c0a63247060cb16f84859139be22785228375c9e445\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4dbe68954b15fabb6ae42460d0a09008f614c161cf9fc7c027b550ae84e485be\"" Feb 9 19:18:08.700339 env[1808]: time="2024-02-09T19:18:08.700297298Z" level=info msg="StartContainer for \"4dbe68954b15fabb6ae42460d0a09008f614c161cf9fc7c027b550ae84e485be\"" Feb 9 19:18:08.828532 env[1808]: time="2024-02-09T19:18:08.827332724Z" level=info msg="StartContainer for \"4dbe68954b15fabb6ae42460d0a09008f614c161cf9fc7c027b550ae84e485be\" returns successfully" Feb 9 19:18:10.874664 kubelet[3056]: E0209 19:18:10.874583 3056 controller.go:189] failed to update lease, error: Put "https://172.31.21.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-34?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 19:18:20.875721 kubelet[3056]: E0209 19:18:20.875614 3056 controller.go:189] failed to update lease, error: Put "https://172.31.21.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-34?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)