Dec 13 14:13:35.993275 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 13 14:13:35.993349 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024
Dec 13 14:13:35.993374 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:13:35.993389 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7171cf98
Dec 13 14:13:35.993404 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:13:35.993418 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 13 14:13:35.993434 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 13 14:13:35.993449 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 14:13:35.993463 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Dec 13 14:13:35.993477 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 14:13:35.993496 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 13 14:13:35.993510 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 13 14:13:35.993525 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 13 14:13:35.993539 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 14:13:35.993556 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 13 14:13:35.993576 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 13 14:13:35.993591 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 13 14:13:35.993606 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 13 14:13:35.993621 kernel: printk: bootconsole [uart0] enabled
Dec 13 14:13:35.993636 kernel: NUMA: Failed to initialise from firmware
Dec 13 14:13:35.993651 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 14:13:35.993667 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Dec 13 14:13:35.993682 kernel: Zone ranges:
Dec 13 14:13:35.993697 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 14:13:35.993711 kernel: DMA32 empty
Dec 13 14:13:35.993726 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 13 14:13:35.993746 kernel: Movable zone start for each node
Dec 13 14:13:35.993761 kernel: Early memory node ranges
Dec 13 14:13:35.993776 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 13 14:13:35.993791 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 13 14:13:35.993805 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 13 14:13:35.993820 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 13 14:13:35.993835 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 13 14:13:35.993850 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 13 14:13:35.993864 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 13 14:13:35.993879 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 13 14:13:35.993894 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 14:13:35.993908 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 13 14:13:35.993928 kernel: psci: probing for conduit method from ACPI.
Dec 13 14:13:35.993943 kernel: psci: PSCIv1.0 detected in firmware.
Dec 13 14:13:35.993964 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 14:13:35.993981 kernel: psci: Trusted OS migration not required
Dec 13 14:13:35.993996 kernel: psci: SMC Calling Convention v1.1
Dec 13 14:13:35.994016 kernel: ACPI: SRAT not present
Dec 13 14:13:35.994032 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Dec 13 14:13:35.994047 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Dec 13 14:13:35.994063 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 14:13:35.994079 kernel: Detected PIPT I-cache on CPU0
Dec 13 14:13:35.994094 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 14:13:35.994109 kernel: CPU features: detected: Spectre-v2
Dec 13 14:13:35.994125 kernel: CPU features: detected: Spectre-v3a
Dec 13 14:13:35.994140 kernel: CPU features: detected: Spectre-BHB
Dec 13 14:13:35.994155 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 14:13:35.994171 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 14:13:35.994192 kernel: CPU features: detected: ARM erratum 1742098
Dec 13 14:13:35.994209 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 13 14:13:35.994224 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Dec 13 14:13:35.994240 kernel: Policy zone: Normal
Dec 13 14:13:35.994258 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:13:35.994275 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:13:35.994322 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:13:35.994345 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:13:35.994361 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:13:35.994377 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Dec 13 14:13:35.994402 kernel: Memory: 3824524K/4030464K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 205940K reserved, 0K cma-reserved)
Dec 13 14:13:35.994419 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:13:35.994434 kernel: trace event string verifier disabled
Dec 13 14:13:35.994450 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:13:35.994467 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:13:35.994484 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:13:35.994500 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:13:35.994517 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:13:35.994535 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:13:35.994551 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:13:35.994566 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 14:13:35.994582 kernel: GICv3: 96 SPIs implemented
Dec 13 14:13:35.994607 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 14:13:35.994626 kernel: GICv3: Distributor has no Range Selector support
Dec 13 14:13:35.994642 kernel: Root IRQ handler: gic_handle_irq
Dec 13 14:13:35.994657 kernel: GICv3: 16 PPIs implemented
Dec 13 14:13:35.994673 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 13 14:13:35.994689 kernel: ACPI: SRAT not present
Dec 13 14:13:35.994705 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 13 14:13:35.994721 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:13:35.994760 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 14:13:35.994785 kernel: GICv3: using LPI property table @0x00000004000b0000
Dec 13 14:13:35.994801 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 13 14:13:35.994826 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Dec 13 14:13:35.994842 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 13 14:13:35.994858 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 13 14:13:35.994874 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 13 14:13:35.994891 kernel: Console: colour dummy device 80x25
Dec 13 14:13:35.994909 kernel: printk: console [tty1] enabled
Dec 13 14:13:35.994925 kernel: ACPI: Core revision 20210730
Dec 13 14:13:35.994942 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 13 14:13:35.994976 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:13:35.994994 kernel: LSM: Security Framework initializing
Dec 13 14:13:35.995018 kernel: SELinux: Initializing.
Dec 13 14:13:35.995036 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:13:35.995052 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:13:35.995069 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:13:35.995084 kernel: Platform MSI: ITS@0x10080000 domain created
Dec 13 14:13:35.995100 kernel: PCI/MSI: ITS@0x10080000 domain created
Dec 13 14:13:35.995117 kernel: Remapping and enabling EFI services.
Dec 13 14:13:35.995134 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:13:35.995151 kernel: Detected PIPT I-cache on CPU1
Dec 13 14:13:35.995175 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 13 14:13:35.995192 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Dec 13 14:13:35.995208 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 13 14:13:35.995224 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:13:35.995239 kernel: SMP: Total of 2 processors activated.
Dec 13 14:13:35.995256 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 14:13:35.995272 kernel: CPU features: detected: 32-bit EL1 Support
Dec 13 14:13:36.000380 kernel: CPU features: detected: CRC32 instructions
Dec 13 14:13:36.000419 kernel: CPU: All CPU(s) started at EL1
Dec 13 14:13:36.000447 kernel: alternatives: patching kernel code
Dec 13 14:13:36.000464 kernel: devtmpfs: initialized
Dec 13 14:13:36.000480 kernel: KASLR disabled due to lack of seed
Dec 13 14:13:36.000508 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:13:36.000529 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:13:36.000546 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:13:36.000578 kernel: SMBIOS 3.0.0 present.
Dec 13 14:13:36.000596 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 13 14:13:36.000613 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:13:36.000632 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 14:13:36.000650 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 14:13:36.000669 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 14:13:36.000692 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:13:36.000710 kernel: audit: type=2000 audit(0.265:1): state=initialized audit_enabled=0 res=1
Dec 13 14:13:36.000728 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:13:36.000744 kernel: cpuidle: using governor menu
Dec 13 14:13:36.000760 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 14:13:36.000782 kernel: ASID allocator initialised with 32768 entries
Dec 13 14:13:36.000800 kernel: ACPI: bus type PCI registered
Dec 13 14:13:36.000817 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:13:36.000833 kernel: Serial: AMBA PL011 UART driver
Dec 13 14:13:36.000850 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:13:36.000867 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 14:13:36.000883 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:13:36.000900 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 14:13:36.000917 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:13:36.000940 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 14:13:36.000957 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:13:36.000974 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:13:36.000991 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:13:36.001008 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:13:36.001027 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:13:36.001044 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:13:36.001061 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:13:36.001078 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:13:36.001103 kernel: ACPI: Interpreter enabled
Dec 13 14:13:36.001122 kernel: ACPI: Using GIC for interrupt routing
Dec 13 14:13:36.001139 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 14:13:36.001156 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Dec 13 14:13:36.001596 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:13:36.001802 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 14:13:36.001988 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 14:13:36.002181 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Dec 13 14:13:36.002420 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Dec 13 14:13:36.002452 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 13 14:13:36.002470 kernel: acpiphp: Slot [1] registered
Dec 13 14:13:36.002486 kernel: acpiphp: Slot [2] registered
Dec 13 14:13:36.002503 kernel: acpiphp: Slot [3] registered
Dec 13 14:13:36.002519 kernel: acpiphp: Slot [4] registered
Dec 13 14:13:36.003821 kernel: acpiphp: Slot [5] registered
Dec 13 14:13:36.003839 kernel: acpiphp: Slot [6] registered
Dec 13 14:13:36.003865 kernel: acpiphp: Slot [7] registered
Dec 13 14:13:36.003882 kernel: acpiphp: Slot [8] registered
Dec 13 14:13:36.003898 kernel: acpiphp: Slot [9] registered
Dec 13 14:13:36.003914 kernel: acpiphp: Slot [10] registered
Dec 13 14:13:36.003930 kernel: acpiphp: Slot [11] registered
Dec 13 14:13:36.003946 kernel: acpiphp: Slot [12] registered
Dec 13 14:13:36.003963 kernel: acpiphp: Slot [13] registered
Dec 13 14:13:36.003979 kernel: acpiphp: Slot [14] registered
Dec 13 14:13:36.003995 kernel: acpiphp: Slot [15] registered
Dec 13 14:13:36.004011 kernel: acpiphp: Slot [16] registered
Dec 13 14:13:36.004031 kernel: acpiphp: Slot [17] registered
Dec 13 14:13:36.004047 kernel: acpiphp: Slot [18] registered
Dec 13 14:13:36.004063 kernel: acpiphp: Slot [19] registered
Dec 13 14:13:36.004079 kernel: acpiphp: Slot [20] registered
Dec 13 14:13:36.004095 kernel: acpiphp: Slot [21] registered
Dec 13 14:13:36.004111 kernel: acpiphp: Slot [22] registered
Dec 13 14:13:36.004127 kernel: acpiphp: Slot [23] registered
Dec 13 14:13:36.004143 kernel: acpiphp: Slot [24] registered
Dec 13 14:13:36.004158 kernel: acpiphp: Slot [25] registered
Dec 13 14:13:36.004178 kernel: acpiphp: Slot [26] registered
Dec 13 14:13:36.004194 kernel: acpiphp: Slot [27] registered
Dec 13 14:13:36.004210 kernel: acpiphp: Slot [28] registered
Dec 13 14:13:36.004226 kernel: acpiphp: Slot [29] registered
Dec 13 14:13:36.004242 kernel: acpiphp: Slot [30] registered
Dec 13 14:13:36.004258 kernel: acpiphp: Slot [31] registered
Dec 13 14:13:36.004274 kernel: PCI host bridge to bus 0000:00
Dec 13 14:13:36.011654 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 13 14:13:36.011878 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 14:13:36.012089 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 13 14:13:36.012267 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Dec 13 14:13:36.012510 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Dec 13 14:13:36.012730 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Dec 13 14:13:36.012941 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Dec 13 14:13:36.013181 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 14:13:36.017923 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Dec 13 14:13:36.018172 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 14:13:36.018430 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 14:13:36.018638 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Dec 13 14:13:36.018858 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Dec 13 14:13:36.019109 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Dec 13 14:13:36.019370 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 14:13:36.019618 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Dec 13 14:13:36.019831 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Dec 13 14:13:36.020051 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Dec 13 14:13:36.020276 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Dec 13 14:13:36.020560 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Dec 13 14:13:36.020758 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 13 14:13:36.020934 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 14:13:36.021127 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 13 14:13:36.021151 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 14:13:36.021168 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 14:13:36.021185 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 14:13:36.021202 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 14:13:36.021219 kernel: iommu: Default domain type: Translated
Dec 13 14:13:36.021235 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 14:13:36.021251 kernel: vgaarb: loaded
Dec 13 14:13:36.021268 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:13:36.021361 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:13:36.021383 kernel: PTP clock support registered
Dec 13 14:13:36.021400 kernel: Registered efivars operations
Dec 13 14:13:36.021417 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 14:13:36.021434 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:13:36.021450 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:13:36.021467 kernel: pnp: PnP ACPI init
Dec 13 14:13:36.021695 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 13 14:13:36.021731 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 14:13:36.021748 kernel: NET: Registered PF_INET protocol family
Dec 13 14:13:36.021765 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:13:36.021782 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:13:36.021799 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:13:36.021828 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:13:36.021852 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:13:36.021870 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:13:36.021886 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:13:36.021908 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:13:36.021924 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:13:36.021940 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:13:36.021957 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Dec 13 14:13:36.021974 kernel: kvm [1]: HYP mode not available
Dec 13 14:13:36.021990 kernel: Initialise system trusted keyrings
Dec 13 14:13:36.022007 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:13:36.022023 kernel: Key type asymmetric registered
Dec 13 14:13:36.022039 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:13:36.022060 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:13:36.022077 kernel: io scheduler mq-deadline registered
Dec 13 14:13:36.022093 kernel: io scheduler kyber registered
Dec 13 14:13:36.022109 kernel: io scheduler bfq registered
Dec 13 14:13:36.044068 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 13 14:13:36.044133 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 14:13:36.044152 kernel: ACPI: button: Power Button [PWRB]
Dec 13 14:13:36.044170 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 13 14:13:36.044198 kernel: ACPI: button: Sleep Button [SLPB]
Dec 13 14:13:36.044215 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:13:36.044233 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 14:13:36.044484 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 13 14:13:36.044511 kernel: printk: console [ttyS0] disabled
Dec 13 14:13:36.044528 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 13 14:13:36.044545 kernel: printk: console [ttyS0] enabled
Dec 13 14:13:36.044561 kernel: printk: bootconsole [uart0] disabled
Dec 13 14:13:36.044577 kernel: thunder_xcv, ver 1.0
Dec 13 14:13:36.044599 kernel: thunder_bgx, ver 1.0
Dec 13 14:13:36.044615 kernel: nicpf, ver 1.0
Dec 13 14:13:36.044631 kernel: nicvf, ver 1.0
Dec 13 14:13:36.044840 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 14:13:36.045022 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:13:35 UTC (1734099215)
Dec 13 14:13:36.045046 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:13:36.045063 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:13:36.045079 kernel: Segment Routing with IPv6
Dec 13 14:13:36.045100 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:13:36.045116 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:13:36.045132 kernel: Key type dns_resolver registered
Dec 13 14:13:36.045149 kernel: registered taskstats version 1
Dec 13 14:13:36.045165 kernel: Loading compiled-in X.509 certificates
Dec 13 14:13:36.045182 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a'
Dec 13 14:13:36.045198 kernel: Key type .fscrypt registered
Dec 13 14:13:36.045214 kernel: Key type fscrypt-provisioning registered
Dec 13 14:13:36.045230 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:13:36.045246 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:13:36.045266 kernel: ima: No architecture policies found
Dec 13 14:13:36.045282 kernel: clk: Disabling unused clocks
Dec 13 14:13:36.045319 kernel: Freeing unused kernel memory: 36416K
Dec 13 14:13:36.045336 kernel: Run /init as init process
Dec 13 14:13:36.045353 kernel: with arguments:
Dec 13 14:13:36.045369 kernel: /init
Dec 13 14:13:36.045385 kernel: with environment:
Dec 13 14:13:36.045401 kernel: HOME=/
Dec 13 14:13:36.045417 kernel: TERM=linux
Dec 13 14:13:36.045439 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:13:36.045460 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:13:36.045481 systemd[1]: Detected virtualization amazon.
Dec 13 14:13:36.045499 systemd[1]: Detected architecture arm64.
Dec 13 14:13:36.045517 systemd[1]: Running in initrd.
Dec 13 14:13:36.045534 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:13:36.045551 systemd[1]: Hostname set to .
Dec 13 14:13:36.045574 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:13:36.045591 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:13:36.045609 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:13:36.045626 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:13:36.045644 systemd[1]: Reached target paths.target.
Dec 13 14:13:36.045661 systemd[1]: Reached target slices.target.
Dec 13 14:13:36.045679 systemd[1]: Reached target swap.target.
Dec 13 14:13:36.045696 systemd[1]: Reached target timers.target.
Dec 13 14:13:36.045718 systemd[1]: Listening on iscsid.socket.
Dec 13 14:13:36.045736 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:13:36.045754 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:13:36.045772 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:13:36.045790 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:13:36.045808 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:13:36.045827 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:13:36.045844 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:13:36.045866 systemd[1]: Reached target sockets.target.
Dec 13 14:13:36.045886 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:13:36.045903 systemd[1]: Finished network-cleanup.service.
Dec 13 14:13:36.045922 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:13:36.045939 systemd[1]: Starting systemd-journald.service...
Dec 13 14:13:36.045957 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:13:36.045975 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:13:36.045993 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:13:36.046011 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:13:36.046032 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:13:36.046051 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:13:36.046069 kernel: audit: type=1130 audit(1734099215.988:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.046088 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:13:36.046105 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:13:36.046140 systemd-journald[309]: Journal started
Dec 13 14:13:36.046240 systemd-journald[309]: Runtime Journal (/run/log/journal/ec208ff870e3b3b153c1d59f390a2bbf) is 8.0M, max 75.4M, 67.4M free.
Dec 13 14:13:35.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:35.967379 systemd-modules-load[310]: Inserted module 'overlay'
Dec 13 14:13:36.052434 systemd[1]: Started systemd-journald.service.
Dec 13 14:13:36.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.053186 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:13:36.074240 kernel: audit: type=1130 audit(1734099216.051:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.074314 kernel: audit: type=1130 audit(1734099216.059:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.061873 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:13:36.077164 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:13:36.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.090312 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:13:36.090359 kernel: audit: type=1130 audit(1734099216.072:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.095916 systemd-resolved[311]: Positive Trust Anchors:
Dec 13 14:13:36.095945 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:13:36.096001 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:13:36.119323 kernel: Bridge firewalling registered
Dec 13 14:13:36.119405 systemd-modules-load[310]: Inserted module 'br_netfilter'
Dec 13 14:13:36.143167 dracut-cmdline[325]: dracut-dracut-053
Dec 13 14:13:36.149820 dracut-cmdline[325]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:13:36.169336 kernel: SCSI subsystem initialized
Dec 13 14:13:36.185896 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:13:36.185970 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:13:36.190711 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:13:36.195620 systemd-modules-load[310]: Inserted module 'dm_multipath'
Dec 13 14:13:36.198952 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:13:36.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.208971 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:13:36.219335 kernel: audit: type=1130 audit(1734099216.198:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.236448 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:13:36.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.247335 kernel: audit: type=1130 audit(1734099216.236:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:36.344317 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:13:36.364368 kernel: iscsi: registered transport (tcp) Dec 13 14:13:36.393938 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:13:36.394008 kernel: QLogic iSCSI HBA Driver Dec 13 14:13:36.568225 systemd-resolved[311]: Defaulting to hostname 'linux'. Dec 13 14:13:36.570093 kernel: random: crng init done Dec 13 14:13:36.571927 systemd[1]: Started systemd-resolved.service. Dec 13 14:13:36.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:36.573923 systemd[1]: Reached target nss-lookup.target. Dec 13 14:13:36.585387 kernel: audit: type=1130 audit(1734099216.572:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:36.606662 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:13:36.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:36.609995 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:13:36.620534 kernel: audit: type=1130 audit(1734099216.605:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:36.680345 kernel: raid6: neonx8 gen() 6313 MB/s Dec 13 14:13:36.698332 kernel: raid6: neonx8 xor() 4795 MB/s Dec 13 14:13:36.716371 kernel: raid6: neonx4 gen() 6448 MB/s Dec 13 14:13:36.734347 kernel: raid6: neonx4 xor() 4966 MB/s Dec 13 14:13:36.752377 kernel: raid6: neonx2 gen() 5693 MB/s Dec 13 14:13:36.770372 kernel: raid6: neonx2 xor() 4537 MB/s Dec 13 14:13:36.788335 kernel: raid6: neonx1 gen() 4428 MB/s Dec 13 14:13:36.806336 kernel: raid6: neonx1 xor() 3684 MB/s Dec 13 14:13:36.824339 kernel: raid6: int64x8 gen() 3382 MB/s Dec 13 14:13:36.842342 kernel: raid6: int64x8 xor() 2081 MB/s Dec 13 14:13:36.860352 kernel: raid6: int64x4 gen() 3764 MB/s Dec 13 14:13:36.878336 kernel: raid6: int64x4 xor() 2186 MB/s Dec 13 14:13:36.896355 kernel: raid6: int64x2 gen() 3561 MB/s Dec 13 14:13:36.914382 kernel: raid6: int64x2 xor() 1942 MB/s Dec 13 14:13:36.932339 kernel: raid6: int64x1 gen() 2743 MB/s Dec 13 14:13:36.951501 kernel: raid6: int64x1 xor() 1446 MB/s Dec 13 14:13:36.951574 kernel: raid6: using algorithm neonx4 gen() 6448 MB/s Dec 13 14:13:36.951599 kernel: raid6: .... xor() 4966 MB/s, rmw enabled Dec 13 14:13:36.953134 kernel: raid6: using neon recovery algorithm Dec 13 14:13:36.974689 kernel: xor: measuring software checksum speed Dec 13 14:13:36.974833 kernel: 8regs : 9200 MB/sec Dec 13 14:13:36.976466 kernel: 32regs : 10636 MB/sec Dec 13 14:13:36.978314 kernel: arm64_neon : 9554 MB/sec Dec 13 14:13:36.978348 kernel: xor: using function: 32regs (10636 MB/sec) Dec 13 14:13:37.076340 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Dec 13 14:13:37.095351 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:13:37.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:37.097000 audit: BPF prog-id=7 op=LOAD Dec 13 14:13:37.104000 audit: BPF prog-id=8 op=LOAD Dec 13 14:13:37.107044 kernel: audit: type=1130 audit(1734099217.095:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:37.106499 systemd[1]: Starting systemd-udevd.service... Dec 13 14:13:37.134365 systemd-udevd[507]: Using default interface naming scheme 'v252'. Dec 13 14:13:37.145370 systemd[1]: Started systemd-udevd.service. Dec 13 14:13:37.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:37.153009 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:13:37.189893 dracut-pre-trigger[517]: rd.md=0: removing MD RAID activation Dec 13 14:13:37.263513 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:13:37.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:37.267963 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:13:37.373892 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:13:37.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:37.525607 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 13 14:13:37.525722 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 14:13:37.525748 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Dec 13 14:13:37.549195 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 14:13:37.549502 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 14:13:37.549728 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 14:13:37.549929 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 14:13:37.550121 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:13:37.550146 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:dd:1e:85:4d:09 Dec 13 14:13:37.550369 kernel: GPT:9289727 != 16777215 Dec 13 14:13:37.550395 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:13:37.552329 kernel: GPT:9289727 != 16777215 Dec 13 14:13:37.555165 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:13:37.555212 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:13:37.560561 (udev-worker)[564]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:13:37.646393 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (565) Dec 13 14:13:37.696046 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:13:37.773954 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:13:37.793088 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:13:37.797677 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:13:37.822022 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:13:37.836585 systemd[1]: Starting disk-uuid.service... Dec 13 14:13:37.848925 disk-uuid[673]: Primary Header is updated. 
Dec 13 14:13:37.848925 disk-uuid[673]: Secondary Entries is updated. Dec 13 14:13:37.848925 disk-uuid[673]: Secondary Header is updated. Dec 13 14:13:37.858323 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:13:37.868375 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:13:38.874337 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:13:38.874617 disk-uuid[674]: The operation has completed successfully. Dec 13 14:13:39.075897 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:13:39.076524 systemd[1]: Finished disk-uuid.service. Dec 13 14:13:39.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.093220 systemd[1]: Starting verity-setup.service... Dec 13 14:13:39.132325 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 14:13:39.228085 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:13:39.233341 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:13:39.237061 systemd[1]: Finished verity-setup.service. Dec 13 14:13:39.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.324652 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:13:39.325375 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:13:39.325853 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:13:39.328820 systemd[1]: Starting ignition-setup.service... Dec 13 14:13:39.343415 systemd[1]: Starting parse-ip-for-networkd.service... 
Dec 13 14:13:39.369959 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:13:39.370028 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:13:39.372044 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:13:39.380350 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:13:39.399973 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:13:39.440083 systemd[1]: Finished ignition-setup.service. Dec 13 14:13:39.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.446359 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:13:39.524215 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:13:39.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.526000 audit: BPF prog-id=9 op=LOAD Dec 13 14:13:39.528934 systemd[1]: Starting systemd-networkd.service... Dec 13 14:13:39.578403 systemd-networkd[1102]: lo: Link UP Dec 13 14:13:39.578434 systemd-networkd[1102]: lo: Gained carrier Dec 13 14:13:39.582773 systemd-networkd[1102]: Enumeration completed Dec 13 14:13:39.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.583388 systemd-networkd[1102]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:13:39.583622 systemd[1]: Started systemd-networkd.service. Dec 13 14:13:39.585594 systemd[1]: Reached target network.target. Dec 13 14:13:39.588933 systemd[1]: Starting iscsiuio.service... 
Dec 13 14:13:39.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.592421 systemd-networkd[1102]: eth0: Link UP Dec 13 14:13:39.621748 iscsid[1107]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:13:39.621748 iscsid[1107]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 14:13:39.621748 iscsid[1107]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:13:39.621748 iscsid[1107]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:13:39.621748 iscsid[1107]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:13:39.621748 iscsid[1107]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:13:39.621748 iscsid[1107]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:13:39.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.592433 systemd-networkd[1102]: eth0: Gained carrier Dec 13 14:13:39.604241 systemd[1]: Started iscsiuio.service. Dec 13 14:13:39.608809 systemd[1]: Starting iscsid.service... Dec 13 14:13:39.622069 systemd[1]: Started iscsid.service. Dec 13 14:13:39.626098 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:13:39.640626 systemd-networkd[1102]: eth0: DHCPv4 address 172.31.20.19/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 14:13:39.678075 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:13:39.700283 kernel: kauditd_printk_skb: 14 callbacks suppressed Dec 13 14:13:39.700357 kernel: audit: type=1130 audit(1734099219.678:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.679978 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:13:39.691033 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:13:39.692775 systemd[1]: Reached target remote-fs.target. Dec 13 14:13:39.700118 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:13:39.724687 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:13:39.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.735336 kernel: audit: type=1130 audit(1734099219.726:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:40.451123 ignition[1051]: Ignition 2.14.0 Dec 13 14:13:40.451159 ignition[1051]: Stage: fetch-offline Dec 13 14:13:40.453189 ignition[1051]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:40.453463 ignition[1051]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:40.473107 ignition[1051]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:40.476174 ignition[1051]: Ignition finished successfully Dec 13 14:13:40.479620 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:13:40.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.490365 kernel: audit: type=1130 audit(1734099220.480:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.491735 systemd[1]: Starting ignition-fetch.service... 
Dec 13 14:13:40.508191 ignition[1126]: Ignition 2.14.0 Dec 13 14:13:40.509822 ignition[1126]: Stage: fetch Dec 13 14:13:40.511392 ignition[1126]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:40.513769 ignition[1126]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:40.525822 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:40.528557 ignition[1126]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:40.539022 ignition[1126]: INFO : PUT result: OK Dec 13 14:13:40.543587 ignition[1126]: DEBUG : parsed url from cmdline: "" Dec 13 14:13:40.545513 ignition[1126]: INFO : no config URL provided Dec 13 14:13:40.547188 ignition[1126]: INFO : reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:13:40.549596 ignition[1126]: INFO : no config at "/usr/lib/ignition/user.ign" Dec 13 14:13:40.551715 ignition[1126]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:40.554593 ignition[1126]: INFO : PUT result: OK Dec 13 14:13:40.556156 ignition[1126]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 14:13:40.558519 ignition[1126]: INFO : GET result: OK Dec 13 14:13:40.572191 ignition[1126]: DEBUG : parsing config with SHA512: 9f1dfdba7e913b6a8ec2e5b003d80a1f715868144d7b40fb326cf6577f96fe1d3ca4a210b185000eea7977a77333d224e8fe6763b910f06932825c12a09147bb Dec 13 14:13:40.569743 ignition[1126]: fetch: fetch complete Dec 13 14:13:40.586563 kernel: audit: type=1130 audit(1734099220.577:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:40.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.568469 unknown[1126]: fetched base config from "system" Dec 13 14:13:40.569759 ignition[1126]: fetch: fetch passed Dec 13 14:13:40.568491 unknown[1126]: fetched base config from "system" Dec 13 14:13:40.569894 ignition[1126]: Ignition finished successfully Dec 13 14:13:40.568507 unknown[1126]: fetched user config from "aws" Dec 13 14:13:40.577031 systemd[1]: Finished ignition-fetch.service. Dec 13 14:13:40.596055 systemd[1]: Starting ignition-kargs.service... Dec 13 14:13:40.615014 ignition[1132]: Ignition 2.14.0 Dec 13 14:13:40.615622 ignition[1132]: Stage: kargs Dec 13 14:13:40.615938 ignition[1132]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:40.615990 ignition[1132]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:40.634980 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:40.637360 ignition[1132]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:40.640358 ignition[1132]: INFO : PUT result: OK Dec 13 14:13:40.645789 ignition[1132]: kargs: kargs passed Dec 13 14:13:40.645947 ignition[1132]: Ignition finished successfully Dec 13 14:13:40.651092 systemd[1]: Finished ignition-kargs.service. Dec 13 14:13:40.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.656380 systemd[1]: Starting ignition-disks.service... 
Dec 13 14:13:40.664388 kernel: audit: type=1130 audit(1734099220.652:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.678684 ignition[1138]: Ignition 2.14.0 Dec 13 14:13:40.678722 ignition[1138]: Stage: disks Dec 13 14:13:40.679031 ignition[1138]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:40.679093 ignition[1138]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:40.695701 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:40.698391 ignition[1138]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:40.701872 ignition[1138]: INFO : PUT result: OK Dec 13 14:13:40.706756 ignition[1138]: disks: disks passed Dec 13 14:13:40.706976 ignition[1138]: Ignition finished successfully Dec 13 14:13:40.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.726105 kernel: audit: type=1130 audit(1734099220.715:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.709364 systemd[1]: Finished ignition-disks.service. Dec 13 14:13:40.724395 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:13:40.726205 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:13:40.728985 systemd[1]: Reached target local-fs.target. Dec 13 14:13:40.730503 systemd[1]: Reached target sysinit.target. Dec 13 14:13:40.733481 systemd[1]: Reached target basic.target. Dec 13 14:13:40.736457 systemd[1]: Starting systemd-fsck-root.service... 
Dec 13 14:13:40.783500 systemd-fsck[1146]: ROOT: clean, 621/553520 files, 56020/553472 blocks Dec 13 14:13:40.790186 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:13:40.802654 kernel: audit: type=1130 audit(1734099220.790:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.795529 systemd[1]: Mounting sysroot.mount... Dec 13 14:13:40.820346 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:13:40.822884 systemd[1]: Mounted sysroot.mount. Dec 13 14:13:40.828780 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:13:40.845228 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:13:40.851713 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:13:40.851795 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:13:40.851855 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:13:40.858146 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:13:40.877597 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:13:40.880857 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 14:13:40.903635 initrd-setup-root[1168]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:13:40.906260 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1163) Dec 13 14:13:40.912397 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:13:40.912466 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:13:40.914384 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:13:40.921318 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:13:40.925125 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:13:40.930891 initrd-setup-root[1194]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:13:40.939956 initrd-setup-root[1202]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:13:40.948769 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:13:41.191527 systemd-networkd[1102]: eth0: Gained IPv6LL Dec 13 14:13:41.206226 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:13:41.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:41.210724 systemd[1]: Starting ignition-mount.service... Dec 13 14:13:41.227135 kernel: audit: type=1130 audit(1734099221.206:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:41.221883 systemd[1]: Starting sysroot-boot.service... Dec 13 14:13:41.239577 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:13:41.239761 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:13:41.272775 systemd[1]: Finished sysroot-boot.service. 
Dec 13 14:13:41.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:41.282329 ignition[1229]: INFO : Ignition 2.14.0 Dec 13 14:13:41.282329 ignition[1229]: INFO : Stage: mount Dec 13 14:13:41.282329 ignition[1229]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:41.282329 ignition[1229]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:41.298045 ignition[1229]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:41.298045 ignition[1229]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:41.302627 kernel: audit: type=1130 audit(1734099221.274:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:41.303838 ignition[1229]: INFO : PUT result: OK Dec 13 14:13:41.309095 ignition[1229]: INFO : mount: mount passed Dec 13 14:13:41.310711 ignition[1229]: INFO : Ignition finished successfully Dec 13 14:13:41.313691 systemd[1]: Finished ignition-mount.service. Dec 13 14:13:41.336694 kernel: audit: type=1130 audit(1734099221.314:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:41.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:41.317789 systemd[1]: Starting ignition-files.service... Dec 13 14:13:41.344193 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Dec 13 14:13:41.362349 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1238) Dec 13 14:13:41.368196 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:13:41.368338 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:13:41.368368 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:13:41.377355 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:13:41.381963 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:13:41.400743 ignition[1257]: INFO : Ignition 2.14.0 Dec 13 14:13:41.400743 ignition[1257]: INFO : Stage: files Dec 13 14:13:41.404441 ignition[1257]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:41.404441 ignition[1257]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:41.420243 ignition[1257]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:41.422680 ignition[1257]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:41.425187 ignition[1257]: INFO : PUT result: OK Dec 13 14:13:41.431321 ignition[1257]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:13:41.436690 ignition[1257]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:13:41.439393 ignition[1257]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:13:41.497538 ignition[1257]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:13:41.500523 ignition[1257]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:13:41.503892 unknown[1257]: wrote ssh authorized keys file for user: core Dec 13 14:13:41.506282 ignition[1257]: INFO : files: ensureUsers: op(2): [finished] adding ssh 
keys to user "core" Dec 13 14:13:41.518836 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:13:41.522507 ignition[1257]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 14:13:41.637993 ignition[1257]: INFO : GET result: OK Dec 13 14:13:41.784960 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:13:41.788661 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:13:41.788661 ignition[1257]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:41.809114 ignition[1257]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2950827028" Dec 13 14:13:41.815511 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1262) Dec 13 14:13:41.815552 ignition[1257]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2950827028": device or resource busy Dec 13 14:13:41.815552 ignition[1257]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2950827028", trying btrfs: device or resource busy Dec 13 14:13:41.815552 ignition[1257]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2950827028" Dec 13 14:13:41.824737 ignition[1257]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2950827028" Dec 13 14:13:41.845353 ignition[1257]: INFO : op(3): [started] unmounting "/mnt/oem2950827028" Dec 13 14:13:41.847819 ignition[1257]: INFO : op(3): [finished] unmounting "/mnt/oem2950827028" Dec 13 14:13:41.850225 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:13:41.853398 ignition[1257]: INFO : 
files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:13:41.853398 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:13:41.853398 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:13:41.863018 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:13:41.863018 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:13:41.863018 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:13:41.863018 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:13:41.863018 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:13:41.863018 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:13:41.863018 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:13:41.863018 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:13:41.863018 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:13:41.863018 ignition[1257]: INFO : files: 
createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:13:41.863018 ignition[1257]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:41.901899 systemd[1]: mnt-oem2950827028.mount: Deactivated successfully. Dec 13 14:13:41.917329 ignition[1257]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3586910014" Dec 13 14:13:41.917329 ignition[1257]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3586910014": device or resource busy Dec 13 14:13:41.917329 ignition[1257]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3586910014", trying btrfs: device or resource busy Dec 13 14:13:41.917329 ignition[1257]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3586910014" Dec 13 14:13:41.931172 ignition[1257]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3586910014" Dec 13 14:13:41.931172 ignition[1257]: INFO : op(6): [started] unmounting "/mnt/oem3586910014" Dec 13 14:13:41.931172 ignition[1257]: INFO : op(6): [finished] unmounting "/mnt/oem3586910014" Dec 13 14:13:41.931172 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:13:41.931172 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:13:41.931172 ignition[1257]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:41.961403 ignition[1257]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3901852084" Dec 13 14:13:41.970548 ignition[1257]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3901852084": device or resource busy Dec 13 14:13:41.970548 ignition[1257]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at 
"/mnt/oem3901852084", trying btrfs: device or resource busy Dec 13 14:13:41.970548 ignition[1257]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3901852084" Dec 13 14:13:41.970548 ignition[1257]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3901852084" Dec 13 14:13:41.970548 ignition[1257]: INFO : op(9): [started] unmounting "/mnt/oem3901852084" Dec 13 14:13:41.970548 ignition[1257]: INFO : op(9): [finished] unmounting "/mnt/oem3901852084" Dec 13 14:13:41.970548 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:13:41.970548 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:13:41.970548 ignition[1257]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:41.975851 systemd[1]: mnt-oem3901852084.mount: Deactivated successfully. 
Dec 13 14:13:42.018595 ignition[1257]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1461281862" Dec 13 14:13:42.021347 ignition[1257]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1461281862": device or resource busy Dec 13 14:13:42.021347 ignition[1257]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1461281862", trying btrfs: device or resource busy Dec 13 14:13:42.021347 ignition[1257]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1461281862" Dec 13 14:13:42.030652 ignition[1257]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1461281862" Dec 13 14:13:42.030652 ignition[1257]: INFO : op(c): [started] unmounting "/mnt/oem1461281862" Dec 13 14:13:42.036791 ignition[1257]: INFO : op(c): [finished] unmounting "/mnt/oem1461281862" Dec 13 14:13:42.036791 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:13:42.036791 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:13:42.036791 ignition[1257]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 14:13:42.498441 ignition[1257]: INFO : GET result: OK Dec 13 14:13:43.037494 ignition[1257]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:13:43.037494 ignition[1257]: INFO : files: op(f): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:13:43.044651 ignition[1257]: INFO : files: op(f): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:13:43.044651 ignition[1257]: INFO : files: op(10): [started] processing unit "amazon-ssm-agent.service" Dec 13 
14:13:43.044651 ignition[1257]: INFO : files: op(10): op(11): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:13:43.044651 ignition[1257]: INFO : files: op(10): op(11): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:13:43.044651 ignition[1257]: INFO : files: op(10): [finished] processing unit "amazon-ssm-agent.service" Dec 13 14:13:43.044651 ignition[1257]: INFO : files: op(12): [started] processing unit "nvidia.service" Dec 13 14:13:43.044651 ignition[1257]: INFO : files: op(12): [finished] processing unit "nvidia.service" Dec 13 14:13:43.044651 ignition[1257]: INFO : files: op(13): [started] processing unit "prepare-helm.service" Dec 13 14:13:43.044651 ignition[1257]: INFO : files: op(13): op(14): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:13:43.044651 ignition[1257]: INFO : files: op(13): op(14): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:13:43.044651 ignition[1257]: INFO : files: op(13): [finished] processing unit "prepare-helm.service" Dec 13 14:13:43.044651 ignition[1257]: INFO : files: op(15): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:13:43.080050 ignition[1257]: INFO : files: op(15): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:13:43.080050 ignition[1257]: INFO : files: op(16): [started] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:13:43.080050 ignition[1257]: INFO : files: op(16): [finished] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:13:43.080050 ignition[1257]: INFO : files: op(17): [started] setting preset to enabled for "nvidia.service" Dec 13 14:13:43.080050 ignition[1257]: INFO : files: op(17): [finished] setting preset to enabled for 
"nvidia.service" Dec 13 14:13:43.080050 ignition[1257]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:13:43.080050 ignition[1257]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:13:43.099936 ignition[1257]: INFO : files: createResultFile: createFiles: op(19): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:13:43.099936 ignition[1257]: INFO : files: createResultFile: createFiles: op(19): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:13:43.099936 ignition[1257]: INFO : files: files passed Dec 13 14:13:43.099936 ignition[1257]: INFO : Ignition finished successfully Dec 13 14:13:43.115110 systemd[1]: Finished ignition-files.service. Dec 13 14:13:43.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.127581 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:13:43.129675 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:13:43.131728 systemd[1]: Starting ignition-quench.service... Dec 13 14:13:43.152032 initrd-setup-root-after-ignition[1282]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:13:43.152499 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:13:43.155530 systemd[1]: Finished ignition-quench.service. Dec 13 14:13:43.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.160501 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Dec 13 14:13:43.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.165891 systemd[1]: Reached target ignition-complete.target. Dec 13 14:13:43.169048 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:13:43.211145 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:13:43.211664 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:13:43.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.216821 systemd[1]: Reached target initrd-fs.target. Dec 13 14:13:43.220020 systemd[1]: Reached target initrd.target. Dec 13 14:13:43.236395 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:13:43.240442 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:13:43.266086 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:13:43.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.270979 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:13:43.293267 systemd[1]: Stopped target nss-lookup.target. 
Dec 13 14:13:43.296814 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:13:43.299220 systemd[1]: Stopped target timers.target. Dec 13 14:13:43.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.301871 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:13:43.302196 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:13:43.304763 systemd[1]: Stopped target initrd.target. Dec 13 14:13:43.306853 systemd[1]: Stopped target basic.target. Dec 13 14:13:43.308802 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:13:43.312195 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:13:43.315029 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:13:43.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.317141 systemd[1]: Stopped target remote-fs.target. Dec 13 14:13:43.319737 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:13:43.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.322361 systemd[1]: Stopped target sysinit.target. 
Dec 13 14:13:43.324788 systemd[1]: Stopped target local-fs.target. Dec 13 14:13:43.327659 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:13:43.356360 iscsid[1107]: iscsid shutting down. Dec 13 14:13:43.329696 systemd[1]: Stopped target swap.target. Dec 13 14:13:43.331825 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:13:43.332131 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:13:43.334513 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:13:43.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.336538 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:13:43.336831 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:13:43.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.339938 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:13:43.340244 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:13:43.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.342692 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:13:43.342991 systemd[1]: Stopped ignition-files.service. Dec 13 14:13:43.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.348102 systemd[1]: Stopping ignition-mount.service... Dec 13 14:13:43.359176 systemd[1]: Stopping iscsid.service... 
Dec 13 14:13:43.360713 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:13:43.362572 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:13:43.367627 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:13:43.380744 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:13:43.381097 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:13:43.385771 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:13:43.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.387579 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:13:43.405072 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:13:43.405396 systemd[1]: Stopped iscsid.service. Dec 13 14:13:43.428902 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:13:43.430275 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:13:43.430532 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:13:43.437611 systemd[1]: Stopping iscsiuio.service... Dec 13 14:13:43.453078 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:13:43.455147 systemd[1]: Stopped iscsiuio.service. Dec 13 14:13:43.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:43.465718 ignition[1295]: INFO : Ignition 2.14.0 Dec 13 14:13:43.467666 ignition[1295]: INFO : Stage: umount Dec 13 14:13:43.469611 ignition[1295]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:43.472229 ignition[1295]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:43.488704 ignition[1295]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:43.488704 ignition[1295]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:43.493563 ignition[1295]: INFO : PUT result: OK Dec 13 14:13:43.499538 ignition[1295]: INFO : umount: umount passed Dec 13 14:13:43.501950 ignition[1295]: INFO : Ignition finished successfully Dec 13 14:13:43.504012 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:13:43.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.504241 systemd[1]: Stopped ignition-mount.service. Dec 13 14:13:43.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.506410 systemd[1]: ignition-disks.service: Deactivated successfully. 
Dec 13 14:13:43.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.506510 systemd[1]: Stopped ignition-disks.service. Dec 13 14:13:43.510698 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:13:43.510834 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:13:43.512930 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:13:43.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.513082 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:13:43.515036 systemd[1]: Stopped target network.target. Dec 13 14:13:43.515722 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:13:43.515838 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:13:43.516083 systemd[1]: Stopped target paths.target. Dec 13 14:13:43.516282 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:13:43.522483 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:13:43.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.522720 systemd[1]: Stopped target slices.target. Dec 13 14:13:43.528550 systemd[1]: Stopped target sockets.target. Dec 13 14:13:43.530771 systemd[1]: iscsid.socket: Deactivated successfully. 
Dec 13 14:13:43.566000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:13:43.530885 systemd[1]: Closed iscsid.socket. Dec 13 14:13:43.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.533716 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:13:43.533870 systemd[1]: Closed iscsiuio.socket. Dec 13 14:13:43.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.535918 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:13:43.536059 systemd[1]: Stopped ignition-setup.service. Dec 13 14:13:43.539019 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:13:43.542446 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:13:43.543877 systemd-networkd[1102]: eth0: DHCPv6 lease lost Dec 13 14:13:43.595000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:13:43.547758 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:13:43.548503 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:13:43.557507 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:13:43.557835 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:13:43.568829 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:13:43.568912 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:13:43.573338 systemd[1]: Stopping network-cleanup.service... Dec 13 14:13:43.573462 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Dec 13 14:13:43.573560 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:13:43.573968 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:13:43.574051 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:13:43.577763 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:13:43.577900 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:13:43.582409 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:13:43.623368 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:13:43.626953 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:13:43.627266 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:13:43.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.637013 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:13:43.637409 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:13:43.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.642096 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:13:43.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.642212 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:13:43.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.644932 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Dec 13 14:13:43.645014 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:13:43.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.647069 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:13:43.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:43.647177 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:13:43.652466 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:13:43.652582 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:13:43.656177 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:13:43.656448 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:13:43.660970 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Dec 13 14:13:43.661169 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:13:43.665248 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:13:43.675150 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:13:43.675456 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:13:43.682800 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:13:43.683045 systemd[1]: Stopped network-cleanup.service. Dec 13 14:13:43.688729 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:13:43.688914 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:13:43.690908 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:13:43.694525 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:13:43.737809 systemd[1]: Switching root. Dec 13 14:13:43.761071 systemd-journald[309]: Journal stopped Dec 13 14:13:51.140188 systemd-journald[309]: Received SIGTERM from PID 1 (systemd). Dec 13 14:13:51.140357 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:13:51.140407 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 14:13:51.140444 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:13:51.140486 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:13:51.140518 kernel: SELinux: policy capability open_perms=1 Dec 13 14:13:51.140555 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:13:51.140588 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:13:51.140619 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:13:51.140659 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:13:51.140690 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:13:51.140721 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:13:51.140755 kernel: kauditd_printk_skb: 42 callbacks suppressed Dec 13 14:13:51.140787 kernel: audit: type=1403 audit(1734099225.096:77): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:13:51.140819 systemd[1]: Successfully loaded SELinux policy in 118.119ms. Dec 13 14:13:51.140878 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 33.643ms. Dec 13 14:13:51.140913 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:13:51.140944 systemd[1]: Detected virtualization amazon. Dec 13 14:13:51.140976 systemd[1]: Detected architecture arm64. Dec 13 14:13:51.141007 systemd[1]: Detected first boot. Dec 13 14:13:51.141043 systemd[1]: Initializing machine ID from VM UUID. 
Dec 13 14:13:51.141078 kernel: audit: type=1400 audit(1734099225.429:78): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:13:51.141108 kernel: audit: type=1400 audit(1734099225.429:79): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:13:51.141139 kernel: audit: type=1334 audit(1734099225.436:80): prog-id=10 op=LOAD Dec 13 14:13:51.141166 kernel: audit: type=1334 audit(1734099225.436:81): prog-id=10 op=UNLOAD Dec 13 14:13:51.141198 kernel: audit: type=1334 audit(1734099225.442:82): prog-id=11 op=LOAD Dec 13 14:13:51.141228 kernel: audit: type=1334 audit(1734099225.442:83): prog-id=11 op=UNLOAD Dec 13 14:13:51.141263 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:13:51.141438 kernel: audit: type=1400 audit(1734099225.777:84): avc: denied { associate } for pid=1328 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:13:51.141485 kernel: audit: type=1300 audit(1734099225.777:84): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458ac a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1311 pid=1328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:13:51.141518 kernel: audit: type=1327 audit(1734099225.777:84): 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:13:51.141552 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:13:51.141588 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:13:51.141625 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:13:51.141664 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:13:51.141695 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 13 14:13:51.141728 kernel: audit: type=1334 audit(1734099230.740:86): prog-id=12 op=LOAD Dec 13 14:13:51.141759 kernel: audit: type=1334 audit(1734099230.740:87): prog-id=3 op=UNLOAD Dec 13 14:13:51.141789 kernel: audit: type=1334 audit(1734099230.742:88): prog-id=13 op=LOAD Dec 13 14:13:51.141840 kernel: audit: type=1334 audit(1734099230.745:89): prog-id=14 op=LOAD Dec 13 14:13:51.141874 kernel: audit: type=1334 audit(1734099230.745:90): prog-id=4 op=UNLOAD Dec 13 14:13:51.141909 kernel: audit: type=1334 audit(1734099230.745:91): prog-id=5 op=UNLOAD Dec 13 14:13:51.141942 kernel: audit: type=1334 audit(1734099230.749:92): prog-id=15 op=LOAD Dec 13 14:13:51.141977 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:13:51.142007 kernel: audit: type=1334 audit(1734099230.749:93): prog-id=12 op=UNLOAD Dec 13 14:13:51.142039 systemd[1]: Stopped initrd-switch-root.service. 
Dec 13 14:13:51.142071 kernel: audit: type=1334 audit(1734099230.752:94): prog-id=16 op=LOAD Dec 13 14:13:51.142098 kernel: audit: type=1334 audit(1734099230.754:95): prog-id=17 op=LOAD Dec 13 14:13:51.142135 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:13:51.142173 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:13:51.142204 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:13:51.142234 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 14:13:51.142264 systemd[1]: Created slice system-getty.slice. Dec 13 14:13:51.142322 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:13:51.142727 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:13:51.142771 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:13:51.142803 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:13:51.142842 systemd[1]: Created slice user.slice. Dec 13 14:13:51.142872 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:13:51.142904 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:13:51.142936 systemd[1]: Set up automount boot.automount. Dec 13 14:13:51.142967 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:13:51.142998 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:13:51.143028 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:13:51.143060 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:13:51.143090 systemd[1]: Reached target integritysetup.target. Dec 13 14:13:51.143124 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:13:51.143158 systemd[1]: Reached target remote-fs.target. Dec 13 14:13:51.143189 systemd[1]: Reached target slices.target. Dec 13 14:13:51.143224 systemd[1]: Reached target swap.target. Dec 13 14:13:51.143255 systemd[1]: Reached target torcx.target. Dec 13 14:13:51.143416 systemd[1]: Reached target veritysetup.target. 
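The `locksmithd.service` warnings earlier flag the deprecated cgroup-v1 directives `CPUShares=` and `MemoryLimit=`. `MemoryLimit=` maps directly to `MemoryMax=`; for CPU, a sketch of the proportional conversion systemd applies (assumption: default 1024 shares corresponds to the default `CPUWeight=100`, clamped to the documented 1..10000 range):

```python
def cpu_shares_to_weight(shares: int) -> int:
    # Proportional mapping: 1024 shares (cgroup-v1 default) -> weight 100
    # (cgroup-v2 default), clamped to CPUWeight's documented 1..10000 range.
    # This mirrors systemd's internal conversion but is a sketch, not its API.
    return max(1, min(10000, shares * 100 // 1024))

print(cpu_shares_to_weight(1024))  # 100
print(cpu_shares_to_weight(512))   # 50
```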
Dec 13 14:13:51.143452 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:13:51.143483 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:13:51.143512 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:13:51.143545 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:13:51.143579 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:13:51.143610 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:13:51.143640 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:13:51.143670 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:13:51.143699 systemd[1]: Mounting media.mount... Dec 13 14:13:51.143729 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:13:51.143758 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:13:51.143788 systemd[1]: Mounting tmp.mount... Dec 13 14:13:51.143817 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:13:51.143851 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:13:51.143881 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:13:51.143914 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:13:51.143947 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:13:51.143980 systemd[1]: Starting modprobe@drm.service... Dec 13 14:13:51.144015 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:13:51.144045 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:13:51.144078 systemd[1]: Starting modprobe@loop.service... Dec 13 14:13:51.144110 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:13:51.144148 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:13:51.144180 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:13:51.144210 kernel: fuse: init (API version 7.34) Dec 13 14:13:51.144241 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Dec 13 14:13:51.144272 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:13:51.144334 systemd[1]: Stopped systemd-journald.service. Dec 13 14:13:51.144369 kernel: loop: module loaded Dec 13 14:13:51.144399 systemd[1]: Starting systemd-journald.service... Dec 13 14:13:51.144429 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:13:51.144461 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:13:51.144505 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:13:51.144538 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:13:51.144573 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:13:51.144661 systemd[1]: Stopped verity-setup.service. Dec 13 14:13:51.144702 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:13:51.144732 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:13:51.144763 systemd[1]: Mounted media.mount. Dec 13 14:13:51.144803 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:13:51.144833 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:13:51.144871 systemd[1]: Mounted tmp.mount. Dec 13 14:13:51.144902 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:13:51.144933 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:13:51.144963 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:13:51.144997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:13:51.145579 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:13:51.145629 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:13:51.145663 systemd[1]: Finished modprobe@drm.service. Dec 13 14:13:51.145698 systemd-journald[1403]: Journal started Dec 13 14:13:51.145805 systemd-journald[1403]: Runtime Journal (/run/log/journal/ec208ff870e3b3b153c1d59f390a2bbf) is 8.0M, max 75.4M, 67.4M free. 
Dec 13 14:13:45.096000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:13:45.429000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:13:45.429000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:13:45.436000 audit: BPF prog-id=10 op=LOAD Dec 13 14:13:45.436000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:13:45.442000 audit: BPF prog-id=11 op=LOAD Dec 13 14:13:45.442000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:13:45.777000 audit[1328]: AVC avc: denied { associate } for pid=1328 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:13:45.777000 audit[1328]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458ac a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1311 pid=1328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:13:45.777000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:13:45.780000 audit[1328]: AVC avc: denied { associate } for pid=1328 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:13:45.780000 audit[1328]: SYSCALL 
arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145985 a2=1ed a3=0 items=2 ppid=1311 pid=1328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:13:45.780000 audit: CWD cwd="/" Dec 13 14:13:45.780000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:13:45.780000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:13:45.780000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:13:50.740000 audit: BPF prog-id=12 op=LOAD Dec 13 14:13:50.740000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:13:50.742000 audit: BPF prog-id=13 op=LOAD Dec 13 14:13:50.745000 audit: BPF prog-id=14 op=LOAD Dec 13 14:13:50.745000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:13:50.745000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:13:50.749000 audit: BPF prog-id=15 op=LOAD Dec 13 14:13:50.749000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:13:50.752000 audit: BPF prog-id=16 op=LOAD Dec 13 14:13:50.754000 audit: BPF prog-id=17 op=LOAD Dec 13 14:13:50.754000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:13:50.754000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:13:50.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:50.766000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:13:50.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:50.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.150338 systemd[1]: Started systemd-journald.service. Dec 13 14:13:51.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:51.032000 audit: BPF prog-id=18 op=LOAD Dec 13 14:13:51.032000 audit: BPF prog-id=19 op=LOAD Dec 13 14:13:51.032000 audit: BPF prog-id=20 op=LOAD Dec 13 14:13:51.032000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:13:51.032000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:13:51.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:51.131000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:13:51.131000 audit[1403]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffe2b8e1f0 a2=4000 a3=1 items=0 ppid=1 pid=1403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:13:51.131000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:13:51.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:45.775142 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:13:50.738111 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:13:45.775763 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:13:50.758357 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:13:45.775812 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:13:51.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:45.775879 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:13:45.775905 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:13:45.775971 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:13:45.776002 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:13:51.153668 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:13:45.776487 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:13:51.154057 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 14:13:45.776593 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:13:45.776630 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:13:45.777651 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:13:45.777754 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:13:45.777802 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:13:45.777841 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:13:45.777888 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:13:51.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:51.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.777925 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:45Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:13:51.158465 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:13:51.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:49.751884 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:49Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:13:51.158846 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:13:49.752442 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:49Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:13:51.163494 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:13:49.752753 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:49Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:13:51.163902 systemd[1]: Finished modprobe@loop.service. Dec 13 14:13:49.753226 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:49Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:13:51.166368 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:13:49.753375 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:49Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:13:51.169660 systemd[1]: Finished systemd-network-generator.service. 
Dec 13 14:13:49.753530 /usr/lib/systemd/system-generators/torcx-generator[1328]: time="2024-12-13T14:13:49Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:13:51.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.172392 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:13:51.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.175893 systemd[1]: Reached target network-pre.target. Dec 13 14:13:51.181056 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:13:51.187901 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:13:51.189656 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:13:51.193783 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:13:51.199157 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:13:51.201408 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:13:51.207953 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:13:51.209777 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:13:51.213567 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:13:51.218639 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:13:51.220983 systemd[1]: Mounted sys-kernel-config.mount. 
Dec 13 14:13:51.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.254745 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:13:51.256714 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:13:51.262708 systemd-journald[1403]: Time spent on flushing to /var/log/journal/ec208ff870e3b3b153c1d59f390a2bbf is 84.778ms for 1147 entries. Dec 13 14:13:51.262708 systemd-journald[1403]: System Journal (/var/log/journal/ec208ff870e3b3b153c1d59f390a2bbf) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:13:51.359247 systemd-journald[1403]: Received client request to flush runtime journal. Dec 13 14:13:51.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.322777 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:13:51.349569 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:13:51.353859 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:13:51.362593 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:13:51.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.418364 systemd[1]: Finished systemd-udev-trigger.service. 
Dec 13 14:13:51.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.423181 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:13:51.441183 udevadm[1448]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:13:51.596406 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:13:51.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.221309 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:13:52.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.223000 audit: BPF prog-id=21 op=LOAD Dec 13 14:13:52.223000 audit: BPF prog-id=22 op=LOAD Dec 13 14:13:52.223000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:13:52.223000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:13:52.226525 systemd[1]: Starting systemd-udevd.service... Dec 13 14:13:52.268905 systemd-udevd[1449]: Using default interface naming scheme 'v252'. Dec 13 14:13:52.313313 systemd[1]: Started systemd-udevd.service. Dec 13 14:13:52.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.315000 audit: BPF prog-id=23 op=LOAD Dec 13 14:13:52.318783 systemd[1]: Starting systemd-networkd.service... 
Dec 13 14:13:52.325000 audit: BPF prog-id=24 op=LOAD Dec 13 14:13:52.326000 audit: BPF prog-id=25 op=LOAD Dec 13 14:13:52.326000 audit: BPF prog-id=26 op=LOAD Dec 13 14:13:52.329068 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:13:52.414204 (udev-worker)[1457]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:13:52.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.437197 systemd[1]: Started systemd-userdbd.service. Dec 13 14:13:52.443618 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 14:13:52.627974 systemd-networkd[1454]: lo: Link UP Dec 13 14:13:52.628001 systemd-networkd[1454]: lo: Gained carrier Dec 13 14:13:52.628984 systemd-networkd[1454]: Enumeration completed Dec 13 14:13:52.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.629191 systemd[1]: Started systemd-networkd.service. Dec 13 14:13:52.629219 systemd-networkd[1454]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:13:52.634041 systemd[1]: Starting systemd-networkd-wait-online.service... 
Dec 13 14:13:52.647336 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:13:52.648211 systemd-networkd[1454]: eth0: Link UP Dec 13 14:13:52.648584 systemd-networkd[1454]: eth0: Gained carrier Dec 13 14:13:52.666618 systemd-networkd[1454]: eth0: DHCPv4 address 172.31.20.19/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 14:13:52.707363 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1478) Dec 13 14:13:52.859743 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:13:52.862348 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:13:52.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.867055 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:13:52.943715 lvm[1568]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:13:52.988607 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:13:52.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.990876 systemd[1]: Reached target cryptsetup.target. Dec 13 14:13:52.995934 systemd[1]: Starting lvm2-activation.service... Dec 13 14:13:53.007030 lvm[1569]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:13:53.043946 systemd[1]: Finished lvm2-activation.service. Dec 13 14:13:53.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:53.045901 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:13:53.047609 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:13:53.047666 systemd[1]: Reached target local-fs.target. Dec 13 14:13:53.049281 systemd[1]: Reached target machines.target. Dec 13 14:13:53.053307 systemd[1]: Starting ldconfig.service... Dec 13 14:13:53.055795 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:13:53.056142 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:13:53.059491 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:13:53.063868 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:13:53.070427 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:13:53.075820 systemd[1]: Starting systemd-sysext.service... Dec 13 14:13:53.078276 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1571 (bootctl) Dec 13 14:13:53.081022 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:13:53.125316 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:13:53.133085 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:13:53.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:53.143644 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:13:53.144066 systemd[1]: Unmounted usr-share-oem.mount. 
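The DHCPv4 lease reported above (address 172.31.20.19/20, gateway 172.31.16.1) can be sanity-checked with the standard-library `ipaddress` module; the /20 prefix puts both the host and the gateway inside 172.31.16.0/20:

```python
import ipaddress

# Values copied from the systemd-networkd lease line above.
iface = ipaddress.ip_interface("172.31.20.19/20")
gw = ipaddress.ip_address("172.31.16.1")

print(iface.network)        # 172.31.16.0/20
print(gw in iface.network)  # True
```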
Dec 13 14:13:53.168350 kernel: loop0: detected capacity change from 0 to 194512
Dec 13 14:13:53.285972 systemd-fsck[1583]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:13:53.285972 systemd-fsck[1583]: /dev/nvme0n1p1: 236 files, 117175/258078 clusters
Dec 13 14:13:53.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:53.292030 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:13:53.299429 systemd[1]: Mounting boot.mount...
Dec 13 14:13:53.342308 systemd[1]: Mounted boot.mount.
Dec 13 14:13:53.366033 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:13:53.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:53.401348 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:13:53.424343 kernel: loop1: detected capacity change from 0 to 194512
Dec 13 14:13:53.438338 (sd-sysext)[1598]: Using extensions 'kubernetes'.
Dec 13 14:13:53.441123 (sd-sysext)[1598]: Merged extensions into '/usr'.
Dec 13 14:13:53.491447 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 14:13:53.493819 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:13:53.501601 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:13:53.506263 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:13:53.511185 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:13:53.513419 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:13:53.513799 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:13:53.522472 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:13:53.525506 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:13:53.525861 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:13:53.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:53.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:53.528682 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:13:53.529004 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:13:53.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:53.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:53.532163 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:13:53.532497 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:13:53.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:53.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:53.535412 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:13:53.535664 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:13:53.537915 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:13:53.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:53.542465 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:13:53.547080 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:13:53.568537 systemd[1]: Reloading.
Dec 13 14:13:53.645804 systemd-tmpfiles[1605]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:13:53.658722 /usr/lib/systemd/system-generators/torcx-generator[1625]: time="2024-12-13T14:13:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:13:53.660558 /usr/lib/systemd/system-generators/torcx-generator[1625]: time="2024-12-13T14:13:53Z" level=info msg="torcx already run"
Dec 13 14:13:53.681958 systemd-tmpfiles[1605]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:13:53.718696 systemd-tmpfiles[1605]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:13:53.735477 systemd-networkd[1454]: eth0: Gained IPv6LL
Dec 13 14:13:53.976023 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:13:53.976382 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:13:54.029985 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:13:54.212000 audit: BPF prog-id=27 op=LOAD
Dec 13 14:13:54.213000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 14:13:54.213000 audit: BPF prog-id=28 op=LOAD
Dec 13 14:13:54.214000 audit: BPF prog-id=29 op=LOAD
Dec 13 14:13:54.214000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 14:13:54.214000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 14:13:54.215000 audit: BPF prog-id=30 op=LOAD
Dec 13 14:13:54.215000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 14:13:54.219000 audit: BPF prog-id=31 op=LOAD
Dec 13 14:13:54.220000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 14:13:54.220000 audit: BPF prog-id=32 op=LOAD
Dec 13 14:13:54.220000 audit: BPF prog-id=33 op=LOAD
Dec 13 14:13:54.220000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 14:13:54.220000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 14:13:54.222000 audit: BPF prog-id=34 op=LOAD
Dec 13 14:13:54.222000 audit: BPF prog-id=35 op=LOAD
Dec 13 14:13:54.222000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 14:13:54.223000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 14:13:54.243857 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:13:54.247494 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:13:54.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.250750 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:13:54.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.255482 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:13:54.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.266062 systemd[1]: Starting audit-rules.service...
Dec 13 14:13:54.270571 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:13:54.276207 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:13:54.282000 audit: BPF prog-id=36 op=LOAD
Dec 13 14:13:54.285609 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:13:54.288000 audit: BPF prog-id=37 op=LOAD
Dec 13 14:13:54.294773 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:13:54.299429 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:13:54.324448 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:13:54.329828 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:13:54.334254 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:13:54.333000 audit[1686]: SYSTEM_BOOT pid=1686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.340803 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:13:54.344641 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:13:54.344999 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:13:54.347214 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:13:54.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.352968 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:13:54.357338 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:13:54.357716 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:13:54.357927 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:13:54.358120 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:13:54.362860 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:13:54.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.366922 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:13:54.367245 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:13:54.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.371468 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:13:54.371767 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:13:54.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.377619 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:13:54.384822 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:13:54.388717 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:13:54.395114 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:13:54.399567 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:13:54.403758 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:13:54.404104 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:13:54.404506 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:13:54.407039 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:13:54.407481 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:13:54.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.414135 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:13:54.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.438955 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:13:54.439255 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:13:54.442326 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:13:54.442627 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:13:54.444705 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:13:54.451537 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:13:54.454058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:13:54.454394 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:13:54.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:13:54.456867 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:13:54.512000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:13:54.512000 audit[1707]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcac34e20 a2=420 a3=0 items=0 ppid=1681 pid=1707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:13:54.512000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:13:54.514517 augenrules[1707]: No rules
Dec 13 14:13:54.516240 systemd[1]: Finished audit-rules.service.
Dec 13 14:13:54.541179 ldconfig[1570]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:13:54.553985 systemd[1]: Finished ldconfig.service.
Dec 13 14:13:54.559641 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:13:54.569707 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:13:54.571591 systemd[1]: Reached target time-set.target.
Dec 13 14:13:54.579107 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:13:54.582281 systemd-resolved[1684]: Positive Trust Anchors:
Dec 13 14:13:54.582869 systemd-resolved[1684]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:13:54.583043 systemd-resolved[1684]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:13:54.618308 systemd-resolved[1684]: Defaulting to hostname 'linux'.
Dec 13 14:13:54.621988 systemd[1]: Started systemd-resolved.service.
Dec 13 14:13:54.623774 systemd[1]: Reached target network.target.
Dec 13 14:13:54.625365 systemd[1]: Reached target network-online.target.
Dec 13 14:13:54.627231 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:13:54.629051 systemd[1]: Reached target sysinit.target.
Dec 13 14:13:54.631379 systemd[1]: Started motdgen.path.
Dec 13 14:13:54.633156 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:13:54.635855 systemd[1]: Started logrotate.timer.
Dec 13 14:13:54.637766 systemd[1]: Started mdadm.timer.
Dec 13 14:13:54.639317 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:13:54.641232 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:13:54.641344 systemd[1]: Reached target paths.target.
Dec 13 14:13:54.642782 systemd[1]: Reached target timers.target.
Dec 13 14:13:54.644731 systemd[1]: Listening on dbus.socket.
Dec 13 14:13:54.648406 systemd[1]: Starting docker.socket...
Dec 13 14:13:54.652380 systemd-timesyncd[1685]: Contacted time server 108.181.201.22:123 (0.flatcar.pool.ntp.org).
Dec 13 14:13:54.652524 systemd-timesyncd[1685]: Initial clock synchronization to Fri 2024-12-13 14:13:54.866674 UTC.
Dec 13 14:13:54.658189 systemd[1]: Listening on sshd.socket.
Dec 13 14:13:54.660181 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:13:54.661208 systemd[1]: Listening on docker.socket.
Dec 13 14:13:54.662917 systemd[1]: Reached target sockets.target.
Dec 13 14:13:54.664587 systemd[1]: Reached target basic.target.
Dec 13 14:13:54.666247 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:13:54.666335 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:13:54.678942 systemd[1]: Started amazon-ssm-agent.service.
Dec 13 14:13:54.683745 systemd[1]: Starting containerd.service...
Dec 13 14:13:54.688643 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 14:13:54.696765 systemd[1]: Starting dbus.service...
Dec 13 14:13:54.707541 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:13:54.712067 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:13:54.713820 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:13:54.717516 systemd[1]: Starting kubelet.service...
Dec 13 14:13:54.727193 systemd[1]: Starting motdgen.service...
Dec 13 14:13:54.733884 systemd[1]: Started nvidia.service.
Dec 13 14:13:54.742681 systemd[1]: Starting prepare-helm.service...
Dec 13 14:13:54.749530 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:13:54.755345 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:13:54.765914 systemd[1]: Starting systemd-logind.service...
Dec 13 14:13:54.768601 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:13:54.768824 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:13:54.808063 jq[1720]: false
Dec 13 14:13:54.771107 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:13:54.775250 systemd[1]: Starting update-engine.service...
Dec 13 14:13:54.779614 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:13:54.798399 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:13:54.798933 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:13:54.880959 jq[1730]: true
Dec 13 14:13:54.872569 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:13:54.873183 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:13:54.897900 tar[1737]: linux-arm64/helm
Dec 13 14:13:54.937096 jq[1743]: true
Dec 13 14:13:55.019350 extend-filesystems[1721]: Found loop1
Dec 13 14:13:55.031502 extend-filesystems[1721]: Found nvme0n1
Dec 13 14:13:55.034192 extend-filesystems[1721]: Found nvme0n1p1
Dec 13 14:13:55.035860 extend-filesystems[1721]: Found nvme0n1p2
Dec 13 14:13:55.035860 extend-filesystems[1721]: Found nvme0n1p3
Dec 13 14:13:55.038849 extend-filesystems[1721]: Found usr
Dec 13 14:13:55.038849 extend-filesystems[1721]: Found nvme0n1p4
Dec 13 14:13:55.038849 extend-filesystems[1721]: Found nvme0n1p6
Dec 13 14:13:55.058645 extend-filesystems[1721]: Found nvme0n1p7
Dec 13 14:13:55.058645 extend-filesystems[1721]: Found nvme0n1p9
Dec 13 14:13:55.058645 extend-filesystems[1721]: Checking size of /dev/nvme0n1p9
Dec 13 14:13:55.081101 dbus-daemon[1719]: [system] SELinux support is enabled
Dec 13 14:13:55.081451 systemd[1]: Started dbus.service.
Dec 13 14:13:55.087617 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:13:55.087697 systemd[1]: Reached target system-config.target.
Dec 13 14:13:55.089616 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:13:55.089673 systemd[1]: Reached target user-config.target.
Dec 13 14:13:55.092211 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:13:55.092772 systemd[1]: Finished motdgen.service.
Dec 13 14:13:55.106168 dbus-daemon[1719]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1454 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 14:13:55.124203 dbus-daemon[1719]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 14:13:55.139499 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 14:13:55.167011 extend-filesystems[1721]: Resized partition /dev/nvme0n1p9
Dec 13 14:13:55.185545 update_engine[1729]: I1213 14:13:55.185075  1729 main.cc:92] Flatcar Update Engine starting
Dec 13 14:13:55.187228 extend-filesystems[1779]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:13:55.196584 systemd[1]: Started update-engine.service.
Dec 13 14:13:55.197136 update_engine[1729]: I1213 14:13:55.197099  1729 update_check_scheduler.cc:74] Next update check in 4m1s
Dec 13 14:13:55.201910 systemd[1]: Started locksmithd.service.
Dec 13 14:13:55.208895 amazon-ssm-agent[1716]: 2024/12/13 14:13:55 Failed to load instance info from vault. RegistrationKey does not exist.
Dec 13 14:13:55.215328 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Dec 13 14:13:55.223417 amazon-ssm-agent[1716]: Initializing new seelog logger
Dec 13 14:13:55.223851 amazon-ssm-agent[1716]: New Seelog Logger Creation Complete
Dec 13 14:13:55.225776 amazon-ssm-agent[1716]: 2024/12/13 14:13:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 14:13:55.226192 amazon-ssm-agent[1716]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 14:13:55.226829 amazon-ssm-agent[1716]: 2024/12/13 14:13:55 processing appconfig overrides
Dec 13 14:13:55.281358 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 14:13:55.333510 extend-filesystems[1779]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 14:13:55.333510 extend-filesystems[1779]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 14:13:55.333510 extend-filesystems[1779]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 14:13:55.341365 extend-filesystems[1721]: Resized filesystem in /dev/nvme0n1p9
Dec 13 14:13:55.338872 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:13:55.355473 bash[1785]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:13:55.343466 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:13:55.351602 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:13:55.377060 env[1738]: time="2024-12-13T14:13:55.364273464Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:13:55.428983 systemd-logind[1728]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 14:13:55.430605 systemd-logind[1728]: Watching system buttons on /dev/input/event1 (Sleep Button)
Dec 13 14:13:55.436552 systemd-logind[1728]: New seat seat0.
Dec 13 14:13:55.453263 systemd[1]: Started systemd-logind.service.
Dec 13 14:13:55.491768 systemd[1]: nvidia.service: Deactivated successfully.
Dec 13 14:13:55.655632 env[1738]: time="2024-12-13T14:13:55.655365636Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:13:55.655805 env[1738]: time="2024-12-13T14:13:55.655736140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:13:55.663753 env[1738]: time="2024-12-13T14:13:55.663623978Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:13:55.663753 env[1738]: time="2024-12-13T14:13:55.663729339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:13:55.664503 env[1738]: time="2024-12-13T14:13:55.664413297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:13:55.664503 env[1738]: time="2024-12-13T14:13:55.664487713Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:13:55.664739 env[1738]: time="2024-12-13T14:13:55.664527606Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:13:55.664739 env[1738]: time="2024-12-13T14:13:55.664556144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:13:55.664845 env[1738]: time="2024-12-13T14:13:55.664772099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:13:55.665423 env[1738]: time="2024-12-13T14:13:55.665352522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:13:55.677673 env[1738]: time="2024-12-13T14:13:55.677581476Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:13:55.677673 env[1738]: time="2024-12-13T14:13:55.677656324Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:13:55.677914 env[1738]: time="2024-12-13T14:13:55.677848569Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:13:55.677914 env[1738]: time="2024-12-13T14:13:55.677882056Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:13:55.687019 env[1738]: time="2024-12-13T14:13:55.686942874Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:13:55.687210 env[1738]: time="2024-12-13T14:13:55.687027424Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:13:55.687210 env[1738]: time="2024-12-13T14:13:55.687066626Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:13:55.687210 env[1738]: time="2024-12-13T14:13:55.687151843Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:13:55.687465 env[1738]: time="2024-12-13T14:13:55.687378697Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:13:55.687465 env[1738]: time="2024-12-13T14:13:55.687434402Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:13:55.687580 env[1738]: time="2024-12-13T14:13:55.687474702Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:13:55.688309 env[1738]: time="2024-12-13T14:13:55.688234088Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:13:55.688424 env[1738]: time="2024-12-13T14:13:55.688301494Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:13:55.688424 env[1738]: time="2024-12-13T14:13:55.688370307Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:13:55.688531 env[1738]: time="2024-12-13T14:13:55.688416113Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:13:55.688531 env[1738]: time="2024-12-13T14:13:55.688449785Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:13:55.688737 env[1738]: time="2024-12-13T14:13:55.688682280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:13:55.688921 env[1738]: time="2024-12-13T14:13:55.688867081Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:13:55.697017 env[1738]: time="2024-12-13T14:13:55.696944683Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:13:55.697167 env[1738]: time="2024-12-13T14:13:55.697034751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:13:55.697167 env[1738]: time="2024-12-13T14:13:55.697074545Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:13:55.697275 env[1738]: time="2024-12-13T14:13:55.697200173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:13:55.697390 env[1738]: time="2024-12-13T14:13:55.697234265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:13:55.697459 env[1738]: time="2024-12-13T14:13:55.697396725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:13:55.697459 env[1738]: time="2024-12-13T14:13:55.697431582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:13:55.697595 env[1738]: time="2024-12-13T14:13:55.697463452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:13:55.697595 env[1738]: time="2024-12-13T14:13:55.697496519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:13:55.697595 env[1738]: time="2024-12-13T14:13:55.697527451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:13:55.697595 env[1738]: time="2024-12-13T14:13:55.697559334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:13:55.697820 env[1738]: time="2024-12-13T14:13:55.697597202Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:13:55.698011 env[1738]: time="2024-12-13T14:13:55.697955005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:13:55.698094 env[1738]: time="2024-12-13T14:13:55.698016214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:13:55.698094 env[1738]: time="2024-12-13T14:13:55.698052985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..."
type=io.containerd.grpc.v1 Dec 13 14:13:55.698094 env[1738]: time="2024-12-13T14:13:55.698083966Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:13:55.698260 env[1738]: time="2024-12-13T14:13:55.698119206Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:13:55.698260 env[1738]: time="2024-12-13T14:13:55.698151458Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:13:55.698260 env[1738]: time="2024-12-13T14:13:55.698189709Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:13:55.698442 env[1738]: time="2024-12-13T14:13:55.698255906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:13:55.698735 env[1738]: time="2024-12-13T14:13:55.698627261Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:13:55.699873 env[1738]: time="2024-12-13T14:13:55.698743299Z" level=info msg="Connect containerd service" Dec 13 14:13:55.699873 env[1738]: time="2024-12-13T14:13:55.698803632Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:13:55.708892 env[1738]: time="2024-12-13T14:13:55.708816044Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:13:55.711660 env[1738]: time="2024-12-13T14:13:55.711573289Z" level=info msg="Start subscribing containerd event" Dec 13 14:13:55.711820 env[1738]: time="2024-12-13T14:13:55.711719555Z" level=info msg="Start recovering state" Dec 13 14:13:55.711883 env[1738]: 
time="2024-12-13T14:13:55.711841431Z" level=info msg="Start event monitor" Dec 13 14:13:55.711943 env[1738]: time="2024-12-13T14:13:55.711881694Z" level=info msg="Start snapshots syncer" Dec 13 14:13:55.711943 env[1738]: time="2024-12-13T14:13:55.711907220Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:13:55.711943 env[1738]: time="2024-12-13T14:13:55.711927277Z" level=info msg="Start streaming server" Dec 13 14:13:55.712351 env[1738]: time="2024-12-13T14:13:55.712268898Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:13:55.712457 env[1738]: time="2024-12-13T14:13:55.712429420Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:13:55.712671 systemd[1]: Started containerd.service. Dec 13 14:13:55.719089 env[1738]: time="2024-12-13T14:13:55.719016355Z" level=info msg="containerd successfully booted in 0.369728s" Dec 13 14:13:55.798544 dbus-daemon[1719]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 14:13:55.798993 systemd[1]: Started systemd-hostnamed.service. Dec 13 14:13:55.803760 dbus-daemon[1719]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1776 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 14:13:55.809600 systemd[1]: Starting polkit.service... Dec 13 14:13:55.866296 polkitd[1819]: Started polkitd version 121 Dec 13 14:13:55.919992 polkitd[1819]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 14:13:55.927481 polkitd[1819]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 14:13:55.936615 polkitd[1819]: Finished loading, compiling and executing 2 rules Dec 13 14:13:55.938432 dbus-daemon[1719]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 14:13:55.938708 systemd[1]: Started polkit.service. 
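In the containerd startup above, the aufs, btrfs, devmapper, and zfs snapshotters are all skipped and overlayfs is the one that loads. For btrfs and zfs the deciding check is simply the filesystem type backing the snapshotter directory ("must be a btrfs filesystem to be used with the btrfs snapshotter"). A minimal sketch of that probe under the default path from this log — the walk up to the nearest existing parent and the use of `stat -f -c %T` are assumptions of this sketch, not containerd's exact code:

```shell
# Hedged sketch of the filesystem-type check behind the "skip plugin"
# messages above: the btrfs snapshotter directory (or its nearest
# existing parent) must itself sit on btrfs.
probe="/var/lib/containerd/io.containerd.snapshotter.v1.btrfs"
while [ ! -d "$probe" ]; do probe="$(dirname "$probe")"; done
fstype="$(stat -f -c %T "$probe")"   # e.g. "ext2/ext3", "btrfs", "xfs"
if [ "$fstype" = "btrfs" ]; then
  decision="load"
else
  decision="skip"                    # matches the log's "skip plugin"
fi
echo "btrfs snapshotter: $decision (backing filesystem: $fstype)"
```

On this machine the backing filesystem is ext4, so the probe reports something like "ext2/ext3" and the plugin is skipped, which is why the CRI config later shows `Snapshotter:overlayfs`.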
Dec 13 14:13:55.941967 polkitd[1819]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 14:13:56.010670 systemd-resolved[1684]: System hostname changed to 'ip-172-31-20-19'. Dec 13 14:13:56.010806 systemd-hostnamed[1776]: Hostname set to (transient) Dec 13 14:13:56.169420 coreos-metadata[1718]: Dec 13 14:13:56.169 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 14:13:56.174555 coreos-metadata[1718]: Dec 13 14:13:56.174 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 14:13:56.175589 coreos-metadata[1718]: Dec 13 14:13:56.175 INFO Fetch successful Dec 13 14:13:56.175951 coreos-metadata[1718]: Dec 13 14:13:56.175 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 14:13:56.177132 coreos-metadata[1718]: Dec 13 14:13:56.176 INFO Fetch successful Dec 13 14:13:56.179968 unknown[1718]: wrote ssh authorized keys file for user: core Dec 13 14:13:56.207712 update-ssh-keys[1871]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:13:56.209020 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
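The coreos-metadata lines above fetch the instance's SSH keys from the EC2 instance metadata service: a PUT to the IMDSv2 token endpoint, then GETs against the `2019-10-01` metadata tree. A dry-run sketch of that sequence — the commands are only printed, not executed, because 169.254.169.254 is reachable only from inside an EC2 instance, and the token TTL header value is an illustrative choice:

```shell
# Dry-run sketch of the metadata fetches coreos-metadata logs above.
# Nothing is executed against the network; the curl commands are printed.
imds="http://169.254.169.254"
token_cmd="curl -s -X PUT $imds/latest/api/token -H X-aws-ec2-metadata-token-ttl-seconds:300"
keys_url="$imds/2019-10-01/meta-data/public-keys"
key_url="$keys_url/0/openssh-key"
printf '%s\n' "$token_cmd" "GET $keys_url" "GET $key_url"
```

The two "Fetch successful" lines correspond to the two GETs: the first lists the key indexes, the second retrieves the OpenSSH-format key that is then written to `/home/core/.ssh/authorized_keys`.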
Dec 13 14:13:56.248575 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO Create new startup processor Dec 13 14:13:56.249246 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 14:13:56.249246 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO Initializing bookkeeping folders Dec 13 14:13:56.249246 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO removing the completed state files Dec 13 14:13:56.249246 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO Initializing bookkeeping folders for long running plugins Dec 13 14:13:56.249246 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 14:13:56.249246 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO Initializing healthcheck folders for long running plugins Dec 13 14:13:56.249246 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO Initializing locations for inventory plugin Dec 13 14:13:56.249246 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO Initializing default location for custom inventory Dec 13 14:13:56.249246 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO Initializing default location for file inventory Dec 13 14:13:56.249246 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO Initializing default location for role inventory Dec 13 14:13:56.249825 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO Init the cloudwatchlogs publisher Dec 13 14:13:56.249825 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [instanceID=i-052e530630b6843f3] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 14:13:56.249825 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [instanceID=i-052e530630b6843f3] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 14:13:56.249825 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [instanceID=i-052e530630b6843f3] Successfully loaded platform independent plugin aws:configureDocker Dec 13 14:13:56.249825 
amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [instanceID=i-052e530630b6843f3] Successfully loaded platform independent plugin aws:runDockerAction Dec 13 14:13:56.249825 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [instanceID=i-052e530630b6843f3] Successfully loaded platform independent plugin aws:configurePackage Dec 13 14:13:56.249825 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [instanceID=i-052e530630b6843f3] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 14:13:56.249825 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [instanceID=i-052e530630b6843f3] Successfully loaded platform independent plugin aws:refreshAssociation Dec 13 14:13:56.249825 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [instanceID=i-052e530630b6843f3] Successfully loaded platform independent plugin aws:downloadContent Dec 13 14:13:56.249825 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [instanceID=i-052e530630b6843f3] Successfully loaded platform independent plugin aws:runDocument Dec 13 14:13:56.249825 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [instanceID=i-052e530630b6843f3] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 14:13:56.249825 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 14:13:56.249825 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO OS: linux, Arch: arm64 Dec 13 14:13:56.263076 amazon-ssm-agent[1716]: datastore file /var/lib/amazon/ssm/i-052e530630b6843f3/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Dec 13 14:13:56.266290 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] Starting document processing engine... 
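Back in the containerd startup, the CRI plugin reported "no network config found in /etc/cni/net.d: cni plugin not initialized". That is the expected state on a node that has not yet joined a cluster; the error clears once a CNI plugin installs a config file there. A minimal, hypothetical bridge conflist of the shape that directory would hold — the network name and the 10.88.0.0/16 subnet are invented for illustration, and the file is written to a temp directory rather than the live path:

```shell
# Hypothetical example only: a minimal CNI bridge conflist of the kind
# containerd's CRI plugin looks for in /etc/cni/net.d. Name and subnet
# are illustrative, not taken from this log.
confdir="$(mktemp -d)"
cat > "$confdir/10-example.conflist" <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
echo "wrote $confdir/10-example.conflist"
```

The "Start cni network conf syncer for default" line later in the log is the watcher that picks such a file up without a containerd restart.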
Dec 13 14:13:56.370978 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] [EngineProcessor] Starting Dec 13 14:13:56.465465 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Dec 13 14:13:56.560192 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] Starting message polling Dec 13 14:13:56.660493 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] Starting send replies to MDS Dec 13 14:13:56.756562 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [instanceID=i-052e530630b6843f3] Starting association polling Dec 13 14:13:56.850590 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Dec 13 14:13:56.946398 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] [Association] Launching response handler Dec 13 14:13:57.009809 tar[1737]: linux-arm64/LICENSE Dec 13 14:13:57.009809 tar[1737]: linux-arm64/README.md Dec 13 14:13:57.022821 systemd[1]: Finished prepare-helm.service. Dec 13 14:13:57.041632 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Dec 13 14:13:57.139506 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Dec 13 14:13:57.237129 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Dec 13 14:13:57.336131 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessageGatewayService] Starting session document processing engine... Dec 13 14:13:57.357278 locksmithd[1784]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:13:57.409906 systemd[1]: Started kubelet.service. 
Dec 13 14:13:57.432421 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessageGatewayService] [EngineProcessor] Starting Dec 13 14:13:57.529026 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Dec 13 14:13:57.625910 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-052e530630b6843f3, requestId: 88e47a60-2950-4abb-b116-383245fdbbb2 Dec 13 14:13:57.722712 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [OfflineService] Starting document processing engine... Dec 13 14:13:57.819656 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [OfflineService] [EngineProcessor] Starting Dec 13 14:13:57.916851 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [OfflineService] [EngineProcessor] Initial processing Dec 13 14:13:58.014106 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [OfflineService] Starting message polling Dec 13 14:13:58.111622 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [OfflineService] Starting send replies to MDS Dec 13 14:13:58.209785 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [LongRunningPluginsManager] starting long running plugin manager Dec 13 14:13:58.307601 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessageGatewayService] listening reply. Dec 13 14:13:58.331611 kubelet[1931]: E1213 14:13:58.331513 1931 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:13:58.336282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:13:58.336646 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
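The kubelet crash above ("failed to read kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is normal on a node where cluster bootstrap has not yet run: that file is written during kubeadm init/join, and until then each scheduled restart of kubelet.service fails the same way. A sketch of the underlying existence check, pointed at a hypothetical root so it can run anywhere off-node:

```shell
# Sketch of the check behind the kubelet error above: the kubelet will
# not start until its config file exists. KUBELET_ROOT defaults to a
# temp dir here (hypothetical); on the instance in this log the real
# path is /var/lib/kubelet.
root="${KUBELET_ROOT:-$(mktemp -d)}"
cfg="$root/config.yaml"
if [ -f "$cfg" ]; then
  status="ok: would load $cfg"
else
  status="command failed: open $cfg: no such file or directory"
fi
echo "$status"
```

This is why the log shows kubelet.service exiting with status=1 here and again at 14:14:09, with systemd's restart counter climbing each time.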
Dec 13 14:13:58.337138 systemd[1]: kubelet.service: Consumed 1.565s CPU time. Dec 13 14:13:58.406034 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Dec 13 14:13:58.506204 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 14:13:58.604827 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Dec 13 14:13:58.704390 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [StartupProcessor] Executing startup processor tasks Dec 13 14:13:58.803557 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Dec 13 14:13:58.903524 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Dec 13 14:13:59.002920 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6 Dec 13 14:13:59.102915 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-052e530630b6843f3?role=subscribe&stream=input Dec 13 14:13:59.202594 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-052e530630b6843f3?role=subscribe&stream=input Dec 13 14:13:59.302594 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessageGatewayService] Starting receiving message from control channel Dec 13 14:13:59.402770 amazon-ssm-agent[1716]: 2024-12-13 14:13:56 INFO [MessageGatewayService] [EngineProcessor] Initial processing Dec 13 14:13:59.504341 amazon-ssm-agent[1716]: 2024-12-13 14:13:57 INFO [MessagingDeliveryService] [Association] No 
associations on boot. Requerying for associations after 30 seconds. Dec 13 14:14:03.221497 systemd[1]: Created slice system-sshd.slice. Dec 13 14:14:03.828895 sshd_keygen[1754]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:14:03.875435 systemd[1]: Finished sshd-keygen.service. Dec 13 14:14:03.881050 systemd[1]: Starting issuegen.service... Dec 13 14:14:03.886052 systemd[1]: Started sshd@0-172.31.20.19:22-139.178.89.65:38946.service. Dec 13 14:14:03.901321 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:14:03.901762 systemd[1]: Finished issuegen.service. Dec 13 14:14:03.907911 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:14:03.924837 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:14:03.930123 systemd[1]: Started getty@tty1.service. Dec 13 14:14:03.934900 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:14:03.937420 systemd[1]: Reached target getty.target. Dec 13 14:14:03.939449 systemd[1]: Reached target multi-user.target. Dec 13 14:14:03.944658 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:14:03.961434 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:14:03.961862 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:14:03.964086 systemd[1]: Startup finished in 1.217s (kernel) + 9.288s (initrd) + 19.081s (userspace) = 29.587s. Dec 13 14:14:04.136751 sshd[1947]: Accepted publickey for core from 139.178.89.65 port 38946 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:04.141741 sshd[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:04.166180 systemd[1]: Created slice user-500.slice. Dec 13 14:14:04.170973 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:14:04.180050 systemd-logind[1728]: New session 1 of user core. Dec 13 14:14:04.199613 systemd[1]: Finished user-runtime-dir@500.service. 
Dec 13 14:14:04.204039 systemd[1]: Starting user@500.service... Dec 13 14:14:04.215510 (systemd)[1956]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:04.445156 systemd[1956]: Queued start job for default target default.target. Dec 13 14:14:04.449206 systemd[1956]: Reached target paths.target. Dec 13 14:14:04.449571 systemd[1956]: Reached target sockets.target. Dec 13 14:14:04.449736 systemd[1956]: Reached target timers.target. Dec 13 14:14:04.449883 systemd[1956]: Reached target basic.target. Dec 13 14:14:04.450154 systemd[1956]: Reached target default.target. Dec 13 14:14:04.450265 systemd[1]: Started user@500.service. Dec 13 14:14:04.452062 systemd[1956]: Startup finished in 219ms. Dec 13 14:14:04.454040 systemd[1]: Started session-1.scope. Dec 13 14:14:04.613587 systemd[1]: Started sshd@1-172.31.20.19:22-139.178.89.65:38950.service. Dec 13 14:14:04.802817 sshd[1965]: Accepted publickey for core from 139.178.89.65 port 38950 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:04.806190 sshd[1965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:04.816376 systemd-logind[1728]: New session 2 of user core. Dec 13 14:14:04.818863 systemd[1]: Started session-2.scope. Dec 13 14:14:04.961636 sshd[1965]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:04.967801 systemd-logind[1728]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:14:04.968598 systemd[1]: sshd@1-172.31.20.19:22-139.178.89.65:38950.service: Deactivated successfully. Dec 13 14:14:04.970141 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:14:04.971766 systemd-logind[1728]: Removed session 2. Dec 13 14:14:04.994195 systemd[1]: Started sshd@2-172.31.20.19:22-139.178.89.65:38964.service. 
Dec 13 14:14:05.176752 sshd[1971]: Accepted publickey for core from 139.178.89.65 port 38964 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:05.180488 sshd[1971]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:05.191550 systemd-logind[1728]: New session 3 of user core. Dec 13 14:14:05.192966 systemd[1]: Started session-3.scope. Dec 13 14:14:05.325976 sshd[1971]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:05.332758 systemd[1]: sshd@2-172.31.20.19:22-139.178.89.65:38964.service: Deactivated successfully. Dec 13 14:14:05.334008 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:14:05.335671 systemd-logind[1728]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:14:05.337754 systemd-logind[1728]: Removed session 3. Dec 13 14:14:05.353026 systemd[1]: Started sshd@3-172.31.20.19:22-139.178.89.65:38978.service. Dec 13 14:14:05.530138 sshd[1977]: Accepted publickey for core from 139.178.89.65 port 38978 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:05.533348 sshd[1977]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:05.545248 systemd[1]: Started session-4.scope. Dec 13 14:14:05.547521 systemd-logind[1728]: New session 4 of user core. Dec 13 14:14:05.684911 sshd[1977]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:05.690561 systemd-logind[1728]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:14:05.690743 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:14:05.692261 systemd[1]: sshd@3-172.31.20.19:22-139.178.89.65:38978.service: Deactivated successfully. Dec 13 14:14:05.696953 systemd-logind[1728]: Removed session 4. Dec 13 14:14:05.715920 systemd[1]: Started sshd@4-172.31.20.19:22-139.178.89.65:38992.service. 
Dec 13 14:14:05.902111 sshd[1983]: Accepted publickey for core from 139.178.89.65 port 38992 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:05.904999 sshd[1983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:05.914097 systemd-logind[1728]: New session 5 of user core. Dec 13 14:14:05.915347 systemd[1]: Started session-5.scope. Dec 13 14:14:06.061226 sudo[1986]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:14:06.062702 sudo[1986]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:14:06.123693 systemd[1]: Starting docker.service... Dec 13 14:14:06.217714 env[1996]: time="2024-12-13T14:14:06.217512547Z" level=info msg="Starting up" Dec 13 14:14:06.223024 env[1996]: time="2024-12-13T14:14:06.222906660Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:14:06.223024 env[1996]: time="2024-12-13T14:14:06.222988592Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:14:06.223353 env[1996]: time="2024-12-13T14:14:06.223043720Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:14:06.223353 env[1996]: time="2024-12-13T14:14:06.223071900Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:14:06.227995 env[1996]: time="2024-12-13T14:14:06.227815354Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:14:06.228388 env[1996]: time="2024-12-13T14:14:06.228338773Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:14:06.228561 env[1996]: time="2024-12-13T14:14:06.228522530Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:14:06.228737 env[1996]: time="2024-12-13T14:14:06.228704452Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:14:06.252885 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1475257416-merged.mount: Deactivated successfully. Dec 13 14:14:06.298806 env[1996]: time="2024-12-13T14:14:06.298729468Z" level=info msg="Loading containers: start." Dec 13 14:14:06.552842 kernel: Initializing XFRM netlink socket Dec 13 14:14:06.600250 env[1996]: time="2024-12-13T14:14:06.600155392Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:14:06.603729 (udev-worker)[2007]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:14:06.727208 systemd-networkd[1454]: docker0: Link UP Dec 13 14:14:06.767425 env[1996]: time="2024-12-13T14:14:06.767376872Z" level=info msg="Loading containers: done." Dec 13 14:14:06.801232 env[1996]: time="2024-12-13T14:14:06.801165634Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:14:06.802088 env[1996]: time="2024-12-13T14:14:06.802035949Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:14:06.802460 env[1996]: time="2024-12-13T14:14:06.802429796Z" level=info msg="Daemon has completed initialization" Dec 13 14:14:06.833239 systemd[1]: Started docker.service. Dec 13 14:14:06.852112 env[1996]: time="2024-12-13T14:14:06.851273817Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:14:08.304988 env[1738]: time="2024-12-13T14:14:08.304840192Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:14:08.340951 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:14:08.341339 systemd[1]: Stopped kubelet.service. Dec 13 14:14:08.341426 systemd[1]: kubelet.service: Consumed 1.565s CPU time. 
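The Docker daemon above notes that docker0 defaults to 172.17.0.0/16 and that the `--bip` daemon option overrides it. The same setting can be made persistent in `/etc/docker/daemon.json`; a sketch that writes the equivalent config to a temp file — the 10.200.0.1/24 value is invented for illustration, not taken from this system:

```shell
# Hedged sketch: pinning the docker0 bridge address via daemon.json
# instead of the --bip flag mentioned in the log above. The address is
# an illustrative example.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
{
  "bip": "10.200.0.1/24"
}
EOF
echo "daemon.json sketch at $cfg:"
cat "$cfg"
```

On a live host the file would go to /etc/docker/daemon.json followed by a daemon restart; changing the bridge subnet is mainly useful when 172.17.0.0/16 collides with an existing VPC or corporate range.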
Dec 13 14:14:08.345383 systemd[1]: Starting kubelet.service...
Dec 13 14:14:08.901049 systemd[1]: Started kubelet.service.
Dec 13 14:14:09.018640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2936887986.mount: Deactivated successfully.
Dec 13 14:14:09.052264 kubelet[2127]: E1213 14:14:09.052132 2127 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:14:09.065894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:14:09.066222 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:14:11.408898 env[1738]: time="2024-12-13T14:14:11.408769623Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:11.411755 env[1738]: time="2024-12-13T14:14:11.411700476Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:11.415620 env[1738]: time="2024-12-13T14:14:11.415496717Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:11.421344 env[1738]: time="2024-12-13T14:14:11.421244183Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:11.423577 env[1738]: time="2024-12-13T14:14:11.423517035Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\""
Dec 13 14:14:11.444199 env[1738]: time="2024-12-13T14:14:11.444109898Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 14:14:13.932890 env[1738]: time="2024-12-13T14:14:13.932809200Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:13.950124 env[1738]: time="2024-12-13T14:14:13.950035463Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:13.953834 env[1738]: time="2024-12-13T14:14:13.953725256Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:13.957649 env[1738]: time="2024-12-13T14:14:13.957536501Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:13.960624 env[1738]: time="2024-12-13T14:14:13.960501967Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\""
Dec 13 14:14:13.984106 env[1738]: time="2024-12-13T14:14:13.984021258Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 14:14:15.501375 env[1738]: time="2024-12-13T14:14:15.501270424Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:15.506554 env[1738]: time="2024-12-13T14:14:15.506471537Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:15.511199 env[1738]: time="2024-12-13T14:14:15.511125292Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:15.515560 env[1738]: time="2024-12-13T14:14:15.515490730Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:15.517488 env[1738]: time="2024-12-13T14:14:15.517433972Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\""
Dec 13 14:14:15.543386 env[1738]: time="2024-12-13T14:14:15.543332607Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 14:14:17.071778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3186867685.mount: Deactivated successfully.
Dec 13 14:14:17.973014 env[1738]: time="2024-12-13T14:14:17.972929651Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:17.989002 env[1738]: time="2024-12-13T14:14:17.988918772Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:17.993477 env[1738]: time="2024-12-13T14:14:17.993404318Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:17.996895 env[1738]: time="2024-12-13T14:14:17.996794875Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:17.998125 env[1738]: time="2024-12-13T14:14:17.998070785Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\""
Dec 13 14:14:18.016986 env[1738]: time="2024-12-13T14:14:18.016936193Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 14:14:18.567792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3449955927.mount: Deactivated successfully.
Dec 13 14:14:19.091034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 14:14:19.091383 systemd[1]: Stopped kubelet.service.
Dec 13 14:14:19.095055 systemd[1]: Starting kubelet.service...
Dec 13 14:14:19.540809 systemd[1]: Started kubelet.service.
Dec 13 14:14:19.665304 kubelet[2159]: E1213 14:14:19.665205 2159 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:14:19.669446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:14:19.669770 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:14:20.357277 env[1738]: time="2024-12-13T14:14:20.357181565Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:20.378080 env[1738]: time="2024-12-13T14:14:20.378006073Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:20.398993 env[1738]: time="2024-12-13T14:14:20.398930162Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:20.424598 env[1738]: time="2024-12-13T14:14:20.424459625Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:20.425518 env[1738]: time="2024-12-13T14:14:20.425407302Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Dec 13 14:14:20.452095 env[1738]: time="2024-12-13T14:14:20.452026389Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 14:14:21.264575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount915338119.mount: Deactivated successfully.
Dec 13 14:14:21.276478 env[1738]: time="2024-12-13T14:14:21.276386643Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:21.280130 env[1738]: time="2024-12-13T14:14:21.279937510Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:21.284974 env[1738]: time="2024-12-13T14:14:21.284914583Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:21.289252 env[1738]: time="2024-12-13T14:14:21.289176973Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:21.292161 env[1738]: time="2024-12-13T14:14:21.290764434Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Dec 13 14:14:21.308995 env[1738]: time="2024-12-13T14:14:21.308944246Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 14:14:21.893903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660463485.mount: Deactivated successfully.
Dec 13 14:14:24.970101 env[1738]: time="2024-12-13T14:14:24.970003880Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:24.973701 env[1738]: time="2024-12-13T14:14:24.973600210Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:24.977479 env[1738]: time="2024-12-13T14:14:24.977427486Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:24.981159 env[1738]: time="2024-12-13T14:14:24.981094659Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:24.983030 env[1738]: time="2024-12-13T14:14:24.982969297Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Dec 13 14:14:26.025504 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 14:14:27.691976 amazon-ssm-agent[1716]: 2024-12-13 14:14:27 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Dec 13 14:14:29.840823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 14:14:29.841179 systemd[1]: Stopped kubelet.service.
Dec 13 14:14:29.847234 systemd[1]: Starting kubelet.service...
Dec 13 14:14:30.299456 systemd[1]: Started kubelet.service.
Dec 13 14:14:30.418435 kubelet[2242]: E1213 14:14:30.418341 2242 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:14:30.425506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:14:30.425881 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:14:32.885320 systemd[1]: Stopped kubelet.service.
Dec 13 14:14:32.892461 systemd[1]: Starting kubelet.service...
Dec 13 14:14:32.937452 systemd[1]: Reloading.
Dec 13 14:14:33.121453 /usr/lib/systemd/system-generators/torcx-generator[2273]: time="2024-12-13T14:14:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:14:33.122073 /usr/lib/systemd/system-generators/torcx-generator[2273]: time="2024-12-13T14:14:33Z" level=info msg="torcx already run"
Dec 13 14:14:33.353843 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:14:33.354089 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:14:33.397130 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:14:33.630090 systemd[1]: Started kubelet.service.
Dec 13 14:14:33.634602 systemd[1]: Stopping kubelet.service...
Dec 13 14:14:33.636625 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:14:33.637018 systemd[1]: Stopped kubelet.service.
Dec 13 14:14:33.640456 systemd[1]: Starting kubelet.service...
Dec 13 14:14:34.070269 systemd[1]: Started kubelet.service.
Dec 13 14:14:34.173471 kubelet[2337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:14:34.173471 kubelet[2337]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:14:34.173471 kubelet[2337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:14:34.174143 kubelet[2337]: I1213 14:14:34.173574 2337 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:14:35.436675 kubelet[2337]: I1213 14:14:35.436602 2337 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:14:35.436675 kubelet[2337]: I1213 14:14:35.436661 2337 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:14:35.437395 kubelet[2337]: I1213 14:14:35.437066 2337 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:14:35.481999 kubelet[2337]: I1213 14:14:35.481918 2337 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:14:35.483435 kubelet[2337]: E1213 14:14:35.483395 2337 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:35.502383 kubelet[2337]: I1213 14:14:35.502191 2337 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:14:35.503742 kubelet[2337]: I1213 14:14:35.503614 2337 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:14:35.505069 kubelet[2337]: I1213 14:14:35.504904 2337 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:14:35.505837 kubelet[2337]: I1213 14:14:35.505742 2337 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:14:35.506205 kubelet[2337]: I1213 14:14:35.506151 2337 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:14:35.510649 kubelet[2337]: I1213 14:14:35.510505 2337 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:14:35.516814 kubelet[2337]: I1213 14:14:35.516707 2337 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:14:35.516814 kubelet[2337]: I1213 14:14:35.516814 2337 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:14:35.517096 kubelet[2337]: I1213 14:14:35.516867 2337 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:14:35.517096 kubelet[2337]: I1213 14:14:35.516906 2337 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:14:35.519320 kubelet[2337]: W1213 14:14:35.519184 2337 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.20.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:35.519320 kubelet[2337]: E1213 14:14:35.519349 2337 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:35.519602 kubelet[2337]: W1213 14:14:35.519519 2337 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-19&limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:35.519602 kubelet[2337]: E1213 14:14:35.519597 2337 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-19&limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:35.519841 kubelet[2337]: I1213 14:14:35.519782 2337 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:14:35.520917 kubelet[2337]: I1213 14:14:35.520815 2337 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:14:35.522353 kubelet[2337]: W1213 14:14:35.522249 2337 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:14:35.524415 kubelet[2337]: I1213 14:14:35.524358 2337 server.go:1256] "Started kubelet"
Dec 13 14:14:35.530445 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 14:14:35.531844 kubelet[2337]: I1213 14:14:35.531740 2337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:14:35.539391 kubelet[2337]: I1213 14:14:35.539335 2337 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:14:35.541942 kubelet[2337]: I1213 14:14:35.541893 2337 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:14:35.543560 kubelet[2337]: I1213 14:14:35.543499 2337 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:14:35.546172 kubelet[2337]: I1213 14:14:35.546100 2337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:14:35.549535 kubelet[2337]: I1213 14:14:35.546647 2337 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:14:35.550985 kubelet[2337]: I1213 14:14:35.546751 2337 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:14:35.550985 kubelet[2337]: I1213 14:14:35.550703 2337 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:14:35.552197 kubelet[2337]: W1213 14:14:35.552088 2337 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:35.552197 kubelet[2337]: E1213 14:14:35.552196 2337 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:35.552508 kubelet[2337]: E1213 14:14:35.552448 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-19?timeout=10s\": dial tcp 172.31.20.19:6443: connect: connection refused" interval="200ms"
Dec 13 14:14:35.553454 kubelet[2337]: I1213 14:14:35.553225 2337 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:14:35.553889 kubelet[2337]: I1213 14:14:35.553789 2337 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:14:35.563075 kubelet[2337]: I1213 14:14:35.562986 2337 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:14:35.566215 kubelet[2337]: E1213 14:14:35.566069 2337 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.19:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-19.1810c21a7ae39b2b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-19,UID:ip-172-31-20-19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-19,},FirstTimestamp:2024-12-13 14:14:35.524315947 +0000 UTC m=+1.436851237,LastTimestamp:2024-12-13 14:14:35.524315947 +0000 UTC m=+1.436851237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-19,}"
Dec 13 14:14:35.574440 kubelet[2337]: E1213 14:14:35.574383 2337 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:14:35.596025 kubelet[2337]: I1213 14:14:35.595979 2337 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:14:35.596025 kubelet[2337]: I1213 14:14:35.596027 2337 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:14:35.596273 kubelet[2337]: I1213 14:14:35.596061 2337 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:14:35.599191 kubelet[2337]: I1213 14:14:35.599136 2337 policy_none.go:49] "None policy: Start"
Dec 13 14:14:35.600906 kubelet[2337]: I1213 14:14:35.600825 2337 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:14:35.601079 kubelet[2337]: I1213 14:14:35.600978 2337 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:14:35.620670 systemd[1]: Created slice kubepods.slice.
Dec 13 14:14:35.622371 kubelet[2337]: I1213 14:14:35.622250 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:14:35.626279 kubelet[2337]: I1213 14:14:35.626154 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:14:35.626279 kubelet[2337]: I1213 14:14:35.626245 2337 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:14:35.626279 kubelet[2337]: I1213 14:14:35.626360 2337 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:14:35.626753 kubelet[2337]: E1213 14:14:35.626488 2337 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:14:35.638052 kubelet[2337]: W1213 14:14:35.637968 2337 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:35.638460 kubelet[2337]: E1213 14:14:35.638423 2337 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:35.640981 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 14:14:35.650513 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 14:14:35.653341 kubelet[2337]: I1213 14:14:35.653237 2337 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-19"
Dec 13 14:14:35.655778 kubelet[2337]: E1213 14:14:35.655736 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.19:6443/api/v1/nodes\": dial tcp 172.31.20.19:6443: connect: connection refused" node="ip-172-31-20-19"
Dec 13 14:14:35.661194 kubelet[2337]: I1213 14:14:35.661147 2337 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:14:35.664035 kubelet[2337]: I1213 14:14:35.663968 2337 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:14:35.672544 kubelet[2337]: E1213 14:14:35.672505 2337 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-19\" not found"
Dec 13 14:14:35.726895 kubelet[2337]: I1213 14:14:35.726724 2337 topology_manager.go:215] "Topology Admit Handler" podUID="75c37cbc9de5e43d88d8a5d92e5ecf90" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-19"
Dec 13 14:14:35.730894 kubelet[2337]: I1213 14:14:35.730854 2337 topology_manager.go:215] "Topology Admit Handler" podUID="cb474f8632cd1fbc98ead956c043ce91" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-19"
Dec 13 14:14:35.734254 kubelet[2337]: I1213 14:14:35.734200 2337 topology_manager.go:215] "Topology Admit Handler" podUID="33a079b03bda4af9fb8e8b1ca29d49e8" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-19"
Dec 13 14:14:35.745997 systemd[1]: Created slice kubepods-burstable-pod75c37cbc9de5e43d88d8a5d92e5ecf90.slice.
Dec 13 14:14:35.753083 kubelet[2337]: E1213 14:14:35.753041 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-19?timeout=10s\": dial tcp 172.31.20.19:6443: connect: connection refused" interval="400ms"
Dec 13 14:14:35.759478 kubelet[2337]: I1213 14:14:35.759437 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75c37cbc9de5e43d88d8a5d92e5ecf90-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-19\" (UID: \"75c37cbc9de5e43d88d8a5d92e5ecf90\") " pod="kube-system/kube-apiserver-ip-172-31-20-19"
Dec 13 14:14:35.759765 kubelet[2337]: I1213 14:14:35.759719 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75c37cbc9de5e43d88d8a5d92e5ecf90-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-19\" (UID: \"75c37cbc9de5e43d88d8a5d92e5ecf90\") " pod="kube-system/kube-apiserver-ip-172-31-20-19"
Dec 13 14:14:35.761436 kubelet[2337]: I1213 14:14:35.760430 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cb474f8632cd1fbc98ead956c043ce91-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"cb474f8632cd1fbc98ead956c043ce91\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19"
Dec 13 14:14:35.761436 kubelet[2337]: I1213 14:14:35.760562 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cb474f8632cd1fbc98ead956c043ce91-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"cb474f8632cd1fbc98ead956c043ce91\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19"
Dec 13 14:14:35.761436 kubelet[2337]: I1213 14:14:35.760722 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75c37cbc9de5e43d88d8a5d92e5ecf90-ca-certs\") pod \"kube-apiserver-ip-172-31-20-19\" (UID: \"75c37cbc9de5e43d88d8a5d92e5ecf90\") " pod="kube-system/kube-apiserver-ip-172-31-20-19"
Dec 13 14:14:35.761436 kubelet[2337]: I1213 14:14:35.760866 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cb474f8632cd1fbc98ead956c043ce91-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"cb474f8632cd1fbc98ead956c043ce91\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19"
Dec 13 14:14:35.761436 kubelet[2337]: I1213 14:14:35.761003 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb474f8632cd1fbc98ead956c043ce91-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"cb474f8632cd1fbc98ead956c043ce91\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19"
Dec 13 14:14:35.761834 kubelet[2337]: I1213 14:14:35.761139 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cb474f8632cd1fbc98ead956c043ce91-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"cb474f8632cd1fbc98ead956c043ce91\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19"
Dec 13 14:14:35.761834 kubelet[2337]: I1213 14:14:35.761265 2337 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33a079b03bda4af9fb8e8b1ca29d49e8-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-19\" (UID: \"33a079b03bda4af9fb8e8b1ca29d49e8\") " pod="kube-system/kube-scheduler-ip-172-31-20-19"
Dec 13 14:14:35.767023 systemd[1]: Created slice kubepods-burstable-pod33a079b03bda4af9fb8e8b1ca29d49e8.slice.
Dec 13 14:14:35.776643 systemd[1]: Created slice kubepods-burstable-podcb474f8632cd1fbc98ead956c043ce91.slice.
Dec 13 14:14:35.858368 kubelet[2337]: I1213 14:14:35.858332 2337 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-19"
Dec 13 14:14:35.859272 kubelet[2337]: E1213 14:14:35.859230 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.19:6443/api/v1/nodes\": dial tcp 172.31.20.19:6443: connect: connection refused" node="ip-172-31-20-19"
Dec 13 14:14:36.062482 env[1738]: time="2024-12-13T14:14:36.061565033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-19,Uid:75c37cbc9de5e43d88d8a5d92e5ecf90,Namespace:kube-system,Attempt:0,}"
Dec 13 14:14:36.081512 env[1738]: time="2024-12-13T14:14:36.081383789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-19,Uid:33a079b03bda4af9fb8e8b1ca29d49e8,Namespace:kube-system,Attempt:0,}"
Dec 13 14:14:36.083148 env[1738]: time="2024-12-13T14:14:36.083091766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-19,Uid:cb474f8632cd1fbc98ead956c043ce91,Namespace:kube-system,Attempt:0,}"
Dec 13 14:14:36.154742 kubelet[2337]: E1213 14:14:36.154661 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-19?timeout=10s\": dial tcp 172.31.20.19:6443: connect: connection refused" interval="800ms"
Dec 13 14:14:36.261891 kubelet[2337]: I1213 14:14:36.261844 2337 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-19"
Dec 13 14:14:36.262453 kubelet[2337]: E1213 14:14:36.262422 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.19:6443/api/v1/nodes\": dial tcp 172.31.20.19:6443: connect: connection refused" node="ip-172-31-20-19"
Dec 13 14:14:36.441045 kubelet[2337]: W1213 14:14:36.440832 2337 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.20.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:36.441045 kubelet[2337]: E1213 14:14:36.441050 2337 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:36.551576 kubelet[2337]: W1213 14:14:36.551486 2337 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:36.551794 kubelet[2337]: E1213 14:14:36.551582 2337 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:36.595488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2844553198.mount: Deactivated successfully.
Dec 13 14:14:36.609729 env[1738]: time="2024-12-13T14:14:36.609643185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:36.617138 env[1738]: time="2024-12-13T14:14:36.617057550Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:36.619765 env[1738]: time="2024-12-13T14:14:36.619696521Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:36.622893 env[1738]: time="2024-12-13T14:14:36.622808209Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:36.626114 env[1738]: time="2024-12-13T14:14:36.626033371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:36.630139 env[1738]: time="2024-12-13T14:14:36.630080987Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:36.633222 env[1738]: time="2024-12-13T14:14:36.633132320Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:36.635556 env[1738]: time="2024-12-13T14:14:36.635463520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:36.641575 env[1738]: time="2024-12-13T14:14:36.641470477Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:36.647437 env[1738]: time="2024-12-13T14:14:36.647372890Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:36.649850 env[1738]: time="2024-12-13T14:14:36.649799619Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:36.654378 env[1738]: time="2024-12-13T14:14:36.654265898Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:14:36.710127 env[1738]: time="2024-12-13T14:14:36.702122825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:14:36.710127 env[1738]: time="2024-12-13T14:14:36.702192784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:14:36.710127 env[1738]: time="2024-12-13T14:14:36.702218172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:14:36.710127 env[1738]: time="2024-12-13T14:14:36.702511490Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b1b9c1a6bd6ff4ef7f6077a884c3c2c18bdb10f2ff948be0f8647f7a90b1571 pid=2375 runtime=io.containerd.runc.v2
Dec 13 14:14:36.740484 env[1738]: time="2024-12-13T14:14:36.740334035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:14:36.740484 env[1738]: time="2024-12-13T14:14:36.740418279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:14:36.740805 env[1738]: time="2024-12-13T14:14:36.740445600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:14:36.740944 env[1738]: time="2024-12-13T14:14:36.740763238Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a14eb220b6ba81414191261888eb6c5149f298e3758af6f0f1857cc4bd6751cb pid=2397 runtime=io.containerd.runc.v2
Dec 13 14:14:36.746187 env[1738]: time="2024-12-13T14:14:36.746039715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:14:36.746584 env[1738]: time="2024-12-13T14:14:36.746466973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:14:36.746857 env[1738]: time="2024-12-13T14:14:36.746768058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:14:36.747573 env[1738]: time="2024-12-13T14:14:36.747458792Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eca0ea69b4f58bc054972bc0929ad2b5b69d3818a64584630a1f6eaf01eff0d6 pid=2400 runtime=io.containerd.runc.v2
Dec 13 14:14:36.766127 systemd[1]: Started cri-containerd-2b1b9c1a6bd6ff4ef7f6077a884c3c2c18bdb10f2ff948be0f8647f7a90b1571.scope.
Dec 13 14:14:36.796013 systemd[1]: Started cri-containerd-eca0ea69b4f58bc054972bc0929ad2b5b69d3818a64584630a1f6eaf01eff0d6.scope.
Dec 13 14:14:36.803764 kubelet[2337]: W1213 14:14:36.803558 2337 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-19&limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:36.803764 kubelet[2337]: E1213 14:14:36.803674 2337 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-19&limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:36.820988 systemd[1]: Started cri-containerd-a14eb220b6ba81414191261888eb6c5149f298e3758af6f0f1857cc4bd6751cb.scope.
Dec 13 14:14:36.887988 env[1738]: time="2024-12-13T14:14:36.887907127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-19,Uid:75c37cbc9de5e43d88d8a5d92e5ecf90,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b1b9c1a6bd6ff4ef7f6077a884c3c2c18bdb10f2ff948be0f8647f7a90b1571\""
Dec 13 14:14:36.902174 env[1738]: time="2024-12-13T14:14:36.902108313Z" level=info msg="CreateContainer within sandbox \"2b1b9c1a6bd6ff4ef7f6077a884c3c2c18bdb10f2ff948be0f8647f7a90b1571\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 14:14:36.924135 kubelet[2337]: W1213 14:14:36.922448 2337 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:36.924720 kubelet[2337]: E1213 14:14:36.924691 2337 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.19:6443: connect: connection refused
Dec 13 14:14:36.940111 env[1738]: time="2024-12-13T14:14:36.940015666Z" level=info msg="CreateContainer within sandbox \"2b1b9c1a6bd6ff4ef7f6077a884c3c2c18bdb10f2ff948be0f8647f7a90b1571\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"753a580fbd2699d22d639a77ea9f79018d2ae299781b705b87beb415dd843697\""
Dec 13 14:14:36.942113 env[1738]: time="2024-12-13T14:14:36.942043801Z" level=info msg="StartContainer for \"753a580fbd2699d22d639a77ea9f79018d2ae299781b705b87beb415dd843697\""
Dec 13 14:14:36.950787 env[1738]: time="2024-12-13T14:14:36.950366413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-19,Uid:33a079b03bda4af9fb8e8b1ca29d49e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"eca0ea69b4f58bc054972bc0929ad2b5b69d3818a64584630a1f6eaf01eff0d6\""
Dec 13 14:14:36.958098 kubelet[2337]: E1213 14:14:36.957877 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-19?timeout=10s\": dial tcp 172.31.20.19:6443: connect: connection refused" interval="1.6s"
Dec 13 14:14:36.959841 env[1738]: time="2024-12-13T14:14:36.959773886Z" level=info msg="CreateContainer within sandbox \"eca0ea69b4f58bc054972bc0929ad2b5b69d3818a64584630a1f6eaf01eff0d6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 14:14:36.990851 env[1738]: time="2024-12-13T14:14:36.990650234Z" level=info msg="CreateContainer within sandbox \"eca0ea69b4f58bc054972bc0929ad2b5b69d3818a64584630a1f6eaf01eff0d6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"401cf00a0cf33cea590a0dfc36c7e8d8870559c063b46386f5bbc5184b1059b7\""
Dec 13 14:14:36.992582 env[1738]: time="2024-12-13T14:14:36.992531884Z" level=info msg="StartContainer for \"401cf00a0cf33cea590a0dfc36c7e8d8870559c063b46386f5bbc5184b1059b7\""
Dec 13 14:14:36.999913 env[1738]: time="2024-12-13T14:14:36.999841131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-19,Uid:cb474f8632cd1fbc98ead956c043ce91,Namespace:kube-system,Attempt:0,} returns sandbox id \"a14eb220b6ba81414191261888eb6c5149f298e3758af6f0f1857cc4bd6751cb\""
Dec 13 14:14:37.004986 env[1738]: time="2024-12-13T14:14:37.004904411Z" level=info msg="CreateContainer within sandbox \"a14eb220b6ba81414191261888eb6c5149f298e3758af6f0f1857cc4bd6751cb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 14:14:37.009981 systemd[1]: Started cri-containerd-753a580fbd2699d22d639a77ea9f79018d2ae299781b705b87beb415dd843697.scope.
Dec 13 14:14:37.044929 env[1738]: time="2024-12-13T14:14:37.044858065Z" level=info msg="CreateContainer within sandbox \"a14eb220b6ba81414191261888eb6c5149f298e3758af6f0f1857cc4bd6751cb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"77b45843ddbf6426e9a5a27756eb1b50d82e84715fb31c1177ee4bd48ceb1c15\""
Dec 13 14:14:37.046464 env[1738]: time="2024-12-13T14:14:37.046416290Z" level=info msg="StartContainer for \"77b45843ddbf6426e9a5a27756eb1b50d82e84715fb31c1177ee4bd48ceb1c15\""
Dec 13 14:14:37.072712 systemd[1]: Started cri-containerd-401cf00a0cf33cea590a0dfc36c7e8d8870559c063b46386f5bbc5184b1059b7.scope.
Dec 13 14:14:37.083172 kubelet[2337]: I1213 14:14:37.083075 2337 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-19"
Dec 13 14:14:37.083944 kubelet[2337]: E1213 14:14:37.083907 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.19:6443/api/v1/nodes\": dial tcp 172.31.20.19:6443: connect: connection refused" node="ip-172-31-20-19"
Dec 13 14:14:37.158160 systemd[1]: Started cri-containerd-77b45843ddbf6426e9a5a27756eb1b50d82e84715fb31c1177ee4bd48ceb1c15.scope.
Dec 13 14:14:37.183432 env[1738]: time="2024-12-13T14:14:37.183367800Z" level=info msg="StartContainer for \"753a580fbd2699d22d639a77ea9f79018d2ae299781b705b87beb415dd843697\" returns successfully"
Dec 13 14:14:37.243521 env[1738]: time="2024-12-13T14:14:37.243223721Z" level=info msg="StartContainer for \"401cf00a0cf33cea590a0dfc36c7e8d8870559c063b46386f5bbc5184b1059b7\" returns successfully"
Dec 13 14:14:37.302132 env[1738]: time="2024-12-13T14:14:37.302003549Z" level=info msg="StartContainer for \"77b45843ddbf6426e9a5a27756eb1b50d82e84715fb31c1177ee4bd48ceb1c15\" returns successfully"
Dec 13 14:14:38.687599 kubelet[2337]: I1213 14:14:38.687521 2337 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-19"
Dec 13 14:14:40.380335 update_engine[1729]: I1213 14:14:40.379391 1729 update_attempter.cc:509] Updating boot flags...
Dec 13 14:14:40.386393 kubelet[2337]: E1213 14:14:40.385774 2337 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-19\" not found" node="ip-172-31-20-19"
Dec 13 14:14:40.523391 kubelet[2337]: I1213 14:14:40.521521 2337 apiserver.go:52] "Watching apiserver"
Dec 13 14:14:40.588188 kubelet[2337]: I1213 14:14:40.588131 2337 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-19"
Dec 13 14:14:40.654751 kubelet[2337]: I1213 14:14:40.654611 2337 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:14:40.707435 kubelet[2337]: E1213 14:14:40.705729 2337 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-19.1810c21a7ae39b2b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-19,UID:ip-172-31-20-19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-19,},FirstTimestamp:2024-12-13 14:14:35.524315947 +0000 UTC m=+1.436851237,LastTimestamp:2024-12-13 14:14:35.524315947 +0000 UTC m=+1.436851237,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-19,}"
Dec 13 14:14:40.903786 kubelet[2337]: E1213 14:14:40.903472 2337 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-19.1810c21a7dddc9b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-19,UID:ip-172-31-20-19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-20-19,},FirstTimestamp:2024-12-13 14:14:35.574266293 +0000 UTC m=+1.486801619,LastTimestamp:2024-12-13 14:14:35.574266293 +0000 UTC m=+1.486801619,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-19,}"
Dec 13 14:14:40.986147 kubelet[2337]: E1213 14:14:40.985983 2337 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-19.1810c21a7f160824 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-19,UID:ip-172-31-20-19,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-20-19 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-20-19,},FirstTimestamp:2024-12-13 14:14:35.594729508 +0000 UTC m=+1.507264762,LastTimestamp:2024-12-13 14:14:35.594729508 +0000 UTC m=+1.507264762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-19,}"
Dec 13 14:14:43.805963 systemd[1]: Reloading.
Dec 13 14:14:43.971989 /usr/lib/systemd/system-generators/torcx-generator[2734]: time="2024-12-13T14:14:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:14:43.973093 /usr/lib/systemd/system-generators/torcx-generator[2734]: time="2024-12-13T14:14:43Z" level=info msg="torcx already run"
Dec 13 14:14:44.153016 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:14:44.153274 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:14:44.197205 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:14:44.509885 systemd[1]: Stopping kubelet.service...
Dec 13 14:14:44.529266 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:14:44.530017 systemd[1]: Stopped kubelet.service.
Dec 13 14:14:44.530658 systemd[1]: kubelet.service: Consumed 2.287s CPU time.
Dec 13 14:14:44.537628 systemd[1]: Starting kubelet.service...
Dec 13 14:14:44.963382 systemd[1]: Started kubelet.service.
Dec 13 14:14:45.134980 kubelet[2788]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
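The `CPUShares=`/`MemoryLimit=` warnings that systemd logs above for locksmithd.service name their own cgroup-v2 replacements, `CPUWeight=` and `MemoryMax=`. A drop-in along these lines would silence them; the path and values here are illustrative placeholders, not taken from the log:

```ini
# /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf
[Service]
# CPUShares= (cgroup v1) is replaced by CPUWeight= (range 1-10000, default 100)
CPUWeight=100
# MemoryLimit= (cgroup v1) is replaced by MemoryMax=
MemoryMax=64M
```

After placing a drop-in, `systemctl daemon-reload` makes systemd re-read the unit, which is exactly the `Reloading.` step visible in the log.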
Dec 13 14:14:45.135549 kubelet[2788]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:14:45.135655 kubelet[2788]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:14:45.136093 kubelet[2788]: I1213 14:14:45.135973 2788 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:14:45.146192 kubelet[2788]: I1213 14:14:45.146152 2788 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:14:45.146597 kubelet[2788]: I1213 14:14:45.146543 2788 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:14:45.147544 kubelet[2788]: I1213 14:14:45.147494 2788 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:14:45.151625 kubelet[2788]: I1213 14:14:45.151583 2788 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 14:14:45.156155 kubelet[2788]: I1213 14:14:45.156085 2788 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:14:45.168269 kubelet[2788]: I1213 14:14:45.168175 2788 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:14:45.168994 kubelet[2788]: I1213 14:14:45.168952 2788 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:14:45.169326 kubelet[2788]: I1213 14:14:45.169256 2788 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:14:45.169512 kubelet[2788]: I1213 14:14:45.169419 2788 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:14:45.169512 kubelet[2788]: I1213 14:14:45.169446 2788 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:14:45.169512 kubelet[2788]: I1213 14:14:45.169508 2788 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:14:45.169767 kubelet[2788]: I1213 14:14:45.169720 2788 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:14:45.169767 kubelet[2788]: I1213 14:14:45.169764 2788 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:14:45.169940 kubelet[2788]: I1213 14:14:45.169820 2788 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:14:45.169940 kubelet[2788]: I1213 14:14:45.169852 2788 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:14:45.171857 kubelet[2788]: I1213 14:14:45.171785 2788 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:14:45.172327 kubelet[2788]: I1213 14:14:45.172212 2788 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:14:45.173549 kubelet[2788]: I1213 14:14:45.173482 2788 server.go:1256] "Started kubelet"
Dec 13 14:14:45.197480 kubelet[2788]: I1213 14:14:45.197445 2788 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:14:45.201903 kubelet[2788]: I1213 14:14:45.201849 2788 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:14:45.211795 kubelet[2788]: I1213 14:14:45.208189 2788 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:14:45.213190 kubelet[2788]: I1213 14:14:45.208275 2788 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:14:45.245133 kubelet[2788]: I1213 14:14:45.244985 2788 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:14:45.247725 kubelet[2788]: I1213 14:14:45.213531 2788 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:14:45.263318 kubelet[2788]: I1213 14:14:45.213585 2788 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:14:45.263318 kubelet[2788]: I1213 14:14:45.261507 2788 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:14:45.263318 kubelet[2788]: I1213 14:14:45.262079 2788 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:14:45.263318 kubelet[2788]: I1213 14:14:45.262337 2788 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:14:45.288340 kubelet[2788]: I1213 14:14:45.286482 2788 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:14:45.291177 kubelet[2788]: I1213 14:14:45.291130 2788 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:14:45.291177 kubelet[2788]: I1213 14:14:45.291176 2788 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:14:45.291457 kubelet[2788]: I1213 14:14:45.291209 2788 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:14:45.291457 kubelet[2788]: E1213 14:14:45.291344 2788 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:14:45.299385 kubelet[2788]: I1213 14:14:45.298262 2788 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:14:45.335789 kubelet[2788]: I1213 14:14:45.335748 2788 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-19"
Dec 13 14:14:45.356714 kubelet[2788]: I1213 14:14:45.356668 2788 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-20-19"
Dec 13 14:14:45.357192 kubelet[2788]: I1213 14:14:45.357022 2788 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-19"
Dec 13 14:14:45.393729 kubelet[2788]: E1213 14:14:45.393695 2788 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 14:14:45.476460 kubelet[2788]: I1213 14:14:45.473722 2788 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:14:45.476460 kubelet[2788]: I1213 14:14:45.473768 2788 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:14:45.476460 kubelet[2788]: I1213 14:14:45.473806 2788 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:14:45.476460 kubelet[2788]: I1213 14:14:45.474119 2788 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 14:14:45.476460 kubelet[2788]: I1213 14:14:45.474167 2788 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 14:14:45.476460 kubelet[2788]: I1213 14:14:45.474186 2788 policy_none.go:49] "None policy: Start"
Dec 13 14:14:45.478306 kubelet[2788]: I1213 14:14:45.477445 2788 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:14:45.478306 kubelet[2788]: I1213 14:14:45.477533 2788 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:14:45.478306 kubelet[2788]: I1213 14:14:45.477979 2788 state_mem.go:75] "Updated machine memory state"
Dec 13 14:14:45.490614 kubelet[2788]: I1213 14:14:45.490563 2788 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:14:45.494429 kubelet[2788]: I1213 14:14:45.494374 2788 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:14:45.595220 kubelet[2788]: I1213 14:14:45.595090 2788 topology_manager.go:215] "Topology Admit Handler" podUID="75c37cbc9de5e43d88d8a5d92e5ecf90" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-19"
Dec 13 14:14:45.595705 kubelet[2788]: I1213 14:14:45.595515 2788 topology_manager.go:215] "Topology Admit Handler" podUID="cb474f8632cd1fbc98ead956c043ce91" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-19"
Dec 13 14:14:45.595705 kubelet[2788]: I1213 14:14:45.595687 2788 topology_manager.go:215] "Topology Admit Handler" podUID="33a079b03bda4af9fb8e8b1ca29d49e8" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-19"
Dec 13 14:14:45.613707 kubelet[2788]: E1213 14:14:45.613645 2788 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-20-19\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-20-19"
Dec 13 14:14:45.665804 kubelet[2788]: I1213 14:14:45.665691 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75c37cbc9de5e43d88d8a5d92e5ecf90-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-19\" (UID: \"75c37cbc9de5e43d88d8a5d92e5ecf90\") " pod="kube-system/kube-apiserver-ip-172-31-20-19"
Dec 13 14:14:45.666098 kubelet[2788]: I1213 14:14:45.665830 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75c37cbc9de5e43d88d8a5d92e5ecf90-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-19\" (UID: \"75c37cbc9de5e43d88d8a5d92e5ecf90\") " pod="kube-system/kube-apiserver-ip-172-31-20-19"
Dec 13 14:14:45.666098 kubelet[2788]: I1213 14:14:45.665883 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb474f8632cd1fbc98ead956c043ce91-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"cb474f8632cd1fbc98ead956c043ce91\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19"
Dec 13 14:14:45.666098 kubelet[2788]: I1213 14:14:45.665936 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33a079b03bda4af9fb8e8b1ca29d49e8-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-19\" (UID: \"33a079b03bda4af9fb8e8b1ca29d49e8\") " pod="kube-system/kube-scheduler-ip-172-31-20-19"
Dec 13 14:14:45.666098 kubelet[2788]: I1213 14:14:45.665985 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75c37cbc9de5e43d88d8a5d92e5ecf90-ca-certs\") pod \"kube-apiserver-ip-172-31-20-19\" (UID: \"75c37cbc9de5e43d88d8a5d92e5ecf90\") " pod="kube-system/kube-apiserver-ip-172-31-20-19"
Dec 13 14:14:45.666098 kubelet[2788]: I1213 14:14:45.666039 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cb474f8632cd1fbc98ead956c043ce91-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"cb474f8632cd1fbc98ead956c043ce91\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19"
Dec 13 14:14:45.666482 kubelet[2788]: I1213 14:14:45.666086 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cb474f8632cd1fbc98ead956c043ce91-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"cb474f8632cd1fbc98ead956c043ce91\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19"
Dec 13 14:14:45.666482 kubelet[2788]: I1213 14:14:45.666140 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cb474f8632cd1fbc98ead956c043ce91-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"cb474f8632cd1fbc98ead956c043ce91\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19"
Dec 13 14:14:45.666482 kubelet[2788]: I1213 14:14:45.666189 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cb474f8632cd1fbc98ead956c043ce91-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-19\" (UID: \"cb474f8632cd1fbc98ead956c043ce91\") " pod="kube-system/kube-controller-manager-ip-172-31-20-19"
Dec 13 14:14:46.170925 kubelet[2788]: I1213 14:14:46.170876 2788 apiserver.go:52] "Watching apiserver"
Dec 13 14:14:46.261843 kubelet[2788]: I1213 14:14:46.261798 2788 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:14:46.491554 kubelet[2788]: I1213 14:14:46.491420 2788 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-19" podStartSLOduration=1.491340216 podStartE2EDuration="1.491340216s" podCreationTimestamp="2024-12-13 14:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:46.446585971 +0000 UTC m=+1.462031955" watchObservedRunningTime="2024-12-13 14:14:46.491340216 +0000 UTC m=+1.506786212"
Dec 13 14:14:46.544267 kubelet[2788]: I1213 14:14:46.544211 2788 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-19" podStartSLOduration=1.544148077 podStartE2EDuration="1.544148077s" podCreationTimestamp="2024-12-13 14:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:46.542514833 +0000 UTC m=+1.557960817" watchObservedRunningTime="2024-12-13 14:14:46.544148077 +0000 UTC m=+1.559594049"
Dec 13 14:14:46.544795 kubelet[2788]: I1213 14:14:46.544749 2788 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-19" podStartSLOduration=2.544686747 podStartE2EDuration="2.544686747s" podCreationTimestamp="2024-12-13 14:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:46.495205159 +0000 UTC m=+1.510651119" watchObservedRunningTime="2024-12-13 14:14:46.544686747 +0000 UTC m=+1.560132719"
Dec 13 14:14:47.230025 sudo[1986]: pam_unix(sudo:session): session closed for user root
Dec 13 14:14:47.256562 sshd[1983]: pam_unix(sshd:session): session closed for user core
Dec 13 14:14:47.264500 systemd[1]: sshd@4-172.31.20.19:22-139.178.89.65:38992.service: Deactivated successfully.
Dec 13 14:14:47.266493 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:14:47.266806 systemd[1]: session-5.scope: Consumed 9.844s CPU time.
Dec 13 14:14:47.267980 systemd-logind[1728]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:14:47.269865 systemd-logind[1728]: Removed session 5.
Dec 13 14:14:57.804870 kubelet[2788]: I1213 14:14:57.804695 2788 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 14:14:57.806371 env[1738]: time="2024-12-13T14:14:57.806242616Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:14:57.807265 kubelet[2788]: I1213 14:14:57.807210 2788 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 14:14:58.812685 kubelet[2788]: I1213 14:14:58.812596 2788 topology_manager.go:215] "Topology Admit Handler" podUID="30d74cab-2f1e-4410-842c-af2b967522c7" podNamespace="kube-flannel" podName="kube-flannel-ds-hddjb"
Dec 13 14:14:58.824615 systemd[1]: Created slice kubepods-burstable-pod30d74cab_2f1e_4410_842c_af2b967522c7.slice.
Dec 13 14:14:58.845301 kubelet[2788]: I1213 14:14:58.845226 2788 topology_manager.go:215] "Topology Admit Handler" podUID="6bb66082-1eae-4827-a350-56f906f6abb0" podNamespace="kube-system" podName="kube-proxy-4rs7g" Dec 13 14:14:58.846199 kubelet[2788]: I1213 14:14:58.846146 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/30d74cab-2f1e-4410-842c-af2b967522c7-run\") pod \"kube-flannel-ds-hddjb\" (UID: \"30d74cab-2f1e-4410-842c-af2b967522c7\") " pod="kube-flannel/kube-flannel-ds-hddjb" Dec 13 14:14:58.846377 kubelet[2788]: I1213 14:14:58.846239 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/30d74cab-2f1e-4410-842c-af2b967522c7-cni-plugin\") pod \"kube-flannel-ds-hddjb\" (UID: \"30d74cab-2f1e-4410-842c-af2b967522c7\") " pod="kube-flannel/kube-flannel-ds-hddjb" Dec 13 14:14:58.846377 kubelet[2788]: I1213 14:14:58.846335 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/30d74cab-2f1e-4410-842c-af2b967522c7-cni\") pod \"kube-flannel-ds-hddjb\" (UID: \"30d74cab-2f1e-4410-842c-af2b967522c7\") " pod="kube-flannel/kube-flannel-ds-hddjb" Dec 13 14:14:58.846505 kubelet[2788]: I1213 14:14:58.846411 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqwtf\" (UniqueName: \"kubernetes.io/projected/30d74cab-2f1e-4410-842c-af2b967522c7-kube-api-access-zqwtf\") pod \"kube-flannel-ds-hddjb\" (UID: \"30d74cab-2f1e-4410-842c-af2b967522c7\") " pod="kube-flannel/kube-flannel-ds-hddjb" Dec 13 14:14:58.846757 kubelet[2788]: I1213 14:14:58.846600 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: 
\"kubernetes.io/configmap/30d74cab-2f1e-4410-842c-af2b967522c7-flannel-cfg\") pod \"kube-flannel-ds-hddjb\" (UID: \"30d74cab-2f1e-4410-842c-af2b967522c7\") " pod="kube-flannel/kube-flannel-ds-hddjb" Dec 13 14:14:58.846757 kubelet[2788]: I1213 14:14:58.846683 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30d74cab-2f1e-4410-842c-af2b967522c7-xtables-lock\") pod \"kube-flannel-ds-hddjb\" (UID: \"30d74cab-2f1e-4410-842c-af2b967522c7\") " pod="kube-flannel/kube-flannel-ds-hddjb" Dec 13 14:14:58.861518 kubelet[2788]: W1213 14:14:58.861448 2788 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-20-19" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-19' and this object Dec 13 14:14:58.861718 kubelet[2788]: E1213 14:14:58.861525 2788 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-20-19" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-19' and this object Dec 13 14:14:58.862356 kubelet[2788]: W1213 14:14:58.862322 2788 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-20-19" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-19' and this object Dec 13 14:14:58.862594 kubelet[2788]: E1213 14:14:58.862570 2788 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User 
"system:node:ip-172-31-20-19" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-19' and this object Dec 13 14:14:58.863585 systemd[1]: Created slice kubepods-besteffort-pod6bb66082_1eae_4827_a350_56f906f6abb0.slice. Dec 13 14:14:58.947405 kubelet[2788]: I1213 14:14:58.947260 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6bb66082-1eae-4827-a350-56f906f6abb0-kube-proxy\") pod \"kube-proxy-4rs7g\" (UID: \"6bb66082-1eae-4827-a350-56f906f6abb0\") " pod="kube-system/kube-proxy-4rs7g" Dec 13 14:14:58.947569 kubelet[2788]: I1213 14:14:58.947444 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bb66082-1eae-4827-a350-56f906f6abb0-xtables-lock\") pod \"kube-proxy-4rs7g\" (UID: \"6bb66082-1eae-4827-a350-56f906f6abb0\") " pod="kube-system/kube-proxy-4rs7g" Dec 13 14:14:58.947665 kubelet[2788]: I1213 14:14:58.947553 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bb66082-1eae-4827-a350-56f906f6abb0-lib-modules\") pod \"kube-proxy-4rs7g\" (UID: \"6bb66082-1eae-4827-a350-56f906f6abb0\") " pod="kube-system/kube-proxy-4rs7g" Dec 13 14:14:58.948718 kubelet[2788]: I1213 14:14:58.948656 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwsnp\" (UniqueName: \"kubernetes.io/projected/6bb66082-1eae-4827-a350-56f906f6abb0-kube-api-access-wwsnp\") pod \"kube-proxy-4rs7g\" (UID: \"6bb66082-1eae-4827-a350-56f906f6abb0\") " pod="kube-system/kube-proxy-4rs7g" Dec 13 14:14:59.131479 env[1738]: time="2024-12-13T14:14:59.131359993Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-flannel-ds-hddjb,Uid:30d74cab-2f1e-4410-842c-af2b967522c7,Namespace:kube-flannel,Attempt:0,}" Dec 13 14:14:59.175856 env[1738]: time="2024-12-13T14:14:59.175513560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:59.175856 env[1738]: time="2024-12-13T14:14:59.175590465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:59.175856 env[1738]: time="2024-12-13T14:14:59.175616808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:59.177432 env[1738]: time="2024-12-13T14:14:59.176281726Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9308fe6b031a7698dcd19ba5c209f50ce6f087eb5d2b35fd76ce4a1e6f6656b7 pid=2851 runtime=io.containerd.runc.v2 Dec 13 14:14:59.220866 systemd[1]: Started cri-containerd-9308fe6b031a7698dcd19ba5c209f50ce6f087eb5d2b35fd76ce4a1e6f6656b7.scope. Dec 13 14:14:59.302534 env[1738]: time="2024-12-13T14:14:59.302476615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-hddjb,Uid:30d74cab-2f1e-4410-842c-af2b967522c7,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"9308fe6b031a7698dcd19ba5c209f50ce6f087eb5d2b35fd76ce4a1e6f6656b7\"" Dec 13 14:14:59.311400 env[1738]: time="2024-12-13T14:14:59.309251734Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 14:15:00.075274 env[1738]: time="2024-12-13T14:15:00.075200846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4rs7g,Uid:6bb66082-1eae-4827-a350-56f906f6abb0,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:00.105864 env[1738]: time="2024-12-13T14:15:00.105402827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:00.107566 env[1738]: time="2024-12-13T14:15:00.106937317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:00.107566 env[1738]: time="2024-12-13T14:15:00.106998524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:00.109533 env[1738]: time="2024-12-13T14:15:00.109346783Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/15ac1c05bec95ee05056773e2f68e6da7c526877ec535a7bdcccff1643521d0f pid=2891 runtime=io.containerd.runc.v2 Dec 13 14:15:00.141991 systemd[1]: run-containerd-runc-k8s.io-15ac1c05bec95ee05056773e2f68e6da7c526877ec535a7bdcccff1643521d0f-runc.TZSoNi.mount: Deactivated successfully. Dec 13 14:15:00.154143 systemd[1]: Started cri-containerd-15ac1c05bec95ee05056773e2f68e6da7c526877ec535a7bdcccff1643521d0f.scope. 
Dec 13 14:15:00.230611 env[1738]: time="2024-12-13T14:15:00.230527002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4rs7g,Uid:6bb66082-1eae-4827-a350-56f906f6abb0,Namespace:kube-system,Attempt:0,} returns sandbox id \"15ac1c05bec95ee05056773e2f68e6da7c526877ec535a7bdcccff1643521d0f\"" Dec 13 14:15:00.244083 env[1738]: time="2024-12-13T14:15:00.243976855Z" level=info msg="CreateContainer within sandbox \"15ac1c05bec95ee05056773e2f68e6da7c526877ec535a7bdcccff1643521d0f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:15:00.275000 env[1738]: time="2024-12-13T14:15:00.274930701Z" level=info msg="CreateContainer within sandbox \"15ac1c05bec95ee05056773e2f68e6da7c526877ec535a7bdcccff1643521d0f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"996445e13abe1daaf0c2c424d5d38f5d21c729c6db295222ea796ca1ecffa3ab\"" Dec 13 14:15:00.277638 env[1738]: time="2024-12-13T14:15:00.276032768Z" level=info msg="StartContainer for \"996445e13abe1daaf0c2c424d5d38f5d21c729c6db295222ea796ca1ecffa3ab\"" Dec 13 14:15:00.318619 systemd[1]: Started cri-containerd-996445e13abe1daaf0c2c424d5d38f5d21c729c6db295222ea796ca1ecffa3ab.scope. Dec 13 14:15:00.401686 env[1738]: time="2024-12-13T14:15:00.401600168Z" level=info msg="StartContainer for \"996445e13abe1daaf0c2c424d5d38f5d21c729c6db295222ea796ca1ecffa3ab\" returns successfully" Dec 13 14:15:01.285859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2563764316.mount: Deactivated successfully. 
Dec 13 14:15:01.389968 env[1738]: time="2024-12-13T14:15:01.389905858Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:01.393646 env[1738]: time="2024-12-13T14:15:01.393549910Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:01.397614 env[1738]: time="2024-12-13T14:15:01.397552814Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:01.402229 env[1738]: time="2024-12-13T14:15:01.402150471Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:01.403698 env[1738]: time="2024-12-13T14:15:01.403641583Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:b45062ceea496fc421523388cb91166abc7715a15c2e2cbab4e6f8c9d5dc0ab8\"" Dec 13 14:15:01.410049 env[1738]: time="2024-12-13T14:15:01.409990590Z" level=info msg="CreateContainer within sandbox \"9308fe6b031a7698dcd19ba5c209f50ce6f087eb5d2b35fd76ce4a1e6f6656b7\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 14:15:01.442926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3867164999.mount: Deactivated successfully. 
Dec 13 14:15:01.462271 env[1738]: time="2024-12-13T14:15:01.462196609Z" level=info msg="CreateContainer within sandbox \"9308fe6b031a7698dcd19ba5c209f50ce6f087eb5d2b35fd76ce4a1e6f6656b7\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"eabad25c219ea1769fe8f36ce2a5bd0ad7928dce2590a80641d953ed1870ee99\"" Dec 13 14:15:01.467190 env[1738]: time="2024-12-13T14:15:01.467064682Z" level=info msg="StartContainer for \"eabad25c219ea1769fe8f36ce2a5bd0ad7928dce2590a80641d953ed1870ee99\"" Dec 13 14:15:01.505617 systemd[1]: Started cri-containerd-eabad25c219ea1769fe8f36ce2a5bd0ad7928dce2590a80641d953ed1870ee99.scope. Dec 13 14:15:01.573614 systemd[1]: cri-containerd-eabad25c219ea1769fe8f36ce2a5bd0ad7928dce2590a80641d953ed1870ee99.scope: Deactivated successfully. Dec 13 14:15:01.574905 env[1738]: time="2024-12-13T14:15:01.574506686Z" level=info msg="StartContainer for \"eabad25c219ea1769fe8f36ce2a5bd0ad7928dce2590a80641d953ed1870ee99\" returns successfully" Dec 13 14:15:01.669025 env[1738]: time="2024-12-13T14:15:01.668934861Z" level=info msg="shim disconnected" id=eabad25c219ea1769fe8f36ce2a5bd0ad7928dce2590a80641d953ed1870ee99 Dec 13 14:15:01.670028 env[1738]: time="2024-12-13T14:15:01.669979485Z" level=warning msg="cleaning up after shim disconnected" id=eabad25c219ea1769fe8f36ce2a5bd0ad7928dce2590a80641d953ed1870ee99 namespace=k8s.io Dec 13 14:15:01.670184 env[1738]: time="2024-12-13T14:15:01.670155281Z" level=info msg="cleaning up dead shim" Dec 13 14:15:01.702304 env[1738]: time="2024-12-13T14:15:01.702222540Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3127 runtime=io.containerd.runc.v2\n" Dec 13 14:15:02.091786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3642198157.mount: Deactivated successfully. 
Dec 13 14:15:02.452045 env[1738]: time="2024-12-13T14:15:02.451535671Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 14:15:02.469619 kubelet[2788]: I1213 14:15:02.469573 2788 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4rs7g" podStartSLOduration=4.469513751 podStartE2EDuration="4.469513751s" podCreationTimestamp="2024-12-13 14:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:15:00.457001612 +0000 UTC m=+15.472447584" watchObservedRunningTime="2024-12-13 14:15:02.469513751 +0000 UTC m=+17.484959735" Dec 13 14:15:04.730137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1390833593.mount: Deactivated successfully. Dec 13 14:15:06.430407 env[1738]: time="2024-12-13T14:15:06.430281996Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:06.434927 env[1738]: time="2024-12-13T14:15:06.434854238Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:06.440403 env[1738]: time="2024-12-13T14:15:06.440346828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:06.445137 env[1738]: time="2024-12-13T14:15:06.445072721Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:06.450393 env[1738]: time="2024-12-13T14:15:06.448524362Z" level=info msg="PullImage 
\"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:b3d1319ea6da12d4a1dd21a923f6a71f942a7b6e2c4763b8a3cca0725fb8aadf\"" Dec 13 14:15:06.458959 env[1738]: time="2024-12-13T14:15:06.458898393Z" level=info msg="CreateContainer within sandbox \"9308fe6b031a7698dcd19ba5c209f50ce6f087eb5d2b35fd76ce4a1e6f6656b7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 14:15:06.487939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2774503656.mount: Deactivated successfully. Dec 13 14:15:06.497079 env[1738]: time="2024-12-13T14:15:06.496930357Z" level=info msg="CreateContainer within sandbox \"9308fe6b031a7698dcd19ba5c209f50ce6f087eb5d2b35fd76ce4a1e6f6656b7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fda0f5c13073db81b14eb6729fef39d984ce4da79ac1d2eeb4f7d244818836ec\"" Dec 13 14:15:06.500478 env[1738]: time="2024-12-13T14:15:06.498919208Z" level=info msg="StartContainer for \"fda0f5c13073db81b14eb6729fef39d984ce4da79ac1d2eeb4f7d244818836ec\"" Dec 13 14:15:06.555593 systemd[1]: Started cri-containerd-fda0f5c13073db81b14eb6729fef39d984ce4da79ac1d2eeb4f7d244818836ec.scope. Dec 13 14:15:06.625680 systemd[1]: cri-containerd-fda0f5c13073db81b14eb6729fef39d984ce4da79ac1d2eeb4f7d244818836ec.scope: Deactivated successfully. 
Dec 13 14:15:06.628853 env[1738]: time="2024-12-13T14:15:06.628761540Z" level=info msg="StartContainer for \"fda0f5c13073db81b14eb6729fef39d984ce4da79ac1d2eeb4f7d244818836ec\" returns successfully" Dec 13 14:15:06.733203 kubelet[2788]: I1213 14:15:06.718244 2788 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:15:06.768846 kubelet[2788]: I1213 14:15:06.768238 2788 topology_manager.go:215] "Topology Admit Handler" podUID="5745b0c7-30d8-4b16-8f5a-34d14b1e81a7" podNamespace="kube-system" podName="coredns-76f75df574-tcp9p" Dec 13 14:15:06.777150 kubelet[2788]: W1213 14:15:06.777111 2788 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-20-19" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-19' and this object Dec 13 14:15:06.777413 kubelet[2788]: E1213 14:15:06.777389 2788 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-20-19" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-19' and this object Dec 13 14:15:06.783573 kubelet[2788]: I1213 14:15:06.783512 2788 topology_manager.go:215] "Topology Admit Handler" podUID="dfaa4715-dc26-4c4f-869a-7cf3a1365d79" podNamespace="kube-system" podName="coredns-76f75df574-2jdcw" Dec 13 14:15:06.787053 systemd[1]: Created slice kubepods-burstable-pod5745b0c7_30d8_4b16_8f5a_34d14b1e81a7.slice. Dec 13 14:15:06.812614 systemd[1]: Created slice kubepods-burstable-poddfaa4715_dc26_4c4f_869a_7cf3a1365d79.slice. 
Dec 13 14:15:06.904516 kubelet[2788]: I1213 14:15:06.904470 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh9jr\" (UniqueName: \"kubernetes.io/projected/5745b0c7-30d8-4b16-8f5a-34d14b1e81a7-kube-api-access-nh9jr\") pod \"coredns-76f75df574-tcp9p\" (UID: \"5745b0c7-30d8-4b16-8f5a-34d14b1e81a7\") " pod="kube-system/coredns-76f75df574-tcp9p" Dec 13 14:15:06.904865 kubelet[2788]: I1213 14:15:06.904842 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6wmk\" (UniqueName: \"kubernetes.io/projected/dfaa4715-dc26-4c4f-869a-7cf3a1365d79-kube-api-access-l6wmk\") pod \"coredns-76f75df574-2jdcw\" (UID: \"dfaa4715-dc26-4c4f-869a-7cf3a1365d79\") " pod="kube-system/coredns-76f75df574-2jdcw" Dec 13 14:15:06.905083 kubelet[2788]: I1213 14:15:06.905036 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5745b0c7-30d8-4b16-8f5a-34d14b1e81a7-config-volume\") pod \"coredns-76f75df574-tcp9p\" (UID: \"5745b0c7-30d8-4b16-8f5a-34d14b1e81a7\") " pod="kube-system/coredns-76f75df574-tcp9p" Dec 13 14:15:06.905726 kubelet[2788]: I1213 14:15:06.905690 2788 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfaa4715-dc26-4c4f-869a-7cf3a1365d79-config-volume\") pod \"coredns-76f75df574-2jdcw\" (UID: \"dfaa4715-dc26-4c4f-869a-7cf3a1365d79\") " pod="kube-system/coredns-76f75df574-2jdcw" Dec 13 14:15:06.964513 env[1738]: time="2024-12-13T14:15:06.964423940Z" level=info msg="shim disconnected" id=fda0f5c13073db81b14eb6729fef39d984ce4da79ac1d2eeb4f7d244818836ec Dec 13 14:15:06.964513 env[1738]: time="2024-12-13T14:15:06.964499739Z" level=warning msg="cleaning up after shim disconnected" id=fda0f5c13073db81b14eb6729fef39d984ce4da79ac1d2eeb4f7d244818836ec 
namespace=k8s.io Dec 13 14:15:06.964835 env[1738]: time="2024-12-13T14:15:06.964523250Z" level=info msg="cleaning up dead shim" Dec 13 14:15:06.979927 env[1738]: time="2024-12-13T14:15:06.979850356Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3186 runtime=io.containerd.runc.v2\n" Dec 13 14:15:07.479332 env[1738]: time="2024-12-13T14:15:07.471854638Z" level=info msg="CreateContainer within sandbox \"9308fe6b031a7698dcd19ba5c209f50ce6f087eb5d2b35fd76ce4a1e6f6656b7\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 14:15:07.479629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fda0f5c13073db81b14eb6729fef39d984ce4da79ac1d2eeb4f7d244818836ec-rootfs.mount: Deactivated successfully. Dec 13 14:15:07.514872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1436621609.mount: Deactivated successfully. Dec 13 14:15:07.532647 env[1738]: time="2024-12-13T14:15:07.532526396Z" level=info msg="CreateContainer within sandbox \"9308fe6b031a7698dcd19ba5c209f50ce6f087eb5d2b35fd76ce4a1e6f6656b7\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"eb8549923ec5d7842063f34bda2d166ff95f292693478731e487992282609e8a\"" Dec 13 14:15:07.535522 env[1738]: time="2024-12-13T14:15:07.535384175Z" level=info msg="StartContainer for \"eb8549923ec5d7842063f34bda2d166ff95f292693478731e487992282609e8a\"" Dec 13 14:15:07.540957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1002785763.mount: Deactivated successfully. Dec 13 14:15:07.579344 systemd[1]: Started cri-containerd-eb8549923ec5d7842063f34bda2d166ff95f292693478731e487992282609e8a.scope. 
Dec 13 14:15:07.660964 env[1738]: time="2024-12-13T14:15:07.660881932Z" level=info msg="StartContainer for \"eb8549923ec5d7842063f34bda2d166ff95f292693478731e487992282609e8a\" returns successfully" Dec 13 14:15:07.702410 env[1738]: time="2024-12-13T14:15:07.702335252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tcp9p,Uid:5745b0c7-30d8-4b16-8f5a-34d14b1e81a7,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:07.721514 env[1738]: time="2024-12-13T14:15:07.721402859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2jdcw,Uid:dfaa4715-dc26-4c4f-869a-7cf3a1365d79,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:07.781954 env[1738]: time="2024-12-13T14:15:07.780117261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tcp9p,Uid:5745b0c7-30d8-4b16-8f5a-34d14b1e81a7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68715fb698937b6f9eda8fcd3ac11bf64a605a68989db99a4ffc6c9ddc7be43c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:15:07.782421 kubelet[2788]: E1213 14:15:07.780928 2788 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68715fb698937b6f9eda8fcd3ac11bf64a605a68989db99a4ffc6c9ddc7be43c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:15:07.782421 kubelet[2788]: E1213 14:15:07.781148 2788 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68715fb698937b6f9eda8fcd3ac11bf64a605a68989db99a4ffc6c9ddc7be43c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-tcp9p" Dec 13 
14:15:07.782421 kubelet[2788]: E1213 14:15:07.781208 2788 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68715fb698937b6f9eda8fcd3ac11bf64a605a68989db99a4ffc6c9ddc7be43c\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-tcp9p" Dec 13 14:15:07.782421 kubelet[2788]: E1213 14:15:07.781347 2788 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tcp9p_kube-system(5745b0c7-30d8-4b16-8f5a-34d14b1e81a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-tcp9p_kube-system(5745b0c7-30d8-4b16-8f5a-34d14b1e81a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68715fb698937b6f9eda8fcd3ac11bf64a605a68989db99a4ffc6c9ddc7be43c\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-tcp9p" podUID="5745b0c7-30d8-4b16-8f5a-34d14b1e81a7" Dec 13 14:15:07.797358 env[1738]: time="2024-12-13T14:15:07.797192541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2jdcw,Uid:dfaa4715-dc26-4c4f-869a-7cf3a1365d79,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eddf5405b057957518333400552609e0d78f18d1d8a3bd94e7e59be048283f6b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:15:07.798527 kubelet[2788]: E1213 14:15:07.797755 2788 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eddf5405b057957518333400552609e0d78f18d1d8a3bd94e7e59be048283f6b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv 
failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:15:07.798527 kubelet[2788]: E1213 14:15:07.797885 2788 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eddf5405b057957518333400552609e0d78f18d1d8a3bd94e7e59be048283f6b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-2jdcw" Dec 13 14:15:07.798527 kubelet[2788]: E1213 14:15:07.797953 2788 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eddf5405b057957518333400552609e0d78f18d1d8a3bd94e7e59be048283f6b\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-2jdcw" Dec 13 14:15:07.801361 kubelet[2788]: E1213 14:15:07.801227 2788 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-2jdcw_kube-system(dfaa4715-dc26-4c4f-869a-7cf3a1365d79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-2jdcw_kube-system(dfaa4715-dc26-4c4f-869a-7cf3a1365d79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eddf5405b057957518333400552609e0d78f18d1d8a3bd94e7e59be048283f6b\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-2jdcw" podUID="dfaa4715-dc26-4c4f-869a-7cf3a1365d79" Dec 13 14:15:08.787739 (udev-worker)[3287]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 14:15:08.800790 systemd-networkd[1454]: flannel.1: Link UP Dec 13 14:15:08.800806 systemd-networkd[1454]: flannel.1: Gained carrier Dec 13 14:15:10.855479 systemd-networkd[1454]: flannel.1: Gained IPv6LL Dec 13 14:15:20.293742 env[1738]: time="2024-12-13T14:15:20.293359229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tcp9p,Uid:5745b0c7-30d8-4b16-8f5a-34d14b1e81a7,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:20.294444 env[1738]: time="2024-12-13T14:15:20.294002695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2jdcw,Uid:dfaa4715-dc26-4c4f-869a-7cf3a1365d79,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:20.357593 systemd-networkd[1454]: cni0: Link UP Dec 13 14:15:20.357617 systemd-networkd[1454]: cni0: Gained carrier Dec 13 14:15:20.366990 systemd-networkd[1454]: cni0: Lost carrier Dec 13 14:15:20.368266 (udev-worker)[3436]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:15:20.393464 systemd-networkd[1454]: veth1c692744: Link UP Dec 13 14:15:20.397443 systemd-networkd[1454]: veth842bfafa: Link UP Dec 13 14:15:20.406349 kernel: cni0: port 1(veth842bfafa) entered blocking state Dec 13 14:15:20.406517 kernel: cni0: port 1(veth842bfafa) entered disabled state Dec 13 14:15:20.408823 kernel: device veth842bfafa entered promiscuous mode Dec 13 14:15:20.413363 kernel: cni0: port 1(veth842bfafa) entered blocking state Dec 13 14:15:20.413689 kernel: cni0: port 1(veth842bfafa) entered forwarding state Dec 13 14:15:20.414136 (udev-worker)[3447]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 14:15:20.418502 kernel: cni0: port 2(veth1c692744) entered blocking state
Dec 13 14:15:20.418813 kernel: cni0: port 2(veth1c692744) entered disabled state
Dec 13 14:15:20.421083 kernel: device veth1c692744 entered promiscuous mode
Dec 13 14:15:20.421214 kernel: cni0: port 2(veth1c692744) entered blocking state
Dec 13 14:15:20.423198 kernel: cni0: port 2(veth1c692744) entered forwarding state
Dec 13 14:15:20.430731 kernel: cni0: port 2(veth1c692744) entered disabled state
Dec 13 14:15:20.430980 kernel: cni0: port 1(veth842bfafa) entered disabled state
Dec 13 14:15:20.433933 (udev-worker)[3450]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:15:20.451089 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth842bfafa: link becomes ready
Dec 13 14:15:20.451333 kernel: cni0: port 1(veth842bfafa) entered blocking state
Dec 13 14:15:20.451468 kernel: cni0: port 1(veth842bfafa) entered forwarding state
Dec 13 14:15:20.450956 systemd-networkd[1454]: veth842bfafa: Gained carrier
Dec 13 14:15:20.453027 systemd-networkd[1454]: cni0: Gained carrier
Dec 13 14:15:20.466054 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth1c692744: link becomes ready
Dec 13 14:15:20.466207 kernel: cni0: port 2(veth1c692744) entered blocking state
Dec 13 14:15:20.466258 kernel: cni0: port 2(veth1c692744) entered forwarding state
Dec 13 14:15:20.466326 env[1738]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000928e8), "name":"cbr0", "type":"bridge"}
Dec 13 14:15:20.466326 env[1738]: delegateAdd: netconf sent to delegate plugin:
Dec 13 14:15:20.467803 systemd-networkd[1454]: veth1c692744: Gained carrier
Dec 13 14:15:20.476248 env[1738]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}
Dec 13 14:15:20.476248 env[1738]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0x40000948e8), "name":"cbr0", "type":"bridge"}
Dec 13 14:15:20.476248 env[1738]: delegateAdd: netconf sent to delegate plugin:
Dec 13 14:15:20.521619 env[1738]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2024-12-13T14:15:20.520951263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:15:20.521619 env[1738]: time="2024-12-13T14:15:20.521098142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:15:20.521619 env[1738]: time="2024-12-13T14:15:20.521132177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:15:20.521947 env[1738]: time="2024-12-13T14:15:20.521707261Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4fef992523fb9188adad823c54152b7acee4243a952e3db117f11933ad99068b pid=3485 runtime=io.containerd.runc.v2
Dec 13 14:15:20.525131 env[1738]: time="2024-12-13T14:15:20.524872672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:15:20.526678 env[1738]: time="2024-12-13T14:15:20.526338716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:15:20.526678 env[1738]: time="2024-12-13T14:15:20.526467450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:15:20.527733 env[1738]: time="2024-12-13T14:15:20.527587052Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e67bd0f1f5c588d3c6bf9a15a94ab721891a9045a9dbaaff716515b75fd465c pid=3488 runtime=io.containerd.runc.v2
Dec 13 14:15:20.571061 systemd[1]: Started cri-containerd-7e67bd0f1f5c588d3c6bf9a15a94ab721891a9045a9dbaaff716515b75fd465c.scope.
Dec 13 14:15:20.593362 systemd[1]: Started cri-containerd-4fef992523fb9188adad823c54152b7acee4243a952e3db117f11933ad99068b.scope.
Dec 13 14:15:20.714088 env[1738]: time="2024-12-13T14:15:20.713922835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tcp9p,Uid:5745b0c7-30d8-4b16-8f5a-34d14b1e81a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e67bd0f1f5c588d3c6bf9a15a94ab721891a9045a9dbaaff716515b75fd465c\""
Dec 13 14:15:20.722400 env[1738]: time="2024-12-13T14:15:20.722236089Z" level=info msg="CreateContainer within sandbox \"7e67bd0f1f5c588d3c6bf9a15a94ab721891a9045a9dbaaff716515b75fd465c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:15:20.730061 env[1738]: time="2024-12-13T14:15:20.729993665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2jdcw,Uid:dfaa4715-dc26-4c4f-869a-7cf3a1365d79,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fef992523fb9188adad823c54152b7acee4243a952e3db117f11933ad99068b\""
Dec 13 14:15:20.739131 env[1738]: time="2024-12-13T14:15:20.739075482Z" level=info msg="CreateContainer within sandbox \"4fef992523fb9188adad823c54152b7acee4243a952e3db117f11933ad99068b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:15:20.763097 env[1738]: time="2024-12-13T14:15:20.763010311Z" level=info msg="CreateContainer within sandbox \"7e67bd0f1f5c588d3c6bf9a15a94ab721891a9045a9dbaaff716515b75fd465c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"57bb22953943c026c5dfb62f4183fff28316649b2496d753bafae8d2f395930a\""
Dec 13 14:15:20.767279 env[1738]: time="2024-12-13T14:15:20.767212134Z" level=info msg="CreateContainer within sandbox \"4fef992523fb9188adad823c54152b7acee4243a952e3db117f11933ad99068b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c993bf2659e502e70b3771a987912d88bf7ba4d674a4286e734d844ba7bfaa3\""
Dec 13 14:15:20.768065 env[1738]: time="2024-12-13T14:15:20.767999466Z" level=info msg="StartContainer for \"57bb22953943c026c5dfb62f4183fff28316649b2496d753bafae8d2f395930a\""
Dec 13 14:15:20.774969 env[1738]: time="2024-12-13T14:15:20.774892855Z" level=info msg="StartContainer for \"0c993bf2659e502e70b3771a987912d88bf7ba4d674a4286e734d844ba7bfaa3\""
Dec 13 14:15:20.809462 systemd[1]: Started cri-containerd-57bb22953943c026c5dfb62f4183fff28316649b2496d753bafae8d2f395930a.scope.
Dec 13 14:15:20.834984 systemd[1]: Started cri-containerd-0c993bf2659e502e70b3771a987912d88bf7ba4d674a4286e734d844ba7bfaa3.scope.
Dec 13 14:15:20.927869 env[1738]: time="2024-12-13T14:15:20.927801908Z" level=info msg="StartContainer for \"57bb22953943c026c5dfb62f4183fff28316649b2496d753bafae8d2f395930a\" returns successfully"
Dec 13 14:15:20.937529 env[1738]: time="2024-12-13T14:15:20.937446209Z" level=info msg="StartContainer for \"0c993bf2659e502e70b3771a987912d88bf7ba4d674a4286e734d844ba7bfaa3\" returns successfully"
Dec 13 14:15:21.549320 kubelet[2788]: I1213 14:15:21.549167 2788 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-hddjb" podStartSLOduration=16.405678749 podStartE2EDuration="23.549095942s" podCreationTimestamp="2024-12-13 14:14:58 +0000 UTC" firstStartedPulling="2024-12-13 14:14:59.30726837 +0000 UTC m=+14.322714318" lastFinishedPulling="2024-12-13 14:15:06.450685551 +0000 UTC m=+21.466131511" observedRunningTime="2024-12-13 14:15:08.510362645 +0000 UTC m=+23.525808629" watchObservedRunningTime="2024-12-13 14:15:21.549095942 +0000 UTC m=+36.564541914"
Dec 13 14:15:21.550271 kubelet[2788]: I1213 14:15:21.550229 2788 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-tcp9p" podStartSLOduration=23.550160746 podStartE2EDuration="23.550160746s" podCreationTimestamp="2024-12-13 14:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:15:21.54836025 +0000 UTC m=+36.563806234" watchObservedRunningTime="2024-12-13 14:15:21.550160746 +0000 UTC m=+36.565606730"
Dec 13 14:15:21.607530 systemd-networkd[1454]: cni0: Gained IPv6LL
Dec 13 14:15:22.247903 systemd-networkd[1454]: veth1c692744: Gained IPv6LL
Dec 13 14:15:22.375595 systemd-networkd[1454]: veth842bfafa: Gained IPv6LL
Dec 13 14:15:26.319930 systemd[1]: Started sshd@5-172.31.20.19:22-139.178.89.65:55644.service.
Dec 13 14:15:26.506060 sshd[3662]: Accepted publickey for core from 139.178.89.65 port 55644 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:26.509237 sshd[3662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:26.520013 systemd[1]: Started session-6.scope.
Dec 13 14:15:26.520792 systemd-logind[1728]: New session 6 of user core.
Dec 13 14:15:26.801162 sshd[3662]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:26.806784 systemd[1]: sshd@5-172.31.20.19:22-139.178.89.65:55644.service: Deactivated successfully.
Dec 13 14:15:26.808194 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 14:15:26.809620 systemd-logind[1728]: Session 6 logged out. Waiting for processes to exit.
Dec 13 14:15:26.812343 systemd-logind[1728]: Removed session 6.
Dec 13 14:15:31.833169 systemd[1]: Started sshd@6-172.31.20.19:22-139.178.89.65:35362.service.
Dec 13 14:15:32.010572 sshd[3699]: Accepted publickey for core from 139.178.89.65 port 35362 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:32.013954 sshd[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:32.025490 systemd-logind[1728]: New session 7 of user core.
Dec 13 14:15:32.025871 systemd[1]: Started session-7.scope.
Dec 13 14:15:32.296948 sshd[3699]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:32.303854 systemd-logind[1728]: Session 7 logged out. Waiting for processes to exit.
Dec 13 14:15:32.304737 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 14:15:32.306534 systemd[1]: sshd@6-172.31.20.19:22-139.178.89.65:35362.service: Deactivated successfully.
Dec 13 14:15:32.308623 systemd-logind[1728]: Removed session 7.
Dec 13 14:15:37.327805 systemd[1]: Started sshd@7-172.31.20.19:22-139.178.89.65:35374.service.
Dec 13 14:15:37.507026 sshd[3733]: Accepted publickey for core from 139.178.89.65 port 35374 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:37.509037 sshd[3733]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:37.519744 systemd[1]: Started session-8.scope.
Dec 13 14:15:37.521512 systemd-logind[1728]: New session 8 of user core.
Dec 13 14:15:37.799597 sshd[3733]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:37.806080 systemd[1]: sshd@7-172.31.20.19:22-139.178.89.65:35374.service: Deactivated successfully.
Dec 13 14:15:37.807602 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 14:15:37.809276 systemd-logind[1728]: Session 8 logged out. Waiting for processes to exit.
Dec 13 14:15:37.812116 systemd-logind[1728]: Removed session 8.
Dec 13 14:15:37.831233 systemd[1]: Started sshd@8-172.31.20.19:22-139.178.89.65:35382.service.
Dec 13 14:15:38.019045 sshd[3745]: Accepted publickey for core from 139.178.89.65 port 35382 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:38.022040 sshd[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:38.033084 systemd-logind[1728]: New session 9 of user core.
Dec 13 14:15:38.033134 systemd[1]: Started session-9.scope.
Dec 13 14:15:38.413698 sshd[3745]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:38.421996 systemd[1]: sshd@8-172.31.20.19:22-139.178.89.65:35382.service: Deactivated successfully.
Dec 13 14:15:38.423602 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 14:15:38.425905 systemd-logind[1728]: Session 9 logged out. Waiting for processes to exit.
Dec 13 14:15:38.429039 systemd-logind[1728]: Removed session 9.
Dec 13 14:15:38.450791 systemd[1]: Started sshd@9-172.31.20.19:22-139.178.89.65:59570.service.
Dec 13 14:15:38.642096 sshd[3754]: Accepted publickey for core from 139.178.89.65 port 59570 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:38.645530 sshd[3754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:38.657269 systemd[1]: Started session-10.scope.
Dec 13 14:15:38.659504 systemd-logind[1728]: New session 10 of user core.
Dec 13 14:15:38.934155 sshd[3754]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:38.940973 systemd[1]: sshd@9-172.31.20.19:22-139.178.89.65:59570.service: Deactivated successfully.
Dec 13 14:15:38.942853 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 14:15:38.942908 systemd-logind[1728]: Session 10 logged out. Waiting for processes to exit.
Dec 13 14:15:38.946435 systemd-logind[1728]: Removed session 10.
Dec 13 14:15:43.962803 systemd[1]: Started sshd@10-172.31.20.19:22-139.178.89.65:59580.service.
Dec 13 14:15:44.138755 sshd[3787]: Accepted publickey for core from 139.178.89.65 port 59580 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:44.142913 sshd[3787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:44.155516 systemd[1]: Started session-11.scope.
Dec 13 14:15:44.156634 systemd-logind[1728]: New session 11 of user core.
Dec 13 14:15:44.417773 sshd[3787]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:44.425091 systemd[1]: sshd@10-172.31.20.19:22-139.178.89.65:59580.service: Deactivated successfully.
Dec 13 14:15:44.427330 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 14:15:44.429945 systemd-logind[1728]: Session 11 logged out. Waiting for processes to exit.
Dec 13 14:15:44.433867 systemd-logind[1728]: Removed session 11.
Dec 13 14:15:49.448018 systemd[1]: Started sshd@11-172.31.20.19:22-139.178.89.65:41576.service.
Dec 13 14:15:49.628153 sshd[3845]: Accepted publickey for core from 139.178.89.65 port 41576 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:49.632740 sshd[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:49.643764 systemd-logind[1728]: New session 12 of user core.
Dec 13 14:15:49.645018 systemd[1]: Started session-12.scope.
Dec 13 14:15:49.912135 sshd[3845]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:49.918435 systemd[1]: sshd@11-172.31.20.19:22-139.178.89.65:41576.service: Deactivated successfully.
Dec 13 14:15:49.918794 systemd-logind[1728]: Session 12 logged out. Waiting for processes to exit.
Dec 13 14:15:49.920069 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 14:15:49.922056 systemd-logind[1728]: Removed session 12.
Dec 13 14:15:49.941159 systemd[1]: Started sshd@12-172.31.20.19:22-139.178.89.65:41578.service.
Dec 13 14:15:50.120624 sshd[3857]: Accepted publickey for core from 139.178.89.65 port 41578 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:50.123773 sshd[3857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:50.134416 systemd-logind[1728]: New session 13 of user core.
Dec 13 14:15:50.135080 systemd[1]: Started session-13.scope.
Dec 13 14:15:50.452607 sshd[3857]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:50.458749 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 14:15:50.460422 systemd[1]: sshd@12-172.31.20.19:22-139.178.89.65:41578.service: Deactivated successfully.
Dec 13 14:15:50.462776 systemd-logind[1728]: Session 13 logged out. Waiting for processes to exit.
Dec 13 14:15:50.466590 systemd-logind[1728]: Removed session 13.
Dec 13 14:15:50.483348 systemd[1]: Started sshd@13-172.31.20.19:22-139.178.89.65:41592.service.
Dec 13 14:15:50.668855 sshd[3866]: Accepted publickey for core from 139.178.89.65 port 41592 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:50.671993 sshd[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:50.681125 systemd[1]: Started session-14.scope.
Dec 13 14:15:50.682423 systemd-logind[1728]: New session 14 of user core.
Dec 13 14:15:53.244662 sshd[3866]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:53.250235 systemd[1]: sshd@13-172.31.20.19:22-139.178.89.65:41592.service: Deactivated successfully.
Dec 13 14:15:53.251594 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 14:15:53.253187 systemd-logind[1728]: Session 14 logged out. Waiting for processes to exit.
Dec 13 14:15:53.255134 systemd-logind[1728]: Removed session 14.
Dec 13 14:15:53.278530 systemd[1]: Started sshd@14-172.31.20.19:22-139.178.89.65:41600.service.
Dec 13 14:15:53.461166 sshd[3884]: Accepted publickey for core from 139.178.89.65 port 41600 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:53.463742 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:53.473039 systemd-logind[1728]: New session 15 of user core.
Dec 13 14:15:53.474214 systemd[1]: Started session-15.scope.
Dec 13 14:15:54.014086 sshd[3884]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:54.022468 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 14:15:54.024228 systemd-logind[1728]: Session 15 logged out. Waiting for processes to exit.
Dec 13 14:15:54.024694 systemd[1]: sshd@14-172.31.20.19:22-139.178.89.65:41600.service: Deactivated successfully.
Dec 13 14:15:54.027907 systemd-logind[1728]: Removed session 15.
Dec 13 14:15:54.042340 systemd[1]: Started sshd@15-172.31.20.19:22-139.178.89.65:41616.service.
Dec 13 14:15:54.221636 sshd[3894]: Accepted publickey for core from 139.178.89.65 port 41616 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:54.225692 sshd[3894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:54.234898 systemd-logind[1728]: New session 16 of user core.
Dec 13 14:15:54.236894 systemd[1]: Started session-16.scope.
Dec 13 14:15:54.506510 sshd[3894]: pam_unix(sshd:session): session closed for user core
Dec 13 14:15:54.513840 systemd-logind[1728]: Session 16 logged out. Waiting for processes to exit.
Dec 13 14:15:54.514650 systemd[1]: sshd@15-172.31.20.19:22-139.178.89.65:41616.service: Deactivated successfully.
Dec 13 14:15:54.516272 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 14:15:54.519906 systemd-logind[1728]: Removed session 16.
Dec 13 14:15:59.539695 systemd[1]: Started sshd@16-172.31.20.19:22-139.178.89.65:38310.service.
Dec 13 14:15:59.724416 sshd[3948]: Accepted publickey for core from 139.178.89.65 port 38310 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:15:59.727377 sshd[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:15:59.736590 systemd-logind[1728]: New session 17 of user core.
Dec 13 14:15:59.738070 systemd[1]: Started session-17.scope.
Dec 13 14:16:00.002172 sshd[3948]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:00.008209 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 14:16:00.010785 systemd[1]: sshd@16-172.31.20.19:22-139.178.89.65:38310.service: Deactivated successfully.
Dec 13 14:16:00.013630 systemd-logind[1728]: Session 17 logged out. Waiting for processes to exit.
Dec 13 14:16:00.015894 systemd-logind[1728]: Removed session 17.
Dec 13 14:16:05.033841 systemd[1]: Started sshd@17-172.31.20.19:22-139.178.89.65:38316.service.
Dec 13 14:16:05.215139 sshd[3986]: Accepted publickey for core from 139.178.89.65 port 38316 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:05.219097 sshd[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:05.229440 systemd-logind[1728]: New session 18 of user core.
Dec 13 14:16:05.231260 systemd[1]: Started session-18.scope.
Dec 13 14:16:05.490728 sshd[3986]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:05.497024 systemd-logind[1728]: Session 18 logged out. Waiting for processes to exit.
Dec 13 14:16:05.497646 systemd[1]: sshd@17-172.31.20.19:22-139.178.89.65:38316.service: Deactivated successfully.
Dec 13 14:16:05.499029 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 14:16:05.501523 systemd-logind[1728]: Removed session 18.
Dec 13 14:16:10.524203 systemd[1]: Started sshd@18-172.31.20.19:22-139.178.89.65:42932.service.
Dec 13 14:16:10.702549 sshd[4019]: Accepted publickey for core from 139.178.89.65 port 42932 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:10.705230 sshd[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:10.714093 systemd-logind[1728]: New session 19 of user core.
Dec 13 14:16:10.714793 systemd[1]: Started session-19.scope.
Dec 13 14:16:10.965873 sshd[4019]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:10.971352 systemd-logind[1728]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:16:10.971671 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:16:10.973579 systemd[1]: sshd@18-172.31.20.19:22-139.178.89.65:42932.service: Deactivated successfully.
Dec 13 14:16:10.975858 systemd-logind[1728]: Removed session 19.
Dec 13 14:16:15.994877 systemd[1]: Started sshd@19-172.31.20.19:22-139.178.89.65:42942.service.
Dec 13 14:16:16.171817 sshd[4052]: Accepted publickey for core from 139.178.89.65 port 42942 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q
Dec 13 14:16:16.175610 sshd[4052]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:16:16.183072 systemd-logind[1728]: New session 20 of user core.
Dec 13 14:16:16.184475 systemd[1]: Started session-20.scope.
Dec 13 14:16:16.440193 sshd[4052]: pam_unix(sshd:session): session closed for user core
Dec 13 14:16:16.446914 systemd-logind[1728]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:16:16.448623 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:16:16.450688 systemd[1]: sshd@19-172.31.20.19:22-139.178.89.65:42942.service: Deactivated successfully.
Dec 13 14:16:16.453585 systemd-logind[1728]: Removed session 20.
Dec 13 14:16:31.257509 systemd[1]: cri-containerd-77b45843ddbf6426e9a5a27756eb1b50d82e84715fb31c1177ee4bd48ceb1c15.scope: Deactivated successfully.
Dec 13 14:16:31.258115 systemd[1]: cri-containerd-77b45843ddbf6426e9a5a27756eb1b50d82e84715fb31c1177ee4bd48ceb1c15.scope: Consumed 4.959s CPU time.
Dec 13 14:16:31.307926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77b45843ddbf6426e9a5a27756eb1b50d82e84715fb31c1177ee4bd48ceb1c15-rootfs.mount: Deactivated successfully.
Dec 13 14:16:31.322769 env[1738]: time="2024-12-13T14:16:31.322689521Z" level=info msg="shim disconnected" id=77b45843ddbf6426e9a5a27756eb1b50d82e84715fb31c1177ee4bd48ceb1c15
Dec 13 14:16:31.323576 env[1738]: time="2024-12-13T14:16:31.322767270Z" level=warning msg="cleaning up after shim disconnected" id=77b45843ddbf6426e9a5a27756eb1b50d82e84715fb31c1177ee4bd48ceb1c15 namespace=k8s.io
Dec 13 14:16:31.323576 env[1738]: time="2024-12-13T14:16:31.322804027Z" level=info msg="cleaning up dead shim"
Dec 13 14:16:31.337999 env[1738]: time="2024-12-13T14:16:31.337914816Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4140 runtime=io.containerd.runc.v2\n"
Dec 13 14:16:31.739906 kubelet[2788]: I1213 14:16:31.739825 2788 scope.go:117] "RemoveContainer" containerID="77b45843ddbf6426e9a5a27756eb1b50d82e84715fb31c1177ee4bd48ceb1c15"
Dec 13 14:16:31.744680 env[1738]: time="2024-12-13T14:16:31.744616564Z" level=info msg="CreateContainer within sandbox \"a14eb220b6ba81414191261888eb6c5149f298e3758af6f0f1857cc4bd6751cb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 14:16:31.772546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4138475154.mount: Deactivated successfully.
Dec 13 14:16:31.785862 env[1738]: time="2024-12-13T14:16:31.785782568Z" level=info msg="CreateContainer within sandbox \"a14eb220b6ba81414191261888eb6c5149f298e3758af6f0f1857cc4bd6751cb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c0827fe908cd54a8323ede0c49ba04a40b976b074dcbc9d1567e931af2525ca1\""
Dec 13 14:16:31.786782 env[1738]: time="2024-12-13T14:16:31.786730170Z" level=info msg="StartContainer for \"c0827fe908cd54a8323ede0c49ba04a40b976b074dcbc9d1567e931af2525ca1\""
Dec 13 14:16:31.824359 systemd[1]: Started cri-containerd-c0827fe908cd54a8323ede0c49ba04a40b976b074dcbc9d1567e931af2525ca1.scope.
Dec 13 14:16:31.918803 env[1738]: time="2024-12-13T14:16:31.918686825Z" level=info msg="StartContainer for \"c0827fe908cd54a8323ede0c49ba04a40b976b074dcbc9d1567e931af2525ca1\" returns successfully"
Dec 13 14:16:36.762542 systemd[1]: cri-containerd-401cf00a0cf33cea590a0dfc36c7e8d8870559c063b46386f5bbc5184b1059b7.scope: Deactivated successfully.
Dec 13 14:16:36.763595 systemd[1]: cri-containerd-401cf00a0cf33cea590a0dfc36c7e8d8870559c063b46386f5bbc5184b1059b7.scope: Consumed 3.265s CPU time.
Dec 13 14:16:36.809529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-401cf00a0cf33cea590a0dfc36c7e8d8870559c063b46386f5bbc5184b1059b7-rootfs.mount: Deactivated successfully.
Dec 13 14:16:36.824773 env[1738]: time="2024-12-13T14:16:36.824711201Z" level=info msg="shim disconnected" id=401cf00a0cf33cea590a0dfc36c7e8d8870559c063b46386f5bbc5184b1059b7
Dec 13 14:16:36.825437 env[1738]: time="2024-12-13T14:16:36.825353469Z" level=warning msg="cleaning up after shim disconnected" id=401cf00a0cf33cea590a0dfc36c7e8d8870559c063b46386f5bbc5184b1059b7 namespace=k8s.io
Dec 13 14:16:36.825437 env[1738]: time="2024-12-13T14:16:36.825385378Z" level=info msg="cleaning up dead shim"
Dec 13 14:16:36.838572 env[1738]: time="2024-12-13T14:16:36.838499795Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:16:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4223 runtime=io.containerd.runc.v2\n"
Dec 13 14:16:37.761694 kubelet[2788]: I1213 14:16:37.761639 2788 scope.go:117] "RemoveContainer" containerID="401cf00a0cf33cea590a0dfc36c7e8d8870559c063b46386f5bbc5184b1059b7"
Dec 13 14:16:37.767450 env[1738]: time="2024-12-13T14:16:37.767257042Z" level=info msg="CreateContainer within sandbox \"eca0ea69b4f58bc054972bc0929ad2b5b69d3818a64584630a1f6eaf01eff0d6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 14:16:37.792333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958331395.mount: Deactivated successfully.
Dec 13 14:16:37.803426 env[1738]: time="2024-12-13T14:16:37.803326675Z" level=info msg="CreateContainer within sandbox \"eca0ea69b4f58bc054972bc0929ad2b5b69d3818a64584630a1f6eaf01eff0d6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d5d1c9bd65e924fed078f522ee39c473fc99a3950a5f574e511d03c39c372c5e\""
Dec 13 14:16:37.804897 env[1738]: time="2024-12-13T14:16:37.804246559Z" level=info msg="StartContainer for \"d5d1c9bd65e924fed078f522ee39c473fc99a3950a5f574e511d03c39c372c5e\""
Dec 13 14:16:37.855496 systemd[1]: Started cri-containerd-d5d1c9bd65e924fed078f522ee39c473fc99a3950a5f574e511d03c39c372c5e.scope.
Dec 13 14:16:37.885508 kubelet[2788]: E1213 14:16:37.884941 2788 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-19?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 14:16:37.966811 env[1738]: time="2024-12-13T14:16:37.966726691Z" level=info msg="StartContainer for \"d5d1c9bd65e924fed078f522ee39c473fc99a3950a5f574e511d03c39c372c5e\" returns successfully"