Dec 13 14:27:15.153047 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:27:15.153081 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:27:15.153097 kernel: BIOS-provided physical RAM map: Dec 13 14:27:15.153108 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 14:27:15.153118 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 14:27:15.153130 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 14:27:15.153145 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Dec 13 14:27:15.153157 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Dec 13 14:27:15.153168 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Dec 13 14:27:15.153179 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 14:27:15.153191 kernel: NX (Execute Disable) protection: active Dec 13 14:27:15.153202 kernel: SMBIOS 2.7 present. 
Dec 13 14:27:15.153214 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Dec 13 14:27:15.153225 kernel: Hypervisor detected: KVM Dec 13 14:27:15.153242 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 14:27:15.153255 kernel: kvm-clock: cpu 0, msr 5319a001, primary cpu clock Dec 13 14:27:15.153267 kernel: kvm-clock: using sched offset of 7493760490 cycles Dec 13 14:27:15.153281 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 14:27:15.153293 kernel: tsc: Detected 2499.996 MHz processor Dec 13 14:27:15.153306 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:27:15.153322 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:27:15.153334 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Dec 13 14:27:15.153347 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:27:15.153359 kernel: Using GB pages for direct mapping Dec 13 14:27:15.153372 kernel: ACPI: Early table checksum verification disabled Dec 13 14:27:15.153384 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Dec 13 14:27:15.161450 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Dec 13 14:27:15.161475 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 13 14:27:15.161489 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Dec 13 14:27:15.161510 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Dec 13 14:27:15.161524 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 13 14:27:15.161537 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 13 14:27:15.161550 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Dec 13 14:27:15.161563 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 14:27:15.161576 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Dec 13 14:27:15.161589 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Dec 13 14:27:15.161602 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 13 14:27:15.161618 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Dec 13 14:27:15.161631 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Dec 13 14:27:15.161644 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Dec 13 14:27:15.161662 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Dec 13 14:27:15.161676 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Dec 13 14:27:15.161689 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Dec 13 14:27:15.161702 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Dec 13 14:27:15.161719 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Dec 13 14:27:15.161733 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Dec 13 14:27:15.161746 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Dec 13 14:27:15.161761 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 14:27:15.161774 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 14:27:15.161788 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Dec 13 14:27:15.161897 kernel: NUMA: Initialized distance table, cnt=1 Dec 13 14:27:15.161912 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Dec 13 14:27:15.161929 kernel: Zone ranges: Dec 13 14:27:15.161943 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:27:15.161957 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Dec 13 14:27:15.161971 kernel: Normal empty Dec 13 14:27:15.161985 kernel: Movable zone start for each node
Dec 13 14:27:15.161999 kernel: Early memory node ranges Dec 13 14:27:15.162013 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 14:27:15.162027 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Dec 13 14:27:15.162040 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Dec 13 14:27:15.162057 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:27:15.162071 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 14:27:15.162085 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Dec 13 14:27:15.162099 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 14:27:15.162113 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 14:27:15.162127 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Dec 13 14:27:15.162141 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 14:27:15.162155 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:27:15.162169 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 14:27:15.162186 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 14:27:15.162200 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:27:15.162214 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 14:27:15.162228 kernel: TSC deadline timer available Dec 13 14:27:15.162242 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 14:27:15.162256 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Dec 13 14:27:15.162268 kernel: Booting paravirtualized kernel on KVM Dec 13 14:27:15.162283 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:27:15.162297 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 14:27:15.162314 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 14:27:15.162328 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 14:27:15.162342 kernel: pcpu-alloc: [0] 0 1 Dec 13 14:27:15.162355 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0 Dec 13 14:27:15.162369 kernel: kvm-guest: PV spinlocks enabled Dec 13 14:27:15.162384 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 14:27:15.162408 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Dec 13 14:27:15.162422 kernel: Policy zone: DMA32 Dec 13 14:27:15.162439 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:27:15.162457 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:27:15.162470 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:27:15.162484 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 14:27:15.162498 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:27:15.162513 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 123080K reserved, 0K cma-reserved) Dec 13 14:27:15.162527 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 14:27:15.162542 kernel: Kernel/User page tables isolation: enabled Dec 13 14:27:15.162556 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:27:15.162572 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:27:15.162586 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:27:15.162602 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:27:15.162616 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 14:27:15.162630 kernel: Rude variant of Tasks RCU enabled. Dec 13 14:27:15.162644 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:27:15.162658 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 14:27:15.162672 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 14:27:15.162686 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 14:27:15.162702 kernel: random: crng init done Dec 13 14:27:15.162715 kernel: Console: colour VGA+ 80x25 Dec 13 14:27:15.162729 kernel: printk: console [ttyS0] enabled Dec 13 14:27:15.162743 kernel: ACPI: Core revision 20210730 Dec 13 14:27:15.162757 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Dec 13 14:27:15.162771 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:27:15.162785 kernel: x2apic enabled Dec 13 14:27:15.162798 kernel: Switched APIC routing to physical x2apic. Dec 13 14:27:15.162812 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Dec 13 14:27:15.162828 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Dec 13 14:27:15.162842 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 14:27:15.162856 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 14:27:15.162870 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:27:15.162894 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 14:27:15.162911 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:27:15.162925 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:27:15.162940 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Dec 13 14:27:15.162956 kernel: RETBleed: Vulnerable Dec 13 14:27:15.162970 kernel: Speculative Store Bypass: Vulnerable Dec 13 14:27:15.162984 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 14:27:15.162999 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 14:27:15.163013 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 14:27:15.163028 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 14:27:15.163045 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 14:27:15.163060 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 14:27:15.163075 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Dec 13 14:27:15.163089 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Dec 13 14:27:15.163104 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 14:27:15.163121 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 14:27:15.163135 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 14:27:15.163150 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Dec 13 14:27:15.163164 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 14:27:15.163179 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Dec 13 14:27:15.163194 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Dec 13 14:27:15.163208 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Dec 13 14:27:15.163222 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Dec 13 14:27:15.163236 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Dec 13 14:27:15.163251 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Dec 13 14:27:15.163266 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Dec 13 14:27:15.163280 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:27:15.163297 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:27:15.163312 kernel: LSM: Security Framework initializing Dec 13 14:27:15.163327 kernel: SELinux: Initializing. Dec 13 14:27:15.163341 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 14:27:15.163356 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 14:27:15.163371 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 14:27:15.163441 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 14:27:15.163460 kernel: signal: max sigframe size: 3632 Dec 13 14:27:15.163475 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:27:15.163490 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 14:27:15.163507 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:27:15.163522 kernel: x86: Booting SMP configuration: Dec 13 14:27:15.163536 kernel: .... node #0, CPUs: #1 Dec 13 14:27:15.163550 kernel: kvm-clock: cpu 1, msr 5319a041, secondary cpu clock Dec 13 14:27:15.163564 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0 Dec 13 14:27:15.163579 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 14:27:15.163596 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Dec 13 14:27:15.163610 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 14:27:15.163625 kernel: smpboot: Max logical packages: 1 Dec 13 14:27:15.163643 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Dec 13 14:27:15.163657 kernel: devtmpfs: initialized Dec 13 14:27:15.163672 kernel: x86/mm: Memory block size: 128MB Dec 13 14:27:15.163687 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:27:15.163702 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 14:27:15.163716 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:27:15.163731 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:27:15.163745 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:27:15.163760 kernel: audit: type=2000 audit(1734100033.776:1): state=initialized audit_enabled=0 res=1 Dec 13 14:27:15.163776 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:27:15.163791 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:27:15.163805 kernel: cpuidle: using governor menu Dec 13 14:27:15.163820 kernel: ACPI: bus type PCI registered Dec 13 14:27:15.163835 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:27:15.163849 kernel: dca service started, version 1.12.1 Dec 13 14:27:15.163864 kernel: PCI: Using configuration type 1 for base access Dec 13 14:27:15.163878 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 14:27:15.163893 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:27:15.163910 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:27:15.163925 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:27:15.163939 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:27:15.163954 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:27:15.163968 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:27:15.163982 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:27:15.163996 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:27:15.164011 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:27:15.164026 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 14:27:15.164043 kernel: ACPI: Interpreter enabled Dec 13 14:27:15.164058 kernel: ACPI: PM: (supports S0 S5) Dec 13 14:27:15.164072 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:27:15.164087 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:27:15.164102 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 14:27:15.164116 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 14:27:15.164520 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:27:15.164703 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Dec 13 14:27:15.164729 kernel: acpiphp: Slot [3] registered Dec 13 14:27:15.164744 kernel: acpiphp: Slot [4] registered Dec 13 14:27:15.164758 kernel: acpiphp: Slot [5] registered Dec 13 14:27:15.164774 kernel: acpiphp: Slot [6] registered Dec 13 14:27:15.164788 kernel: acpiphp: Slot [7] registered Dec 13 14:27:15.164803 kernel: acpiphp: Slot [8] registered Dec 13 14:27:15.164817 kernel: acpiphp: Slot [9] registered Dec 13 14:27:15.164831 kernel: acpiphp: Slot [10] registered Dec 13 14:27:15.164846 kernel: acpiphp: Slot [11] registered Dec 13 14:27:15.164863 kernel: acpiphp: Slot [12] registered Dec 13 14:27:15.164878 kernel: acpiphp: Slot [13] registered Dec 13 14:27:15.164892 kernel: acpiphp: Slot [14] registered Dec 13 14:27:15.164906 kernel: acpiphp: Slot [15] registered Dec 13 14:27:15.164921 kernel: acpiphp: Slot [16] registered Dec 13 14:27:15.164935 kernel: acpiphp: Slot [17] registered Dec 13 14:27:15.164949 kernel: acpiphp: Slot [18] registered Dec 13 14:27:15.164964 kernel: acpiphp: Slot [19] registered Dec 13 14:27:15.164978 kernel: acpiphp: Slot [20] registered Dec 13 14:27:15.164995 kernel: acpiphp: Slot [21] registered Dec 13 14:27:15.165009 kernel: acpiphp: Slot [22] registered Dec 13 14:27:15.165024 kernel: acpiphp: Slot [23] registered Dec 13 14:27:15.165038 kernel: acpiphp: Slot [24] registered Dec 13 14:27:15.165052 kernel: acpiphp: Slot [25] registered Dec 13 14:27:15.165066 kernel: acpiphp: Slot [26] registered Dec 13 14:27:15.165081 kernel: acpiphp: Slot [27] registered Dec 13 14:27:15.165094 kernel: acpiphp: Slot [28] registered Dec 13 14:27:15.165109 kernel: acpiphp: Slot [29] registered Dec 13 14:27:15.165123 kernel: acpiphp: Slot [30] registered Dec 13 14:27:15.165140 kernel: acpiphp: Slot [31] registered Dec 13 14:27:15.165155 kernel: PCI host bridge to bus 0000:00 Dec 13 14:27:15.165284 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 14:27:15.173563 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:27:15.173733 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 14:27:15.173866 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 14:27:15.173975 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 14:27:15.174118 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 14:27:15.174246 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 14:27:15.174374 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Dec 13 14:27:15.174505 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 14:27:15.174634 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Dec 13 14:27:15.174747 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Dec 13 14:27:15.174858 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Dec 13 14:27:15.174970 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Dec 13 14:27:15.175081 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Dec 13 14:27:15.175191 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Dec 13 14:27:15.175301 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Dec 13 14:27:15.175490 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x170 took 13671 usecs Dec 13 14:27:15.175621 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Dec 13 14:27:15.175736 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Dec 13 14:27:15.175852 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Dec 13 14:27:15.175963 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 14:27:15.176082 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Dec 13 14:27:15.176192 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Dec 13 14:27:15.176315 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 14:27:15.176439 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Dec 13 14:27:15.176461 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 14:27:15.176475 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 14:27:15.176489 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 14:27:15.176503 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 14:27:15.176518 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 14:27:15.176532 kernel: iommu: Default domain type: Translated Dec 13 14:27:15.176546 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:27:15.176654 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Dec 13 14:27:15.176765 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 14:27:15.176880 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Dec 13 14:27:15.176898 kernel: vgaarb: loaded Dec 13 14:27:15.176912 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:27:15.176927 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:27:15.176941 kernel: PTP clock support registered Dec 13 14:27:15.176955 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:27:15.176968 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 14:27:15.176982 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 14:27:15.176999 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Dec 13 14:27:15.177013 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Dec 13 14:27:15.177027 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Dec 13 14:27:15.177041 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 14:27:15.177055 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:27:15.177069 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:27:15.177082 kernel: pnp: PnP ACPI init Dec 13 14:27:15.177096 kernel: pnp: PnP ACPI: found 5 devices Dec 13 14:27:15.177110 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:27:15.177126 kernel: NET: Registered PF_INET protocol family Dec 13 14:27:15.177139 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 14:27:15.177152 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 14:27:15.177166 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:27:15.177179 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:27:15.177193 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Dec 13 14:27:15.177206 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 14:27:15.177220 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 14:27:15.177233 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 14:27:15.177249 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:27:15.177262 kernel: NET: Registered PF_XDP protocol family Dec 13 14:27:15.177373 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 14:27:15.188602 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 14:27:15.188724 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 14:27:15.188831 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 14:27:15.188957 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 14:27:15.189075 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Dec 13 14:27:15.189097 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:27:15.189112 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 14:27:15.189126 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Dec 13 14:27:15.189140 kernel: clocksource: Switched to clocksource tsc Dec 13 14:27:15.189154 kernel: Initialise system trusted keyrings Dec 13 14:27:15.189167 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 14:27:15.189180 kernel: Key type asymmetric registered Dec 13 14:27:15.189194 kernel: Asymmetric key parser 'x509' registered Dec 13 14:27:15.189209 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:27:15.189223 kernel: io scheduler mq-deadline registered Dec 13 14:27:15.189236 kernel: io scheduler kyber registered Dec 13 14:27:15.189250 kernel: io scheduler bfq registered Dec 13 14:27:15.189263 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:27:15.189277 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:27:15.189290 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:27:15.189303 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 14:27:15.189318 kernel: i8042: Warning: Keylock active
Dec 13 14:27:15.189333 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 14:27:15.189346 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 14:27:15.189478 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 14:27:15.189584 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 14:27:15.189696 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T14:27:14 UTC (1734100034) Dec 13 14:27:15.189819 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 14:27:15.189837 kernel: intel_pstate: CPU model not supported Dec 13 14:27:15.189852 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:27:15.189870 kernel: Segment Routing with IPv6 Dec 13 14:27:15.189884 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:27:15.189899 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:27:15.189914 kernel: Key type dns_resolver registered Dec 13 14:27:15.189927 kernel: IPI shorthand broadcast: enabled Dec 13 14:27:15.189940 kernel: sched_clock: Marking stable (465253982, 283124770)->(836532183, -88153431) Dec 13 14:27:15.189954 kernel: registered taskstats version 1 Dec 13 14:27:15.189966 kernel: Loading compiled-in X.509 certificates Dec 13 14:27:15.189984 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:27:15.190000 kernel: Key type .fscrypt registered Dec 13 14:27:15.190013 kernel: Key type fscrypt-provisioning registered Dec 13 14:27:15.190026 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:27:15.190040 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:27:15.190054 kernel: ima: No architecture policies found Dec 13 14:27:15.190068 kernel: clk: Disabling unused clocks Dec 13 14:27:15.190083 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:27:15.190097 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:27:15.190112 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:27:15.190129 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:27:15.190144 kernel: Run /init as init process Dec 13 14:27:15.190158 kernel: with arguments: Dec 13 14:27:15.190173 kernel: /init Dec 13 14:27:15.190187 kernel: with environment: Dec 13 14:27:15.190200 kernel: HOME=/ Dec 13 14:27:15.190214 kernel: TERM=linux Dec 13 14:27:15.190227 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:27:15.190246 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:27:15.190266 systemd[1]: Detected virtualization amazon. Dec 13 14:27:15.190281 systemd[1]: Detected architecture x86-64. Dec 13 14:27:15.190296 systemd[1]: Running in initrd. Dec 13 14:27:15.190325 systemd[1]: No hostname configured, using default hostname. Dec 13 14:27:15.190343 systemd[1]: Hostname set to . Dec 13 14:27:15.190362 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:27:15.190377 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 14:27:15.190405 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:27:15.190420 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:27:15.190436 systemd[1]: Reached target cryptsetup.target. 
Dec 13 14:27:15.190451 systemd[1]: Reached target paths.target. Dec 13 14:27:15.190466 systemd[1]: Reached target slices.target. Dec 13 14:27:15.190484 systemd[1]: Reached target swap.target. Dec 13 14:27:15.190501 systemd[1]: Reached target timers.target. Dec 13 14:27:15.190518 systemd[1]: Listening on iscsid.socket. Dec 13 14:27:15.190534 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:27:15.190550 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:27:15.190565 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:27:15.190581 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:27:15.190597 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:27:15.190613 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:27:15.190630 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:27:15.190647 systemd[1]: Reached target sockets.target. Dec 13 14:27:15.190662 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:27:15.190678 systemd[1]: Finished network-cleanup.service. Dec 13 14:27:15.190693 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:27:15.190709 systemd[1]: Starting systemd-journald.service... Dec 13 14:27:15.190724 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:27:15.190740 systemd[1]: Starting systemd-resolved.service... Dec 13 14:27:15.190756 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:27:15.190774 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:27:15.190790 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:27:15.190812 systemd-journald[185]: Journal started Dec 13 14:27:15.190883 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2ff8f7b3bc4cdb76e97723d2381720) is 4.8M, max 38.7M, 33.9M free. Dec 13 14:27:15.182802 systemd-modules-load[186]: Inserted module 'overlay' Dec 13 14:27:15.383112 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Dec 13 14:27:15.383144 kernel: Bridge firewalling registered Dec 13 14:27:15.383165 kernel: SCSI subsystem initialized Dec 13 14:27:15.383183 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:27:15.383203 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:27:15.383219 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:27:15.185244 systemd-resolved[187]: Positive Trust Anchors: Dec 13 14:27:15.394039 systemd[1]: Started systemd-journald.service. Dec 13 14:27:15.394096 kernel: audit: type=1130 audit(1734100035.384:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.185256 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:27:15.402325 kernel: audit: type=1130 audit(1734100035.393:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
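The "bridge: filtering via arp/ip/ip6tables is no longer available by default" note above means bridged traffic is no longer passed through netfilter unless the `br_netfilter` module is loaded (which systemd-modules-load in fact does a few lines later: "Inserted module 'br_netfilter'"). A minimal sketch of doing it by hand on a systemd host; both steps need root, so they are left commented:

```shell
# Restore bridge-to-netfilter filtering per the kernel message above.
# Both commands need root, so they are shown commented:
#
#   modprobe br_netfilter                                            # load now
#   printf 'br_netfilter\n' > /etc/modules-load.d/br_netfilter.conf  # load at boot
#
# The modules-load.d fragment is just the module name on its own line:
printf 'br_netfilter\n'
```

On this image the initrd loads the module itself, so the fragment is only needed on systems where nothing else does.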
res=success' Dec 13 14:27:15.185306 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:27:15.430487 kernel: audit: type=1130 audit(1734100035.401:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.192225 systemd-resolved[187]: Defaulting to hostname 'linux'. Dec 13 14:27:15.445318 kernel: audit: type=1130 audit(1734100035.429:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.229884 systemd-modules-load[186]: Inserted module 'br_netfilter' Dec 13 14:27:15.270332 systemd-modules-load[186]: Inserted module 'dm_multipath' Dec 13 14:27:15.474660 kernel: audit: type=1130 audit(1734100035.448:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:15.394567 systemd[1]: Started systemd-resolved.service. Dec 13 14:27:15.402850 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:27:15.430782 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:27:15.474737 systemd[1]: Reached target nss-lookup.target. Dec 13 14:27:15.485193 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:27:15.487710 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:27:15.488615 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:27:15.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.505557 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:27:15.513782 kernel: audit: type=1130 audit(1734100035.505:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.507141 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:27:15.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.519417 kernel: audit: type=1130 audit(1734100035.512:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.523439 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:27:15.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.525785 systemd[1]: Starting dracut-cmdline.service... 
Dec 13 14:27:15.532282 kernel: audit: type=1130 audit(1734100035.523:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.543008 dracut-cmdline[207]: dracut-dracut-053 Dec 13 14:27:15.547021 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:27:15.642416 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:27:15.661412 kernel: iscsi: registered transport (tcp) Dec 13 14:27:15.691549 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:27:15.691642 kernel: QLogic iSCSI HBA Driver Dec 13 14:27:15.725438 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:27:15.731433 kernel: audit: type=1130 audit(1734100035.724:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:15.733222 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 14:27:15.787444 kernel: raid6: avx512x4 gen() 15766 MB/s Dec 13 14:27:15.805439 kernel: raid6: avx512x4 xor() 7356 MB/s Dec 13 14:27:15.823442 kernel: raid6: avx512x2 gen() 15489 MB/s Dec 13 14:27:15.840422 kernel: raid6: avx512x2 xor() 24910 MB/s Dec 13 14:27:15.857431 kernel: raid6: avx512x1 gen() 16281 MB/s Dec 13 14:27:15.875437 kernel: raid6: avx512x1 xor() 17972 MB/s Dec 13 14:27:15.892441 kernel: raid6: avx2x4 gen() 16914 MB/s Dec 13 14:27:15.910437 kernel: raid6: avx2x4 xor() 6038 MB/s Dec 13 14:27:15.928436 kernel: raid6: avx2x2 gen() 16842 MB/s Dec 13 14:27:15.946520 kernel: raid6: avx2x2 xor() 17859 MB/s Dec 13 14:27:15.963443 kernel: raid6: avx2x1 gen() 11868 MB/s Dec 13 14:27:15.981436 kernel: raid6: avx2x1 xor() 14874 MB/s Dec 13 14:27:15.999436 kernel: raid6: sse2x4 gen() 8884 MB/s Dec 13 14:27:16.016438 kernel: raid6: sse2x4 xor() 5941 MB/s Dec 13 14:27:16.033420 kernel: raid6: sse2x2 gen() 10800 MB/s Dec 13 14:27:16.051433 kernel: raid6: sse2x2 xor() 6103 MB/s Dec 13 14:27:16.069433 kernel: raid6: sse2x1 gen() 8932 MB/s Dec 13 14:27:16.087445 kernel: raid6: sse2x1 xor() 4612 MB/s Dec 13 14:27:16.087521 kernel: raid6: using algorithm avx2x4 gen() 16914 MB/s Dec 13 14:27:16.087550 kernel: raid6: .... xor() 6038 MB/s, rmw enabled Dec 13 14:27:16.089296 kernel: raid6: using avx512x2 recovery algorithm Dec 13 14:27:16.104421 kernel: xor: automatically using best checksumming function avx Dec 13 14:27:16.211415 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:27:16.219585 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:27:16.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:16.219000 audit: BPF prog-id=7 op=LOAD Dec 13 14:27:16.219000 audit: BPF prog-id=8 op=LOAD Dec 13 14:27:16.221965 systemd[1]: Starting systemd-udevd.service... 
Dec 13 14:27:16.254915 systemd-udevd[384]: Using default interface naming scheme 'v252'. Dec 13 14:27:16.267950 systemd[1]: Started systemd-udevd.service. Dec 13 14:27:16.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:16.271599 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:27:16.297692 dracut-pre-trigger[391]: rd.md=0: removing MD RAID activation Dec 13 14:27:16.353406 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:27:16.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:16.356144 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:27:16.415036 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:27:16.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:16.508411 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:27:16.544227 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 14:27:16.549408 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 14:27:16.549630 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Dec 13 14:27:16.549762 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:bd:76:9a:9e:63 Dec 13 14:27:16.553120 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 14:27:16.553419 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 14:27:16.553441 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 13 14:27:16.553458 kernel: AES CTR mode by8 optimization enabled Dec 13 14:27:16.554911 (udev-worker)[429]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:27:16.763607 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 14:27:16.763823 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:27:16.763844 kernel: GPT:9289727 != 16777215 Dec 13 14:27:16.763861 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:27:16.763877 kernel: GPT:9289727 != 16777215 Dec 13 14:27:16.763895 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:27:16.763909 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:27:16.763924 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (424) Dec 13 14:27:16.716906 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:27:16.783865 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:27:16.790201 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:27:16.790322 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:27:16.803359 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:27:16.807376 systemd[1]: Starting disk-uuid.service... Dec 13 14:27:16.815093 disk-uuid[586]: Primary Header is updated. Dec 13 14:27:16.815093 disk-uuid[586]: Secondary Entries is updated. Dec 13 14:27:16.815093 disk-uuid[586]: Secondary Header is updated. Dec 13 14:27:16.830410 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:27:16.836410 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:27:16.841489 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:27:17.845305 disk-uuid[587]: The operation has completed successfully. Dec 13 14:27:17.847062 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:27:17.985193 systemd[1]: disk-uuid.service: Deactivated successfully. 
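The GPT complaints above ("GPT:9289727 != 16777215", backup header not at the end of the disk) are the usual signature of a grown disk: the EBS volume was enlarged, but the backup GPT header still sits where the old last sector was. A hedged sketch of the repair; the device name is taken from this log and the commands need root, so they are commented:

```shell
# The mismatch in the log: backup GPT header at old sector 9289727, while
# the disk now ends at sector 16777215, so the header falls short by:
echo "backup header short of disk end by $((16777215 - 9289727)) sectors"

# Repair options (root required; verify the device first):
#   sgdisk --move-second-header /dev/nvme0n1   # relocate backup GPT to disk end
#   parted /dev/nvme0n1 print                  # or let parted offer its "Fix"
```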
Dec 13 14:27:17.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:17.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:17.985404 systemd[1]: Finished disk-uuid.service. Dec 13 14:27:17.991673 systemd[1]: Starting verity-setup.service... Dec 13 14:27:18.032413 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 14:27:18.152512 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:27:18.154517 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:27:18.159606 systemd[1]: Finished verity-setup.service. Dec 13 14:27:18.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.246633 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:27:18.247083 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:27:18.248596 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:27:18.250688 systemd[1]: Starting ignition-setup.service... Dec 13 14:27:18.255685 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:27:18.281431 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:27:18.281504 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:27:18.281523 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:27:18.294607 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:27:18.313406 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Dec 13 14:27:18.332869 systemd[1]: Finished ignition-setup.service. Dec 13 14:27:18.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.334751 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:27:18.366301 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:27:18.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.366000 audit: BPF prog-id=9 op=LOAD Dec 13 14:27:18.369117 systemd[1]: Starting systemd-networkd.service... Dec 13 14:27:18.410578 systemd-networkd[1099]: lo: Link UP Dec 13 14:27:18.410590 systemd-networkd[1099]: lo: Gained carrier Dec 13 14:27:18.411410 systemd-networkd[1099]: Enumeration completed Dec 13 14:27:18.411745 systemd-networkd[1099]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:27:18.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.412254 systemd[1]: Started systemd-networkd.service. Dec 13 14:27:18.417859 systemd[1]: Reached target network.target. Dec 13 14:27:18.423032 systemd[1]: Starting iscsiuio.service... Dec 13 14:27:18.424114 systemd-networkd[1099]: eth0: Link UP Dec 13 14:27:18.424119 systemd-networkd[1099]: eth0: Gained carrier Dec 13 14:27:18.435290 systemd[1]: Started iscsiuio.service. Dec 13 14:27:18.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.437758 systemd[1]: Starting iscsid.service... 
Dec 13 14:27:18.441565 systemd-networkd[1099]: eth0: DHCPv4 address 172.31.29.3/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 14:27:18.443422 iscsid[1104]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:27:18.443422 iscsid[1104]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:27:18.443422 iscsid[1104]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:27:18.443422 iscsid[1104]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:27:18.443422 iscsid[1104]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:27:18.443422 iscsid[1104]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:27:18.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.450905 systemd[1]: Started iscsid.service. Dec 13 14:27:18.461204 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:27:18.477177 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:27:18.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.477659 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:27:18.486253 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:27:18.488427 systemd[1]: Reached target remote-fs.target. 
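The iscsid warning above is benign when no iSCSI targets are in use (as here), but the fix it asks for is a one-line file. A sketch with an illustrative IQN — the `io.example` naming authority and `node01` identifier are placeholders, not from this system; writing under /etc/iscsi needs root, so that step is commented:

```shell
# Build an initiator name in the iqn.yyyy-mm.<reversed domain>[:identifier]
# form iscsid expects; "io.example" and "node01" are placeholders.
IQN="iqn.$(date +%Y-%m).io.example:node01"
echo "InitiatorName=${IQN}"

# As root, persist it where iscsid looks:
#   mkdir -p /etc/iscsi
#   echo "InitiatorName=${IQN}" > /etc/iscsi/initiatorname.iscsi
```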
Dec 13 14:27:18.491165 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:27:18.504651 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:27:18.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.910189 ignition[1066]: Ignition 2.14.0 Dec 13 14:27:18.910201 ignition[1066]: Stage: fetch-offline Dec 13 14:27:18.910662 ignition[1066]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:27:18.910781 ignition[1066]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:27:18.931314 ignition[1066]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:27:18.931957 ignition[1066]: Ignition finished successfully Dec 13 14:27:18.934556 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:27:18.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.936416 systemd[1]: Starting ignition-fetch.service... 
Dec 13 14:27:18.946900 ignition[1123]: Ignition 2.14.0 Dec 13 14:27:18.947196 ignition[1123]: Stage: fetch Dec 13 14:27:18.948754 ignition[1123]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:27:18.948796 ignition[1123]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:27:18.957337 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:27:18.959714 ignition[1123]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:27:18.970374 ignition[1123]: INFO : PUT result: OK Dec 13 14:27:18.972840 ignition[1123]: DEBUG : parsed url from cmdline: "" Dec 13 14:27:18.972840 ignition[1123]: INFO : no config URL provided Dec 13 14:27:18.972840 ignition[1123]: INFO : reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:27:18.976786 ignition[1123]: INFO : no config at "/usr/lib/ignition/user.ign" Dec 13 14:27:18.976786 ignition[1123]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:27:18.979465 ignition[1123]: INFO : PUT result: OK Dec 13 14:27:18.979465 ignition[1123]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 14:27:18.982095 ignition[1123]: INFO : GET result: OK Dec 13 14:27:18.982095 ignition[1123]: DEBUG : parsing config with SHA512: ccc7dfbdc88249de95bc33c2966612887bda521c23f8119b2afb5bebad25d00cd4efeb882984c934e8c4190c3d7fa934b648395f2d71716449492791a7933b2b Dec 13 14:27:18.989936 unknown[1123]: fetched base config from "system" Dec 13 14:27:18.990008 unknown[1123]: fetched base config from "system" Dec 13 14:27:18.990020 unknown[1123]: fetched user config from "aws" Dec 13 14:27:18.993354 ignition[1123]: fetch: fetch complete Dec 13 14:27:18.993367 ignition[1123]: fetch: fetch passed Dec 13 14:27:18.993452 ignition[1123]: Ignition finished successfully Dec 13 14:27:18.996822 systemd[1]: Finished 
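The PUT-then-GET pairs Ignition logs above are the IMDSv2 flow: mint a short-lived session token with a PUT, then present it on each metadata GET. A sketch of the same exchange with curl; it only works from inside an EC2 instance, so the calls are left commented (endpoint and paths are the ones in the log):

```shell
# IMDSv2: a PUT mints a session token (TTL in seconds; 21600 = 6h, the
# maximum), and each GET presents it. EC2-only, hence commented:
#
#   TOKEN=$(curl -sf -X PUT "http://169.254.169.254/latest/api/token" \
#             -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
#   curl -sf "http://169.254.169.254/2019-10-01/user-data" \
#        -H "X-aws-ec2-metadata-token: ${TOKEN}"
echo "IMDSv2 token TTL requested: 21600s"
```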
ignition-fetch.service. Dec 13 14:27:18.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:18.999139 systemd[1]: Starting ignition-kargs.service... Dec 13 14:27:19.014651 ignition[1129]: Ignition 2.14.0 Dec 13 14:27:19.014666 ignition[1129]: Stage: kargs Dec 13 14:27:19.014891 ignition[1129]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:27:19.014988 ignition[1129]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:27:19.024159 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:27:19.026049 ignition[1129]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:27:19.028429 ignition[1129]: INFO : PUT result: OK Dec 13 14:27:19.032480 ignition[1129]: kargs: kargs passed Dec 13 14:27:19.032607 ignition[1129]: Ignition finished successfully Dec 13 14:27:19.035219 systemd[1]: Finished ignition-kargs.service. Dec 13 14:27:19.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.036253 systemd[1]: Starting ignition-disks.service... 
Dec 13 14:27:19.048900 ignition[1135]: Ignition 2.14.0 Dec 13 14:27:19.049283 ignition[1135]: Stage: disks Dec 13 14:27:19.051904 ignition[1135]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:27:19.051937 ignition[1135]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:27:19.065289 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:27:19.068211 ignition[1135]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:27:19.072381 ignition[1135]: INFO : PUT result: OK Dec 13 14:27:19.079120 ignition[1135]: disks: disks passed Dec 13 14:27:19.079193 ignition[1135]: Ignition finished successfully Dec 13 14:27:19.080541 systemd[1]: Finished ignition-disks.service. Dec 13 14:27:19.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.082419 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:27:19.085378 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:27:19.087240 systemd[1]: Reached target local-fs.target. Dec 13 14:27:19.089438 systemd[1]: Reached target sysinit.target. Dec 13 14:27:19.091078 systemd[1]: Reached target basic.target. Dec 13 14:27:19.093729 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:27:19.130056 systemd-fsck[1143]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 14:27:19.134938 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:27:19.144209 kernel: kauditd_printk_skb: 21 callbacks suppressed Dec 13 14:27:19.144244 kernel: audit: type=1130 audit(1734100039.134:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:19.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.137225 systemd[1]: Mounting sysroot.mount... Dec 13 14:27:19.166449 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:27:19.169958 systemd[1]: Mounted sysroot.mount. Dec 13 14:27:19.172346 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:27:19.176189 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:27:19.179840 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:27:19.180628 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:27:19.180669 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:27:19.191751 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:27:19.210268 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:27:19.214057 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 14:27:19.222528 initrd-setup-root[1165]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:27:19.238408 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1160) Dec 13 14:27:19.239299 initrd-setup-root[1173]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:27:19.245972 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:27:19.246009 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:27:19.246026 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:27:19.253798 initrd-setup-root[1197]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:27:19.258415 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:27:19.264068 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:27:19.270801 initrd-setup-root[1207]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:27:19.438216 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:27:19.449288 kernel: audit: type=1130 audit(1734100039.437:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.439591 systemd[1]: Starting ignition-mount.service... Dec 13 14:27:19.462651 systemd[1]: Starting sysroot-boot.service... Dec 13 14:27:19.474683 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:27:19.477454 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Dec 13 14:27:19.516364 ignition[1225]: INFO : Ignition 2.14.0 Dec 13 14:27:19.517803 ignition[1225]: INFO : Stage: mount Dec 13 14:27:19.517803 ignition[1225]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:27:19.517803 ignition[1225]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:27:19.533374 ignition[1225]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:27:19.535066 ignition[1225]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:27:19.537200 ignition[1225]: INFO : PUT result: OK Dec 13 14:27:19.553849 ignition[1225]: INFO : mount: mount passed Dec 13 14:27:19.554872 ignition[1225]: INFO : Ignition finished successfully Dec 13 14:27:19.557978 systemd[1]: Finished sysroot-boot.service. Dec 13 14:27:19.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.564408 kernel: audit: type=1130 audit(1734100039.557:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.559228 systemd[1]: Finished ignition-mount.service. Dec 13 14:27:19.569664 kernel: audit: type=1130 audit(1734100039.564:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.570730 systemd[1]: Starting ignition-files.service... 
Dec 13 14:27:19.585599 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:27:19.606584 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1235)
Dec 13 14:27:19.609217 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:27:19.609320 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:27:19.609339 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:27:19.617412 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:27:19.621291 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:27:19.647955 ignition[1254]: INFO : Ignition 2.14.0
Dec 13 14:27:19.647955 ignition[1254]: INFO : Stage: files
Dec 13 14:27:19.652893 ignition[1254]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:27:19.652893 ignition[1254]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:27:19.667849 ignition[1254]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:27:19.667849 ignition[1254]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:27:19.685910 ignition[1254]: INFO : PUT result: OK
Dec 13 14:27:19.690236 ignition[1254]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:27:19.698032 ignition[1254]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:27:19.699529 ignition[1254]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:27:19.727377 ignition[1254]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:27:19.729062 ignition[1254]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:27:19.731304 unknown[1254]: wrote ssh authorized keys file for user: core
Dec 13 14:27:19.732590 ignition[1254]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:27:19.734162 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 14:27:19.734162 ignition[1254]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 14:27:19.847655 ignition[1254]: INFO : GET result: OK
Dec 13 14:27:19.941528 systemd-networkd[1099]: eth0: Gained IPv6LL
Dec 13 14:27:19.995619 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 14:27:19.997658 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 14:27:19.997658 ignition[1254]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:27:20.005145 ignition[1254]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem446733479"
Dec 13 14:27:20.008231 ignition[1254]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem446733479": device or resource busy
Dec 13 14:27:20.008231 ignition[1254]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem446733479", trying btrfs: device or resource busy
Dec 13 14:27:20.008231 ignition[1254]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem446733479"
Dec 13 14:27:20.013809 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1259)
Dec 13 14:27:20.013843 ignition[1254]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem446733479"
Dec 13 14:27:20.018964 ignition[1254]: INFO : op(3): [started] unmounting "/mnt/oem446733479"
Dec 13 14:27:20.030458 ignition[1254]: INFO : op(3): [finished] unmounting "/mnt/oem446733479"
Dec 13 14:27:20.030458 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 14:27:20.030458 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:27:20.037051 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:27:20.037051 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:27:20.043479 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:27:20.043479 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:27:20.048646 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:27:20.048646 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:27:20.048646 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:27:20.056974 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:27:20.056974 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:27:20.056974 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:27:20.056974 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:27:20.056974 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 14:27:20.056974 ignition[1254]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:27:20.074442 ignition[1254]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1878326961"
Dec 13 14:27:20.074442 ignition[1254]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1878326961": device or resource busy
Dec 13 14:27:20.074442 ignition[1254]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1878326961", trying btrfs: device or resource busy
Dec 13 14:27:20.074442 ignition[1254]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1878326961"
Dec 13 14:27:20.074442 ignition[1254]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1878326961"
Dec 13 14:27:20.074442 ignition[1254]: INFO : op(6): [started] unmounting "/mnt/oem1878326961"
Dec 13 14:27:20.074442 ignition[1254]: INFO : op(6): [finished] unmounting "/mnt/oem1878326961"
Dec 13 14:27:20.074442 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 14:27:20.074442 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 14:27:20.074442 ignition[1254]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:27:20.107127 ignition[1254]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3015315075"
Dec 13 14:27:20.108741 ignition[1254]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3015315075": device or resource busy
Dec 13 14:27:20.108741 ignition[1254]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3015315075", trying btrfs: device or resource busy
Dec 13 14:27:20.108741 ignition[1254]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3015315075"
Dec 13 14:27:20.108741 ignition[1254]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3015315075"
Dec 13 14:27:20.108741 ignition[1254]: INFO : op(9): [started] unmounting "/mnt/oem3015315075"
Dec 13 14:27:20.108741 ignition[1254]: INFO : op(9): [finished] unmounting "/mnt/oem3015315075"
Dec 13 14:27:20.108741 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 14:27:20.132798 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:27:20.132798 ignition[1254]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:27:20.132798 ignition[1254]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3510060037"
Dec 13 14:27:20.132798 ignition[1254]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3510060037": device or resource busy
Dec 13 14:27:20.132798 ignition[1254]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3510060037", trying btrfs: device or resource busy
Dec 13 14:27:20.132798 ignition[1254]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3510060037"
Dec 13 14:27:20.132798 ignition[1254]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3510060037"
Dec 13 14:27:20.147451 ignition[1254]: INFO : op(c): [started] unmounting "/mnt/oem3510060037"
Dec 13 14:27:20.147451 ignition[1254]: INFO : op(c): [finished] unmounting "/mnt/oem3510060037"
Dec 13 14:27:20.147451 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:27:20.147451 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:27:20.147451 ignition[1254]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 14:27:20.699553 ignition[1254]: INFO : GET result: OK
Dec 13 14:27:21.568134 ignition[1254]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:27:21.568134 ignition[1254]: INFO : files: op(f): [started] processing unit "nvidia.service"
Dec 13 14:27:21.568134 ignition[1254]: INFO : files: op(f): [finished] processing unit "nvidia.service"
Dec 13 14:27:21.568134 ignition[1254]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:27:21.568134 ignition[1254]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(13): [started] processing unit "prepare-helm.service"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(13): op(14): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(13): op(14): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(13): [finished] processing unit "prepare-helm.service"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(16): [started] setting preset to enabled for "nvidia.service"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(16): [finished] setting preset to enabled for "nvidia.service"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(17): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(17): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(18): [started] setting preset to enabled for "amazon-ssm-agent.service"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: op(18): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: createResultFile: createFiles: op(19): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: createResultFile: createFiles: op(19): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:27:21.579322 ignition[1254]: INFO : files: files passed
Dec 13 14:27:21.579322 ignition[1254]: INFO : Ignition finished successfully
Dec 13 14:27:21.630860 kernel: audit: type=1130 audit(1734100041.599:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.598108 systemd[1]: Finished ignition-files.service.
Dec 13 14:27:21.610100 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 14:27:21.637939 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:27:21.655856 initrd-setup-root-after-ignition[1278]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:27:21.652569 systemd[1]: Starting ignition-quench.service...
Dec 13 14:27:21.663636 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 14:27:21.666088 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:27:21.666183 systemd[1]: Finished ignition-quench.service.
Dec 13 14:27:21.680004 kernel: audit: type=1130 audit(1734100041.664:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.680035 kernel: audit: type=1130 audit(1734100041.669:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.680048 kernel: audit: type=1131 audit(1734100041.669:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.680035 systemd[1]: Reached target ignition-complete.target.
Dec 13 14:27:21.682798 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 14:27:21.701312 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:27:21.701430 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 14:27:21.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.707779 systemd[1]: Reached target initrd-fs.target.
Dec 13 14:27:21.713482 kernel: audit: type=1130 audit(1734100041.702:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.713513 kernel: audit: type=1131 audit(1734100041.706:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.712139 systemd[1]: Reached target initrd.target.
Dec 13 14:27:21.713640 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 14:27:21.714836 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 14:27:21.728125 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:27:21.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.730861 systemd[1]: Starting initrd-cleanup.service...
Dec 13 14:27:21.743343 systemd[1]: Stopped target nss-lookup.target.
Dec 13 14:27:21.743586 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 14:27:21.747320 systemd[1]: Stopped target timers.target.
Dec 13 14:27:21.749730 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:27:21.750888 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 14:27:21.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.752947 systemd[1]: Stopped target initrd.target.
Dec 13 14:27:21.755158 systemd[1]: Stopped target basic.target.
Dec 13 14:27:21.757275 systemd[1]: Stopped target ignition-complete.target.
Dec 13 14:27:21.769614 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 14:27:21.780565 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 14:27:21.786488 systemd[1]: Stopped target remote-fs.target.
Dec 13 14:27:21.790655 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 14:27:21.795283 systemd[1]: Stopped target sysinit.target.
Dec 13 14:27:21.800382 systemd[1]: Stopped target local-fs.target.
Dec 13 14:27:21.805056 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 14:27:21.816157 systemd[1]: Stopped target swap.target.
Dec 13 14:27:21.822657 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:27:21.822938 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 14:27:21.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.837134 systemd[1]: Stopped target cryptsetup.target.
Dec 13 14:27:21.842664 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:27:21.844824 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 14:27:21.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.849138 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:27:21.851577 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 14:27:21.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.853579 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:27:21.854755 systemd[1]: Stopped ignition-files.service.
Dec 13 14:27:21.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.857526 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:27:21.902521 ignition[1292]: INFO : Ignition 2.14.0
Dec 13 14:27:21.902521 ignition[1292]: INFO : Stage: umount
Dec 13 14:27:21.902521 ignition[1292]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:27:21.902521 ignition[1292]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:27:21.902521 ignition[1292]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:27:21.902521 ignition[1292]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:27:21.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.902684 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:27:21.915795 ignition[1292]: INFO : PUT result: OK
Dec 13 14:27:21.915795 ignition[1292]: INFO : umount: umount passed
Dec 13 14:27:21.915795 ignition[1292]: INFO : Ignition finished successfully
Dec 13 14:27:21.903486 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:27:21.903677 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:27:21.904782 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:27:21.904931 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:27:21.910001 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:27:21.910220 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:27:21.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.936792 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:27:21.939759 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:27:21.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.942946 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:27:21.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.943022 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:27:21.943950 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:27:21.944002 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:27:21.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.948356 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:27:21.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.948442 systemd[1]: Stopped ignition-fetch.service.
Dec 13 14:27:21.951597 systemd[1]: Stopped target network.target.
Dec 13 14:27:21.959461 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:27:21.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.959535 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:27:21.960883 systemd[1]: Stopped target paths.target.
Dec 13 14:27:21.964721 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:27:21.964828 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:27:21.968214 systemd[1]: Stopped target slices.target.
Dec 13 14:27:21.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.968953 systemd[1]: Stopped target sockets.target.
Dec 13 14:27:21.969850 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:27:21.969883 systemd[1]: Closed iscsid.socket.
Dec 13 14:27:21.970724 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:27:21.970748 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:27:21.971477 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:27:21.971516 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:27:21.972851 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:27:21.979412 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:27:21.986002 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:27:21.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.992000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:27:21.986137 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:27:21.989620 systemd-networkd[1099]: eth0: DHCPv6 lease lost
Dec 13 14:27:21.990900 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:27:21.991027 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:27:21.992275 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:27:21.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.999000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:27:21.992316 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:27:22.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:21.995759 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:27:21.997656 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:27:21.997730 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:27:21.999492 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:27:21.999548 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:27:22.002351 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:27:22.002418 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:27:22.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.015716 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:27:22.019701 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:27:22.019821 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:27:22.023838 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:27:22.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.024044 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:27:22.025963 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:27:22.026014 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:27:22.029104 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:27:22.029157 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:27:22.033657 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:27:22.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.033705 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:27:22.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.036164 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:27:22.036208 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:27:22.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.038427 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:27:22.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.038467 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:27:22.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.040158 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:27:22.049563 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 14:27:22.049635 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 14:27:22.052412 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:27:22.052461 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:27:22.053328 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:27:22.054282 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:27:22.065167 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 14:27:22.067380 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:27:22.068498 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:27:22.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.070249 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:27:22.071781 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:27:22.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.142268 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:27:22.142378 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:27:22.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.144422 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:27:22.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.145975 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:27:22.146041 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:27:22.147175 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:27:22.160904 systemd[1]: Switching root.
Dec 13 14:27:22.183883 iscsid[1104]: iscsid shutting down.
Dec 13 14:27:22.188570 systemd-journald[185]: Received SIGTERM from PID 1 (n/a).
Dec 13 14:27:22.188647 systemd-journald[185]: Journal stopped
Dec 13 14:27:26.186536 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 14:27:26.186612 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 14:27:26.186634 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:27:26.186653 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:27:26.186671 kernel: SELinux: policy capability open_perms=1
Dec 13 14:27:26.186694 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:27:26.186783 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:27:26.186805 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:27:26.186823 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:27:26.186843 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:27:26.186869 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:27:26.186891 systemd[1]: Successfully loaded SELinux policy in 59.826ms.
Dec 13 14:27:26.186926 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.997ms.
Dec 13 14:27:26.186948 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:27:26.186968 systemd[1]: Detected virtualization amazon.
Dec 13 14:27:26.186988 systemd[1]: Detected architecture x86-64.
Dec 13 14:27:26.187007 systemd[1]: Detected first boot.
Dec 13 14:27:26.187032 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:27:26.187054 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:27:26.187073 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:27:26.187094 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:27:26.187115 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:27:26.187138 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:27:26.187157 kernel: kauditd_printk_skb: 51 callbacks suppressed
Dec 13 14:27:26.187175 kernel: audit: type=1334 audit(1734100045.894:86): prog-id=12 op=LOAD
Dec 13 14:27:26.187197 kernel: audit: type=1334 audit(1734100045.894:87): prog-id=3 op=UNLOAD
Dec 13 14:27:26.187216 kernel: audit: type=1334 audit(1734100045.895:88): prog-id=13 op=LOAD
Dec 13 14:27:26.187235 kernel: audit: type=1334 audit(1734100045.896:89): prog-id=14 op=LOAD
Dec 13 14:27:26.187253 kernel: audit: type=1334 audit(1734100045.896:90): prog-id=4 op=UNLOAD
Dec 13 14:27:26.187308 kernel: audit: type=1334 audit(1734100045.896:91): prog-id=5 op=UNLOAD
Dec 13 14:27:26.187327 kernel: audit: type=1334 audit(1734100045.898:92): prog-id=15 op=LOAD
Dec 13 14:27:26.187345 kernel: audit: type=1334 audit(1734100045.898:93): prog-id=12 op=UNLOAD
Dec 13 14:27:26.187368 kernel: audit: type=1334 audit(1734100045.899:94): prog-id=16 op=LOAD
Dec 13 14:27:26.187397 kernel: audit: type=1334 audit(1734100045.900:95): prog-id=17 op=LOAD
Dec 13 14:27:26.187417 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:27:26.187438 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:27:26.187458 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:27:26.187478 systemd[1]: Stopped iscsid.service.
Dec 13 14:27:26.187496 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:27:26.187515 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 14:27:26.187532 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:27:26.187554 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:27:26.187572 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:27:26.187591 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 14:27:26.187609 systemd[1]: Created slice system-getty.slice.
Dec 13 14:27:26.187627 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:27:26.187650 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:27:26.187669 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:27:26.187687 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:27:26.187753 systemd[1]: Created slice user.slice.
Dec 13 14:27:26.187771 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:27:26.187788 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:27:26.187805 systemd[1]: Set up automount boot.automount.
Dec 13 14:27:26.187825 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:27:26.187844 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 14:27:26.187864 systemd[1]: Stopped target initrd-fs.target.
Dec 13 14:27:26.187884 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 14:27:26.187901 systemd[1]: Reached target integritysetup.target.
Dec 13 14:27:26.187921 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:27:26.187939 systemd[1]: Reached target remote-fs.target.
Dec 13 14:27:26.187956 systemd[1]: Reached target slices.target.
Dec 13 14:27:26.187974 systemd[1]: Reached target swap.target.
Dec 13 14:27:26.187992 systemd[1]: Reached target torcx.target.
Dec 13 14:27:26.188011 systemd[1]: Reached target veritysetup.target.
Dec 13 14:27:26.188029 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:27:26.188049 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:27:26.188067 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:27:26.188088 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:27:26.188106 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:27:26.188126 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:27:26.188145 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:27:26.188164 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:27:26.188183 systemd[1]: Mounting media.mount...
Dec 13 14:27:26.188203 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:26.188223 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:27:26.188242 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:27:26.188264 systemd[1]: Mounting tmp.mount...
Dec 13 14:27:26.188281 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:27:26.188300 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:27:26.188318 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:27:26.188337 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:27:26.188354 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:27:26.188372 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:27:26.189359 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:27:26.189454 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:27:26.189477 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:27:26.189502 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:27:26.189522 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:27:26.189542 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 14:27:26.189560 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:27:26.189579 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:27:26.189608 systemd[1]: Stopped systemd-journald.service.
Dec 13 14:27:26.189630 systemd[1]: Starting systemd-journald.service...
Dec 13 14:27:26.189651 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:27:26.189671 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:27:26.189691 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:27:26.189710 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:27:26.189730 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 14:27:26.189750 systemd[1]: Stopped verity-setup.service.
Dec 13 14:27:26.189780 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:26.189801 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:27:26.189821 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:27:26.189842 systemd[1]: Mounted media.mount.
Dec 13 14:27:26.189863 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:27:26.189881 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:27:26.189902 systemd[1]: Mounted tmp.mount.
Dec 13 14:27:26.189922 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:27:26.189941 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:27:26.189963 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:27:26.189982 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:27:26.190002 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:27:26.190021 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:27:26.190040 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:27:26.190064 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:27:26.190083 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:27:26.190103 kernel: fuse: init (API version 7.34)
Dec 13 14:27:26.190122 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:27:26.190144 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:27:26.190164 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:27:26.190186 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:27:26.190206 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:27:26.190224 systemd[1]: Reached target network-pre.target.
Dec 13 14:27:26.190242 kernel: loop: module loaded
Dec 13 14:27:26.190260 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:27:26.190337 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:27:26.190357 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:27:26.190383 systemd-journald[1402]: Journal started
Dec 13 14:27:26.190512 systemd-journald[1402]: Runtime Journal (/run/log/journal/ec2ff8f7b3bc4cdb76e97723d2381720) is 4.8M, max 38.7M, 33.9M free.
Dec 13 14:27:22.677000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:27:22.744000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:27:22.744000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:27:22.744000 audit: BPF prog-id=10 op=LOAD
Dec 13 14:27:22.744000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 14:27:22.744000 audit: BPF prog-id=11 op=LOAD
Dec 13 14:27:22.744000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 14:27:22.877000 audit[1327]: AVC avc: denied { associate } for pid=1327 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:27:22.877000 audit[1327]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=1310 pid=1327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:22.877000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:27:22.879000 audit[1327]: AVC avc: denied { associate } for pid=1327 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:27:22.879000 audit[1327]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=1310 pid=1327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:22.879000 audit: CWD cwd="/"
Dec 13 14:27:22.879000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:22.879000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:22.879000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:27:25.894000 audit: BPF prog-id=12 op=LOAD
Dec 13 14:27:25.894000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:27:25.895000 audit: BPF prog-id=13 op=LOAD
Dec 13 14:27:25.896000 audit: BPF prog-id=14 op=LOAD
Dec 13 14:27:25.896000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:27:25.896000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:27:25.898000 audit: BPF prog-id=15 op=LOAD
Dec 13 14:27:25.898000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 14:27:25.899000 audit: BPF prog-id=16 op=LOAD
Dec 13 14:27:25.900000 audit: BPF prog-id=17 op=LOAD
Dec 13 14:27:25.900000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 14:27:25.900000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 14:27:25.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:25.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:25.910000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 14:27:25.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:25.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:25.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.076000 audit: BPF prog-id=18 op=LOAD
Dec 13 14:27:26.077000 audit: BPF prog-id=19 op=LOAD
Dec 13 14:27:26.077000 audit: BPF prog-id=20 op=LOAD
Dec 13 14:27:26.077000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 14:27:26.077000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 14:27:26.206359 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:27:26.206536 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:27:26.206570 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:27:26.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.214827 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:27:26.214894 systemd[1]: Started systemd-journald.service.
Dec 13 14:27:26.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.182000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:27:26.182000 audit[1402]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff06da8560 a2=4000 a3=7fff06da85fc items=0 ppid=1 pid=1402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:26.182000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:27:26.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.875530 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:27:25.893640 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:27:22.876456 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:27:25.903830 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:27:22.876484 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:27:26.213294 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:27:22.876527 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 14:27:26.213506 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:27:22.876543 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 14:27:26.214803 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:27:22.876586 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 14:27:26.215978 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:27:26.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.876604 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 14:27:26.219139 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:27:22.876854 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 14:27:26.220298 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:27:22.876905 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:27:26.224594 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:27:22.876924 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:27:26.231693 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:27:22.877967 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 14:27:22.878021 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 14:27:22.878052 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 14:27:26.242522 systemd-journald[1402]: Time spent on flushing to /var/log/journal/ec2ff8f7b3bc4cdb76e97723d2381720 is 69.434ms for 1184 entries.
Dec 13 14:27:26.242522 systemd-journald[1402]: System Journal (/var/log/journal/ec2ff8f7b3bc4cdb76e97723d2381720) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:27:26.325804 systemd-journald[1402]: Received client request to flush runtime journal.
Dec 13 14:27:26.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:22.878075 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 14:27:26.264837 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:27:22.878101 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 14:27:26.277743 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:27:22.878124 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:22Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 14:27:26.280740 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:27:25.458298 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:25Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:27:25.458584 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:25Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:27:26.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.327695 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:27:25.458687 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:25Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:27:25.458867 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:25Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:27:25.458916 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:25Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 14:27:25.458971 /usr/lib/systemd/system-generators/torcx-generator[1327]: time="2024-12-13T14:27:25Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 14:27:26.335778 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:27:26.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.338733 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:27:26.369899 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:27:26.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:26.372780 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:27:26.392601 udevadm[1444]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 14:27:26.423506 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:27:26.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.042428 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:27:27.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.044000 audit: BPF prog-id=21 op=LOAD
Dec 13 14:27:27.045000 audit: BPF prog-id=22 op=LOAD
Dec 13 14:27:27.045000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:27:27.045000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:27:27.048196 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:27:27.076197 systemd-udevd[1445]: Using default interface naming scheme 'v252'.
Dec 13 14:27:27.127686 systemd[1]: Started systemd-udevd.service.
Dec 13 14:27:27.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.130000 audit: BPF prog-id=23 op=LOAD
Dec 13 14:27:27.138989 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:27:27.152000 audit: BPF prog-id=24 op=LOAD
Dec 13 14:27:27.152000 audit: BPF prog-id=25 op=LOAD
Dec 13 14:27:27.153000 audit: BPF prog-id=26 op=LOAD
Dec 13 14:27:27.155637 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:27:27.225910 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:27:27.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.234512 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 14:27:27.248696 (udev-worker)[1451]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:27:27.337429 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 14:27:27.345933 systemd-networkd[1459]: lo: Link UP
Dec 13 14:27:27.345947 systemd-networkd[1459]: lo: Gained carrier
Dec 13 14:27:27.346407 kernel: ACPI: button: Power Button [PWRF]
Dec 13 14:27:27.346529 systemd-networkd[1459]: Enumeration completed
Dec 13 14:27:27.346644 systemd[1]: Started systemd-networkd.service.
Dec 13 14:27:27.348401 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Dec 13 14:27:27.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.350431 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:27:27.352072 systemd-networkd[1459]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:27:27.357306 systemd-networkd[1459]: eth0: Link UP
Dec 13 14:27:27.357460 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:27:27.357685 systemd-networkd[1459]: eth0: Gained carrier
Dec 13 14:27:27.366590 systemd-networkd[1459]: eth0: DHCPv4 address 172.31.29.3/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 14:27:27.377439 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 14:27:27.412415 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1452)
Dec 13 14:27:27.395000 audit[1453]: AVC avc: denied { confidentiality } for pid=1453 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:27:27.395000 audit[1453]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55cfd462e8b0 a1=337fc a2=7f891a263bc5 a3=5 items=110 ppid=1445 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:27.395000 audit: CWD cwd="/"
Dec 13 14:27:27.395000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:27.395000 audit: PATH item=1 name=(null) inode=15159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:27.395000 audit: PATH item=2 name=(null) inode=15159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:27.395000 audit: PATH item=3 name=(null) inode=15160 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=4 name=(null) inode=15159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=5 name=(null) inode=15161 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=6 name=(null) inode=15159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=7 name=(null) inode=15162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=8 name=(null) inode=15162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=9 name=(null) inode=15163 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=10 name=(null) inode=15162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=11 name=(null) inode=15164 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=12 name=(null) inode=15162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=13 name=(null) inode=15165 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=14 name=(null) inode=15162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=15 name=(null) inode=15166 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=16 name=(null) inode=15162 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=17 name=(null) inode=15167 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=18 name=(null) inode=15159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=19 name=(null) inode=15168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=20 name=(null) inode=15168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=21 name=(null) inode=15169 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:27:27.395000 audit: PATH item=22 name=(null) inode=15168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=23 name=(null) inode=15170 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=24 name=(null) inode=15168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=25 name=(null) inode=15171 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=26 name=(null) inode=15168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=27 name=(null) inode=15172 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=28 name=(null) inode=15168 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=29 name=(null) inode=15173 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=30 name=(null) inode=15159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=31 
name=(null) inode=15174 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=32 name=(null) inode=15174 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=33 name=(null) inode=15175 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=34 name=(null) inode=15174 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=35 name=(null) inode=15176 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=36 name=(null) inode=15174 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=37 name=(null) inode=15177 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=38 name=(null) inode=15174 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=39 name=(null) inode=15178 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=40 name=(null) inode=15174 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=41 name=(null) inode=15179 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=42 name=(null) inode=15159 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=43 name=(null) inode=15180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=44 name=(null) inode=15180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=45 name=(null) inode=15181 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=46 name=(null) inode=15180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=47 name=(null) inode=15182 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=48 name=(null) inode=15180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=49 name=(null) inode=15183 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=50 name=(null) inode=15180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=51 name=(null) inode=15184 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=52 name=(null) inode=15180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=53 name=(null) inode=15185 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=55 name=(null) inode=15186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=56 name=(null) inode=15186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=57 name=(null) inode=15187 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=58 name=(null) inode=15186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=59 name=(null) inode=15188 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=60 name=(null) inode=15186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=61 name=(null) inode=15189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=62 name=(null) inode=15189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=63 name=(null) inode=15190 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=64 name=(null) inode=15189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=65 name=(null) inode=15191 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=66 name=(null) inode=15189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=67 name=(null) inode=15192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=68 name=(null) inode=15189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=69 name=(null) inode=15193 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=70 name=(null) inode=15189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=71 name=(null) inode=15194 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=72 name=(null) inode=15186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=73 name=(null) inode=15195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=74 name=(null) inode=15195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=75 name=(null) inode=15196 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=76 name=(null) inode=15195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:27:27.395000 audit: PATH item=77 name=(null) inode=15197 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=78 name=(null) inode=15195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=79 name=(null) inode=15198 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=80 name=(null) inode=15195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=81 name=(null) inode=15199 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=82 name=(null) inode=15195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=83 name=(null) inode=15200 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=84 name=(null) inode=15186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=85 name=(null) inode=15201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=86 
name=(null) inode=15201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=87 name=(null) inode=15202 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=88 name=(null) inode=15201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=89 name=(null) inode=15203 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=90 name=(null) inode=15201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=91 name=(null) inode=15204 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=92 name=(null) inode=15201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=93 name=(null) inode=15205 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=94 name=(null) inode=15201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=95 name=(null) inode=15206 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=96 name=(null) inode=15186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=97 name=(null) inode=15207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=98 name=(null) inode=15207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=99 name=(null) inode=15208 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=100 name=(null) inode=15207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=101 name=(null) inode=15209 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=102 name=(null) inode=15207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=103 name=(null) inode=15210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:27.395000 audit: PATH item=104 name=(null) inode=15207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:27.395000 audit: PATH item=105 name=(null) inode=15211 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:27.395000 audit: PATH item=106 name=(null) inode=15207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:27.395000 audit: PATH item=107 name=(null) inode=15212 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:27.395000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:27.395000 audit: PATH item=109 name=(null) inode=15213 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:27.395000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 14:27:27.468408 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Dec 13 14:27:27.482947 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 14:27:27.490415 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 14:27:27.542476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:27:27.629978 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 14:27:27.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Dec 13 14:27:27.633371 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 14:27:27.681053 lvm[1559]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:27:27.719326 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 14:27:27.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.721471 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:27:27.724296 systemd[1]: Starting lvm2-activation.service...
Dec 13 14:27:27.734949 lvm[1560]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:27:27.762046 systemd[1]: Finished lvm2-activation.service.
Dec 13 14:27:27.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.763306 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:27:27.764428 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:27:27.764465 systemd[1]: Reached target local-fs.target.
Dec 13 14:27:27.765753 systemd[1]: Reached target machines.target.
Dec 13 14:27:27.773306 systemd[1]: Starting ldconfig.service...
Dec 13 14:27:27.776522 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:27:27.776666 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:27.778150 systemd[1]: Starting systemd-boot-update.service...
Dec 13 14:27:27.781144 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 14:27:27.784945 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 14:27:27.788752 systemd[1]: Starting systemd-sysext.service...
Dec 13 14:27:27.797777 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1562 (bootctl)
Dec 13 14:27:27.799749 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:27:27.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.816545 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:27:27.836382 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:27:27.844687 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:27:27.844888 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 14:27:27.868631 kernel: loop0: detected capacity change from 0 to 205544
Dec 13 14:27:28.002440 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:27:28.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.007697 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:27:28.008829 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:27:28.027012 systemd-fsck[1571]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:27:28.027012 systemd-fsck[1571]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters
Dec 13 14:27:28.033055 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:27:28.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.036745 systemd[1]: Mounting boot.mount...
Dec 13 14:27:28.042420 kernel: loop1: detected capacity change from 0 to 205544
Dec 13 14:27:28.085521 systemd[1]: Mounted boot.mount.
Dec 13 14:27:28.130166 (sd-sysext)[1576]: Using extensions 'kubernetes'.
Dec 13 14:27:28.131440 (sd-sysext)[1576]: Merged extensions into '/usr'.
Dec 13 14:27:28.156290 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:27:28.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.157944 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:28.160460 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 14:27:28.162715 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:27:28.166036 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:27:28.169445 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:27:28.173712 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:27:28.174712 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:27:28.174907 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:28.175090 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:28.181113 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:27:28.182498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:27:28.182707 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:27:28.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.183999 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:27:28.184114 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:27:28.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.185347 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:27:28.185642 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:27:28.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.188426 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:27:28.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.191138 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:27:28.192057 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:27:28.192124 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:27:28.193480 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:27:28.201040 systemd[1]: Reloading.
Dec 13 14:27:28.210462 systemd-tmpfiles[1594]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:27:28.212039 systemd-tmpfiles[1594]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:27:28.216944 systemd-tmpfiles[1594]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:27:28.288139 /usr/lib/systemd/system-generators/torcx-generator[1613]: time="2024-12-13T14:27:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:27:28.288630 /usr/lib/systemd/system-generators/torcx-generator[1613]: time="2024-12-13T14:27:28Z" level=info msg="torcx already run"
Dec 13 14:27:28.484930 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:27:28.484959 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:27:28.517481 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:27:28.598000 audit: BPF prog-id=27 op=LOAD
Dec 13 14:27:28.598000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 14:27:28.599000 audit: BPF prog-id=28 op=LOAD
Dec 13 14:27:28.599000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 14:27:28.599000 audit: BPF prog-id=29 op=LOAD
Dec 13 14:27:28.599000 audit: BPF prog-id=30 op=LOAD
Dec 13 14:27:28.599000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 14:27:28.599000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 14:27:28.601000 audit: BPF prog-id=31 op=LOAD
Dec 13 14:27:28.601000 audit: BPF prog-id=32 op=LOAD
Dec 13 14:27:28.601000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 14:27:28.601000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 14:27:28.602000 audit: BPF prog-id=33 op=LOAD
Dec 13 14:27:28.602000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 14:27:28.602000 audit: BPF prog-id=34 op=LOAD
Dec 13 14:27:28.602000 audit: BPF prog-id=35 op=LOAD
Dec 13 14:27:28.602000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 14:27:28.602000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 14:27:28.607613 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:27:28.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.615743 systemd[1]: Starting audit-rules.service...
Dec 13 14:27:28.618527 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:27:28.620547 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:27:28.620000 audit: BPF prog-id=36 op=LOAD
Dec 13 14:27:28.623000 audit: BPF prog-id=37 op=LOAD
Dec 13 14:27:28.623143 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:27:28.625677 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:27:28.627658 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:27:28.634000 audit[1672]: SYSTEM_BOOT pid=1672 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.638367 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:27:28.640222 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:27:28.642986 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:27:28.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.644460 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:27:28.647079 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:27:28.649302 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:27:28.650190 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:27:28.650330 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:28.650486 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:27:28.654998 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:27:28.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.662607 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:27:28.662730 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:27:28.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.664467 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:27:28.664587 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:27:28.665720 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:27:28.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.667937 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:27:28.668053 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:27:28.669265 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:27:28.673744 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:27:28.675088 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:27:28.677973 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:27:28.680181 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:27:28.680987 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:27:28.681117 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:28.681229 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:27:28.681969 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:27:28.682105 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:27:28.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.687024 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:27:28.687150 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:27:28.688336 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:27:28.691346 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:27:28.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.692826 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:27:28.694829 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:27:28.697145 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:27:28.698101 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:27:28.698237 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:28.698373 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:27:28.699100 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:27:28.699222 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:27:28.701280 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:27:28.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.704198 ldconfig[1561]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:27:28.708835 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:27:28.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.709949 systemd-networkd[1459]: eth0: Gained IPv6LL
Dec 13 14:27:28.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.710466 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:27:28.710585 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:27:28.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.713902 systemd[1]: Finished ldconfig.service.
Dec 13 14:27:28.715932 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:27:28.717171 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:27:28.719840 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:27:28.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.719970 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:27:28.720947 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:27:28.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.723543 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:27:28.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.725541 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:27:28.725662 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:27:28.727050 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:27:28.750377 augenrules[1697]: No rules
Dec 13 14:27:28.748000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:27:28.748000 audit[1697]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff7c30c2a0 a2=420 a3=0 items=0 ppid=1667 pid=1697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:28.748000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:27:28.750910 systemd[1]: Finished audit-rules.service.
Dec 13 14:27:28.769299 systemd-resolved[1670]: Positive Trust Anchors:
Dec 13 14:27:28.771038 systemd-resolved[1670]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:27:28.771243 systemd-resolved[1670]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:27:28.774895 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:27:28.776212 systemd[1]: Reached target time-set.target.
Dec 13 14:27:28.800961 systemd-timesyncd[1671]: Contacted time server 198.137.202.56:123 (0.flatcar.pool.ntp.org).
Dec 13 14:27:28.801137 systemd-timesyncd[1671]: Initial clock synchronization to Fri 2024-12-13 14:27:28.843102 UTC.
Dec 13 14:27:28.808062 systemd-resolved[1670]: Defaulting to hostname 'linux'.
Dec 13 14:27:28.809674 systemd[1]: Started systemd-resolved.service.
Dec 13 14:27:28.810731 systemd[1]: Reached target network.target.
Dec 13 14:27:28.811576 systemd[1]: Reached target network-online.target.
Dec 13 14:27:28.812438 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:27:28.813364 systemd[1]: Reached target sysinit.target.
Dec 13 14:27:28.814336 systemd[1]: Started motdgen.path.
Dec 13 14:27:28.815110 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:27:28.816368 systemd[1]: Started logrotate.timer.
Dec 13 14:27:28.817181 systemd[1]: Started mdadm.timer.
Dec 13 14:27:28.817944 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:27:28.818835 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:27:28.818875 systemd[1]: Reached target paths.target.
Dec 13 14:27:28.819665 systemd[1]: Reached target timers.target.
Dec 13 14:27:28.820754 systemd[1]: Listening on dbus.socket.
Dec 13 14:27:28.822737 systemd[1]: Starting docker.socket...
Dec 13 14:27:28.826684 systemd[1]: Listening on sshd.socket.
Dec 13 14:27:28.827613 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:28.828122 systemd[1]: Listening on docker.socket.
Dec 13 14:27:28.828974 systemd[1]: Reached target sockets.target.
Dec 13 14:27:28.829808 systemd[1]: Reached target basic.target.
Dec 13 14:27:28.830795 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:27:28.830837 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:27:28.832086 systemd[1]: Started amazon-ssm-agent.service.
Dec 13 14:27:28.834567 systemd[1]: Starting containerd.service...
Dec 13 14:27:28.838016 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 14:27:28.851800 systemd[1]: Starting dbus.service...
Dec 13 14:27:28.860527 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:27:28.863686 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:27:28.868592 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:27:28.873175 systemd[1]: Starting kubelet.service...
Dec 13 14:27:28.880644 systemd[1]: Starting motdgen.service...
Dec 13 14:27:28.888840 systemd[1]: Started nvidia.service.
Dec 13 14:27:28.895583 systemd[1]: Starting prepare-helm.service...
Dec 13 14:27:28.898194 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:27:28.901031 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:27:28.907710 systemd[1]: Starting systemd-logind.service...
Dec 13 14:27:28.912517 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:28.912602 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:27:28.913731 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:27:28.919788 systemd[1]: Starting update-engine.service...
Dec 13 14:27:28.927241 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:27:28.977189 jq[1709]: false
Dec 13 14:27:28.949633 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:28.949676 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:29.009443 jq[1719]: true
Dec 13 14:27:29.008771 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:27:29.009007 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:27:29.061480 tar[1723]: linux-amd64/helm
Dec 13 14:27:29.090245 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:27:29.090581 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:27:29.127952 extend-filesystems[1710]: Found loop1
Dec 13 14:27:29.129854 extend-filesystems[1710]: Found nvme0n1
Dec 13 14:27:29.129854 extend-filesystems[1710]: Found nvme0n1p1
Dec 13 14:27:29.129854 extend-filesystems[1710]: Found nvme0n1p2
Dec 13 14:27:29.129854 extend-filesystems[1710]: Found nvme0n1p3
Dec 13 14:27:29.129854 extend-filesystems[1710]: Found usr
Dec 13 14:27:29.129854 extend-filesystems[1710]: Found nvme0n1p4
Dec 13 14:27:29.129854 extend-filesystems[1710]: Found nvme0n1p6
Dec 13 14:27:29.129854 extend-filesystems[1710]: Found nvme0n1p7
Dec 13 14:27:29.129854 extend-filesystems[1710]: Found nvme0n1p9
Dec 13 14:27:29.129854 extend-filesystems[1710]: Checking size of /dev/nvme0n1p9
Dec 13 14:27:29.172899 jq[1727]: true
Dec 13 14:27:29.190459 dbus-daemon[1708]: [system] SELinux support is enabled
Dec 13 14:27:29.190677 systemd[1]: Started dbus.service.
Dec 13 14:27:29.194606 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:27:29.194648 systemd[1]: Reached target system-config.target.
Dec 13 14:27:29.195670 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:27:29.195697 systemd[1]: Reached target user-config.target.
Dec 13 14:27:29.220941 dbus-daemon[1708]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1459 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 14:27:29.231117 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:27:29.231404 systemd[1]: Finished motdgen.service.
Dec 13 14:27:29.231593 dbus-daemon[1708]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 14:27:29.239156 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 14:27:29.268658 extend-filesystems[1710]: Resized partition /dev/nvme0n1p9
Dec 13 14:27:29.288672 amazon-ssm-agent[1705]: 2024/12/13 14:27:29 Failed to load instance info from vault. RegistrationKey does not exist.
Dec 13 14:27:29.293727 extend-filesystems[1775]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:27:29.301726 amazon-ssm-agent[1705]: Initializing new seelog logger
Dec 13 14:27:29.301726 amazon-ssm-agent[1705]: New Seelog Logger Creation Complete
Dec 13 14:27:29.301726 amazon-ssm-agent[1705]: 2024/12/13 14:27:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 14:27:29.301726 amazon-ssm-agent[1705]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 14:27:29.307810 update_engine[1718]: I1213 14:27:29.305923  1718 main.cc:92] Flatcar Update Engine starting
Dec 13 14:27:29.311587 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Dec 13 14:27:29.311778 bash[1769]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:27:29.313385 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:27:29.316301 update_engine[1718]: I1213 14:27:29.316267  1718 update_check_scheduler.cc:74] Next update check in 8m32s
Dec 13 14:27:29.316644 systemd[1]: Started update-engine.service.
Dec 13 14:27:29.320137 systemd[1]: Started locksmithd.service.
Dec 13 14:27:29.328560 amazon-ssm-agent[1705]: 2024/12/13 14:27:29 processing appconfig overrides
Dec 13 14:27:29.351338 env[1724]: time="2024-12-13T14:27:29.351284267Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:27:29.457499 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 14:27:29.482858 extend-filesystems[1775]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 14:27:29.482858 extend-filesystems[1775]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 14:27:29.482858 extend-filesystems[1775]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 14:27:29.500506 extend-filesystems[1710]: Resized filesystem in /dev/nvme0n1p9
Dec 13 14:27:29.484289 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:27:29.484532 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:27:29.586325 systemd-logind[1717]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 14:27:29.590233 systemd-logind[1717]: Watching system buttons on /dev/input/event2 (Sleep Button)
Dec 13 14:27:29.590486 systemd-logind[1717]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 14:27:29.592062 systemd-logind[1717]: New seat seat0.
Dec 13 14:27:29.602868 systemd[1]: Started systemd-logind.service.
Dec 13 14:27:29.627810 env[1724]: time="2024-12-13T14:27:29.627749566Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:27:29.627949 env[1724]: time="2024-12-13T14:27:29.627930360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:29.636951 env[1724]: time="2024-12-13T14:27:29.636895918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:27:29.636951 env[1724]: time="2024-12-13T14:27:29.636948005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:29.637260 env[1724]: time="2024-12-13T14:27:29.637230439Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:27:29.637315 env[1724]: time="2024-12-13T14:27:29.637261963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:29.637315 env[1724]: time="2024-12-13T14:27:29.637280804Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:27:29.637315 env[1724]: time="2024-12-13T14:27:29.637299107Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:29.637522 env[1724]: time="2024-12-13T14:27:29.637502937Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:29.646243 env[1724]: time="2024-12-13T14:27:29.646194846Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:29.646553 env[1724]: time="2024-12-13T14:27:29.646522002Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:27:29.646615 env[1724]: time="2024-12-13T14:27:29.646556128Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:27:29.646687 env[1724]: time="2024-12-13T14:27:29.646668553Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:27:29.646746 env[1724]: time="2024-12-13T14:27:29.646692319Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:27:29.653538 env[1724]: time="2024-12-13T14:27:29.652321964Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:27:29.653538 env[1724]: time="2024-12-13T14:27:29.652372736Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:27:29.653538 env[1724]: time="2024-12-13T14:27:29.652404461Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:27:29.653538 env[1724]: time="2024-12-13T14:27:29.652462804Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:27:29.653538 env[1724]: time="2024-12-13T14:27:29.652484101Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:27:29.653538 env[1724]: time="2024-12-13T14:27:29.652550689Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:27:29.653538 env[1724]: time="2024-12-13T14:27:29.652573220Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:27:29.653538 env[1724]: time="2024-12-13T14:27:29.652593724Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:27:29.653538 env[1724]: time="2024-12-13T14:27:29.652612297Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:27:29.653538 env[1724]: time="2024-12-13T14:27:29.652634021Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:27:29.653538 env[1724]: time="2024-12-13T14:27:29.652652869Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:27:29.653538 env[1724]: time="2024-12-13T14:27:29.652682957Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:27:29.653538 env[1724]: time="2024-12-13T14:27:29.652821413Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:27:29.653538 env[1724]: time="2024-12-13T14:27:29.652923119Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:27:29.654224 env[1724]: time="2024-12-13T14:27:29.653341087Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:27:29.654224 env[1724]: time="2024-12-13T14:27:29.653376414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:27:29.654224 env[1724]: time="2024-12-13T14:27:29.653414853Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:27:29.654224 env[1724]: time="2024-12-13T14:27:29.653482668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:27:29.654224 env[1724]: time="2024-12-13T14:27:29.653501564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:27:29.655910 env[1724]: time="2024-12-13T14:27:29.653520392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:27:29.655910 env[1724]: time="2024-12-13T14:27:29.654424925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:27:29.655910 env[1724]: time="2024-12-13T14:27:29.654452539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:27:29.655910 env[1724]: time="2024-12-13T14:27:29.654473166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:27:29.655910 env[1724]: time="2024-12-13T14:27:29.654495480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:27:29.655910 env[1724]: time="2024-12-13T14:27:29.654514804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:27:29.655910 env[1724]: time="2024-12-13T14:27:29.654540706Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:27:29.655910 env[1724]: time="2024-12-13T14:27:29.654725070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:27:29.655910 env[1724]: time="2024-12-13T14:27:29.654749046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:27:29.655910 env[1724]: time="2024-12-13T14:27:29.654772502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..."
type=io.containerd.grpc.v1 Dec 13 14:27:29.655910 env[1724]: time="2024-12-13T14:27:29.654791411Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:27:29.655910 env[1724]: time="2024-12-13T14:27:29.654813565Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:27:29.655910 env[1724]: time="2024-12-13T14:27:29.654835893Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:27:29.655910 env[1724]: time="2024-12-13T14:27:29.654862977Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:27:29.656422 env[1724]: time="2024-12-13T14:27:29.654928571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:27:29.656474 env[1724]: time="2024-12-13T14:27:29.655201098Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:27:29.656474 env[1724]: time="2024-12-13T14:27:29.655277167Z" level=info msg="Connect containerd service" Dec 13 14:27:29.656474 env[1724]: time="2024-12-13T14:27:29.655317284Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:27:29.662728 env[1724]: time="2024-12-13T14:27:29.656997462Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:27:29.662728 env[1724]: time="2024-12-13T14:27:29.657284509Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:27:29.662728 env[1724]: time="2024-12-13T14:27:29.657331503Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 14:27:29.662728 env[1724]: time="2024-12-13T14:27:29.660218248Z" level=info msg="containerd successfully booted in 0.309857s" Dec 13 14:27:29.662728 env[1724]: time="2024-12-13T14:27:29.660404466Z" level=info msg="Start subscribing containerd event" Dec 13 14:27:29.662728 env[1724]: time="2024-12-13T14:27:29.660454564Z" level=info msg="Start recovering state" Dec 13 14:27:29.662728 env[1724]: time="2024-12-13T14:27:29.660532421Z" level=info msg="Start event monitor" Dec 13 14:27:29.662728 env[1724]: time="2024-12-13T14:27:29.660560203Z" level=info msg="Start snapshots syncer" Dec 13 14:27:29.662728 env[1724]: time="2024-12-13T14:27:29.660573689Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:27:29.662728 env[1724]: time="2024-12-13T14:27:29.660584382Z" level=info msg="Start streaming server" Dec 13 14:27:29.657508 systemd[1]: Started containerd.service. Dec 13 14:27:29.723881 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:27:29.742481 dbus-daemon[1708]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 14:27:29.742670 systemd[1]: Started systemd-hostnamed.service. Dec 13 14:27:29.745357 dbus-daemon[1708]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1765 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 14:27:29.749628 systemd[1]: Starting polkit.service... 
Dec 13 14:27:29.775043 polkitd[1838]: Started polkitd version 121 Dec 13 14:27:29.812723 polkitd[1838]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 14:27:29.812804 polkitd[1838]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 14:27:29.819002 polkitd[1838]: Finished loading, compiling and executing 2 rules Dec 13 14:27:29.819606 dbus-daemon[1708]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 14:27:29.819806 systemd[1]: Started polkit.service. Dec 13 14:27:29.820006 polkitd[1838]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 14:27:29.870673 systemd-hostnamed[1765]: Hostname set to (transient) Dec 13 14:27:29.870810 systemd-resolved[1670]: System hostname changed to 'ip-172-31-29-3'. Dec 13 14:27:30.007827 coreos-metadata[1707]: Dec 13 14:27:30.006 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 14:27:30.016953 coreos-metadata[1707]: Dec 13 14:27:30.016 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 14:27:30.017851 coreos-metadata[1707]: Dec 13 14:27:30.017 INFO Fetch successful Dec 13 14:27:30.017965 coreos-metadata[1707]: Dec 13 14:27:30.017 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 14:27:30.019986 coreos-metadata[1707]: Dec 13 14:27:30.019 INFO Fetch successful Dec 13 14:27:30.021149 unknown[1707]: wrote ssh authorized keys file for user: core Dec 13 14:27:30.046033 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO Create new startup processor Dec 13 14:27:30.061136 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 14:27:30.061280 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO Initializing bookkeeping folders Dec 13 14:27:30.061280 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO removing the completed state files Dec 13 14:27:30.061280 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO Initializing 
bookkeeping folders for long running plugins Dec 13 14:27:30.061280 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 14:27:30.061280 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO Initializing healthcheck folders for long running plugins Dec 13 14:27:30.061280 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO Initializing locations for inventory plugin Dec 13 14:27:30.061280 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO Initializing default location for custom inventory Dec 13 14:27:30.061280 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO Initializing default location for file inventory Dec 13 14:27:30.061714 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO Initializing default location for role inventory Dec 13 14:27:30.061714 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO Init the cloudwatchlogs publisher Dec 13 14:27:30.061714 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [instanceID=i-0d7f21f106a6db385] Successfully loaded platform independent plugin aws:configureDocker Dec 13 14:27:30.061714 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [instanceID=i-0d7f21f106a6db385] Successfully loaded platform independent plugin aws:refreshAssociation Dec 13 14:27:30.061714 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [instanceID=i-0d7f21f106a6db385] Successfully loaded platform independent plugin aws:downloadContent Dec 13 14:27:30.061714 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [instanceID=i-0d7f21f106a6db385] Successfully loaded platform independent plugin aws:runDockerAction Dec 13 14:27:30.061714 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [instanceID=i-0d7f21f106a6db385] Successfully loaded platform independent plugin aws:configurePackage Dec 13 14:27:30.061714 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [instanceID=i-0d7f21f106a6db385] Successfully loaded platform independent plugin aws:runDocument Dec 13 14:27:30.061714 
amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [instanceID=i-0d7f21f106a6db385] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 14:27:30.061714 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [instanceID=i-0d7f21f106a6db385] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 14:27:30.061714 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [instanceID=i-0d7f21f106a6db385] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 14:27:30.061714 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [instanceID=i-0d7f21f106a6db385] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 14:27:30.061714 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 14:27:30.062185 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO OS: linux, Arch: amd64 Dec 13 14:27:30.094133 update-ssh-keys[1881]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:27:30.095318 amazon-ssm-agent[1705]: datastore file /var/lib/amazon/ssm/i-0d7f21f106a6db385/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Dec 13 14:27:30.095125 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 14:27:30.173163 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessagingDeliveryService] Starting document processing engine... 
Dec 13 14:27:30.266054 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessagingDeliveryService] [EngineProcessor] Starting Dec 13 14:27:30.360675 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Dec 13 14:27:30.455079 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessagingDeliveryService] Starting message polling Dec 13 14:27:30.549916 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessagingDeliveryService] Starting send replies to MDS Dec 13 14:27:30.644750 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [instanceID=i-0d7f21f106a6db385] Starting association polling Dec 13 14:27:30.739956 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Dec 13 14:27:30.770597 tar[1723]: linux-amd64/LICENSE Dec 13 14:27:30.770597 tar[1723]: linux-amd64/README.md Dec 13 14:27:30.783563 systemd[1]: Finished prepare-helm.service. Dec 13 14:27:30.836689 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessagingDeliveryService] [Association] Launching response handler Dec 13 14:27:30.932859 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Dec 13 14:27:30.949233 locksmithd[1783]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:27:31.028502 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Dec 13 14:27:31.124492 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Dec 13 14:27:31.220482 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 14:27:31.316817 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessageGatewayService] Starting session document processing engine... 
Dec 13 14:27:31.354133 sshd_keygen[1733]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:27:31.385925 systemd[1]: Started kubelet.service. Dec 13 14:27:31.387935 systemd[1]: Finished sshd-keygen.service. Dec 13 14:27:31.392590 systemd[1]: Starting issuegen.service... Dec 13 14:27:31.402534 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:27:31.402883 systemd[1]: Finished issuegen.service. Dec 13 14:27:31.405720 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:27:31.414257 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessageGatewayService] [EngineProcessor] Starting Dec 13 14:27:31.416961 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:27:31.421697 systemd[1]: Started getty@tty1.service. Dec 13 14:27:31.424642 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:27:31.425986 systemd[1]: Reached target getty.target. Dec 13 14:27:31.426972 systemd[1]: Reached target multi-user.target. Dec 13 14:27:31.429748 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:27:31.443713 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:27:31.443925 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:27:31.445257 systemd[1]: Startup finished in 816ms (kernel) + 7.729s (initrd) + 8.852s (userspace) = 17.398s. Dec 13 14:27:31.511001 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Dec 13 14:27:31.608570 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0d7f21f106a6db385, requestId: b5be96d5-6b05-47da-9e90-646601c9bf4d Dec 13 14:27:31.707740 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [OfflineService] Starting document processing engine... 
Dec 13 14:27:31.804987 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [OfflineService] [EngineProcessor] Starting Dec 13 14:27:31.902505 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [OfflineService] [EngineProcessor] Initial processing Dec 13 14:27:32.001239 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [OfflineService] Starting message polling Dec 13 14:27:32.099000 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [OfflineService] Starting send replies to MDS Dec 13 14:27:32.197033 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [LongRunningPluginsManager] starting long running plugin manager Dec 13 14:27:32.296040 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Dec 13 14:27:32.394636 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessageGatewayService] listening reply. Dec 13 14:27:32.447694 kubelet[1910]: E1213 14:27:32.447641 1910 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:27:32.453359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:27:32.453943 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:27:32.454266 systemd[1]: kubelet.service: Consumed 1.093s CPU time. 
Dec 13 14:27:32.493289 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Dec 13 14:27:32.592247 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [StartupProcessor] Executing startup processor tasks Dec 13 14:27:32.691258 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Dec 13 14:27:32.790543 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Dec 13 14:27:32.890000 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6 Dec 13 14:27:32.989681 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0d7f21f106a6db385?role=subscribe&stream=input Dec 13 14:27:33.089562 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0d7f21f106a6db385?role=subscribe&stream=input Dec 13 14:27:33.189681 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessageGatewayService] Starting receiving message from control channel Dec 13 14:27:33.289953 amazon-ssm-agent[1705]: 2024-12-13 14:27:30 INFO [MessageGatewayService] [EngineProcessor] Initial processing Dec 13 14:27:37.695045 systemd[1]: Created slice system-sshd.slice. Dec 13 14:27:37.698611 systemd[1]: Started sshd@0-172.31.29.3:22-139.178.89.65:38128.service. 
Dec 13 14:27:37.933297 sshd[1924]: Accepted publickey for core from 139.178.89.65 port 38128 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:27:37.939303 sshd[1924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:37.973367 systemd[1]: Created slice user-500.slice. Dec 13 14:27:37.980616 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:27:37.985462 systemd-logind[1717]: New session 1 of user core. Dec 13 14:27:37.994502 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:27:37.996823 systemd[1]: Starting user@500.service... Dec 13 14:27:38.001382 (systemd)[1927]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:38.099632 systemd[1927]: Queued start job for default target default.target. Dec 13 14:27:38.100869 systemd[1927]: Reached target paths.target. Dec 13 14:27:38.100904 systemd[1927]: Reached target sockets.target. Dec 13 14:27:38.100922 systemd[1927]: Reached target timers.target. Dec 13 14:27:38.100937 systemd[1927]: Reached target basic.target. Dec 13 14:27:38.101059 systemd[1]: Started user@500.service. Dec 13 14:27:38.102448 systemd[1]: Started session-1.scope. Dec 13 14:27:38.103217 systemd[1927]: Reached target default.target. Dec 13 14:27:38.103440 systemd[1927]: Startup finished in 94ms. Dec 13 14:27:38.251556 systemd[1]: Started sshd@1-172.31.29.3:22-139.178.89.65:35146.service. Dec 13 14:27:38.414306 sshd[1936]: Accepted publickey for core from 139.178.89.65 port 35146 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:27:38.415770 sshd[1936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:38.420980 systemd[1]: Started session-2.scope. Dec 13 14:27:38.421955 systemd-logind[1717]: New session 2 of user core. Dec 13 14:27:38.547267 sshd[1936]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:38.550677 systemd-logind[1717]: Session 2 logged out. 
Waiting for processes to exit. Dec 13 14:27:38.551006 systemd[1]: sshd@1-172.31.29.3:22-139.178.89.65:35146.service: Deactivated successfully. Dec 13 14:27:38.551947 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:27:38.552763 systemd-logind[1717]: Removed session 2. Dec 13 14:27:38.574130 systemd[1]: Started sshd@2-172.31.29.3:22-139.178.89.65:35152.service. Dec 13 14:27:38.746532 sshd[1942]: Accepted publickey for core from 139.178.89.65 port 35152 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:27:38.747920 sshd[1942]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:38.752704 systemd-logind[1717]: New session 3 of user core. Dec 13 14:27:38.753287 systemd[1]: Started session-3.scope. Dec 13 14:27:38.878032 sshd[1942]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:38.881554 systemd[1]: sshd@2-172.31.29.3:22-139.178.89.65:35152.service: Deactivated successfully. Dec 13 14:27:38.882381 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:27:38.883072 systemd-logind[1717]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:27:38.883947 systemd-logind[1717]: Removed session 3. Dec 13 14:27:38.902825 systemd[1]: Started sshd@3-172.31.29.3:22-139.178.89.65:35164.service. Dec 13 14:27:39.067706 sshd[1948]: Accepted publickey for core from 139.178.89.65 port 35164 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:27:39.069097 sshd[1948]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:39.074323 systemd[1]: Started session-4.scope. Dec 13 14:27:39.074932 systemd-logind[1717]: New session 4 of user core. Dec 13 14:27:39.202417 sshd[1948]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:39.205615 systemd[1]: sshd@3-172.31.29.3:22-139.178.89.65:35164.service: Deactivated successfully. Dec 13 14:27:39.206838 systemd[1]: session-4.scope: Deactivated successfully. 
Dec 13 14:27:39.207829 systemd-logind[1717]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:27:39.208907 systemd-logind[1717]: Removed session 4. Dec 13 14:27:39.228673 systemd[1]: Started sshd@4-172.31.29.3:22-139.178.89.65:35166.service. Dec 13 14:27:39.396326 sshd[1954]: Accepted publickey for core from 139.178.89.65 port 35166 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:27:39.397748 sshd[1954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:39.403102 systemd[1]: Started session-5.scope. Dec 13 14:27:39.403573 systemd-logind[1717]: New session 5 of user core. Dec 13 14:27:39.527042 sudo[1957]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:27:39.527815 sudo[1957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:27:39.553999 systemd[1]: Starting docker.service... Dec 13 14:27:39.596871 env[1967]: time="2024-12-13T14:27:39.596818761Z" level=info msg="Starting up" Dec 13 14:27:39.598141 env[1967]: time="2024-12-13T14:27:39.598115952Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:27:39.598261 env[1967]: time="2024-12-13T14:27:39.598249106Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:27:39.598320 env[1967]: time="2024-12-13T14:27:39.598309204Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:27:39.598359 env[1967]: time="2024-12-13T14:27:39.598352199Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:27:39.600145 env[1967]: time="2024-12-13T14:27:39.600126978Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:27:39.600246 env[1967]: time="2024-12-13T14:27:39.600235875Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:27:39.600301 env[1967]: 
time="2024-12-13T14:27:39.600291211Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:27:39.600343 env[1967]: time="2024-12-13T14:27:39.600336116Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:27:39.640793 env[1967]: time="2024-12-13T14:27:39.640760996Z" level=info msg="Loading containers: start." Dec 13 14:27:39.804421 kernel: Initializing XFRM netlink socket Dec 13 14:27:39.864880 env[1967]: time="2024-12-13T14:27:39.864840892Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:27:39.866185 (udev-worker)[1977]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:27:40.019045 systemd-networkd[1459]: docker0: Link UP Dec 13 14:27:40.038757 env[1967]: time="2024-12-13T14:27:40.038712068Z" level=info msg="Loading containers: done." Dec 13 14:27:40.058777 env[1967]: time="2024-12-13T14:27:40.058729527Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:27:40.059164 env[1967]: time="2024-12-13T14:27:40.059034987Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:27:40.059326 env[1967]: time="2024-12-13T14:27:40.059255229Z" level=info msg="Daemon has completed initialization" Dec 13 14:27:40.080447 systemd[1]: Started docker.service. 
Dec 13 14:27:40.094969 env[1967]: time="2024-12-13T14:27:40.094754024Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:27:41.197292 env[1724]: time="2024-12-13T14:27:41.195612832Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 14:27:41.810655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2755337157.mount: Deactivated successfully. Dec 13 14:27:42.638641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:27:42.638882 systemd[1]: Stopped kubelet.service. Dec 13 14:27:42.638944 systemd[1]: kubelet.service: Consumed 1.093s CPU time. Dec 13 14:27:42.640816 systemd[1]: Starting kubelet.service... Dec 13 14:27:42.979614 systemd[1]: Started kubelet.service. Dec 13 14:27:43.043120 kubelet[2094]: E1213 14:27:43.043084 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:27:43.046916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:27:43.047041 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 14:27:44.802073 env[1724]: time="2024-12-13T14:27:44.802018619Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:44.807344 env[1724]: time="2024-12-13T14:27:44.807282669Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:44.811915 env[1724]: time="2024-12-13T14:27:44.811874123Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:44.821259 env[1724]: time="2024-12-13T14:27:44.821201333Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:44.822170 env[1724]: time="2024-12-13T14:27:44.822130946Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\""
Dec 13 14:27:44.825364 env[1724]: time="2024-12-13T14:27:44.825335688Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Dec 13 14:27:47.434741 env[1724]: time="2024-12-13T14:27:47.434688452Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:47.440329 env[1724]: time="2024-12-13T14:27:47.440255397Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:47.443727 env[1724]: time="2024-12-13T14:27:47.443684475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:47.445487 env[1724]: time="2024-12-13T14:27:47.445448647Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:47.446412 env[1724]: time="2024-12-13T14:27:47.446361991Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\""
Dec 13 14:27:47.447306 env[1724]: time="2024-12-13T14:27:47.447281232Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Dec 13 14:27:49.444502 env[1724]: time="2024-12-13T14:27:49.444453262Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:49.447066 env[1724]: time="2024-12-13T14:27:49.447024140Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:49.449129 env[1724]: time="2024-12-13T14:27:49.449090512Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:49.450922 env[1724]: time="2024-12-13T14:27:49.450887938Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:49.451707 env[1724]: time="2024-12-13T14:27:49.451674091Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\""
Dec 13 14:27:49.452571 env[1724]: time="2024-12-13T14:27:49.452544092Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 14:27:50.739693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801495582.mount: Deactivated successfully.
Dec 13 14:27:51.594986 env[1724]: time="2024-12-13T14:27:51.594933116Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:51.596905 env[1724]: time="2024-12-13T14:27:51.596866709Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:51.598564 env[1724]: time="2024-12-13T14:27:51.598533461Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:51.599832 env[1724]: time="2024-12-13T14:27:51.599801953Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:51.600263 env[1724]: time="2024-12-13T14:27:51.600234234Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 14:27:51.600870 env[1724]: time="2024-12-13T14:27:51.600847215Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 14:27:52.172093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3033485084.mount: Deactivated successfully.
Dec 13 14:27:53.139007 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 14:27:53.139271 systemd[1]: Stopped kubelet.service.
Dec 13 14:27:53.142733 systemd[1]: Starting kubelet.service...
Dec 13 14:27:53.424296 systemd[1]: Started kubelet.service.
Dec 13 14:27:53.500659 kubelet[2104]: E1213 14:27:53.500612 2104 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:27:53.502793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:27:53.502927 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:27:53.635904 env[1724]: time="2024-12-13T14:27:53.635848313Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:53.640585 env[1724]: time="2024-12-13T14:27:53.640538128Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:53.644189 env[1724]: time="2024-12-13T14:27:53.644111578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:53.648063 env[1724]: time="2024-12-13T14:27:53.648020573Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:53.649460 env[1724]: time="2024-12-13T14:27:53.649376441Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 14:27:53.650508 env[1724]: time="2024-12-13T14:27:53.650477996Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 13 14:27:54.224244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount459792385.mount: Deactivated successfully.
Dec 13 14:27:54.237596 env[1724]: time="2024-12-13T14:27:54.237488167Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:54.241998 env[1724]: time="2024-12-13T14:27:54.241951183Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:54.244935 env[1724]: time="2024-12-13T14:27:54.244895702Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:54.249412 env[1724]: time="2024-12-13T14:27:54.249359313Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:54.250147 env[1724]: time="2024-12-13T14:27:54.250107905Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 13 14:27:54.251071 env[1724]: time="2024-12-13T14:27:54.251044090Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Dec 13 14:27:54.783471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1475207969.mount: Deactivated successfully.
Dec 13 14:27:55.529578 amazon-ssm-agent[1705]: 2024-12-13 14:27:55 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Dec 13 14:27:57.752074 env[1724]: time="2024-12-13T14:27:57.752020615Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:57.756905 env[1724]: time="2024-12-13T14:27:57.756864028Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:57.759367 env[1724]: time="2024-12-13T14:27:57.759322314Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:57.762180 env[1724]: time="2024-12-13T14:27:57.762134458Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:57.763031 env[1724]: time="2024-12-13T14:27:57.762994924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Dec 13 14:27:59.903251 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 14:27:59.982188 systemd[1]: Stopped kubelet.service.
Dec 13 14:27:59.985128 systemd[1]: Starting kubelet.service...
Dec 13 14:28:00.019722 systemd[1]: Reloading.
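The etcd pull completes the control-plane image set. As a quick reference, a minimal Python sketch whose tag-to-image-ID mapping is copied verbatim from the "returns image reference" lines in this log:

```python
# Tag -> image ID, copied verbatim from the PullImage return lines above.
pulled = {
    "registry.k8s.io/kube-apiserver:v1.31.4":
        "sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e",
    "registry.k8s.io/kube-controller-manager:v1.31.4":
        "sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079",
    "registry.k8s.io/kube-scheduler:v1.31.4":
        "sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674",
    "registry.k8s.io/kube-proxy:v1.31.4":
        "sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300",
    "registry.k8s.io/coredns/coredns:v1.11.1":
        "sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
    "registry.k8s.io/pause:3.10":
        "sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
    "registry.k8s.io/etcd:3.5.15-0":
        "sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
}
print(len(pulled))  # -> 7
```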
Dec 13 14:28:00.147038 /usr/lib/systemd/system-generators/torcx-generator[2155]: time="2024-12-13T14:28:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:28:00.147081 /usr/lib/systemd/system-generators/torcx-generator[2155]: time="2024-12-13T14:28:00Z" level=info msg="torcx already run"
Dec 13 14:28:00.263560 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:28:00.263584 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:28:00.288090 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:28:00.415897 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 14:28:00.415966 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 14:28:00.416166 systemd[1]: Stopped kubelet.service.
Dec 13 14:28:00.418311 systemd[1]: Starting kubelet.service...
Dec 13 14:28:00.571460 systemd[1]: Started kubelet.service.
Dec 13 14:28:00.637009 kubelet[2211]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:28:00.637009 kubelet[2211]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:28:00.637009 kubelet[2211]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:28:00.637526 kubelet[2211]: I1213 14:28:00.637082 2211 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:28:01.128466 kubelet[2211]: I1213 14:28:01.128424 2211 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 14:28:01.128466 kubelet[2211]: I1213 14:28:01.128457 2211 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:28:01.130189 kubelet[2211]: I1213 14:28:01.130159 2211 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 14:28:01.188268 kubelet[2211]: E1213 14:28:01.188221 2211 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.29.3:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.3:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:28:01.195193 kubelet[2211]: I1213 14:28:01.195144 2211 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:28:01.207353 kubelet[2211]: E1213 14:28:01.207295 2211 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 14:28:01.207353 kubelet[2211]: I1213 14:28:01.207338 2211 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 14:28:01.212084 kubelet[2211]: I1213 14:28:01.212055 2211 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:28:01.212263 kubelet[2211]: I1213 14:28:01.212185 2211 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 14:28:01.212376 kubelet[2211]: I1213 14:28:01.212335 2211 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:28:01.212588 kubelet[2211]: I1213 14:28:01.212374 2211 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 14:28:01.212717 kubelet[2211]: I1213 14:28:01.212595 2211 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:28:01.212717 kubelet[2211]: I1213 14:28:01.212609 2211 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 14:28:01.212800 kubelet[2211]: I1213 14:28:01.212755 2211 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:28:01.220555 kubelet[2211]: I1213 14:28:01.220504 2211 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 14:28:01.220725 kubelet[2211]: I1213 14:28:01.220568 2211 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:28:01.220725 kubelet[2211]: I1213 14:28:01.220615 2211 kubelet.go:314] "Adding apiserver pod source"
Dec 13 14:28:01.220725 kubelet[2211]: I1213 14:28:01.220634 2211 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:28:01.251977 kubelet[2211]: W1213 14:28:01.251901 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-3&limit=500&resourceVersion=0": dial tcp 172.31.29.3:6443: connect: connection refused
Dec 13 14:28:01.252238 kubelet[2211]: E1213 14:28:01.252215 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-3&limit=500&resourceVersion=0\": dial tcp 172.31.29.3:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:28:01.252451 kubelet[2211]: I1213 14:28:01.252438 2211 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:28:01.263808 kubelet[2211]: I1213 14:28:01.263764 2211 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:28:01.273253 kubelet[2211]: W1213 14:28:01.273221 2211 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:28:01.274093 kubelet[2211]: I1213 14:28:01.274073 2211 server.go:1269] "Started kubelet"
Dec 13 14:28:01.275967 kubelet[2211]: W1213 14:28:01.275917 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.3:6443: connect: connection refused
Dec 13 14:28:01.276077 kubelet[2211]: E1213 14:28:01.275977 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.3:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:28:01.276077 kubelet[2211]: I1213 14:28:01.276019 2211 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:28:01.288048 kubelet[2211]: I1213 14:28:01.287526 2211 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:28:01.288322 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
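The nodeConfig dump in the container-manager line above is plain JSON once isolated from the surrounding log text. A minimal Python sketch (threshold values and field names copied from that dump, other fields omitted) showing how the hard-eviction thresholds could be decoded:

```python
import json

# HardEvictionThresholds fragment as logged by container_manager_linux.go
# above; non-threshold fields are omitted for brevity.
node_config = json.loads("""
{"HardEvictionThresholds":[
  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
  {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
  {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
  {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}
]}
""")

for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    # A threshold is either an absolute quantity or a percentage of capacity.
    limit = v["Quantity"] if v["Quantity"] is not None else f'{v["Percentage"]:.0%}'
    print(f'{t["Signal"]} {t["Operator"]} {limit}')
```

These match the kubelet's default evictionHard settings (e.g. memory.available < 100Mi, nodefs.available < 10%).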
Dec 13 14:28:01.288484 kubelet[2211]: I1213 14:28:01.288463 2211 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:28:01.288689 kubelet[2211]: I1213 14:28:01.288677 2211 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:28:01.293597 kubelet[2211]: E1213 14:28:01.290802 2211 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.29.3:6443/api/v1/namespaces/default/events\": dial tcp 172.31.29.3:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-29-3.1810c2d61550c260 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-29-3,UID:ip-172-31-29-3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-29-3,},FirstTimestamp:2024-12-13 14:28:01.274045024 +0000 UTC m=+0.694284215,LastTimestamp:2024-12-13 14:28:01.274045024 +0000 UTC m=+0.694284215,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-29-3,}"
Dec 13 14:28:01.294650 kubelet[2211]: I1213 14:28:01.294631 2211 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 14:28:01.296149 kubelet[2211]: I1213 14:28:01.296123 2211 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 14:28:01.296472 kubelet[2211]: I1213 14:28:01.296458 2211 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 14:28:01.299215 kubelet[2211]: I1213 14:28:01.299195 2211 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:28:01.299523 kubelet[2211]: I1213 14:28:01.299501 2211 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:28:01.299719 kubelet[2211]: E1213 14:28:01.296780 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found"
Dec 13 14:28:01.299796 kubelet[2211]: I1213 14:28:01.299782 2211 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 14:28:01.300467 kubelet[2211]: I1213 14:28:01.298281 2211 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 14:28:01.300709 kubelet[2211]: W1213 14:28:01.300663 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.3:6443: connect: connection refused
Dec 13 14:28:01.300785 kubelet[2211]: E1213 14:28:01.300729 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.3:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:28:01.300842 kubelet[2211]: E1213 14:28:01.300799 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-3?timeout=10s\": dial tcp 172.31.29.3:6443: connect: connection refused" interval="200ms"
Dec 13 14:28:01.303008 kubelet[2211]: E1213 14:28:01.302977 2211 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:28:01.303173 kubelet[2211]: I1213 14:28:01.303158 2211 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:28:01.330743 kubelet[2211]: I1213 14:28:01.330690 2211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:28:01.333308 kubelet[2211]: I1213 14:28:01.333282 2211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:28:01.333506 kubelet[2211]: I1213 14:28:01.333494 2211 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:28:01.333647 kubelet[2211]: I1213 14:28:01.333635 2211 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 14:28:01.333803 kubelet[2211]: E1213 14:28:01.333783 2211 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:28:01.335924 kubelet[2211]: I1213 14:28:01.333494 2211 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:28:01.336087 kubelet[2211]: I1213 14:28:01.336073 2211 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:28:01.336167 kubelet[2211]: I1213 14:28:01.336157 2211 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:28:01.339718 kubelet[2211]: I1213 14:28:01.339693 2211 policy_none.go:49] "None policy: Start"
Dec 13 14:28:01.340718 kubelet[2211]: W1213 14:28:01.340666 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.3:6443: connect: connection refused
Dec 13 14:28:01.340905 kubelet[2211]: E1213 14:28:01.340885 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.3:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:28:01.342127 kubelet[2211]: I1213 14:28:01.342109 2211 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:28:01.342330 kubelet[2211]: I1213 14:28:01.342293 2211 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:28:01.349695 systemd[1]: Created slice kubepods.slice.
Dec 13 14:28:01.355935 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 14:28:01.359451 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 14:28:01.366215 kubelet[2211]: I1213 14:28:01.366177 2211 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:28:01.366435 kubelet[2211]: I1213 14:28:01.366417 2211 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 14:28:01.366497 kubelet[2211]: I1213 14:28:01.366435 2211 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 14:28:01.367914 kubelet[2211]: I1213 14:28:01.367467 2211 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:28:01.370446 kubelet[2211]: E1213 14:28:01.370248 2211 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-3\" not found"
Dec 13 14:28:01.446910 systemd[1]: Created slice kubepods-burstable-pode9723e33c4d3f0ec20162031230d1bd4.slice.
Dec 13 14:28:01.454783 systemd[1]: Created slice kubepods-burstable-podfc8191bfb8c65b059be15b6517e67074.slice.
Dec 13 14:28:01.460844 systemd[1]: Created slice kubepods-burstable-pod9bf252bce4b3dfd66dfad262d49e0afa.slice.
Dec 13 14:28:01.468605 kubelet[2211]: I1213 14:28:01.468569 2211 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-3"
Dec 13 14:28:01.469132 kubelet[2211]: E1213 14:28:01.469089 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.3:6443/api/v1/nodes\": dial tcp 172.31.29.3:6443: connect: connection refused" node="ip-172-31-29-3"
Dec 13 14:28:01.501848 kubelet[2211]: E1213 14:28:01.501804 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-3?timeout=10s\": dial tcp 172.31.29.3:6443: connect: connection refused" interval="400ms"
Dec 13 14:28:01.601584 kubelet[2211]: I1213 14:28:01.601464 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc8191bfb8c65b059be15b6517e67074-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-3\" (UID: \"fc8191bfb8c65b059be15b6517e67074\") " pod="kube-system/kube-controller-manager-ip-172-31-29-3"
Dec 13 14:28:01.601743 kubelet[2211]: I1213 14:28:01.601596 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9723e33c4d3f0ec20162031230d1bd4-ca-certs\") pod \"kube-apiserver-ip-172-31-29-3\" (UID: \"e9723e33c4d3f0ec20162031230d1bd4\") " pod="kube-system/kube-apiserver-ip-172-31-29-3"
Dec 13 14:28:01.601743 kubelet[2211]: I1213 14:28:01.601621 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9723e33c4d3f0ec20162031230d1bd4-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-3\" (UID: \"e9723e33c4d3f0ec20162031230d1bd4\") " pod="kube-system/kube-apiserver-ip-172-31-29-3"
Dec 13 14:28:01.601743 kubelet[2211]: I1213 14:28:01.601646 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9723e33c4d3f0ec20162031230d1bd4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-3\" (UID: \"e9723e33c4d3f0ec20162031230d1bd4\") " pod="kube-system/kube-apiserver-ip-172-31-29-3"
Dec 13 14:28:01.601743 kubelet[2211]: I1213 14:28:01.601678 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fc8191bfb8c65b059be15b6517e67074-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-3\" (UID: \"fc8191bfb8c65b059be15b6517e67074\") " pod="kube-system/kube-controller-manager-ip-172-31-29-3"
Dec 13 14:28:01.601743 kubelet[2211]: I1213 14:28:01.601698 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc8191bfb8c65b059be15b6517e67074-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-3\" (UID: \"fc8191bfb8c65b059be15b6517e67074\") " pod="kube-system/kube-controller-manager-ip-172-31-29-3"
Dec 13 14:28:01.601943 kubelet[2211]: I1213 14:28:01.601719 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc8191bfb8c65b059be15b6517e67074-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-3\" (UID: \"fc8191bfb8c65b059be15b6517e67074\") " pod="kube-system/kube-controller-manager-ip-172-31-29-3"
Dec 13 14:28:01.601943 kubelet[2211]: I1213 14:28:01.601751 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9bf252bce4b3dfd66dfad262d49e0afa-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-3\" (UID: \"9bf252bce4b3dfd66dfad262d49e0afa\") " pod="kube-system/kube-scheduler-ip-172-31-29-3"
Dec 13 14:28:01.601943 kubelet[2211]: I1213 14:28:01.601772 2211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc8191bfb8c65b059be15b6517e67074-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-3\" (UID: \"fc8191bfb8c65b059be15b6517e67074\") " pod="kube-system/kube-controller-manager-ip-172-31-29-3"
Dec 13 14:28:01.670849 kubelet[2211]: I1213 14:28:01.670815 2211 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-3"
Dec 13 14:28:01.671360 kubelet[2211]: E1213 14:28:01.671326 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.3:6443/api/v1/nodes\": dial tcp 172.31.29.3:6443: connect: connection refused" node="ip-172-31-29-3"
Dec 13 14:28:01.756541 env[1724]: time="2024-12-13T14:28:01.756419338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-3,Uid:e9723e33c4d3f0ec20162031230d1bd4,Namespace:kube-system,Attempt:0,}"
Dec 13 14:28:01.760409 env[1724]: time="2024-12-13T14:28:01.760355940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-3,Uid:fc8191bfb8c65b059be15b6517e67074,Namespace:kube-system,Attempt:0,}"
Dec 13 14:28:01.764384 env[1724]: time="2024-12-13T14:28:01.764339028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-3,Uid:9bf252bce4b3dfd66dfad262d49e0afa,Namespace:kube-system,Attempt:0,}"
Dec 13 14:28:01.902648 kubelet[2211]: E1213 14:28:01.902602 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-3?timeout=10s\": dial tcp 172.31.29.3:6443: connect: connection refused" interval="800ms"
Dec 13 14:28:02.073315 kubelet[2211]: I1213 14:28:02.073020 2211 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-3"
Dec 13 14:28:02.073516 kubelet[2211]: E1213 14:28:02.073490 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.3:6443/api/v1/nodes\": dial tcp 172.31.29.3:6443: connect: connection refused" node="ip-172-31-29-3"
Dec 13 14:28:02.228989 kubelet[2211]: W1213 14:28:02.228919 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-3&limit=500&resourceVersion=0": dial tcp 172.31.29.3:6443: connect: connection refused
Dec 13 14:28:02.229160 kubelet[2211]: E1213 14:28:02.228998 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-3&limit=500&resourceVersion=0\": dial tcp 172.31.29.3:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:28:02.281580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851542015.mount: Deactivated successfully.
Dec 13 14:28:02.302494 env[1724]: time="2024-12-13T14:28:02.302413674Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.308668 env[1724]: time="2024-12-13T14:28:02.308617360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.314761 env[1724]: time="2024-12-13T14:28:02.314709765Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.318681 env[1724]: time="2024-12-13T14:28:02.318634830Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.320466 env[1724]: time="2024-12-13T14:28:02.320425975Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.321497 env[1724]: time="2024-12-13T14:28:02.321462020Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.326281 env[1724]: time="2024-12-13T14:28:02.325651186Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.328214 env[1724]: time="2024-12-13T14:28:02.328166733Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.330184 env[1724]: time="2024-12-13T14:28:02.330141258Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.334284 env[1724]: time="2024-12-13T14:28:02.334237452Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.345972 env[1724]: time="2024-12-13T14:28:02.345925029Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.363235 kubelet[2211]: W1213 14:28:02.363176 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.29.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.3:6443: connect: connection refused Dec 13 14:28:02.363372 kubelet[2211]: E1213 14:28:02.363255 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.29.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.29.3:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:28:02.367271 env[1724]: time="2024-12-13T14:28:02.367224562Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.391019 env[1724]: time="2024-12-13T14:28:02.386142603Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:02.391019 env[1724]: time="2024-12-13T14:28:02.386202317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:02.391019 env[1724]: time="2024-12-13T14:28:02.386218679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:02.391019 env[1724]: time="2024-12-13T14:28:02.386451219Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51b545986e68f3f51c1c8b5fbdd8d609046d99de5664cb35273349d8ee9ef439 pid=2249 runtime=io.containerd.runc.v2 Dec 13 14:28:02.406859 env[1724]: time="2024-12-13T14:28:02.404017138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:02.406859 env[1724]: time="2024-12-13T14:28:02.404089304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:02.406859 env[1724]: time="2024-12-13T14:28:02.404105345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:02.406859 env[1724]: time="2024-12-13T14:28:02.404579763Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/811582d1267b4fa101312263e3e168356f4d993f72f0a8959461e1a6a777a5e2 pid=2267 runtime=io.containerd.runc.v2 Dec 13 14:28:02.430779 systemd[1]: Started cri-containerd-51b545986e68f3f51c1c8b5fbdd8d609046d99de5664cb35273349d8ee9ef439.scope. Dec 13 14:28:02.438936 systemd[1]: Started cri-containerd-811582d1267b4fa101312263e3e168356f4d993f72f0a8959461e1a6a777a5e2.scope. 
Dec 13 14:28:02.466629 env[1724]: time="2024-12-13T14:28:02.466328828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:02.466629 env[1724]: time="2024-12-13T14:28:02.466379051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:02.466629 env[1724]: time="2024-12-13T14:28:02.466427769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:02.466984 env[1724]: time="2024-12-13T14:28:02.466585431Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4753aae7c95ce227c821cb7198b3d5218c68092a34c90c5daad90c086b92b4cc pid=2310 runtime=io.containerd.runc.v2 Dec 13 14:28:02.499573 systemd[1]: Started cri-containerd-4753aae7c95ce227c821cb7198b3d5218c68092a34c90c5daad90c086b92b4cc.scope. 
Dec 13 14:28:02.551805 env[1724]: time="2024-12-13T14:28:02.551753432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-3,Uid:fc8191bfb8c65b059be15b6517e67074,Namespace:kube-system,Attempt:0,} returns sandbox id \"811582d1267b4fa101312263e3e168356f4d993f72f0a8959461e1a6a777a5e2\"" Dec 13 14:28:02.557375 env[1724]: time="2024-12-13T14:28:02.557333076Z" level=info msg="CreateContainer within sandbox \"811582d1267b4fa101312263e3e168356f4d993f72f0a8959461e1a6a777a5e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:28:02.564936 env[1724]: time="2024-12-13T14:28:02.564879443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-3,Uid:e9723e33c4d3f0ec20162031230d1bd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"51b545986e68f3f51c1c8b5fbdd8d609046d99de5664cb35273349d8ee9ef439\"" Dec 13 14:28:02.568498 env[1724]: time="2024-12-13T14:28:02.568432935Z" level=info msg="CreateContainer within sandbox \"51b545986e68f3f51c1c8b5fbdd8d609046d99de5664cb35273349d8ee9ef439\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:28:02.590760 env[1724]: time="2024-12-13T14:28:02.590613421Z" level=info msg="CreateContainer within sandbox \"811582d1267b4fa101312263e3e168356f4d993f72f0a8959461e1a6a777a5e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1092861ea850823f4a9ccca7a2f2df195433bc8d1e2eab384696aba769748243\"" Dec 13 14:28:02.592440 env[1724]: time="2024-12-13T14:28:02.592369775Z" level=info msg="StartContainer for \"1092861ea850823f4a9ccca7a2f2df195433bc8d1e2eab384696aba769748243\"" Dec 13 14:28:02.599766 env[1724]: time="2024-12-13T14:28:02.599648656Z" level=info msg="CreateContainer within sandbox \"51b545986e68f3f51c1c8b5fbdd8d609046d99de5664cb35273349d8ee9ef439\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"9d8f698d0ea50612448c45d3ac44499ce24cfd70cd027994b855bf632688a12e\"" Dec 13 14:28:02.600026 env[1724]: time="2024-12-13T14:28:02.599991810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-3,Uid:9bf252bce4b3dfd66dfad262d49e0afa,Namespace:kube-system,Attempt:0,} returns sandbox id \"4753aae7c95ce227c821cb7198b3d5218c68092a34c90c5daad90c086b92b4cc\"" Dec 13 14:28:02.600585 env[1724]: time="2024-12-13T14:28:02.600559112Z" level=info msg="StartContainer for \"9d8f698d0ea50612448c45d3ac44499ce24cfd70cd027994b855bf632688a12e\"" Dec 13 14:28:02.603235 env[1724]: time="2024-12-13T14:28:02.603206168Z" level=info msg="CreateContainer within sandbox \"4753aae7c95ce227c821cb7198b3d5218c68092a34c90c5daad90c086b92b4cc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:28:02.624262 systemd[1]: Started cri-containerd-1092861ea850823f4a9ccca7a2f2df195433bc8d1e2eab384696aba769748243.scope. Dec 13 14:28:02.633439 env[1724]: time="2024-12-13T14:28:02.629318142Z" level=info msg="CreateContainer within sandbox \"4753aae7c95ce227c821cb7198b3d5218c68092a34c90c5daad90c086b92b4cc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2ca56ea9e4fe0947d796576425c63a7fde60b36db4618e9df9cbb8a413eea651\"" Dec 13 14:28:02.633439 env[1724]: time="2024-12-13T14:28:02.629922500Z" level=info msg="StartContainer for \"2ca56ea9e4fe0947d796576425c63a7fde60b36db4618e9df9cbb8a413eea651\"" Dec 13 14:28:02.642642 systemd[1]: Started cri-containerd-9d8f698d0ea50612448c45d3ac44499ce24cfd70cd027994b855bf632688a12e.scope. Dec 13 14:28:02.672350 systemd[1]: Started cri-containerd-2ca56ea9e4fe0947d796576425c63a7fde60b36db4618e9df9cbb8a413eea651.scope. 
Dec 13 14:28:02.704164 kubelet[2211]: E1213 14:28:02.704110 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-3?timeout=10s\": dial tcp 172.31.29.3:6443: connect: connection refused" interval="1.6s" Dec 13 14:28:02.750989 kubelet[2211]: W1213 14:28:02.750890 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.29.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.29.3:6443: connect: connection refused Dec 13 14:28:02.751157 kubelet[2211]: E1213 14:28:02.751009 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.29.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.29.3:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:28:02.758516 env[1724]: time="2024-12-13T14:28:02.758463254Z" level=info msg="StartContainer for \"1092861ea850823f4a9ccca7a2f2df195433bc8d1e2eab384696aba769748243\" returns successfully" Dec 13 14:28:02.801381 env[1724]: time="2024-12-13T14:28:02.801284665Z" level=info msg="StartContainer for \"9d8f698d0ea50612448c45d3ac44499ce24cfd70cd027994b855bf632688a12e\" returns successfully" Dec 13 14:28:02.834272 env[1724]: time="2024-12-13T14:28:02.834204951Z" level=info msg="StartContainer for \"2ca56ea9e4fe0947d796576425c63a7fde60b36db4618e9df9cbb8a413eea651\" returns successfully" Dec 13 14:28:02.840833 kubelet[2211]: W1213 14:28:02.840669 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.29.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.3:6443: connect: connection refused Dec 13 14:28:02.840833 kubelet[2211]: 
E1213 14:28:02.840776 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.29.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.29.3:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:28:02.876000 kubelet[2211]: I1213 14:28:02.875957 2211 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-3" Dec 13 14:28:02.876563 kubelet[2211]: E1213 14:28:02.876531 2211 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.29.3:6443/api/v1/nodes\": dial tcp 172.31.29.3:6443: connect: connection refused" node="ip-172-31-29-3" Dec 13 14:28:03.375292 kubelet[2211]: E1213 14:28:03.375252 2211 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.29.3:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.29.3:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:28:03.905061 kubelet[2211]: W1213 14:28:03.904993 2211 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.29.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-3&limit=500&resourceVersion=0": dial tcp 172.31.29.3:6443: connect: connection refused Dec 13 14:28:03.905925 kubelet[2211]: E1213 14:28:03.905898 2211 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.29.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-3&limit=500&resourceVersion=0\": dial tcp 172.31.29.3:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:28:04.478521 kubelet[2211]: I1213 14:28:04.478495 2211 
kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-3" Dec 13 14:28:06.170173 kubelet[2211]: E1213 14:28:06.170131 2211 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-3\" not found" node="ip-172-31-29-3" Dec 13 14:28:06.300561 kubelet[2211]: I1213 14:28:06.300525 2211 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-29-3" Dec 13 14:28:06.300561 kubelet[2211]: E1213 14:28:06.300565 2211 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-29-3\": node \"ip-172-31-29-3\" not found" Dec 13 14:28:06.314673 kubelet[2211]: E1213 14:28:06.314642 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:06.415345 kubelet[2211]: E1213 14:28:06.415311 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:06.516348 kubelet[2211]: E1213 14:28:06.516228 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:06.616581 kubelet[2211]: E1213 14:28:06.616527 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:06.717468 kubelet[2211]: E1213 14:28:06.717416 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:06.818661 kubelet[2211]: E1213 14:28:06.818542 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:06.919272 kubelet[2211]: E1213 14:28:06.919220 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:07.020080 kubelet[2211]: E1213 14:28:07.020040 2211 kubelet_node_status.go:453] "Error 
getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:07.124645 kubelet[2211]: E1213 14:28:07.124531 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:07.225768 kubelet[2211]: E1213 14:28:07.225729 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:07.326574 kubelet[2211]: E1213 14:28:07.326537 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:07.427739 kubelet[2211]: E1213 14:28:07.427674 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:07.528792 kubelet[2211]: E1213 14:28:07.528748 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:07.629487 kubelet[2211]: E1213 14:28:07.629442 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:07.730327 kubelet[2211]: E1213 14:28:07.730217 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:07.831403 kubelet[2211]: E1213 14:28:07.831352 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:07.932507 kubelet[2211]: E1213 14:28:07.932402 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:08.033583 kubelet[2211]: E1213 14:28:08.033449 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:08.134484 kubelet[2211]: E1213 14:28:08.134422 2211 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:08.234724 kubelet[2211]: E1213 14:28:08.234683 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:08.335608 kubelet[2211]: E1213 14:28:08.335506 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:08.436002 kubelet[2211]: E1213 14:28:08.435957 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:08.537019 kubelet[2211]: E1213 14:28:08.536987 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:08.573015 systemd[1]: Reloading. Dec 13 14:28:08.638946 kubelet[2211]: E1213 14:28:08.638908 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:08.741559 kubelet[2211]: E1213 14:28:08.741492 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:08.779599 /usr/lib/systemd/system-generators/torcx-generator[2494]: time="2024-12-13T14:28:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:28:08.779635 /usr/lib/systemd/system-generators/torcx-generator[2494]: time="2024-12-13T14:28:08Z" level=info msg="torcx already run" Dec 13 14:28:08.842550 kubelet[2211]: E1213 14:28:08.842518 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:08.913784 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Dec 13 14:28:08.913807 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:28:08.937726 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:28:08.942894 kubelet[2211]: E1213 14:28:08.942859 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:09.043843 kubelet[2211]: E1213 14:28:09.043788 2211 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-29-3\" not found" Dec 13 14:28:09.078177 systemd[1]: Stopping kubelet.service... Dec 13 14:28:09.097146 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:28:09.097380 systemd[1]: Stopped kubelet.service. Dec 13 14:28:09.100019 systemd[1]: Starting kubelet.service... Dec 13 14:28:10.663183 systemd[1]: Started kubelet.service. Dec 13 14:28:10.771927 kubelet[2550]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:28:10.772366 kubelet[2550]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:28:10.772462 kubelet[2550]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:28:10.772701 kubelet[2550]: I1213 14:28:10.772669 2550 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:28:10.781769 kubelet[2550]: I1213 14:28:10.781740 2550 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 14:28:10.781923 kubelet[2550]: I1213 14:28:10.781915 2550 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:28:10.782421 kubelet[2550]: I1213 14:28:10.782401 2550 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 14:28:10.786359 kubelet[2550]: I1213 14:28:10.786336 2550 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:28:10.794698 kubelet[2550]: I1213 14:28:10.794669 2550 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:28:10.807328 kubelet[2550]: E1213 14:28:10.807275 2550 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 14:28:10.807587 kubelet[2550]: I1213 14:28:10.807556 2550 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 14:28:10.826377 kubelet[2550]: I1213 14:28:10.826335 2550 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:28:10.826631 kubelet[2550]: I1213 14:28:10.826564 2550 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 14:28:10.826767 kubelet[2550]: I1213 14:28:10.826728 2550 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:28:10.827047 kubelet[2550]: I1213 14:28:10.826765 2550 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-29-3","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPo
licyOptions":null,"CgroupVersion":2} Dec 13 14:28:10.827187 kubelet[2550]: I1213 14:28:10.827054 2550 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:28:10.827187 kubelet[2550]: I1213 14:28:10.827069 2550 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 14:28:10.827187 kubelet[2550]: I1213 14:28:10.827108 2550 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:28:10.829043 kubelet[2550]: I1213 14:28:10.829013 2550 kubelet.go:408] "Attempting to sync node with API server" Dec 13 14:28:10.829043 kubelet[2550]: I1213 14:28:10.829039 2550 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:28:10.830540 kubelet[2550]: I1213 14:28:10.830506 2550 kubelet.go:314] "Adding apiserver pod source" Dec 13 14:28:10.836652 kubelet[2550]: I1213 14:28:10.831454 2550 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:28:10.863473 kubelet[2550]: I1213 14:28:10.862135 2550 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:28:10.864290 kubelet[2550]: I1213 14:28:10.864167 2550 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:28:10.867173 kubelet[2550]: I1213 14:28:10.867153 2550 server.go:1269] "Started kubelet" Dec 13 14:28:10.882378 kubelet[2550]: I1213 14:28:10.878977 2550 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:28:10.884160 kubelet[2550]: I1213 14:28:10.883353 2550 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:28:10.884160 kubelet[2550]: I1213 14:28:10.883964 2550 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:28:10.887290 kubelet[2550]: I1213 14:28:10.887240 2550 server.go:460] "Adding debug handlers to kubelet server" Dec 13 
14:28:10.899205 kubelet[2550]: I1213 14:28:10.899181 2550 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:28:10.902636 kubelet[2550]: E1213 14:28:10.902606 2550 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:28:10.904667 kubelet[2550]: I1213 14:28:10.902997 2550 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 14:28:10.904667 kubelet[2550]: I1213 14:28:10.903210 2550 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 14:28:10.906864 kubelet[2550]: I1213 14:28:10.906848 2550 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 14:28:10.907232 kubelet[2550]: I1213 14:28:10.907181 2550 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:28:10.910269 kubelet[2550]: I1213 14:28:10.910252 2550 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:28:10.910551 kubelet[2550]: I1213 14:28:10.910531 2550 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:28:10.915498 kubelet[2550]: I1213 14:28:10.913651 2550 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:28:10.924079 kubelet[2550]: I1213 14:28:10.924038 2550 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:28:10.930796 kubelet[2550]: I1213 14:28:10.930756 2550 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6"
Dec 13 14:28:10.930796 kubelet[2550]: I1213 14:28:10.930788 2550 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:28:10.931172 kubelet[2550]: I1213 14:28:10.930830 2550 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 14:28:10.931172 kubelet[2550]: E1213 14:28:10.930993 2550 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:28:11.028707 kubelet[2550]: I1213 14:28:11.028687 2550 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:28:11.028864 kubelet[2550]: I1213 14:28:11.028855 2550 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:28:11.028918 kubelet[2550]: I1213 14:28:11.028913 2550 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:28:11.029098 kubelet[2550]: I1213 14:28:11.029089 2550 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 14:28:11.029164 kubelet[2550]: I1213 14:28:11.029147 2550 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 14:28:11.029204 kubelet[2550]: I1213 14:28:11.029199 2550 policy_none.go:49] "None policy: Start"
Dec 13 14:28:11.029994 kubelet[2550]: I1213 14:28:11.029970 2550 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:28:11.029994 kubelet[2550]: I1213 14:28:11.029996 2550 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:28:11.030201 kubelet[2550]: I1213 14:28:11.030183 2550 state_mem.go:75] "Updated machine memory state"
Dec 13 14:28:11.031112 kubelet[2550]: E1213 14:28:11.031043 2550 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 14:28:11.042916 kubelet[2550]: I1213 14:28:11.042262 2550 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:28:11.042916 kubelet[2550]: I1213 14:28:11.042488 2550 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 14:28:11.042916 kubelet[2550]: I1213 14:28:11.042505 2550 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 14:28:11.055771 kubelet[2550]: I1213 14:28:11.055037 2550 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:28:11.163788 kubelet[2550]: I1213 14:28:11.163752 2550 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-29-3"
Dec 13 14:28:11.173124 kubelet[2550]: I1213 14:28:11.172821 2550 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-29-3"
Dec 13 14:28:11.173124 kubelet[2550]: I1213 14:28:11.172916 2550 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-29-3"
Dec 13 14:28:11.309327 kubelet[2550]: I1213 14:28:11.309297 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9723e33c4d3f0ec20162031230d1bd4-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-3\" (UID: \"e9723e33c4d3f0ec20162031230d1bd4\") " pod="kube-system/kube-apiserver-ip-172-31-29-3"
Dec 13 14:28:11.309629 kubelet[2550]: I1213 14:28:11.309611 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9723e33c4d3f0ec20162031230d1bd4-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-3\" (UID: \"e9723e33c4d3f0ec20162031230d1bd4\") " pod="kube-system/kube-apiserver-ip-172-31-29-3"
Dec 13 14:28:11.309722 kubelet[2550]: I1213 14:28:11.309709 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc8191bfb8c65b059be15b6517e67074-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-3\" (UID: \"fc8191bfb8c65b059be15b6517e67074\") " pod="kube-system/kube-controller-manager-ip-172-31-29-3"
Dec 13 14:28:11.309788 kubelet[2550]: I1213 14:28:11.309779 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fc8191bfb8c65b059be15b6517e67074-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-3\" (UID: \"fc8191bfb8c65b059be15b6517e67074\") " pod="kube-system/kube-controller-manager-ip-172-31-29-3"
Dec 13 14:28:11.309894 kubelet[2550]: I1213 14:28:11.309879 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc8191bfb8c65b059be15b6517e67074-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-3\" (UID: \"fc8191bfb8c65b059be15b6517e67074\") " pod="kube-system/kube-controller-manager-ip-172-31-29-3"
Dec 13 14:28:11.309988 kubelet[2550]: I1213 14:28:11.309976 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc8191bfb8c65b059be15b6517e67074-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-3\" (UID: \"fc8191bfb8c65b059be15b6517e67074\") " pod="kube-system/kube-controller-manager-ip-172-31-29-3"
Dec 13 14:28:11.310070 kubelet[2550]: I1213 14:28:11.310059 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9723e33c4d3f0ec20162031230d1bd4-ca-certs\") pod \"kube-apiserver-ip-172-31-29-3\" (UID: \"e9723e33c4d3f0ec20162031230d1bd4\") " pod="kube-system/kube-apiserver-ip-172-31-29-3"
Dec 13 14:28:11.310162 kubelet[2550]: I1213 14:28:11.310147 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc8191bfb8c65b059be15b6517e67074-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-3\" (UID: \"fc8191bfb8c65b059be15b6517e67074\") " pod="kube-system/kube-controller-manager-ip-172-31-29-3"
Dec 13 14:28:11.310274 kubelet[2550]: I1213 14:28:11.310235 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9bf252bce4b3dfd66dfad262d49e0afa-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-3\" (UID: \"9bf252bce4b3dfd66dfad262d49e0afa\") " pod="kube-system/kube-scheduler-ip-172-31-29-3"
Dec 13 14:28:11.862507 kubelet[2550]: I1213 14:28:11.862351 2550 apiserver.go:52] "Watching apiserver"
Dec 13 14:28:11.907872 kubelet[2550]: I1213 14:28:11.907795 2550 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 14:28:12.020280 kubelet[2550]: E1213 14:28:12.020233 2550 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-3\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-3"
Dec 13 14:28:12.089742 kubelet[2550]: I1213 14:28:12.089676 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-3" podStartSLOduration=1.089653267 podStartE2EDuration="1.089653267s" podCreationTimestamp="2024-12-13 14:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:28:12.073074177 +0000 UTC m=+1.381758806" watchObservedRunningTime="2024-12-13 14:28:12.089653267 +0000 UTC m=+1.398337889"
Dec 13 14:28:12.110859 kubelet[2550]: I1213 14:28:12.110805 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-3" podStartSLOduration=1.110767746 podStartE2EDuration="1.110767746s" podCreationTimestamp="2024-12-13 14:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:28:12.091729338 +0000 UTC m=+1.400413970" watchObservedRunningTime="2024-12-13 14:28:12.110767746 +0000 UTC m=+1.419452364"
Dec 13 14:28:12.125927 kubelet[2550]: I1213 14:28:12.125774 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-3" podStartSLOduration=1.125755836 podStartE2EDuration="1.125755836s" podCreationTimestamp="2024-12-13 14:28:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:28:12.112365024 +0000 UTC m=+1.421049645" watchObservedRunningTime="2024-12-13 14:28:12.125755836 +0000 UTC m=+1.434440465"
Dec 13 14:28:12.475996 sudo[1957]: pam_unix(sudo:session): session closed for user root
Dec 13 14:28:12.502453 sshd[1954]: pam_unix(sshd:session): session closed for user core
Dec 13 14:28:12.505507 systemd[1]: sshd@4-172.31.29.3:22-139.178.89.65:35166.service: Deactivated successfully.
Dec 13 14:28:12.506373 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:28:12.506553 systemd[1]: session-5.scope: Consumed 3.191s CPU time.
Dec 13 14:28:12.507087 systemd-logind[1717]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:28:12.508081 systemd-logind[1717]: Removed session 5.
Dec 13 14:28:14.103438 kubelet[2550]: I1213 14:28:14.103407 2550 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 14:28:14.103881 env[1724]: time="2024-12-13T14:28:14.103830141Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:28:14.104259 kubelet[2550]: I1213 14:28:14.104146 2550 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 14:28:14.796501 systemd[1]: Created slice kubepods-besteffort-pod3ee46c76_e949_4661_b8f8_3efb8cf1df6d.slice.
Dec 13 14:28:14.818167 systemd[1]: Created slice kubepods-burstable-pod9e44366d_9e62_43de_9cd7_b78a1d04b12d.slice.
Dec 13 14:28:14.946526 kubelet[2550]: I1213 14:28:14.946455 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ee46c76-e949-4661-b8f8-3efb8cf1df6d-lib-modules\") pod \"kube-proxy-gjdfx\" (UID: \"3ee46c76-e949-4661-b8f8-3efb8cf1df6d\") " pod="kube-system/kube-proxy-gjdfx"
Dec 13 14:28:14.946782 kubelet[2550]: I1213 14:28:14.946754 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/9e44366d-9e62-43de-9cd7-b78a1d04b12d-cni-plugin\") pod \"kube-flannel-ds-m2q6m\" (UID: \"9e44366d-9e62-43de-9cd7-b78a1d04b12d\") " pod="kube-flannel/kube-flannel-ds-m2q6m"
Dec 13 14:28:14.946957 kubelet[2550]: I1213 14:28:14.946941 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e44366d-9e62-43de-9cd7-b78a1d04b12d-xtables-lock\") pod \"kube-flannel-ds-m2q6m\" (UID: \"9e44366d-9e62-43de-9cd7-b78a1d04b12d\") " pod="kube-flannel/kube-flannel-ds-m2q6m"
Dec 13 14:28:14.947107 kubelet[2550]: I1213 14:28:14.947092 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ee46c76-e949-4661-b8f8-3efb8cf1df6d-xtables-lock\") pod \"kube-proxy-gjdfx\" (UID: \"3ee46c76-e949-4661-b8f8-3efb8cf1df6d\") " pod="kube-system/kube-proxy-gjdfx"
Dec 13 14:28:14.947256 kubelet[2550]: I1213 14:28:14.947239 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9e44366d-9e62-43de-9cd7-b78a1d04b12d-run\") pod \"kube-flannel-ds-m2q6m\" (UID: \"9e44366d-9e62-43de-9cd7-b78a1d04b12d\") " pod="kube-flannel/kube-flannel-ds-m2q6m"
Dec 13 14:28:14.947448 kubelet[2550]: I1213 14:28:14.947381 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ee46c76-e949-4661-b8f8-3efb8cf1df6d-kube-proxy\") pod \"kube-proxy-gjdfx\" (UID: \"3ee46c76-e949-4661-b8f8-3efb8cf1df6d\") " pod="kube-system/kube-proxy-gjdfx"
Dec 13 14:28:14.947583 kubelet[2550]: I1213 14:28:14.947566 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bchrp\" (UniqueName: \"kubernetes.io/projected/3ee46c76-e949-4661-b8f8-3efb8cf1df6d-kube-api-access-bchrp\") pod \"kube-proxy-gjdfx\" (UID: \"3ee46c76-e949-4661-b8f8-3efb8cf1df6d\") " pod="kube-system/kube-proxy-gjdfx"
Dec 13 14:28:14.947733 kubelet[2550]: I1213 14:28:14.947719 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/9e44366d-9e62-43de-9cd7-b78a1d04b12d-cni\") pod \"kube-flannel-ds-m2q6m\" (UID: \"9e44366d-9e62-43de-9cd7-b78a1d04b12d\") " pod="kube-flannel/kube-flannel-ds-m2q6m"
Dec 13 14:28:14.947872 kubelet[2550]: I1213 14:28:14.947855 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/9e44366d-9e62-43de-9cd7-b78a1d04b12d-flannel-cfg\") pod \"kube-flannel-ds-m2q6m\" (UID: \"9e44366d-9e62-43de-9cd7-b78a1d04b12d\") " pod="kube-flannel/kube-flannel-ds-m2q6m"
Dec 13 14:28:14.948021 kubelet[2550]: I1213 14:28:14.948006 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfqcf\" (UniqueName: \"kubernetes.io/projected/9e44366d-9e62-43de-9cd7-b78a1d04b12d-kube-api-access-xfqcf\") pod \"kube-flannel-ds-m2q6m\" (UID: \"9e44366d-9e62-43de-9cd7-b78a1d04b12d\") " pod="kube-flannel/kube-flannel-ds-m2q6m"
Dec 13 14:28:15.050499 update_engine[1718]: I1213 14:28:15.049141 1718 update_attempter.cc:509] Updating boot flags...
Dec 13 14:28:15.079264 kubelet[2550]: I1213 14:28:15.079225 2550 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Dec 13 14:28:15.109097 env[1724]: time="2024-12-13T14:28:15.108561268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gjdfx,Uid:3ee46c76-e949-4661-b8f8-3efb8cf1df6d,Namespace:kube-system,Attempt:0,}"
Dec 13 14:28:15.125231 env[1724]: time="2024-12-13T14:28:15.124686977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-m2q6m,Uid:9e44366d-9e62-43de-9cd7-b78a1d04b12d,Namespace:kube-flannel,Attempt:0,}"
Dec 13 14:28:15.177313 env[1724]: time="2024-12-13T14:28:15.170678659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:28:15.177313 env[1724]: time="2024-12-13T14:28:15.170772995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:28:15.177313 env[1724]: time="2024-12-13T14:28:15.170806491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:28:15.177313 env[1724]: time="2024-12-13T14:28:15.170988556Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/20f3abc722bc0ec80d4a19026139457a3d96b283e0b22dc41bc6b0de485e8dd0 pid=2630 runtime=io.containerd.runc.v2
Dec 13 14:28:15.234447 systemd[1]: Started cri-containerd-20f3abc722bc0ec80d4a19026139457a3d96b283e0b22dc41bc6b0de485e8dd0.scope.
Dec 13 14:28:15.244338 env[1724]: time="2024-12-13T14:28:15.244192423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:28:15.244656 env[1724]: time="2024-12-13T14:28:15.244609224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:28:15.244819 env[1724]: time="2024-12-13T14:28:15.244787909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:28:15.245306 env[1724]: time="2024-12-13T14:28:15.245234899Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c71286311168bc8f377455e2ead2216e857e93bdff5d28a8359d6d53eaa542f pid=2664 runtime=io.containerd.runc.v2
Dec 13 14:28:15.343666 systemd[1]: Started cri-containerd-9c71286311168bc8f377455e2ead2216e857e93bdff5d28a8359d6d53eaa542f.scope.
Dec 13 14:28:15.449505 env[1724]: time="2024-12-13T14:28:15.449344235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gjdfx,Uid:3ee46c76-e949-4661-b8f8-3efb8cf1df6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"20f3abc722bc0ec80d4a19026139457a3d96b283e0b22dc41bc6b0de485e8dd0\""
Dec 13 14:28:15.464785 env[1724]: time="2024-12-13T14:28:15.464743662Z" level=info msg="CreateContainer within sandbox \"20f3abc722bc0ec80d4a19026139457a3d96b283e0b22dc41bc6b0de485e8dd0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:28:15.536054 env[1724]: time="2024-12-13T14:28:15.536006178Z" level=info msg="CreateContainer within sandbox \"20f3abc722bc0ec80d4a19026139457a3d96b283e0b22dc41bc6b0de485e8dd0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"49ef60f5d7e15f6d5464511aa3efe1d917c83944789c3ba9aa49fab39b5eb654\""
Dec 13 14:28:15.553494 env[1724]: time="2024-12-13T14:28:15.553451250Z" level=info msg="StartContainer for \"49ef60f5d7e15f6d5464511aa3efe1d917c83944789c3ba9aa49fab39b5eb654\""
Dec 13 14:28:15.610648 env[1724]: time="2024-12-13T14:28:15.609197103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-m2q6m,Uid:9e44366d-9e62-43de-9cd7-b78a1d04b12d,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"9c71286311168bc8f377455e2ead2216e857e93bdff5d28a8359d6d53eaa542f\""
Dec 13 14:28:15.615072 env[1724]: time="2024-12-13T14:28:15.615031363Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Dec 13 14:28:15.643422 systemd[1]: Started cri-containerd-49ef60f5d7e15f6d5464511aa3efe1d917c83944789c3ba9aa49fab39b5eb654.scope.
Dec 13 14:28:15.845760 env[1724]: time="2024-12-13T14:28:15.845598446Z" level=info msg="StartContainer for \"49ef60f5d7e15f6d5464511aa3efe1d917c83944789c3ba9aa49fab39b5eb654\" returns successfully"
Dec 13 14:28:16.019063 kubelet[2550]: I1213 14:28:16.018980 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gjdfx" podStartSLOduration=2.018956571 podStartE2EDuration="2.018956571s" podCreationTimestamp="2024-12-13 14:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:28:16.018613872 +0000 UTC m=+5.327298550" watchObservedRunningTime="2024-12-13 14:28:16.018956571 +0000 UTC m=+5.327641201"
Dec 13 14:28:17.617765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount503328849.mount: Deactivated successfully.
Dec 13 14:28:17.688046 env[1724]: time="2024-12-13T14:28:17.687990589Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:17.692131 env[1724]: time="2024-12-13T14:28:17.692085561Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:17.695228 env[1724]: time="2024-12-13T14:28:17.695187775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:17.697929 env[1724]: time="2024-12-13T14:28:17.697893055Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:17.698359 env[1724]: time="2024-12-13T14:28:17.698324766Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Dec 13 14:28:17.701797 env[1724]: time="2024-12-13T14:28:17.701759097Z" level=info msg="CreateContainer within sandbox \"9c71286311168bc8f377455e2ead2216e857e93bdff5d28a8359d6d53eaa542f\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Dec 13 14:28:17.728669 env[1724]: time="2024-12-13T14:28:17.728617928Z" level=info msg="CreateContainer within sandbox \"9c71286311168bc8f377455e2ead2216e857e93bdff5d28a8359d6d53eaa542f\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"6faccc571d535625a15c149672e9f7dbf2e96b58646feae25993e3b6f2530218\""
Dec 13 14:28:17.730812 env[1724]: time="2024-12-13T14:28:17.729471965Z" level=info msg="StartContainer for \"6faccc571d535625a15c149672e9f7dbf2e96b58646feae25993e3b6f2530218\""
Dec 13 14:28:17.753951 systemd[1]: Started cri-containerd-6faccc571d535625a15c149672e9f7dbf2e96b58646feae25993e3b6f2530218.scope.
Dec 13 14:28:17.788559 systemd[1]: cri-containerd-6faccc571d535625a15c149672e9f7dbf2e96b58646feae25993e3b6f2530218.scope: Deactivated successfully.
Dec 13 14:28:17.790670 env[1724]: time="2024-12-13T14:28:17.790626641Z" level=info msg="StartContainer for \"6faccc571d535625a15c149672e9f7dbf2e96b58646feae25993e3b6f2530218\" returns successfully"
Dec 13 14:28:17.846985 env[1724]: time="2024-12-13T14:28:17.846936335Z" level=info msg="shim disconnected" id=6faccc571d535625a15c149672e9f7dbf2e96b58646feae25993e3b6f2530218
Dec 13 14:28:17.846985 env[1724]: time="2024-12-13T14:28:17.846982649Z" level=warning msg="cleaning up after shim disconnected" id=6faccc571d535625a15c149672e9f7dbf2e96b58646feae25993e3b6f2530218 namespace=k8s.io
Dec 13 14:28:17.846985 env[1724]: time="2024-12-13T14:28:17.846994286Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:17.855746 env[1724]: time="2024-12-13T14:28:17.855690660Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3156 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:18.017634 env[1724]: time="2024-12-13T14:28:18.017593032Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Dec 13 14:28:18.510175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6faccc571d535625a15c149672e9f7dbf2e96b58646feae25993e3b6f2530218-rootfs.mount: Deactivated successfully.
Dec 13 14:28:20.195494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2326792427.mount: Deactivated successfully.
Dec 13 14:28:21.141707 env[1724]: time="2024-12-13T14:28:21.141649101Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:21.145902 env[1724]: time="2024-12-13T14:28:21.145596729Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:21.149006 env[1724]: time="2024-12-13T14:28:21.148948832Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:21.152148 env[1724]: time="2024-12-13T14:28:21.152092756Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:21.152970 env[1724]: time="2024-12-13T14:28:21.152924971Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Dec 13 14:28:21.156967 env[1724]: time="2024-12-13T14:28:21.156929377Z" level=info msg="CreateContainer within sandbox \"9c71286311168bc8f377455e2ead2216e857e93bdff5d28a8359d6d53eaa542f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 14:28:21.178929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710629863.mount: Deactivated successfully.
Dec 13 14:28:21.181877 env[1724]: time="2024-12-13T14:28:21.181822956Z" level=info msg="CreateContainer within sandbox \"9c71286311168bc8f377455e2ead2216e857e93bdff5d28a8359d6d53eaa542f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d75d9a711574ce33ab737c36e7b180a4d56f78dbc45f06522917fbe7c700734c\""
Dec 13 14:28:21.184205 env[1724]: time="2024-12-13T14:28:21.182455231Z" level=info msg="StartContainer for \"d75d9a711574ce33ab737c36e7b180a4d56f78dbc45f06522917fbe7c700734c\""
Dec 13 14:28:21.210846 systemd[1]: Started cri-containerd-d75d9a711574ce33ab737c36e7b180a4d56f78dbc45f06522917fbe7c700734c.scope.
Dec 13 14:28:21.227168 systemd[1]: run-containerd-runc-k8s.io-d75d9a711574ce33ab737c36e7b180a4d56f78dbc45f06522917fbe7c700734c-runc.stErUy.mount: Deactivated successfully.
Dec 13 14:28:21.269139 systemd[1]: cri-containerd-d75d9a711574ce33ab737c36e7b180a4d56f78dbc45f06522917fbe7c700734c.scope: Deactivated successfully.
Dec 13 14:28:21.276210 env[1724]: time="2024-12-13T14:28:21.276158173Z" level=info msg="StartContainer for \"d75d9a711574ce33ab737c36e7b180a4d56f78dbc45f06522917fbe7c700734c\" returns successfully"
Dec 13 14:28:21.337201 kubelet[2550]: I1213 14:28:21.336558 2550 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 14:28:21.362684 env[1724]: time="2024-12-13T14:28:21.362617071Z" level=info msg="shim disconnected" id=d75d9a711574ce33ab737c36e7b180a4d56f78dbc45f06522917fbe7c700734c
Dec 13 14:28:21.363229 env[1724]: time="2024-12-13T14:28:21.363200421Z" level=warning msg="cleaning up after shim disconnected" id=d75d9a711574ce33ab737c36e7b180a4d56f78dbc45f06522917fbe7c700734c namespace=k8s.io
Dec 13 14:28:21.363477 env[1724]: time="2024-12-13T14:28:21.363456796Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:21.385439 systemd[1]: Created slice kubepods-burstable-pod69f4c1e9_e7c9_4d61_814f_340bbc069df2.slice.
Dec 13 14:28:21.398282 systemd[1]: Created slice kubepods-burstable-pod41cfe610_dcaa_4be3_ad13_4ea5c361bfe0.slice.
Dec 13 14:28:21.404709 env[1724]: time="2024-12-13T14:28:21.404660936Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3212 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:21.407621 kubelet[2550]: I1213 14:28:21.407583 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6hn7\" (UniqueName: \"kubernetes.io/projected/41cfe610-dcaa-4be3-ad13-4ea5c361bfe0-kube-api-access-l6hn7\") pod \"coredns-6f6b679f8f-s8t58\" (UID: \"41cfe610-dcaa-4be3-ad13-4ea5c361bfe0\") " pod="kube-system/coredns-6f6b679f8f-s8t58"
Dec 13 14:28:21.407893 kubelet[2550]: I1213 14:28:21.407634 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j62gn\" (UniqueName: \"kubernetes.io/projected/69f4c1e9-e7c9-4d61-814f-340bbc069df2-kube-api-access-j62gn\") pod \"coredns-6f6b679f8f-ljpjx\" (UID: \"69f4c1e9-e7c9-4d61-814f-340bbc069df2\") " pod="kube-system/coredns-6f6b679f8f-ljpjx"
Dec 13 14:28:21.407893 kubelet[2550]: I1213 14:28:21.407661 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41cfe610-dcaa-4be3-ad13-4ea5c361bfe0-config-volume\") pod \"coredns-6f6b679f8f-s8t58\" (UID: \"41cfe610-dcaa-4be3-ad13-4ea5c361bfe0\") " pod="kube-system/coredns-6f6b679f8f-s8t58"
Dec 13 14:28:21.407893 kubelet[2550]: I1213 14:28:21.407687 2550 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f4c1e9-e7c9-4d61-814f-340bbc069df2-config-volume\") pod \"coredns-6f6b679f8f-ljpjx\" (UID: \"69f4c1e9-e7c9-4d61-814f-340bbc069df2\") " pod="kube-system/coredns-6f6b679f8f-ljpjx"
Dec 13 14:28:21.696244 env[1724]: time="2024-12-13T14:28:21.696126281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ljpjx,Uid:69f4c1e9-e7c9-4d61-814f-340bbc069df2,Namespace:kube-system,Attempt:0,}"
Dec 13 14:28:21.708088 env[1724]: time="2024-12-13T14:28:21.707919600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s8t58,Uid:41cfe610-dcaa-4be3-ad13-4ea5c361bfe0,Namespace:kube-system,Attempt:0,}"
Dec 13 14:28:21.765985 env[1724]: time="2024-12-13T14:28:21.765877171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ljpjx,Uid:69f4c1e9-e7c9-4d61-814f-340bbc069df2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b06df693edf961b2fbabcd80204d95e86fdcfb0528394f3f2250134ada099660\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 14:28:21.766438 kubelet[2550]: E1213 14:28:21.766382 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b06df693edf961b2fbabcd80204d95e86fdcfb0528394f3f2250134ada099660\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 14:28:21.766557 kubelet[2550]: E1213 14:28:21.766471 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b06df693edf961b2fbabcd80204d95e86fdcfb0528394f3f2250134ada099660\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-ljpjx"
Dec 13 14:28:21.768331 kubelet[2550]: E1213 14:28:21.767944 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b06df693edf961b2fbabcd80204d95e86fdcfb0528394f3f2250134ada099660\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-ljpjx"
Dec 13 14:28:21.768606 kubelet[2550]: E1213 14:28:21.768528 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-ljpjx_kube-system(69f4c1e9-e7c9-4d61-814f-340bbc069df2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-ljpjx_kube-system(69f4c1e9-e7c9-4d61-814f-340bbc069df2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b06df693edf961b2fbabcd80204d95e86fdcfb0528394f3f2250134ada099660\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-ljpjx" podUID="69f4c1e9-e7c9-4d61-814f-340bbc069df2"
Dec 13 14:28:21.770175 env[1724]: time="2024-12-13T14:28:21.770106183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s8t58,Uid:41cfe610-dcaa-4be3-ad13-4ea5c361bfe0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6011ae15b1db4cb7c4bf34ac095021d8a236fe822a157308ab099f06ee8c49d4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 14:28:21.770502 kubelet[2550]: E1213 14:28:21.770358 2550 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6011ae15b1db4cb7c4bf34ac095021d8a236fe822a157308ab099f06ee8c49d4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 14:28:21.770502 kubelet[2550]: E1213 14:28:21.770431 2550 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6011ae15b1db4cb7c4bf34ac095021d8a236fe822a157308ab099f06ee8c49d4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-s8t58"
Dec 13 14:28:21.770502 kubelet[2550]: E1213 14:28:21.770461 2550 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6011ae15b1db4cb7c4bf34ac095021d8a236fe822a157308ab099f06ee8c49d4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-s8t58"
Dec 13 14:28:21.770675 kubelet[2550]: E1213 14:28:21.770505 2550 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-s8t58_kube-system(41cfe610-dcaa-4be3-ad13-4ea5c361bfe0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-s8t58_kube-system(41cfe610-dcaa-4be3-ad13-4ea5c361bfe0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6011ae15b1db4cb7c4bf34ac095021d8a236fe822a157308ab099f06ee8c49d4\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-s8t58" podUID="41cfe610-dcaa-4be3-ad13-4ea5c361bfe0"
Dec 13 14:28:22.062443 env[1724]: time="2024-12-13T14:28:22.061764394Z" level=info msg="CreateContainer within sandbox \"9c71286311168bc8f377455e2ead2216e857e93bdff5d28a8359d6d53eaa542f\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Dec 13 14:28:22.084967 env[1724]: time="2024-12-13T14:28:22.084914722Z" level=info msg="CreateContainer within sandbox \"9c71286311168bc8f377455e2ead2216e857e93bdff5d28a8359d6d53eaa542f\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"d9a0dff4124481f2abb37628b1f963b9bf1f83ed465f12d4d4cbf1e0552246b3\""
Dec 13 14:28:22.085705 env[1724]: time="2024-12-13T14:28:22.085670367Z" level=info msg="StartContainer for \"d9a0dff4124481f2abb37628b1f963b9bf1f83ed465f12d4d4cbf1e0552246b3\""
Dec 13 14:28:22.107188 systemd[1]: Started cri-containerd-d9a0dff4124481f2abb37628b1f963b9bf1f83ed465f12d4d4cbf1e0552246b3.scope.
Dec 13 14:28:22.143277 env[1724]: time="2024-12-13T14:28:22.143227697Z" level=info msg="StartContainer for \"d9a0dff4124481f2abb37628b1f963b9bf1f83ed465f12d4d4cbf1e0552246b3\" returns successfully"
Dec 13 14:28:22.176988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d75d9a711574ce33ab737c36e7b180a4d56f78dbc45f06522917fbe7c700734c-rootfs.mount: Deactivated successfully.
Dec 13 14:28:23.214458 (udev-worker)[3318]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:28:23.248366 systemd-networkd[1459]: flannel.1: Link UP
Dec 13 14:28:23.248375 systemd-networkd[1459]: flannel.1: Gained carrier
Dec 13 14:28:24.645623 systemd-networkd[1459]: flannel.1: Gained IPv6LL
Dec 13 14:28:25.555159 amazon-ssm-agent[1705]: 2024-12-13 14:28:25 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Dec 13 14:28:31.932356 env[1724]: time="2024-12-13T14:28:31.932299541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ljpjx,Uid:69f4c1e9-e7c9-4d61-814f-340bbc069df2,Namespace:kube-system,Attempt:0,}"
Dec 13 14:28:32.022784 (udev-worker)[3434]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:28:32.023851 systemd-networkd[1459]: cni0: Link UP
Dec 13 14:28:32.042330 (udev-worker)[3438]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:28:32.043620 systemd-networkd[1459]: veth3d6aa294: Link UP
Dec 13 14:28:32.046621 kernel: cni0: port 1(veth3d6aa294) entered blocking state
Dec 13 14:28:32.046713 kernel: cni0: port 1(veth3d6aa294) entered disabled state
Dec 13 14:28:32.047913 kernel: device veth3d6aa294 entered promiscuous mode
Dec 13 14:28:32.051368 kernel: cni0: port 1(veth3d6aa294) entered blocking state
Dec 13 14:28:32.051692 kernel: cni0: port 1(veth3d6aa294) entered forwarding state
Dec 13 14:28:32.051734 kernel: cni0: port 1(veth3d6aa294) entered disabled state
Dec 13 14:28:32.066645 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth3d6aa294: link becomes ready
Dec 13 14:28:32.066757 kernel: cni0: port 1(veth3d6aa294) entered blocking state
Dec 13 14:28:32.066787 kernel: cni0: port 1(veth3d6aa294) entered forwarding state
Dec 13 14:28:32.069352 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cni0: link becomes ready
Dec 13 14:28:32.069477 systemd-networkd[1459]: veth3d6aa294: Gained carrier
Dec 13 14:28:32.070037 systemd-networkd[1459]: cni0: Gained carrier
Dec 13 14:28:32.080433 env[1724]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Dec 13 14:28:32.080433 env[1724]: delegateAdd: netconf sent to delegate plugin:
Dec 13 14:28:32.121651 env[1724]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2024-12-13T14:28:32.121468038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:28:32.121651 env[1724]: time="2024-12-13T14:28:32.121509929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:28:32.121651 env[1724]: time="2024-12-13T14:28:32.121523685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:28:32.122043 env[1724]: time="2024-12-13T14:28:32.121963118Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad1f8a6fadcde872d484295a00bb362cd829ebc0436c3c243b29d9f9044bcf98 pid=3460 runtime=io.containerd.runc.v2
Dec 13 14:28:32.218172 systemd[1]: run-containerd-runc-k8s.io-ad1f8a6fadcde872d484295a00bb362cd829ebc0436c3c243b29d9f9044bcf98-runc.cknApw.mount: Deactivated successfully.
Dec 13 14:28:32.226696 systemd[1]: Started cri-containerd-ad1f8a6fadcde872d484295a00bb362cd829ebc0436c3c243b29d9f9044bcf98.scope.
Dec 13 14:28:32.295914 env[1724]: time="2024-12-13T14:28:32.295082246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-ljpjx,Uid:69f4c1e9-e7c9-4d61-814f-340bbc069df2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad1f8a6fadcde872d484295a00bb362cd829ebc0436c3c243b29d9f9044bcf98\""
Dec 13 14:28:32.299833 env[1724]: time="2024-12-13T14:28:32.299727484Z" level=info msg="CreateContainer within sandbox \"ad1f8a6fadcde872d484295a00bb362cd829ebc0436c3c243b29d9f9044bcf98\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:28:32.326778 env[1724]: time="2024-12-13T14:28:32.326727535Z" level=info msg="CreateContainer within sandbox \"ad1f8a6fadcde872d484295a00bb362cd829ebc0436c3c243b29d9f9044bcf98\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"431ba07687a6802d070c54a2655d5d715d7ed82e6b196d6e7d6f23022acd981c\""
Dec 13 14:28:32.328650 env[1724]: time="2024-12-13T14:28:32.328617400Z" level=info msg="StartContainer for \"431ba07687a6802d070c54a2655d5d715d7ed82e6b196d6e7d6f23022acd981c\""
Dec 13 14:28:32.356257 systemd[1]: Started cri-containerd-431ba07687a6802d070c54a2655d5d715d7ed82e6b196d6e7d6f23022acd981c.scope.
Dec 13 14:28:32.413226 env[1724]: time="2024-12-13T14:28:32.413173449Z" level=info msg="StartContainer for \"431ba07687a6802d070c54a2655d5d715d7ed82e6b196d6e7d6f23022acd981c\" returns successfully"
Dec 13 14:28:33.094300 kubelet[2550]: I1213 14:28:33.094239 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-m2q6m" podStartSLOduration=13.551441768 podStartE2EDuration="19.09421825s" podCreationTimestamp="2024-12-13 14:28:14 +0000 UTC" firstStartedPulling="2024-12-13 14:28:15.611687855 +0000 UTC m=+4.920372469" lastFinishedPulling="2024-12-13 14:28:21.154464341 +0000 UTC m=+10.463148951" observedRunningTime="2024-12-13 14:28:23.084267611 +0000 UTC m=+12.392952239" watchObservedRunningTime="2024-12-13 14:28:33.09421825 +0000 UTC m=+22.402902880"
Dec 13 14:28:33.109932 kubelet[2550]: I1213 14:28:33.109870 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-ljpjx" podStartSLOduration=18.109848161 podStartE2EDuration="18.109848161s" podCreationTimestamp="2024-12-13 14:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:28:33.095346349 +0000 UTC m=+22.404030978" watchObservedRunningTime="2024-12-13 14:28:33.109848161 +0000 UTC m=+22.418532797"
Dec 13 14:28:33.349822 systemd-networkd[1459]: veth3d6aa294: Gained IPv6LL
Dec 13 14:28:33.861739 systemd-networkd[1459]: cni0: Gained IPv6LL
Dec 13 14:28:35.081668 amazon-ssm-agent[1705]: 2024-12-13 14:28:35 INFO [HealthCheck] HealthCheck reporting agent health.
Dec 13 14:28:35.932051 env[1724]: time="2024-12-13T14:28:35.932005512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s8t58,Uid:41cfe610-dcaa-4be3-ad13-4ea5c361bfe0,Namespace:kube-system,Attempt:0,}"
Dec 13 14:28:35.970350 systemd-networkd[1459]: vethe2fd6217: Link UP
Dec 13 14:28:35.973853 kernel: cni0: port 2(vethe2fd6217) entered blocking state
Dec 13 14:28:35.973964 kernel: cni0: port 2(vethe2fd6217) entered disabled state
Dec 13 14:28:35.975998 (udev-worker)[3579]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:28:35.976521 kernel: device vethe2fd6217 entered promiscuous mode
Dec 13 14:28:36.016520 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:28:36.016659 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe2fd6217: link becomes ready
Dec 13 14:28:36.016700 kernel: cni0: port 2(vethe2fd6217) entered blocking state
Dec 13 14:28:36.016727 kernel: cni0: port 2(vethe2fd6217) entered forwarding state
Dec 13 14:28:36.016950 systemd-networkd[1459]: vethe2fd6217: Gained carrier
Dec 13 14:28:36.019148 env[1724]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001e928), "name":"cbr0", "type":"bridge"}
Dec 13 14:28:36.019148 env[1724]: delegateAdd: netconf sent to delegate plugin:
Dec 13 14:28:36.034776 env[1724]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2024-12-13T14:28:36.034703011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:28:36.035114 env[1724]: time="2024-12-13T14:28:36.035089607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:28:36.035212 env[1724]: time="2024-12-13T14:28:36.035191608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:28:36.035598 env[1724]: time="2024-12-13T14:28:36.035538693Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe2c77be08e25fe3bff28eabd188667802ca950ff23d5594fc4c37058a54c447 pid=3600 runtime=io.containerd.runc.v2
Dec 13 14:28:36.075827 systemd[1]: run-containerd-runc-k8s.io-fe2c77be08e25fe3bff28eabd188667802ca950ff23d5594fc4c37058a54c447-runc.47Y9Zw.mount: Deactivated successfully.
Dec 13 14:28:36.078821 systemd[1]: Started cri-containerd-fe2c77be08e25fe3bff28eabd188667802ca950ff23d5594fc4c37058a54c447.scope.
Dec 13 14:28:36.142484 env[1724]: time="2024-12-13T14:28:36.142442224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s8t58,Uid:41cfe610-dcaa-4be3-ad13-4ea5c361bfe0,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe2c77be08e25fe3bff28eabd188667802ca950ff23d5594fc4c37058a54c447\""
Dec 13 14:28:36.146491 env[1724]: time="2024-12-13T14:28:36.146452231Z" level=info msg="CreateContainer within sandbox \"fe2c77be08e25fe3bff28eabd188667802ca950ff23d5594fc4c37058a54c447\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:28:36.171248 env[1724]: time="2024-12-13T14:28:36.171122811Z" level=info msg="CreateContainer within sandbox \"fe2c77be08e25fe3bff28eabd188667802ca950ff23d5594fc4c37058a54c447\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8361bd3505aa0fcc957b8f9a36dbe62330a041fa50e0d3f3fcdb347790e59151\""
Dec 13 14:28:36.173314 env[1724]: time="2024-12-13T14:28:36.173279036Z" level=info msg="StartContainer for \"8361bd3505aa0fcc957b8f9a36dbe62330a041fa50e0d3f3fcdb347790e59151\""
Dec 13 14:28:36.202476 systemd[1]: Started cri-containerd-8361bd3505aa0fcc957b8f9a36dbe62330a041fa50e0d3f3fcdb347790e59151.scope.
Dec 13 14:28:36.252412 env[1724]: time="2024-12-13T14:28:36.247964741Z" level=info msg="StartContainer for \"8361bd3505aa0fcc957b8f9a36dbe62330a041fa50e0d3f3fcdb347790e59151\" returns successfully"
Dec 13 14:28:37.112524 kubelet[2550]: I1213 14:28:37.112464 2550 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-s8t58" podStartSLOduration=22.112442176 podStartE2EDuration="22.112442176s" podCreationTimestamp="2024-12-13 14:28:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:28:37.112201216 +0000 UTC m=+26.420885844" watchObservedRunningTime="2024-12-13 14:28:37.112442176 +0000 UTC m=+26.421126806"
Dec 13 14:28:37.253625 systemd-networkd[1459]: vethe2fd6217: Gained IPv6LL
Dec 13 14:28:56.832047 systemd[1]: Started sshd@5-172.31.29.3:22-139.178.89.65:33264.service.
Dec 13 14:28:57.003547 sshd[3768]: Accepted publickey for core from 139.178.89.65 port 33264 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:28:57.005124 sshd[3768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:28:57.010556 systemd[1]: Started session-6.scope.
Dec 13 14:28:57.011206 systemd-logind[1717]: New session 6 of user core.
Dec 13 14:28:57.281844 sshd[3768]: pam_unix(sshd:session): session closed for user core
Dec 13 14:28:57.285585 systemd-logind[1717]: Session 6 logged out. Waiting for processes to exit.
Dec 13 14:28:57.285928 systemd[1]: sshd@5-172.31.29.3:22-139.178.89.65:33264.service: Deactivated successfully.
Dec 13 14:28:57.286810 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 14:28:57.287852 systemd-logind[1717]: Removed session 6.
Dec 13 14:29:02.315299 systemd[1]: Started sshd@6-172.31.29.3:22-139.178.89.65:58840.service.
Dec 13 14:29:02.515728 sshd[3802]: Accepted publickey for core from 139.178.89.65 port 58840 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:02.517201 sshd[3802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:02.522736 systemd[1]: Started session-7.scope.
Dec 13 14:29:02.523467 systemd-logind[1717]: New session 7 of user core.
Dec 13 14:29:02.720758 sshd[3802]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:02.724338 systemd[1]: sshd@6-172.31.29.3:22-139.178.89.65:58840.service: Deactivated successfully.
Dec 13 14:29:02.725153 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 14:29:02.725653 systemd-logind[1717]: Session 7 logged out. Waiting for processes to exit.
Dec 13 14:29:02.726745 systemd-logind[1717]: Removed session 7.
Dec 13 14:29:07.747833 systemd[1]: Started sshd@7-172.31.29.3:22-139.178.89.65:58850.service.
Dec 13 14:29:07.917700 sshd[3836]: Accepted publickey for core from 139.178.89.65 port 58850 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:07.919355 sshd[3836]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:07.924762 systemd[1]: Started session-8.scope.
Dec 13 14:29:07.925567 systemd-logind[1717]: New session 8 of user core.
Dec 13 14:29:08.168027 sshd[3836]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:08.174346 systemd[1]: sshd@7-172.31.29.3:22-139.178.89.65:58850.service: Deactivated successfully.
Dec 13 14:29:08.175906 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 14:29:08.176568 systemd-logind[1717]: Session 8 logged out. Waiting for processes to exit.
Dec 13 14:29:08.177836 systemd-logind[1717]: Removed session 8.
Dec 13 14:29:13.200497 systemd[1]: Started sshd@8-172.31.29.3:22-139.178.89.65:52608.service.
Dec 13 14:29:13.378491 sshd[3871]: Accepted publickey for core from 139.178.89.65 port 52608 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:13.380216 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:13.386229 systemd[1]: Started session-9.scope.
Dec 13 14:29:13.387105 systemd-logind[1717]: New session 9 of user core.
Dec 13 14:29:13.599379 sshd[3871]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:13.603127 systemd[1]: sshd@8-172.31.29.3:22-139.178.89.65:52608.service: Deactivated successfully.
Dec 13 14:29:13.604144 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 14:29:13.604927 systemd-logind[1717]: Session 9 logged out. Waiting for processes to exit.
Dec 13 14:29:13.606223 systemd-logind[1717]: Removed session 9.
Dec 13 14:29:13.629614 systemd[1]: Started sshd@9-172.31.29.3:22-139.178.89.65:52614.service.
Dec 13 14:29:13.796721 sshd[3890]: Accepted publickey for core from 139.178.89.65 port 52614 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:13.798262 sshd[3890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:13.813800 systemd[1]: Started session-10.scope.
Dec 13 14:29:13.814468 systemd-logind[1717]: New session 10 of user core.
Dec 13 14:29:14.102435 sshd[3890]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:14.107024 systemd-logind[1717]: Session 10 logged out. Waiting for processes to exit.
Dec 13 14:29:14.108260 systemd[1]: sshd@9-172.31.29.3:22-139.178.89.65:52614.service: Deactivated successfully.
Dec 13 14:29:14.109221 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 14:29:14.111836 systemd-logind[1717]: Removed session 10.
Dec 13 14:29:14.129501 systemd[1]: Started sshd@10-172.31.29.3:22-139.178.89.65:52622.service.
Dec 13 14:29:14.286908 sshd[3915]: Accepted publickey for core from 139.178.89.65 port 52622 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:14.289071 sshd[3915]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:14.295286 systemd-logind[1717]: New session 11 of user core.
Dec 13 14:29:14.295965 systemd[1]: Started session-11.scope.
Dec 13 14:29:14.531835 sshd[3915]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:14.534924 systemd[1]: sshd@10-172.31.29.3:22-139.178.89.65:52622.service: Deactivated successfully.
Dec 13 14:29:14.535859 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 14:29:14.536731 systemd-logind[1717]: Session 11 logged out. Waiting for processes to exit.
Dec 13 14:29:14.537707 systemd-logind[1717]: Removed session 11.
Dec 13 14:29:19.559064 systemd[1]: Started sshd@11-172.31.29.3:22-139.178.89.65:60136.service.
Dec 13 14:29:19.722290 sshd[3950]: Accepted publickey for core from 139.178.89.65 port 60136 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:19.724249 sshd[3950]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:19.730487 systemd-logind[1717]: New session 12 of user core.
Dec 13 14:29:19.731308 systemd[1]: Started session-12.scope.
Dec 13 14:29:19.933274 sshd[3950]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:19.937067 systemd[1]: sshd@11-172.31.29.3:22-139.178.89.65:60136.service: Deactivated successfully.
Dec 13 14:29:19.938129 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 14:29:19.939065 systemd-logind[1717]: Session 12 logged out. Waiting for processes to exit.
Dec 13 14:29:19.940437 systemd-logind[1717]: Removed session 12.
Dec 13 14:29:24.959227 systemd[1]: Started sshd@12-172.31.29.3:22-139.178.89.65:60142.service.
Dec 13 14:29:25.120425 sshd[3982]: Accepted publickey for core from 139.178.89.65 port 60142 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:25.122074 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:25.127494 systemd[1]: Started session-13.scope.
Dec 13 14:29:25.128221 systemd-logind[1717]: New session 13 of user core.
Dec 13 14:29:25.335937 sshd[3982]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:25.339517 systemd[1]: sshd@12-172.31.29.3:22-139.178.89.65:60142.service: Deactivated successfully.
Dec 13 14:29:25.340372 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 14:29:25.341116 systemd-logind[1717]: Session 13 logged out. Waiting for processes to exit.
Dec 13 14:29:25.342086 systemd-logind[1717]: Removed session 13.
Dec 13 14:29:25.363080 systemd[1]: Started sshd@13-172.31.29.3:22-139.178.89.65:60150.service.
Dec 13 14:29:25.531629 sshd[3994]: Accepted publickey for core from 139.178.89.65 port 60150 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:25.533236 sshd[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:25.539127 systemd[1]: Started session-14.scope.
Dec 13 14:29:25.541066 systemd-logind[1717]: New session 14 of user core.
Dec 13 14:29:26.108311 sshd[3994]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:26.112421 systemd[1]: sshd@13-172.31.29.3:22-139.178.89.65:60150.service: Deactivated successfully.
Dec 13 14:29:26.113485 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 14:29:26.114313 systemd-logind[1717]: Session 14 logged out. Waiting for processes to exit.
Dec 13 14:29:26.115255 systemd-logind[1717]: Removed session 14.
Dec 13 14:29:26.133571 systemd[1]: Started sshd@14-172.31.29.3:22-139.178.89.65:60156.service.
Dec 13 14:29:26.308871 sshd[4003]: Accepted publickey for core from 139.178.89.65 port 60156 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:26.310268 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:26.317313 systemd[1]: Started session-15.scope.
Dec 13 14:29:26.318045 systemd-logind[1717]: New session 15 of user core.
Dec 13 14:29:28.150422 sshd[4003]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:28.154376 systemd[1]: sshd@14-172.31.29.3:22-139.178.89.65:60156.service: Deactivated successfully.
Dec 13 14:29:28.156443 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 14:29:28.156470 systemd-logind[1717]: Session 15 logged out. Waiting for processes to exit.
Dec 13 14:29:28.158100 systemd-logind[1717]: Removed session 15.
Dec 13 14:29:28.184042 systemd[1]: Started sshd@15-172.31.29.3:22-139.178.89.65:47452.service.
Dec 13 14:29:28.354882 sshd[4020]: Accepted publickey for core from 139.178.89.65 port 47452 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:28.357032 sshd[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:28.365546 systemd[1]: Started session-16.scope.
Dec 13 14:29:28.367656 systemd-logind[1717]: New session 16 of user core.
Dec 13 14:29:28.854506 sshd[4020]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:28.858887 systemd[1]: sshd@15-172.31.29.3:22-139.178.89.65:47452.service: Deactivated successfully.
Dec 13 14:29:28.859876 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 14:29:28.861043 systemd-logind[1717]: Session 16 logged out. Waiting for processes to exit.
Dec 13 14:29:28.862806 systemd-logind[1717]: Removed session 16.
Dec 13 14:29:28.880543 systemd[1]: Started sshd@16-172.31.29.3:22-139.178.89.65:47458.service.
Dec 13 14:29:29.044542 sshd[4051]: Accepted publickey for core from 139.178.89.65 port 47458 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:29.046382 sshd[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:29.052080 systemd[1]: Started session-17.scope.
Dec 13 14:29:29.052866 systemd-logind[1717]: New session 17 of user core.
Dec 13 14:29:29.253599 sshd[4051]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:29.256564 systemd[1]: sshd@16-172.31.29.3:22-139.178.89.65:47458.service: Deactivated successfully.
Dec 13 14:29:29.257489 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 14:29:29.258164 systemd-logind[1717]: Session 17 logged out. Waiting for processes to exit.
Dec 13 14:29:29.259228 systemd-logind[1717]: Removed session 17.
Dec 13 14:29:34.282215 systemd[1]: Started sshd@17-172.31.29.3:22-139.178.89.65:47470.service.
Dec 13 14:29:34.444478 sshd[4084]: Accepted publickey for core from 139.178.89.65 port 47470 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:34.445952 sshd[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:34.452293 systemd[1]: Started session-18.scope.
Dec 13 14:29:34.452955 systemd-logind[1717]: New session 18 of user core.
Dec 13 14:29:34.698581 sshd[4084]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:34.704830 systemd[1]: sshd@17-172.31.29.3:22-139.178.89.65:47470.service: Deactivated successfully.
Dec 13 14:29:34.708910 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 14:29:34.709825 systemd-logind[1717]: Session 18 logged out. Waiting for processes to exit.
Dec 13 14:29:34.713074 systemd-logind[1717]: Removed session 18.
Dec 13 14:29:39.728311 systemd[1]: Started sshd@18-172.31.29.3:22-139.178.89.65:55124.service.
Dec 13 14:29:39.910531 sshd[4120]: Accepted publickey for core from 139.178.89.65 port 55124 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:39.912242 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:39.924156 systemd[1]: Started session-19.scope.
Dec 13 14:29:39.925074 systemd-logind[1717]: New session 19 of user core.
Dec 13 14:29:40.106844 sshd[4120]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:40.110462 systemd[1]: sshd@18-172.31.29.3:22-139.178.89.65:55124.service: Deactivated successfully.
Dec 13 14:29:40.111305 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:29:40.112075 systemd-logind[1717]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:29:40.112997 systemd-logind[1717]: Removed session 19.
Dec 13 14:29:45.135111 systemd[1]: Started sshd@19-172.31.29.3:22-139.178.89.65:55136.service.
Dec 13 14:29:45.307697 sshd[4153]: Accepted publickey for core from 139.178.89.65 port 55136 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:45.309185 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:45.317838 systemd[1]: Started session-20.scope.
Dec 13 14:29:45.318587 systemd-logind[1717]: New session 20 of user core.
Dec 13 14:29:45.514057 sshd[4153]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:45.517851 systemd[1]: sshd@19-172.31.29.3:22-139.178.89.65:55136.service: Deactivated successfully.
Dec 13 14:29:45.518686 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:29:45.519286 systemd-logind[1717]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:29:45.520350 systemd-logind[1717]: Removed session 20.
Dec 13 14:29:50.548095 systemd[1]: Started sshd@20-172.31.29.3:22-139.178.89.65:34264.service.
Dec 13 14:29:50.718334 sshd[4187]: Accepted publickey for core from 139.178.89.65 port 34264 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:50.720034 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:50.725361 systemd[1]: Started session-21.scope.
Dec 13 14:29:50.726165 systemd-logind[1717]: New session 21 of user core.
Dec 13 14:29:50.921881 sshd[4187]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:50.925229 systemd[1]: sshd@20-172.31.29.3:22-139.178.89.65:34264.service: Deactivated successfully.
Dec 13 14:29:50.926073 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 14:29:50.926570 systemd-logind[1717]: Session 21 logged out. Waiting for processes to exit.
Dec 13 14:29:50.927538 systemd-logind[1717]: Removed session 21.
Dec 13 14:30:05.740257 systemd[1]: cri-containerd-1092861ea850823f4a9ccca7a2f2df195433bc8d1e2eab384696aba769748243.scope: Deactivated successfully.
Dec 13 14:30:05.740610 systemd[1]: cri-containerd-1092861ea850823f4a9ccca7a2f2df195433bc8d1e2eab384696aba769748243.scope: Consumed 2.929s CPU time.
Dec 13 14:30:05.780123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1092861ea850823f4a9ccca7a2f2df195433bc8d1e2eab384696aba769748243-rootfs.mount: Deactivated successfully.
Dec 13 14:30:05.807921 env[1724]: time="2024-12-13T14:30:05.807869096Z" level=info msg="shim disconnected" id=1092861ea850823f4a9ccca7a2f2df195433bc8d1e2eab384696aba769748243
Dec 13 14:30:05.807921 env[1724]: time="2024-12-13T14:30:05.807920299Z" level=warning msg="cleaning up after shim disconnected" id=1092861ea850823f4a9ccca7a2f2df195433bc8d1e2eab384696aba769748243 namespace=k8s.io
Dec 13 14:30:05.808635 env[1724]: time="2024-12-13T14:30:05.807932694Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:05.816641 env[1724]: time="2024-12-13T14:30:05.816594355Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4274 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:06.299250 kubelet[2550]: I1213 14:30:06.299219 2550 scope.go:117] "RemoveContainer" containerID="1092861ea850823f4a9ccca7a2f2df195433bc8d1e2eab384696aba769748243"
Dec 13 14:30:06.316096 env[1724]: time="2024-12-13T14:30:06.316053319Z" level=info msg="CreateContainer within sandbox \"811582d1267b4fa101312263e3e168356f4d993f72f0a8959461e1a6a777a5e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 14:30:06.354092 env[1724]: time="2024-12-13T14:30:06.354039245Z" level=info msg="CreateContainer within sandbox \"811582d1267b4fa101312263e3e168356f4d993f72f0a8959461e1a6a777a5e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2e30813433bd87178560e93372f2ebd2013233b6f8b6802266cc8496c233b262\""
Dec 13 14:30:06.354657 env[1724]: time="2024-12-13T14:30:06.354621815Z" level=info msg="StartContainer for \"2e30813433bd87178560e93372f2ebd2013233b6f8b6802266cc8496c233b262\""
Dec 13 14:30:06.387311 systemd[1]: Started cri-containerd-2e30813433bd87178560e93372f2ebd2013233b6f8b6802266cc8496c233b262.scope.
Dec 13 14:30:06.482088 env[1724]: time="2024-12-13T14:30:06.482028729Z" level=info msg="StartContainer for \"2e30813433bd87178560e93372f2ebd2013233b6f8b6802266cc8496c233b262\" returns successfully"
Dec 13 14:30:06.780353 systemd[1]: run-containerd-runc-k8s.io-2e30813433bd87178560e93372f2ebd2013233b6f8b6802266cc8496c233b262-runc.eLYihg.mount: Deactivated successfully.
Dec 13 14:30:09.910805 systemd[1]: cri-containerd-2ca56ea9e4fe0947d796576425c63a7fde60b36db4618e9df9cbb8a413eea651.scope: Deactivated successfully.
Dec 13 14:30:09.911127 systemd[1]: cri-containerd-2ca56ea9e4fe0947d796576425c63a7fde60b36db4618e9df9cbb8a413eea651.scope: Consumed 1.669s CPU time.
Dec 13 14:30:09.947888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ca56ea9e4fe0947d796576425c63a7fde60b36db4618e9df9cbb8a413eea651-rootfs.mount: Deactivated successfully.
Dec 13 14:30:09.962604 env[1724]: time="2024-12-13T14:30:09.962475931Z" level=info msg="shim disconnected" id=2ca56ea9e4fe0947d796576425c63a7fde60b36db4618e9df9cbb8a413eea651
Dec 13 14:30:09.963100 env[1724]: time="2024-12-13T14:30:09.962611724Z" level=warning msg="cleaning up after shim disconnected" id=2ca56ea9e4fe0947d796576425c63a7fde60b36db4618e9df9cbb8a413eea651 namespace=k8s.io
Dec 13 14:30:09.963100 env[1724]: time="2024-12-13T14:30:09.962629470Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:09.973096 env[1724]: time="2024-12-13T14:30:09.973037913Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4356 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:10.314002 kubelet[2550]: I1213 14:30:10.313477 2550 scope.go:117] "RemoveContainer" containerID="2ca56ea9e4fe0947d796576425c63a7fde60b36db4618e9df9cbb8a413eea651"
Dec 13 14:30:10.315773 env[1724]: time="2024-12-13T14:30:10.315733006Z" level=info msg="CreateContainer within sandbox \"4753aae7c95ce227c821cb7198b3d5218c68092a34c90c5daad90c086b92b4cc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 14:30:10.337375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount262691199.mount: Deactivated successfully.
Dec 13 14:30:10.348649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount66075993.mount: Deactivated successfully.
Dec 13 14:30:10.353659 env[1724]: time="2024-12-13T14:30:10.353608404Z" level=info msg="CreateContainer within sandbox \"4753aae7c95ce227c821cb7198b3d5218c68092a34c90c5daad90c086b92b4cc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"690d9d76187ef19c03705b34b36baabd9c0d8b7170d2600021dbb67efb6baba6\""
Dec 13 14:30:10.354212 env[1724]: time="2024-12-13T14:30:10.354184605Z" level=info msg="StartContainer for \"690d9d76187ef19c03705b34b36baabd9c0d8b7170d2600021dbb67efb6baba6\""
Dec 13 14:30:10.374074 systemd[1]: Started cri-containerd-690d9d76187ef19c03705b34b36baabd9c0d8b7170d2600021dbb67efb6baba6.scope.
Dec 13 14:30:10.444411 env[1724]: time="2024-12-13T14:30:10.444342270Z" level=info msg="StartContainer for \"690d9d76187ef19c03705b34b36baabd9c0d8b7170d2600021dbb67efb6baba6\" returns successfully"
Dec 13 14:30:12.707600 kubelet[2550]: E1213 14:30:12.707543 2550 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-3?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 14:30:22.709555 kubelet[2550]: E1213 14:30:22.709499 2550 controller.go:195] "Failed to update lease" err="Put \"https://172.31.29.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-3?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"