Jan 13 20:34:47.060453 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 20:34:47.060683 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:34:47.060701 kernel: BIOS-provided physical RAM map:
Jan 13 20:34:47.060714 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 20:34:47.060725 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 20:34:47.060735 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 20:34:47.060752 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jan 13 20:34:47.060763 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jan 13 20:34:47.060774 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jan 13 20:34:47.060784 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 20:34:47.060795 kernel: NX (Execute Disable) protection: active
Jan 13 20:34:47.060806 kernel: APIC: Static calls initialized
Jan 13 20:34:47.060817 kernel: SMBIOS 2.7 present.
Jan 13 20:34:47.060828 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jan 13 20:34:47.060845 kernel: Hypervisor detected: KVM
Jan 13 20:34:47.060857 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:34:47.060869 kernel: kvm-clock: using sched offset of 8155021562 cycles
Jan 13 20:34:47.060881 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:34:47.060894 kernel: tsc: Detected 2499.998 MHz processor
Jan 13 20:34:47.060907 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:34:47.060920 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:34:47.060935 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jan 13 20:34:47.060948 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 20:34:47.060967 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:34:47.060980 kernel: Using GB pages for direct mapping
Jan 13 20:34:47.060993 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:34:47.061005 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jan 13 20:34:47.061018 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jan 13 20:34:47.061030 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 13 20:34:47.061042 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 13 20:34:47.061058 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jan 13 20:34:47.061070 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 20:34:47.061082 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 13 20:34:47.061094 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jan 13 20:34:47.061106 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 13 20:34:47.061118 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jan 13 20:34:47.061130 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jan 13 20:34:47.061142 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jan 13 20:34:47.061154 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jan 13 20:34:47.061170 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jan 13 20:34:47.061194 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jan 13 20:34:47.061207 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jan 13 20:34:47.061220 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jan 13 20:34:47.061233 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jan 13 20:34:47.061248 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jan 13 20:34:47.061261 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jan 13 20:34:47.061273 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jan 13 20:34:47.061285 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jan 13 20:34:47.061298 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 20:34:47.061310 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 20:34:47.061323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jan 13 20:34:47.061336 kernel: NUMA: Initialized distance table, cnt=1
Jan 13 20:34:47.061349 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jan 13 20:34:47.061365 kernel: Zone ranges:
Jan 13 20:34:47.061378 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:34:47.061391 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jan 13 20:34:47.061410 kernel: Normal empty
Jan 13 20:34:47.061422 kernel: Movable zone start for each node
Jan 13 20:34:47.061435 kernel: Early memory node ranges
Jan 13 20:34:47.061448 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 20:34:47.061460 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jan 13 20:34:47.061473 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jan 13 20:34:47.061503 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:34:47.061519 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 20:34:47.061532 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jan 13 20:34:47.061545 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 20:34:47.061558 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:34:47.061571 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jan 13 20:34:47.061584 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:34:47.061597 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:34:47.061610 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:34:47.061629 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:34:47.061645 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:34:47.061658 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 20:34:47.061670 kernel: TSC deadline timer available
Jan 13 20:34:47.061683 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 20:34:47.061696 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 20:34:47.061709 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 13 20:34:47.061722 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:34:47.061734 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:34:47.061747 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 20:34:47.061763 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 20:34:47.061775 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 20:34:47.061787 kernel: pcpu-alloc: [0] 0 1
Jan 13 20:34:47.061799 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:34:47.061812 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:34:47.061827 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:34:47.061840 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:34:47.061857 kernel: random: crng init done
Jan 13 20:34:47.061873 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:34:47.061886 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 20:34:47.061898 kernel: Fallback order for Node 0: 0
Jan 13 20:34:47.061910 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jan 13 20:34:47.061925 kernel: Policy zone: DMA32
Jan 13 20:34:47.061938 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:34:47.061951 kernel: Memory: 1930300K/2057760K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 127200K reserved, 0K cma-reserved)
Jan 13 20:34:47.061963 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:34:47.061977 kernel: Kernel/User page tables isolation: enabled
Jan 13 20:34:47.061993 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 20:34:47.062005 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:34:47.062017 kernel: Dynamic Preempt: voluntary
Jan 13 20:34:47.062030 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:34:47.062044 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:34:47.062057 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:34:47.062315 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:34:47.062332 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:34:47.062348 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:34:47.062367 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:34:47.062383 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:34:47.062399 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 20:34:47.062415 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:34:47.062430 kernel: Console: colour VGA+ 80x25
Jan 13 20:34:47.062446 kernel: printk: console [ttyS0] enabled
Jan 13 20:34:47.062461 kernel: ACPI: Core revision 20230628
Jan 13 20:34:47.062492 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jan 13 20:34:47.062507 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:34:47.062526 kernel: x2apic enabled
Jan 13 20:34:47.062541 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:34:47.062569 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 13 20:34:47.062590 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 13 20:34:47.062606 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 13 20:34:47.062623 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 13 20:34:47.062639 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:34:47.062655 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 20:34:47.062671 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:34:47.062687 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:34:47.062703 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 13 20:34:47.062719 kernel: RETBleed: Vulnerable
Jan 13 20:34:47.062736 kernel: Speculative Store Bypass: Vulnerable
Jan 13 20:34:47.062755 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:34:47.062771 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:34:47.062788 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 13 20:34:47.062803 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:34:47.062819 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:34:47.062836 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:34:47.062855 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jan 13 20:34:47.062871 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jan 13 20:34:47.062888 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 13 20:34:47.062904 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 13 20:34:47.062920 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 13 20:34:47.062936 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 13 20:34:47.062952 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:34:47.062968 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jan 13 20:34:47.062984 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jan 13 20:34:47.063000 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jan 13 20:34:47.063016 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jan 13 20:34:47.063035 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jan 13 20:34:47.063051 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jan 13 20:34:47.063067 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jan 13 20:34:47.063083 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:34:47.063099 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:34:47.063115 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:34:47.063131 kernel: landlock: Up and running.
Jan 13 20:34:47.063147 kernel: SELinux: Initializing.
Jan 13 20:34:47.063163 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:34:47.063179 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 13 20:34:47.063195 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 13 20:34:47.063215 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:34:47.063232 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:34:47.063249 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:34:47.063265 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 13 20:34:47.063281 kernel: signal: max sigframe size: 3632
Jan 13 20:34:47.063298 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:34:47.063322 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:34:47.063339 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 20:34:47.063356 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:34:47.063376 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:34:47.063392 kernel: .... node #0, CPUs: #1
Jan 13 20:34:47.063410 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 20:34:47.063428 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 20:34:47.063445 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:34:47.063461 kernel: smpboot: Max logical packages: 1
Jan 13 20:34:47.063497 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 13 20:34:47.063514 kernel: devtmpfs: initialized
Jan 13 20:34:47.063530 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:34:47.063550 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:34:47.063567 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:34:47.063583 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:34:47.063599 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:34:47.063615 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:34:47.063633 kernel: audit: type=2000 audit(1736800486.874:1): state=initialized audit_enabled=0 res=1
Jan 13 20:34:47.063650 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:34:47.063666 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:34:47.063683 kernel: cpuidle: using governor menu
Jan 13 20:34:47.063702 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:34:47.063718 kernel: dca service started, version 1.12.1
Jan 13 20:34:47.063734 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:34:47.063751 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:34:47.063768 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:34:47.063784 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:34:47.063801 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:34:47.063818 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:34:47.063835 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:34:47.063854 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:34:47.063871 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:34:47.063888 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:34:47.063904 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 20:34:47.063921 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:34:47.063937 kernel: ACPI: Interpreter enabled
Jan 13 20:34:47.063954 kernel: ACPI: PM: (supports S0 S5)
Jan 13 20:34:47.063970 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:34:47.063987 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:34:47.064007 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 20:34:47.064023 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 20:34:47.064040 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:34:47.064381 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:34:47.066633 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 20:34:47.066793 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 20:34:47.066813 kernel: acpiphp: Slot [3] registered
Jan 13 20:34:47.066836 kernel: acpiphp: Slot [4] registered
Jan 13 20:34:47.066852 kernel: acpiphp: Slot [5] registered
Jan 13 20:34:47.066867 kernel: acpiphp: Slot [6] registered
Jan 13 20:34:47.066883 kernel: acpiphp: Slot [7] registered
Jan 13 20:34:47.066898 kernel: acpiphp: Slot [8] registered
Jan 13 20:34:47.066913 kernel: acpiphp: Slot [9] registered
Jan 13 20:34:47.066929 kernel: acpiphp: Slot [10] registered
Jan 13 20:34:47.066944 kernel: acpiphp: Slot [11] registered
Jan 13 20:34:47.066960 kernel: acpiphp: Slot [12] registered
Jan 13 20:34:47.066979 kernel: acpiphp: Slot [13] registered
Jan 13 20:34:47.066994 kernel: acpiphp: Slot [14] registered
Jan 13 20:34:47.067010 kernel: acpiphp: Slot [15] registered
Jan 13 20:34:47.067025 kernel: acpiphp: Slot [16] registered
Jan 13 20:34:47.067041 kernel: acpiphp: Slot [17] registered
Jan 13 20:34:47.067057 kernel: acpiphp: Slot [18] registered
Jan 13 20:34:47.067072 kernel: acpiphp: Slot [19] registered
Jan 13 20:34:47.067088 kernel: acpiphp: Slot [20] registered
Jan 13 20:34:47.067103 kernel: acpiphp: Slot [21] registered
Jan 13 20:34:47.067119 kernel: acpiphp: Slot [22] registered
Jan 13 20:34:47.067137 kernel: acpiphp: Slot [23] registered
Jan 13 20:34:47.067153 kernel: acpiphp: Slot [24] registered
Jan 13 20:34:47.067168 kernel: acpiphp: Slot [25] registered
Jan 13 20:34:47.067184 kernel: acpiphp: Slot [26] registered
Jan 13 20:34:47.067199 kernel: acpiphp: Slot [27] registered
Jan 13 20:34:47.067215 kernel: acpiphp: Slot [28] registered
Jan 13 20:34:47.067230 kernel: acpiphp: Slot [29] registered
Jan 13 20:34:47.067245 kernel: acpiphp: Slot [30] registered
Jan 13 20:34:47.067261 kernel: acpiphp: Slot [31] registered
Jan 13 20:34:47.067279 kernel: PCI host bridge to bus 0000:00
Jan 13 20:34:47.067442 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:34:47.067680 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:34:47.067802 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:34:47.068009 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jan 13 20:34:47.068141 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:34:47.068296 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 20:34:47.068453 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jan 13 20:34:47.068791 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jan 13 20:34:47.068929 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 20:34:47.069060 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jan 13 20:34:47.069363 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jan 13 20:34:47.070729 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jan 13 20:34:47.070881 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jan 13 20:34:47.071040 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jan 13 20:34:47.071181 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jan 13 20:34:47.071324 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jan 13 20:34:47.071462 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 10742 usecs
Jan 13 20:34:47.072058 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jan 13 20:34:47.072198 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jan 13 20:34:47.072335 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jan 13 20:34:47.072474 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 20:34:47.073701 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 13 20:34:47.073858 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jan 13 20:34:47.074019 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 13 20:34:47.075537 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jan 13 20:34:47.075569 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:34:47.075592 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:34:47.075608 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:34:47.075624 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:34:47.075639 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 20:34:47.075655 kernel: iommu: Default domain type: Translated
Jan 13 20:34:47.075671 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:34:47.075686 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:34:47.075702 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:34:47.075718 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 20:34:47.075736 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jan 13 20:34:47.075897 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jan 13 20:34:47.076056 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jan 13 20:34:47.076211 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 20:34:47.076234 kernel: vgaarb: loaded
Jan 13 20:34:47.076252 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jan 13 20:34:47.076269 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jan 13 20:34:47.076288 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:34:47.076305 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:34:47.076327 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:34:47.076344 kernel: pnp: PnP ACPI init
Jan 13 20:34:47.076361 kernel: pnp: PnP ACPI: found 5 devices
Jan 13 20:34:47.076380 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:34:47.076397 kernel: NET: Registered PF_INET protocol family
Jan 13 20:34:47.076414 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:34:47.076430 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 13 20:34:47.076447 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:34:47.077561 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 20:34:47.077592 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 13 20:34:47.077608 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 13 20:34:47.077623 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:34:47.077638 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 13 20:34:47.077653 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:34:47.077668 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:34:47.077811 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:34:47.077925 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:34:47.078150 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:34:47.078308 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jan 13 20:34:47.078448 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 20:34:47.078469 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:34:47.078499 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 20:34:47.078516 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 13 20:34:47.078532 kernel: clocksource: Switched to clocksource tsc
Jan 13 20:34:47.078548 kernel: Initialise system trusted keyrings
Jan 13 20:34:47.078568 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 13 20:34:47.078584 kernel: Key type asymmetric registered
Jan 13 20:34:47.078600 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:34:47.078615 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:34:47.078631 kernel: io scheduler mq-deadline registered
Jan 13 20:34:47.078646 kernel: io scheduler kyber registered
Jan 13 20:34:47.078662 kernel: io scheduler bfq registered
Jan 13 20:34:47.078679 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:34:47.078695 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:34:47.078714 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:34:47.078730 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:34:47.078746 kernel: i8042: Warning: Keylock active
Jan 13 20:34:47.078761 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:34:47.078777 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:34:47.078917 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 20:34:47.079041 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 20:34:47.079162 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T20:34:46 UTC (1736800486)
Jan 13 20:34:47.079284 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 20:34:47.079384 kernel: intel_pstate: CPU model not supported
Jan 13 20:34:47.079401 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:34:47.079418 kernel: Segment Routing with IPv6
Jan 13 20:34:47.079435 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:34:47.079451 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:34:47.079467 kernel: Key type dns_resolver registered
Jan 13 20:34:47.087541 kernel: IPI shorthand broadcast: enabled
Jan 13 20:34:47.087574 kernel: sched_clock: Marking stable (586001770, 268605273)->(976318780, -121711737)
Jan 13 20:34:47.087598 kernel: registered taskstats version 1
Jan 13 20:34:47.087615 kernel: Loading compiled-in X.509 certificates
Jan 13 20:34:47.087629 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 13 20:34:47.087644 kernel: Key type .fscrypt registered
Jan 13 20:34:47.087661 kernel: Key type fscrypt-provisioning registered
Jan 13 20:34:47.087676 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:34:47.087690 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:34:47.087705 kernel: ima: No architecture policies found
Jan 13 20:34:47.087720 kernel: clk: Disabling unused clocks
Jan 13 20:34:47.087739 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 13 20:34:47.087753 kernel: Write protecting the kernel read-only data: 38912k
Jan 13 20:34:47.087769 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 13 20:34:47.087783 kernel: Run /init as init process
Jan 13 20:34:47.087800 kernel: with arguments:
Jan 13 20:34:47.087817 kernel: /init
Jan 13 20:34:47.087834 kernel: with environment:
Jan 13 20:34:47.087848 kernel: HOME=/
Jan 13 20:34:47.087862 kernel: TERM=linux
Jan 13 20:34:47.087880 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:34:47.087926 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:34:47.087945 systemd[1]: Detected virtualization amazon.
Jan 13 20:34:47.087960 systemd[1]: Detected architecture x86-64.
Jan 13 20:34:47.087974 systemd[1]: Running in initrd.
Jan 13 20:34:47.087989 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:34:47.088003 systemd[1]: Hostname set to <localhost>.
Jan 13 20:34:47.088021 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:34:47.088036 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:34:47.088051 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:34:47.088066 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:34:47.088083 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:34:47.088099 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:34:47.088113 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:34:47.088129 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:34:47.088150 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:34:47.088321 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:34:47.088343 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:34:47.088359 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:34:47.088375 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:34:47.088391 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:34:47.088407 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:34:47.088429 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:34:47.088445 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:34:47.088462 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:34:47.088489 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:34:47.088506 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:34:47.088523 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:34:47.088539 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:34:47.088556 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:34:47.088576 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:34:47.088593 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:34:47.088609 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:34:47.088625 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:34:47.088642 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:34:47.088672 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:34:47.088709 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:34:47.088791 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:34:47.088812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:34:47.088829 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:34:47.088887 systemd-journald[179]: Collecting audit messages is disabled.
Jan 13 20:34:47.088930 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:34:47.088947 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:34:47.088965 systemd-journald[179]: Journal started
Jan 13 20:34:47.089003 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2dc5fd27d88845fbf14439557da460) is 4.8M, max 38.5M, 33.7M free.
Jan 13 20:34:47.092955 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:34:47.089124 systemd-modules-load[180]: Inserted module 'overlay'
Jan 13 20:34:47.097496 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:34:47.107863 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:34:47.308261 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:34:47.308298 kernel: Bridge firewalling registered
Jan 13 20:34:47.141393 systemd-modules-load[180]: Inserted module 'br_netfilter'
Jan 13 20:34:47.320004 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:34:47.323145 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:34:47.326255 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:34:47.341799 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:34:47.347035 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:34:47.357395 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:34:47.359257 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:34:47.371177 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:34:47.376318 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:34:47.384914 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:34:47.393021 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:34:47.405976 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:34:47.429456 systemd-resolved[209]: Positive Trust Anchors:
Jan 13 20:34:47.429915 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:34:47.433271 dracut-cmdline[217]: dracut-dracut-053
Jan 13 20:34:47.436615 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:34:47.434509 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:34:47.456143 systemd-resolved[209]: Defaulting to hostname 'linux'.
Jan 13 20:34:47.458935 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:34:47.463189 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:34:47.520688 kernel: SCSI subsystem initialized
Jan 13 20:34:47.534618 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:34:47.547513 kernel: iscsi: registered transport (tcp)
Jan 13 20:34:47.569597 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:34:47.569678 kernel: QLogic iSCSI HBA Driver
Jan 13 20:34:47.610903 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:34:47.620668 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:34:47.659568 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:34:47.659643 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:34:47.659664 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:34:47.702510 kernel: raid6: avx512x4 gen() 14484 MB/s
Jan 13 20:34:47.719510 kernel: raid6: avx512x2 gen() 14348 MB/s
Jan 13 20:34:47.737513 kernel: raid6: avx512x1 gen() 14408 MB/s
Jan 13 20:34:47.754514 kernel: raid6: avx2x4 gen() 12143 MB/s
Jan 13 20:34:47.771524 kernel: raid6: avx2x2 gen() 13930 MB/s
Jan 13 20:34:47.788800 kernel: raid6: avx2x1 gen() 9618 MB/s
Jan 13 20:34:47.788876 kernel: raid6: using algorithm avx512x4 gen() 14484 MB/s
Jan 13 20:34:47.806781 kernel: raid6: .... xor() 7518 MB/s, rmw enabled
Jan 13 20:34:47.806894 kernel: raid6: using avx512x2 recovery algorithm
Jan 13 20:34:47.829518 kernel: xor: automatically using best checksumming function avx
Jan 13 20:34:47.993508 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:34:48.003696 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:34:48.010682 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:34:48.027563 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Jan 13 20:34:48.033371 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:34:48.047843 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:34:48.064597 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Jan 13 20:34:48.097176 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:34:48.103702 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:34:48.201942 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:34:48.217690 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:34:48.266232 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:34:48.270935 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:34:48.275554 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:34:48.279139 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:34:48.286886 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:34:48.322901 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:34:48.347554 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:34:48.370935 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:34:48.389094 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 20:34:48.389130 kernel: AES CTR mode by8 optimization enabled
Jan 13 20:34:48.371087 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:34:48.382871 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:34:48.387438 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:34:48.391100 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:34:48.398499 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:34:48.403173 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 13 20:34:48.420446 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 13 20:34:48.420646 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Jan 13 20:34:48.420831 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:f2:f8:94:f3:93
Jan 13 20:34:48.420992 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 13 20:34:48.421161 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 13 20:34:48.408500 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:34:48.433506 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 13 20:34:48.445499 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:34:48.445568 kernel: GPT:9289727 != 16777215
Jan 13 20:34:48.445590 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:34:48.445619 kernel: GPT:9289727 != 16777215
Jan 13 20:34:48.445637 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:34:48.445655 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:34:48.450023 (udev-worker)[445]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:34:48.705831 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (455)
Jan 13 20:34:48.705865 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/nvme0n1p3 scanned by (udev-worker) (449)
Jan 13 20:34:48.631922 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:34:48.706044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:34:48.718833 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 13 20:34:48.737121 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 13 20:34:48.746719 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 13 20:34:48.751711 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 13 20:34:48.766759 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:34:48.769595 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:34:48.782305 disk-uuid[619]: Primary Header is updated.
Jan 13 20:34:48.782305 disk-uuid[619]: Secondary Entries is updated.
Jan 13 20:34:48.782305 disk-uuid[619]: Secondary Header is updated.
Jan 13 20:34:48.791532 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:34:48.800153 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:34:49.814509 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 13 20:34:49.815661 disk-uuid[624]: The operation has completed successfully.
Jan 13 20:34:50.023790 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:34:50.023937 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:34:50.064812 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:34:50.084333 sh[888]: Success
Jan 13 20:34:50.112525 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 20:34:50.278563 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:34:50.294637 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:34:50.302655 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:34:50.351277 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 13 20:34:50.351362 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:34:50.351382 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:34:50.352463 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:34:50.355017 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:34:50.462514 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:34:50.491645 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:34:50.494935 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:34:50.512731 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:34:50.526351 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:34:50.567288 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:34:50.567369 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:34:50.567393 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:34:50.583251 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:34:50.598508 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:34:50.598012 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:34:50.609788 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:34:50.618707 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:34:50.674742 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:34:50.686801 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:34:50.740661 systemd-networkd[1080]: lo: Link UP
Jan 13 20:34:50.740676 systemd-networkd[1080]: lo: Gained carrier
Jan 13 20:34:50.744026 systemd-networkd[1080]: Enumeration completed
Jan 13 20:34:50.744163 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:34:50.745977 systemd[1]: Reached target network.target - Network.
Jan 13 20:34:50.754893 systemd-networkd[1080]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:34:50.754899 systemd-networkd[1080]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:34:50.790796 systemd-networkd[1080]: eth0: Link UP
Jan 13 20:34:50.790807 systemd-networkd[1080]: eth0: Gained carrier
Jan 13 20:34:50.790825 systemd-networkd[1080]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:34:50.811578 systemd-networkd[1080]: eth0: DHCPv4 address 172.31.24.144/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:34:51.089355 ignition[1001]: Ignition 2.20.0
Jan 13 20:34:51.089367 ignition[1001]: Stage: fetch-offline
Jan 13 20:34:51.089589 ignition[1001]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:34:51.089602 ignition[1001]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:34:51.092018 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:34:51.090037 ignition[1001]: Ignition finished successfully
Jan 13 20:34:51.102739 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:34:51.119570 ignition[1090]: Ignition 2.20.0
Jan 13 20:34:51.119584 ignition[1090]: Stage: fetch
Jan 13 20:34:51.120003 ignition[1090]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:34:51.120014 ignition[1090]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:34:51.120116 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:34:51.162088 ignition[1090]: PUT result: OK
Jan 13 20:34:51.169638 ignition[1090]: parsed url from cmdline: ""
Jan 13 20:34:51.169651 ignition[1090]: no config URL provided
Jan 13 20:34:51.169660 ignition[1090]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:34:51.169675 ignition[1090]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:34:51.169707 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:34:51.176748 ignition[1090]: PUT result: OK
Jan 13 20:34:51.176892 ignition[1090]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 13 20:34:51.181517 ignition[1090]: GET result: OK
Jan 13 20:34:51.181620 ignition[1090]: parsing config with SHA512: e6aa25925560b4890df7e8539ca7e7c5c855cbd981987b3e739cec357c54a3ecd418214e1394f211341f6059b71c321e2189fc6fbb9b4de1cc14df4549aaca76
Jan 13 20:34:51.190423 unknown[1090]: fetched base config from "system"
Jan 13 20:34:51.190534 unknown[1090]: fetched base config from "system"
Jan 13 20:34:51.198975 ignition[1090]: fetch: fetch complete
Jan 13 20:34:51.190542 unknown[1090]: fetched user config from "aws"
Jan 13 20:34:51.199131 ignition[1090]: fetch: fetch passed
Jan 13 20:34:51.199259 ignition[1090]: Ignition finished successfully
Jan 13 20:34:51.227425 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:34:51.244840 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:34:51.287135 ignition[1097]: Ignition 2.20.0
Jan 13 20:34:51.287146 ignition[1097]: Stage: kargs
Jan 13 20:34:51.287623 ignition[1097]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:34:51.287646 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:34:51.287757 ignition[1097]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:34:51.289474 ignition[1097]: PUT result: OK
Jan 13 20:34:51.297735 ignition[1097]: kargs: kargs passed
Jan 13 20:34:51.297811 ignition[1097]: Ignition finished successfully
Jan 13 20:34:51.301087 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:34:51.309762 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:34:51.338296 ignition[1104]: Ignition 2.20.0
Jan 13 20:34:51.338314 ignition[1104]: Stage: disks
Jan 13 20:34:51.342234 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:34:51.342250 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:34:51.342366 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:34:51.345057 ignition[1104]: PUT result: OK
Jan 13 20:34:51.356086 ignition[1104]: disks: disks passed
Jan 13 20:34:51.356175 ignition[1104]: Ignition finished successfully
Jan 13 20:34:51.357848 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:34:51.362564 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:34:51.365555 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:34:51.367085 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:34:51.368320 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:34:51.369447 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:34:51.382931 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:34:51.450278 systemd-fsck[1112]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:34:51.455011 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:34:51.465502 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:34:51.637735 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 13 20:34:51.639076 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:34:51.640750 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:34:51.657797 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:34:51.662603 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:34:51.664497 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:34:51.664556 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:34:51.664591 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:34:51.683793 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:34:51.692522 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1131)
Jan 13 20:34:51.693122 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:34:51.699897 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:34:51.699969 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:34:51.699992 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:34:51.708504 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:34:51.710027 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:34:51.957645 systemd-networkd[1080]: eth0: Gained IPv6LL
Jan 13 20:34:52.049562 initrd-setup-root[1155]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:34:52.067415 initrd-setup-root[1162]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:34:52.077275 initrd-setup-root[1169]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:34:52.098490 initrd-setup-root[1176]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:34:52.413185 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:34:52.422655 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:34:52.434844 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:34:52.446445 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:34:52.447790 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:34:52.478942 ignition[1243]: INFO : Ignition 2.20.0
Jan 13 20:34:52.480835 ignition[1243]: INFO : Stage: mount
Jan 13 20:34:52.480835 ignition[1243]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:34:52.480835 ignition[1243]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:34:52.480835 ignition[1243]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:34:52.496805 ignition[1243]: INFO : PUT result: OK
Jan 13 20:34:52.502854 ignition[1243]: INFO : mount: mount passed
Jan 13 20:34:52.504001 ignition[1243]: INFO : Ignition finished successfully
Jan 13 20:34:52.505841 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:34:52.516640 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:34:52.519422 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:34:52.647751 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:34:52.673500 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1255)
Jan 13 20:34:52.675794 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:34:52.675860 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:34:52.675878 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 13 20:34:52.681499 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 13 20:34:52.684006 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:34:52.710449 ignition[1272]: INFO : Ignition 2.20.0
Jan 13 20:34:52.710449 ignition[1272]: INFO : Stage: files
Jan 13 20:34:52.712959 ignition[1272]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:34:52.712959 ignition[1272]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:34:52.712959 ignition[1272]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:34:52.717707 ignition[1272]: INFO : PUT result: OK
Jan 13 20:34:52.722197 ignition[1272]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:34:52.723951 ignition[1272]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:34:52.723951 ignition[1272]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:34:52.740758 ignition[1272]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:34:52.742427 ignition[1272]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:34:52.744065 ignition[1272]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:34:52.742491 unknown[1272]: wrote ssh authorized keys file for user: core
Jan 13 20:34:52.756778 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:34:52.758960 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 20:34:52.851988 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:34:53.130359 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 20:34:53.130359 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:34:53.135857 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:34:53.135857 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:34:53.144274 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:34:53.144274 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:34:53.144274 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:34:53.144274 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:34:53.155338 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:34:53.155338 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:34:53.155338 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:34:53.155338 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:34:53.155338 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:34:53.155338 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:34:53.155338 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 13 20:34:53.672373 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 20:34:56.075953 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 20:34:56.075953 ignition[1272]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 20:34:56.088653 ignition[1272]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:34:56.091058 ignition[1272]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:34:56.091058 ignition[1272]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 20:34:56.091058 ignition[1272]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:34:56.091058 ignition[1272]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:34:56.091058 ignition[1272]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:34:56.091058 ignition[1272]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:34:56.091058 ignition[1272]: INFO : files: files passed
Jan 13 20:34:56.091058 ignition[1272]: INFO : Ignition finished successfully
Jan 13 20:34:56.106145 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:34:56.117749 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:34:56.123673 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:34:56.127893 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:34:56.128089 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:34:56.169153 initrd-setup-root-after-ignition[1301]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:34:56.173656 initrd-setup-root-after-ignition[1301]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:34:56.175655 initrd-setup-root-after-ignition[1305]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:34:56.178518 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:34:56.181771 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:34:56.185706 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:34:56.229711 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:34:56.229846 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:34:56.235401 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:34:56.235566 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:34:56.241167 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:34:56.248301 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:34:56.274818 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:34:56.284690 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:34:56.297774 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:34:56.301026 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:34:56.304407 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:34:56.309972 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:34:56.310136 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:34:56.324848 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:34:56.330618 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:34:56.339074 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:34:56.344222 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:34:56.344533 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:34:56.361639 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:34:56.361946 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:34:56.370227 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:34:56.375455 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:34:56.375764 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:34:56.387614 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:34:56.387747 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:34:56.394292 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:34:56.403100 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:34:56.403880 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:34:56.407568 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:34:56.411956 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:34:56.412154 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:34:56.415950 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:34:56.416140 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:34:56.425280 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:34:56.425407 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:34:56.448857 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:34:56.450127 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:34:56.451369 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:34:56.475565 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:34:56.477496 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:34:56.477810 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:34:56.480403 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:34:56.480737 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:34:56.501655 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:34:56.502339 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:34:56.514587 ignition[1325]: INFO : Ignition 2.20.0
Jan 13 20:34:56.514587 ignition[1325]: INFO : Stage: umount
Jan 13 20:34:56.514587 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:34:56.514587 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 13 20:34:56.514587 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 13 20:34:56.522596 ignition[1325]: INFO : PUT result: OK
Jan 13 20:34:56.525951 ignition[1325]: INFO : umount: umount passed
Jan 13 20:34:56.527025 ignition[1325]: INFO : Ignition finished successfully
Jan 13 20:34:56.528420 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:34:56.528668 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:34:56.532358 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:34:56.532522 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:34:56.534944 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:34:56.535052 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:34:56.536277 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:34:56.536336 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:34:56.537469 systemd[1]: Stopped target network.target - Network.
Jan 13 20:34:56.538461 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:34:56.538555 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:34:56.541106 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:34:56.542080 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:34:56.554816 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:34:56.559029 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:34:56.567686 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:34:56.567848 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:34:56.568406 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:34:56.588378 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:34:56.588448 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:34:56.590567 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:34:56.590734 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:34:56.594020 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:34:56.594076 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:34:56.596384 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:34:56.602468 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:34:56.604770 systemd-networkd[1080]: eth0: DHCPv6 lease lost
Jan 13 20:34:56.606359 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:34:56.607429 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:34:56.607584 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:34:56.610791 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:34:56.610887 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:34:56.617158 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:34:56.617506 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:34:56.621612 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:34:56.621672 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:34:56.622430 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:34:56.622505 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:34:56.632684 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:34:56.634500 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:34:56.634582 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:34:56.637020 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:34:56.637100 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:34:56.638674 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:34:56.638744 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:34:56.639015 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:34:56.639058 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:34:56.639288 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:34:56.668105 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:34:56.668321 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:34:56.679551 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:34:56.679660 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:34:56.687663 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:34:56.687716 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:34:56.691200 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:34:56.691302 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:34:56.692202 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:34:56.692263 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:34:56.696907 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:34:56.696984 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:34:56.712094 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:34:56.717807 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:34:56.717988 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:34:56.722517 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:34:56.722586 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:34:56.726325 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:34:56.726428 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:34:56.744766 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:34:56.744869 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:34:56.747871 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:34:56.761730 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:34:56.771998 systemd[1]: Switching root.
Jan 13 20:34:56.806530 systemd-journald[179]: Journal stopped
Jan 13 20:34:59.813361 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:34:59.838753 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:34:59.838816 kernel: SELinux: policy capability open_perms=1
Jan 13 20:34:59.838839 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:34:59.838875 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:34:59.838896 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:34:59.838917 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:34:59.839141 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:34:59.839173 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:34:59.839193 kernel: audit: type=1403 audit(1736800497.819:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:34:59.839216 systemd[1]: Successfully loaded SELinux policy in 60.619ms.
Jan 13 20:34:59.839317 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.936ms.
Jan 13 20:34:59.839345 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:34:59.839370 systemd[1]: Detected virtualization amazon.
Jan 13 20:34:59.839389 systemd[1]: Detected architecture x86-64.
Jan 13 20:34:59.839407 systemd[1]: Detected first boot.
Jan 13 20:34:59.839426 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:34:59.839451 zram_generator::config[1368]: No configuration found.
Jan 13 20:34:59.839471 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:34:59.842631 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:34:59.842664 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:34:59.842683 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:34:59.842704 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:34:59.842722 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:34:59.842741 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
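[Editor's note] "Initializing machine ID from VM UUID" means PID 1 derives /etc/machine-id from the hypervisor-provided DMI product UUID instead of generating a random one, so the ID is stable across reprovisioning of the same instance. A rough sketch of that derivation; the sysfs path is the standard DMI location, and systemd's real implementation has additional validation that is omitted here:

```python
from pathlib import Path

def machine_id_from_vm_uuid() -> str:
    # On EC2/KVM the firmware exposes the VM UUID via DMI.
    uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    # machine-id format: the UUID with dashes removed, lower-cased,
    # 32 hex characters (approximation of systemd's behavior).
    return uuid.replace("-", "").lower()

print(machine_id_from_vm_uuid())
```

The ID stays transient (kept on tmpfs) until the "Commit a transient machine-id on disk" step that appears later in this log.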
Jan 13 20:34:59.842767 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:34:59.842785 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:34:59.842804 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:34:59.842822 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:34:59.842858 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:34:59.842878 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:34:59.842897 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:34:59.842916 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:34:59.842935 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:34:59.842959 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:34:59.842977 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:34:59.842996 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 20:34:59.843021 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:34:59.843147 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:34:59.843170 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:34:59.843196 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:34:59.843219 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:34:59.843238 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:34:59.843257 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:34:59.843275 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:34:59.848064 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:34:59.848093 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:34:59.848114 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:34:59.848134 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:34:59.848154 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:34:59.848174 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:34:59.848201 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:34:59.848221 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:34:59.848241 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:34:59.848267 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:34:59.848288 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:34:59.848308 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:34:59.848328 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:34:59.848348 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:34:59.848373 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:34:59.848394 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:34:59.848414 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:34:59.848435 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:34:59.848455 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:34:59.858570 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:34:59.858638 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:34:59.858666 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:34:59.858691 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:34:59.858725 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:34:59.858750 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:34:59.858776 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:34:59.858801 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:34:59.858826 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:34:59.858861 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:34:59.858886 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:34:59.858910 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:34:59.858937 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:34:59.858962 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:34:59.858986 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:34:59.859011 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:34:59.859032 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:34:59.859057 systemd[1]: Stopped verity-setup.service.
Jan 13 20:34:59.859081 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:34:59.859104 kernel: loop: module loaded
Jan 13 20:34:59.859131 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:34:59.859161 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:34:59.859185 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:34:59.859204 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:34:59.859229 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:34:59.859254 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:34:59.859283 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:34:59.859306 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:34:59.859332 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:34:59.859357 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:34:59.859382 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:34:59.859406 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:34:59.859427 kernel: fuse: init (API version 7.39)
Jan 13 20:34:59.859451 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:34:59.864635 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:34:59.864718 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:34:59.864741 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:34:59.864760 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:34:59.864783 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:34:59.864871 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:34:59.864900 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:34:59.864920 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:34:59.864940 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:34:59.864959 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:34:59.864980 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:34:59.864999 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:34:59.865020 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:34:59.865083 systemd-journald[1443]: Collecting audit messages is disabled.
Jan 13 20:34:59.865130 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:34:59.865153 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:34:59.865176 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:34:59.865198 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:34:59.865220 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:34:59.865242 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:34:59.865265 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:34:59.865292 systemd-journald[1443]: Journal started
Jan 13 20:34:59.865334 systemd-journald[1443]: Runtime Journal (/run/log/journal/ec2dc5fd27d88845fbf14439557da460) is 4.8M, max 38.5M, 33.7M free.
Jan 13 20:34:59.894446 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:34:59.894550 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:34:59.155152 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:34:59.188804 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 13 20:34:59.189217 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:34:59.902518 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:34:59.905572 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:34:59.907299 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:34:59.909795 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:34:59.911928 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:34:59.934802 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:34:59.999784 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:35:00.001703 kernel: ACPI: bus type drm_connector registered
Jan 13 20:35:00.012722 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:35:00.024384 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:35:00.026500 kernel: loop0: detected capacity change from 0 to 141000
Jan 13 20:35:00.032728 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:35:00.034761 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:35:00.035024 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:35:00.038546 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:35:00.040940 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:35:00.059418 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:35:00.064902 systemd-journald[1443]: Time spent on flushing to /var/log/journal/ec2dc5fd27d88845fbf14439557da460 is 68.100ms for 965 entries.
Jan 13 20:35:00.064902 systemd-journald[1443]: System Journal (/var/log/journal/ec2dc5fd27d88845fbf14439557da460) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:35:00.143055 systemd-journald[1443]: Received client request to flush runtime journal.
Jan 13 20:35:00.103108 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:35:00.104925 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:35:00.145656 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:35:00.154414 udevadm[1506]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 20:35:00.210507 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:35:00.213817 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:35:00.233735 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:35:00.243511 kernel: loop1: detected capacity change from 0 to 211296
Jan 13 20:35:00.325453 systemd-tmpfiles[1513]: ACLs are not supported, ignoring.
Jan 13 20:35:00.325936 systemd-tmpfiles[1513]: ACLs are not supported, ignoring.
Jan 13 20:35:00.333752 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
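[Editor's note] The journald figures above allow a quick sanity check: 68.100 ms to flush 965 entries is roughly 70 µs per entry, and both quoted journal sizes are internally consistent (cap minus used equals free). The arithmetic:

```python
# Worked check of the journald numbers quoted in the log.
flush_ms, entries = 68.100, 965
print(f"{flush_ms / entries * 1000:.1f} us/entry")  # ~70.6 us per entry

# Runtime journal: 4.8M used of a 38.5M cap -> 33.7M free, as logged.
print(f"{38.5 - 4.8:.1f}M free")    # 33.7M
# System journal: 8.0M used of a 195.6M cap -> 187.6M free, as logged.
print(f"{195.6 - 8.0:.1f}M free")   # 187.6M
```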
Jan 13 20:35:00.408617 kernel: loop2: detected capacity change from 0 to 62848
Jan 13 20:35:00.490533 kernel: loop3: detected capacity change from 0 to 138184
Jan 13 20:35:00.659560 kernel: loop4: detected capacity change from 0 to 141000
Jan 13 20:35:00.703735 kernel: loop5: detected capacity change from 0 to 211296
Jan 13 20:35:00.791462 kernel: loop6: detected capacity change from 0 to 62848
Jan 13 20:35:00.832510 kernel: loop7: detected capacity change from 0 to 138184
Jan 13 20:35:00.867838 (sd-merge)[1519]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 13 20:35:00.868716 (sd-merge)[1519]: Merged extensions into '/usr'.
Jan 13 20:35:00.924089 systemd[1]: Reloading requested from client PID 1476 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:35:00.924108 systemd[1]: Reloading...
Jan 13 20:35:01.045574 zram_generator::config[1545]: No configuration found.
Jan 13 20:35:01.297378 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:35:01.413891 systemd[1]: Reloading finished in 488 ms.
Jan 13 20:35:01.453231 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:35:01.459400 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:35:01.469767 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:35:01.483915 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:35:01.488828 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:35:01.527873 systemd[1]: Reloading requested from client PID 1594 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:35:01.527899 systemd[1]: Reloading...
Jan 13 20:35:01.530108 systemd-tmpfiles[1595]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:35:01.531026 systemd-tmpfiles[1595]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:35:01.532659 systemd-tmpfiles[1595]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:35:01.533628 systemd-tmpfiles[1595]: ACLs are not supported, ignoring.
Jan 13 20:35:01.533830 systemd-tmpfiles[1595]: ACLs are not supported, ignoring.
Jan 13 20:35:01.554594 systemd-tmpfiles[1595]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:35:01.554609 systemd-tmpfiles[1595]: Skipping /boot
Jan 13 20:35:01.584428 systemd-tmpfiles[1595]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:35:01.585657 systemd-tmpfiles[1595]: Skipping /boot
Jan 13 20:35:01.604927 systemd-udevd[1596]: Using default interface naming scheme 'v255'.
Jan 13 20:35:01.659538 zram_generator::config[1619]: No configuration found.
Jan 13 20:35:01.862989 (udev-worker)[1661]: Network interface NamePolicy= disabled on kernel command line.
Jan 13 20:35:01.959383 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
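[Editor's note] The loop-device capacity changes and the (sd-merge) lines are systemd-sysext at work: each extension image (including the kubernetes.raw written by Ignition earlier) is attached as a loop device, and their /usr trees are combined with the host /usr via an overlay. A schematic sketch of the discovery half; the search directories are the standard sysext paths, while the staging path and the mount command at the end are illustrative, not systemd's exact invocation:

```python
from pathlib import Path

SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions() -> list[Path]:
    # systemd-sysext scans these directories for *.raw images (or plain dirs).
    found: list[Path] = []
    for d in map(Path, SEARCH_PATHS):
        if d.is_dir():
            found += sorted(d.glob("*.raw"))
    return found

exts = discover_extensions()
# Each image's /usr tree becomes one read-only overlay layer on top of the
# host /usr; this mount line is a rough illustration of the merged result.
layers = [f"/run/sysext/{e.stem}/usr" for e in exts] + ["/usr"]
print("mount -t overlay overlay -o ro,lowerdir=" + ":".join(layers) + " /usr")
```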
Jan 13 20:35:01.975512 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Jan 13 20:35:01.987148 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 20:35:01.997821 kernel: ACPI: button: Power Button [PWRF]
Jan 13 20:35:01.997917 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Jan 13 20:35:02.016509 kernel: ACPI: button: Sleep Button [SLPF]
Jan 13 20:35:02.044505 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Jan 13 20:35:02.088585 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (1669)
Jan 13 20:35:02.123465 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 13 20:35:02.125054 systemd[1]: Reloading finished in 596 ms.
Jan 13 20:35:02.153203 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:35:02.157296 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:35:02.160504 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 20:35:02.212608 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:35:02.224867 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:35:02.229674 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:35:02.244420 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:35:02.255900 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:35:02.264912 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:35:02.293078 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:35:02.302198 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:35:02.303623 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:35:02.311938 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:35:02.320991 ldconfig[1472]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:35:02.329091 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:35:02.345575 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:35:02.347566 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:35:02.347844 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:35:02.351474 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:35:02.352632 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:35:02.378716 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:35:02.397844 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:35:02.412433 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:35:02.412934 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:35:02.422053 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:35:02.431462 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:35:02.432915 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:35:02.434232 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:35:02.450608 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:35:02.452037 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 20:35:02.468896 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:35:02.469130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:35:02.485427 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:35:02.486011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:35:02.489868 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:35:02.507249 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:35:02.507749 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:35:02.507913 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:35:02.522886 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:35:02.524612 augenrules[1822]: No rules
Jan 13 20:35:02.523132 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:35:02.532340 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:35:02.533613 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:35:02.533936 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:35:02.535172 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:35:02.547121 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 13 20:35:02.553814 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:35:02.560130 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:35:02.560344 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:35:02.566735 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:35:02.592759 lvm[1831]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:35:02.596616 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:35:02.601072 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:35:02.621738 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:35:02.640612 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:35:02.642543 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:35:02.644183 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:35:02.652851 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:35:02.669680 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:35:02.677523 lvm[1844]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:35:02.732553 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:35:02.797114 systemd-resolved[1770]: Positive Trust Anchors:
Jan 13 20:35:02.797140 systemd-resolved[1770]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:35:02.797191 systemd-resolved[1770]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:35:02.804763 systemd-networkd[1763]: lo: Link UP
Jan 13 20:35:02.804773 systemd-networkd[1763]: lo: Gained carrier
Jan 13 20:35:02.807203 systemd-networkd[1763]: Enumeration completed
Jan 13 20:35:02.807884 systemd-networkd[1763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:35:02.808131 systemd-networkd[1763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:35:02.810345 systemd-networkd[1763]: eth0: Link UP
Jan 13 20:35:02.810705 systemd-networkd[1763]: eth0: Gained carrier
Jan 13 20:35:02.810733 systemd-networkd[1763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:35:02.815809 systemd-resolved[1770]: Defaulting to hostname 'linux'.
Jan 13 20:35:02.823618 systemd-networkd[1763]: eth0: DHCPv4 address 172.31.24.144/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 13 20:35:02.864108 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:35:02.864851 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:35:02.865391 systemd[1]: Reached target network.target - Network.
Jan 13 20:35:02.865967 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:35:02.881963 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:35:02.885064 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:35:02.888348 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:35:02.891046 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:35:02.893011 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
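[Editor's note] The DHCPv4 lease 172.31.24.144/20 with gateway 172.31.16.1 can be sanity-checked with Python's ipaddress module: the /20 network is 172.31.16.0 through 172.31.31.255, which contains both the leased address and the gateway:

```python
import ipaddress

iface = ipaddress.ip_interface("172.31.24.144/20")
net = iface.network
print(net)                                          # 172.31.16.0/20
print(net.network_address, net.broadcast_address)   # 172.31.16.0 172.31.31.255
print(ipaddress.ip_address("172.31.16.1") in net)   # True: gateway is in-subnet
print(net.num_addresses)                            # 4096 addresses in a /20
```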
Jan 13 20:35:02.898429 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:35:02.904827 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:35:02.906578 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:35:02.907881 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:35:02.908052 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:35:02.909324 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:35:02.912800 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:35:02.916137 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:35:02.926853 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:35:02.930211 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:35:02.934192 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:35:02.937252 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:35:02.940246 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:35:02.940277 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:35:02.951244 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:35:02.982777 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 13 20:35:02.993130 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:35:03.012631 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:35:03.020974 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:35:03.025185 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:35:03.040136 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:35:03.070993 jq[1857]: false
Jan 13 20:35:03.075553 systemd[1]: Started ntpd.service - Network Time Service.
Jan 13 20:35:03.089494 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:35:03.098165 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 13 20:35:03.101942 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:35:03.107397 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:35:03.120769 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:35:03.122967 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:35:03.123729 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:35:03.128765 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:35:03.152721 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:35:03.167921 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:35:03.172570 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:35:03.217924 update_engine[1868]: I20250113 20:35:03.213773 1868 main.cc:92] Flatcar Update Engine starting
Jan 13 20:35:03.238292 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:35:03.239987 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:35:03.279284 dbus-daemon[1856]: [system] SELinux support is enabled
Jan 13 20:35:03.282794 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:35:03.288633 jq[1869]: true
Jan 13 20:35:03.300512 extend-filesystems[1858]: Found loop4
Jan 13 20:35:03.300512 extend-filesystems[1858]: Found loop5
Jan 13 20:35:03.300512 extend-filesystems[1858]: Found loop6
Jan 13 20:35:03.300512 extend-filesystems[1858]: Found loop7
Jan 13 20:35:03.300512 extend-filesystems[1858]: Found nvme0n1
Jan 13 20:35:03.300512 extend-filesystems[1858]: Found nvme0n1p1
Jan 13 20:35:03.300512 extend-filesystems[1858]: Found nvme0n1p2
Jan 13 20:35:03.300512 extend-filesystems[1858]: Found nvme0n1p3
Jan 13 20:35:03.300512 extend-filesystems[1858]: Found usr
Jan 13 20:35:03.300512 extend-filesystems[1858]: Found nvme0n1p4
Jan 13 20:35:03.300512 extend-filesystems[1858]: Found nvme0n1p6
Jan 13 20:35:03.300512 extend-filesystems[1858]: Found nvme0n1p7
Jan 13 20:35:03.300512 extend-filesystems[1858]: Found nvme0n1p9
Jan 13 20:35:03.300512 extend-filesystems[1858]: Checking size of /dev/nvme0n1p9
Jan 13 20:35:03.306006 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:35:03.318602 update_engine[1868]: I20250113 20:35:03.308421 1868 update_check_scheduler.cc:74] Next update check in 4m56s
Jan 13 20:35:03.306203 dbus-daemon[1856]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1763 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 13 20:35:03.306068 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:35:03.308872 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:35:03.308901 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:35:03.327336 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:35:03.327606 dbus-daemon[1856]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 13 20:35:03.336424 jq[1892]: true
Jan 13 20:35:03.336780 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:35:03.339719 (ntainerd)[1891]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:35:03.340573 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
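[Editor's note] "Next update check in 4m56s" comes from update_engine's randomized poll scheduler: the first check is jittered so a fleet of instances launched together does not hit the update server in lockstep. A toy sketch of that kind of scheduling; the 5-minute base and the jitter width are illustrative assumptions, not Flatcar's actual constants:

```python
import random

def next_check_seconds(base: int = 300, jitter: float = 0.10) -> int:
    # Randomize around the base interval so checks from many instances
    # spread out over time (illustrative constants, not update_engine's).
    lo, hi = int(base * (1 - jitter)), int(base * (1 + jitter))
    return random.randint(lo, hi)

s = next_check_seconds()
print(f"Next update check in {s // 60}m{s % 60}s")  # e.g. "4m56s"
```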
Jan 13 20:35:03.351044 ntpd[1860]: 13 Jan 20:35:03 ntpd[1860]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:25:52 UTC 2025 (1): Starting
Jan 13 20:35:03.351044 ntpd[1860]: 13 Jan 20:35:03 ntpd[1860]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 20:35:03.351044 ntpd[1860]: 13 Jan 20:35:03 ntpd[1860]: ----------------------------------------------------
Jan 13 20:35:03.351044 ntpd[1860]: 13 Jan 20:35:03 ntpd[1860]: ntp-4 is maintained by Network Time Foundation,
Jan 13 20:35:03.351044 ntpd[1860]: 13 Jan 20:35:03 ntpd[1860]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 20:35:03.351044 ntpd[1860]: 13 Jan 20:35:03 ntpd[1860]: corporation. Support and training for ntp-4 are
Jan 13 20:35:03.351044 ntpd[1860]: 13 Jan 20:35:03 ntpd[1860]: available at https://www.nwtime.org/support
Jan 13 20:35:03.351044 ntpd[1860]: 13 Jan 20:35:03 ntpd[1860]: ----------------------------------------------------
Jan 13 20:35:03.348672 ntpd[1860]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:25:52 UTC 2025 (1): Starting
Jan 13 20:35:03.348697 ntpd[1860]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 13 20:35:03.348707 ntpd[1860]: ----------------------------------------------------
Jan 13 20:35:03.348717 ntpd[1860]: ntp-4 is maintained by Network Time Foundation,
Jan 13 20:35:03.348726 ntpd[1860]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 13 20:35:03.348735 ntpd[1860]: corporation. Support and training for ntp-4 are
Jan 13 20:35:03.348745 ntpd[1860]: available at https://www.nwtime.org/support
Jan 13 20:35:03.348754 ntpd[1860]: ----------------------------------------------------
Jan 13 20:35:03.362031 ntpd[1860]: proto: precision = 0.080 usec (-23)
Jan 13 20:35:03.362179 ntpd[1860]: 13 Jan 20:35:03 ntpd[1860]: proto: precision = 0.080 usec (-23)
Jan 13 20:35:03.363836 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 13 20:35:03.367689 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:35:03.371681 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 13 20:35:03.372156 ntpd[1860]: basedate set to 2025-01-01 Jan 13 20:35:03.372180 ntpd[1860]: gps base set to 2025-01-05 (week 2348) Jan 13 20:35:03.390950 ntpd[1860]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:35:03.391038 ntpd[1860]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:35:03.397238 tar[1873]: linux-amd64/helm Jan 13 20:35:03.398656 ntpd[1860]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:35:03.398709 ntpd[1860]: Listen normally on 3 eth0 172.31.24.144:123 Jan 13 20:35:03.398766 ntpd[1860]: Listen normally on 4 lo [::1]:123 Jan 13 20:35:03.398819 ntpd[1860]: bind(21) AF_INET6 fe80::4f2:f8ff:fe94:f393%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:35:03.398841 ntpd[1860]: unable to create socket on eth0 (5) for fe80::4f2:f8ff:fe94:f393%2#123 Jan 13 20:35:03.398856 ntpd[1860]: failed to init interface for address fe80::4f2:f8ff:fe94:f393%2 Jan 13 20:35:03.398890 ntpd[1860]: Listening on routing socket on fd #21 for interface updates Jan 13 20:35:03.405286 extend-filesystems[1858]: Resized partition /dev/nvme0n1p9 Jan 13 20:35:03.415552 ntpd[1860]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:35:03.415600 ntpd[1860]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:35:03.418597 extend-filesystems[1911]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:35:03.424351 systemd-logind[1867]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 20:35:03.424376 systemd-logind[1867]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 13 20:35:03.424399 systemd-logind[1867]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 20:35:03.424849 systemd-logind[1867]: New seat seat0. Jan 13 20:35:03.439215 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:35:03.448714 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
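The bind(21) failure above is a startup race, not a persistent fault: eth0's IPv6 link-local address was still in duplicate-address detection when ntpd scanned interfaces, so the fe80:: socket could not be created yet. ntpd watches the routing socket and rebinds once the address is usable, which happens later in this log. A post-boot check, assuming iproute2 is installed:

# UDP listeners on the NTP port; the fe80:: one appears only after DAD completes.
ss -ulpn | grep ':123'
# Address state on eth0 ("tentative" means DAD is still running):
ip -6 addr show dev eth0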
Jan 13 20:35:03.510500 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (1670) Jan 13 20:35:03.576259 coreos-metadata[1855]: Jan 13 20:35:03.575 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:35:03.590753 coreos-metadata[1855]: Jan 13 20:35:03.587 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 13 20:35:03.590753 coreos-metadata[1855]: Jan 13 20:35:03.590 INFO Fetch successful Jan 13 20:35:03.590753 coreos-metadata[1855]: Jan 13 20:35:03.590 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 13 20:35:03.615397 coreos-metadata[1855]: Jan 13 20:35:03.612 INFO Fetch successful Jan 13 20:35:03.615397 coreos-metadata[1855]: Jan 13 20:35:03.612 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 13 20:35:03.627523 coreos-metadata[1855]: Jan 13 20:35:03.626 INFO Fetch successful Jan 13 20:35:03.627523 coreos-metadata[1855]: Jan 13 20:35:03.626 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 13 20:35:03.628325 coreos-metadata[1855]: Jan 13 20:35:03.627 INFO Fetch successful Jan 13 20:35:03.628325 coreos-metadata[1855]: Jan 13 20:35:03.628 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 13 20:35:03.641405 coreos-metadata[1855]: Jan 13 20:35:03.639 INFO Fetch failed with 404: resource not found Jan 13 20:35:03.641405 coreos-metadata[1855]: Jan 13 20:35:03.640 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 13 20:35:03.646007 coreos-metadata[1855]: Jan 13 20:35:03.644 INFO Fetch successful Jan 13 20:35:03.646007 coreos-metadata[1855]: Jan 13 20:35:03.644 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 13 20:35:03.648824 coreos-metadata[1855]: Jan 13 20:35:03.648 INFO Fetch successful Jan 13 20:35:03.648824 coreos-metadata[1855]: Jan 13 20:35:03.648 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 13 20:35:03.653897 coreos-metadata[1855]: Jan 13 20:35:03.653 INFO Fetch successful Jan 13 20:35:03.653897 coreos-metadata[1855]: Jan 13 20:35:03.653 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 13 20:35:03.659031 coreos-metadata[1855]: Jan 13 20:35:03.654 INFO Fetch successful Jan 13 20:35:03.659031 coreos-metadata[1855]: Jan 13 20:35:03.658 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 13 20:35:03.664367 coreos-metadata[1855]: Jan 13 20:35:03.661 INFO Fetch successful Jan 13 20:35:03.664495 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 13 20:35:03.714816 extend-filesystems[1911]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 13 20:35:03.714816 extend-filesystems[1911]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:35:03.714816 extend-filesystems[1911]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 13 20:35:03.733181 extend-filesystems[1858]: Resized filesystem in /dev/nvme0n1p9 Jan 13 20:35:03.731921 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:35:03.745832 bash[1940]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:35:03.732321 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
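extend-filesystems grew the root filesystem online from 553472 to 1489915 4 KiB blocks, i.e. to the full size of the enlarged /dev/nvme0n1p9. The manual equivalent on a live system is short; a sketch, assuming cloud-utils' growpart is available for the partition step:

growpart /dev/nvme0n1 9    # grow partition 9 into the free space
resize2fs /dev/nvme0n1p9   # ext4 grows online while mounted at /
df -h /                    # confirm the new capacity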
Jan 13 20:35:03.735064 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:35:03.759347 systemd[1]: Starting sshkeys.service... Jan 13 20:35:03.762399 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:35:03.767065 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:35:03.871432 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:35:03.881252 locksmithd[1905]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:35:03.885026 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:35:03.989646 systemd-networkd[1763]: eth0: Gained IPv6LL Jan 13 20:35:03.994810 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:35:03.999098 dbus-daemon[1856]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:35:04.001035 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:35:04.022921 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 13 20:35:04.033450 dbus-daemon[1856]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1904 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 20:35:04.038861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:35:04.050678 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:35:04.054398 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 20:35:04.070143 systemd[1]: Starting polkit.service - Authorization Manager... Jan 13 20:35:04.141287 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:35:04.196118 coreos-metadata[2023]: Jan 13 20:35:04.195 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 13 20:35:04.202106 coreos-metadata[2023]: Jan 13 20:35:04.201 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 13 20:35:04.203966 polkitd[2043]: Started polkitd version 121 Jan 13 20:35:04.208859 coreos-metadata[2023]: Jan 13 20:35:04.208 INFO Fetch successful Jan 13 20:35:04.208859 coreos-metadata[2023]: Jan 13 20:35:04.208 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 20:35:04.209941 coreos-metadata[2023]: Jan 13 20:35:04.209 INFO Fetch successful Jan 13 20:35:04.220587 unknown[2023]: wrote ssh authorized keys file for user: core Jan 13 20:35:04.237463 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:35:04.264263 polkitd[2043]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 20:35:04.264353 polkitd[2043]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 20:35:04.267466 polkitd[2043]: Finished loading, compiling and executing 2 rules Jan 13 20:35:04.270732 dbus-daemon[1856]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 20:35:04.274869 polkitd[2043]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 20:35:04.275184 systemd[1]: Started polkit.service - Authorization Manager. 
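The "Putting http://169.254.169.254/latest/api/token" lines are IMDSv2 in action: coreos-metadata first obtains a session token with an HTTP PUT, then presents it on every metadata GET. The same exchange by hand, using the 2021-01-03 API paths seen above:

TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/2021-01-03/meta-data/instance-id"
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key"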
Jan 13 20:35:04.297554 update-ssh-keys[2063]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:35:04.304349 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:35:04.316890 systemd[1]: Finished sshkeys.service. Jan 13 20:35:04.346380 amazon-ssm-agent[2038]: Initializing new seelog logger Jan 13 20:35:04.348725 systemd-hostnamed[1904]: Hostname set to (transient) Jan 13 20:35:04.350498 amazon-ssm-agent[2038]: New Seelog Logger Creation Complete Jan 13 20:35:04.350498 amazon-ssm-agent[2038]: 2025/01/13 20:35:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:35:04.350498 amazon-ssm-agent[2038]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:35:04.350498 amazon-ssm-agent[2038]: 2025/01/13 20:35:04 processing appconfig overrides Jan 13 20:35:04.353994 systemd-resolved[1770]: System hostname changed to 'ip-172-31-24-144'. Jan 13 20:35:04.357581 amazon-ssm-agent[2038]: 2025/01/13 20:35:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:35:04.357581 amazon-ssm-agent[2038]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:35:04.357728 amazon-ssm-agent[2038]: 2025/01/13 20:35:04 processing appconfig overrides Jan 13 20:35:04.358052 amazon-ssm-agent[2038]: 2025/01/13 20:35:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:35:04.358052 amazon-ssm-agent[2038]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:35:04.358146 amazon-ssm-agent[2038]: 2025/01/13 20:35:04 processing appconfig overrides Jan 13 20:35:04.362039 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO Proxy environment variables: Jan 13 20:35:04.372504 amazon-ssm-agent[2038]: 2025/01/13 20:35:04 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:35:04.372504 amazon-ssm-agent[2038]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 13 20:35:04.372504 amazon-ssm-agent[2038]: 2025/01/13 20:35:04 processing appconfig overrides Jan 13 20:35:04.470861 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO no_proxy: Jan 13 20:35:04.522570 sshd_keygen[1893]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:35:04.531372 containerd[1891]: time="2025-01-13T20:35:04.530851602Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:35:04.561839 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:35:04.573883 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO https_proxy: Jan 13 20:35:04.572383 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:35:04.581077 systemd[1]: Started sshd@0-172.31.24.144:22-139.178.89.65:45626.service - OpenSSH per-connection server daemon (139.178.89.65:45626). Jan 13 20:35:04.635084 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:35:04.635765 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:35:04.644923 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:35:04.674170 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO http_proxy: Jan 13 20:35:04.690831 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:35:04.693153 containerd[1891]: time="2025-01-13T20:35:04.693078497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:35:04.696878 containerd[1891]: time="2025-01-13T20:35:04.695266495Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:35:04.696878 containerd[1891]: time="2025-01-13T20:35:04.695315964Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:35:04.696878 containerd[1891]: time="2025-01-13T20:35:04.695341509Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:35:04.696878 containerd[1891]: time="2025-01-13T20:35:04.695706457Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:35:04.696878 containerd[1891]: time="2025-01-13T20:35:04.695778129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:35:04.696878 containerd[1891]: time="2025-01-13T20:35:04.695890750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:35:04.696878 containerd[1891]: time="2025-01-13T20:35:04.695911875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:35:04.696878 containerd[1891]: time="2025-01-13T20:35:04.696179400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:35:04.696878 containerd[1891]: time="2025-01-13T20:35:04.696202121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:35:04.696878 containerd[1891]: time="2025-01-13T20:35:04.696222990Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:35:04.696878 containerd[1891]: time="2025-01-13T20:35:04.696240308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:35:04.697334 containerd[1891]: time="2025-01-13T20:35:04.696338967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:35:04.699516 containerd[1891]: time="2025-01-13T20:35:04.697556772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:35:04.699516 containerd[1891]: time="2025-01-13T20:35:04.697741098Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:35:04.699516 containerd[1891]: time="2025-01-13T20:35:04.697765553Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 13 20:35:04.699516 containerd[1891]: time="2025-01-13T20:35:04.697886994Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:35:04.699516 containerd[1891]: time="2025-01-13T20:35:04.697954636Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:35:04.701059 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:35:04.709073 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:35:04.711249 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:35:04.725509 containerd[1891]: time="2025-01-13T20:35:04.723697655Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:35:04.725509 containerd[1891]: time="2025-01-13T20:35:04.723777044Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:35:04.725509 containerd[1891]: time="2025-01-13T20:35:04.723805427Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:35:04.725509 containerd[1891]: time="2025-01-13T20:35:04.723831921Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:35:04.725509 containerd[1891]: time="2025-01-13T20:35:04.723854063Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:35:04.725509 containerd[1891]: time="2025-01-13T20:35:04.724149171Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:35:04.725509 containerd[1891]: time="2025-01-13T20:35:04.724456796Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:35:04.725509 containerd[1891]: time="2025-01-13T20:35:04.724660434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:35:04.725509 containerd[1891]: time="2025-01-13T20:35:04.724685401Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:35:04.725509 containerd[1891]: time="2025-01-13T20:35:04.724706625Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:35:04.725509 containerd[1891]: time="2025-01-13T20:35:04.724727025Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:35:04.725509 containerd[1891]: time="2025-01-13T20:35:04.724746635Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:35:04.725509 containerd[1891]: time="2025-01-13T20:35:04.724765121Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:35:04.725509 containerd[1891]: time="2025-01-13T20:35:04.724785728Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:35:04.726049 containerd[1891]: time="2025-01-13T20:35:04.724806939Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:35:04.726049 containerd[1891]: time="2025-01-13T20:35:04.724827995Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 13 20:35:04.726049 containerd[1891]: time="2025-01-13T20:35:04.724848550Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:35:04.726049 containerd[1891]: time="2025-01-13T20:35:04.724868160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:35:04.726049 containerd[1891]: time="2025-01-13T20:35:04.724896537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726049 containerd[1891]: time="2025-01-13T20:35:04.724919087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726049 containerd[1891]: time="2025-01-13T20:35:04.724937870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726049 containerd[1891]: time="2025-01-13T20:35:04.724957236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726049 containerd[1891]: time="2025-01-13T20:35:04.724975321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726049 containerd[1891]: time="2025-01-13T20:35:04.724995345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726049 containerd[1891]: time="2025-01-13T20:35:04.725034735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726049 containerd[1891]: time="2025-01-13T20:35:04.725054981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726049 containerd[1891]: time="2025-01-13T20:35:04.725074568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726049 containerd[1891]: time="2025-01-13T20:35:04.725094979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726522 containerd[1891]: time="2025-01-13T20:35:04.725113190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726522 containerd[1891]: time="2025-01-13T20:35:04.725130045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726522 containerd[1891]: time="2025-01-13T20:35:04.725147676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726522 containerd[1891]: time="2025-01-13T20:35:04.725168561Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:35:04.726522 containerd[1891]: time="2025-01-13T20:35:04.725200981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726522 containerd[1891]: time="2025-01-13T20:35:04.725220518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726522 containerd[1891]: time="2025-01-13T20:35:04.725236805Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Jan 13 20:35:04.726522 containerd[1891]: time="2025-01-13T20:35:04.725301452Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:35:04.726522 containerd[1891]: time="2025-01-13T20:35:04.725327078Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:35:04.726522 containerd[1891]: time="2025-01-13T20:35:04.725342212Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:35:04.726522 containerd[1891]: time="2025-01-13T20:35:04.725363446Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:35:04.726522 containerd[1891]: time="2025-01-13T20:35:04.725378490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.726522 containerd[1891]: time="2025-01-13T20:35:04.725397422Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:35:04.726522 containerd[1891]: time="2025-01-13T20:35:04.725411881Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:35:04.729002 containerd[1891]: time="2025-01-13T20:35:04.725427772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:35:04.735181 containerd[1891]: time="2025-01-13T20:35:04.731141826Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:35:04.735181 containerd[1891]: time="2025-01-13T20:35:04.731268201Z" level=info msg="Connect containerd service" Jan 13 20:35:04.735181 containerd[1891]: time="2025-01-13T20:35:04.731321890Z" level=info msg="using legacy CRI server" Jan 13 20:35:04.735181 containerd[1891]: time="2025-01-13T20:35:04.731333135Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:35:04.735181 containerd[1891]: time="2025-01-13T20:35:04.731666671Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:35:04.736283 containerd[1891]: time="2025-01-13T20:35:04.736111744Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:35:04.736283 containerd[1891]: time="2025-01-13T20:35:04.736234438Z" level=info msg="Start subscribing containerd event" Jan 13 20:35:04.736489 containerd[1891]: time="2025-01-13T20:35:04.736438468Z" level=info msg="Start recovering state" Jan 13 20:35:04.736625 containerd[1891]: time="2025-01-13T20:35:04.736603437Z" level=info msg="Start event monitor" Jan 13 20:35:04.736768 containerd[1891]: time="2025-01-13T20:35:04.736754182Z" level=info msg="Start snapshots syncer" Jan 13 20:35:04.736847 containerd[1891]: time="2025-01-13T20:35:04.736835438Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:35:04.736920 containerd[1891]: time="2025-01-13T20:35:04.736909817Z" level=info msg="Start streaming server" Jan 13 20:35:04.737666 containerd[1891]: time="2025-01-13T20:35:04.737621525Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:35:04.738497 containerd[1891]: time="2025-01-13T20:35:04.737852010Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:35:04.746013 systemd[1]: Started containerd.service - containerd container runtime. 
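Two details of the dumped CRI config matter for what follows: runc runs with SystemdCgroup:true, so the kubelet must use the matching systemd cgroup driver, and /etc/cni/net.d is still empty, which makes the "failed to load cni during init" error expected until a network plugin installs its conflist. Verifying the merged configuration on a containerd 1.7 host:

# Active (merged) configuration; confirm the runc cgroup driver:
containerd config dump | grep -B2 -A2 'SystemdCgroup'
# The CNI error clears once a plugin writes a config here:
ls /etc/cni/net.d/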
Jan 13 20:35:04.748187 containerd[1891]: time="2025-01-13T20:35:04.746156803Z" level=info msg="containerd successfully booted in 0.218567s" Jan 13 20:35:04.771907 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO Checking if agent identity type OnPrem can be assumed Jan 13 20:35:04.871634 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO Checking if agent identity type EC2 can be assumed Jan 13 20:35:04.886083 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO Agent will take identity from EC2 Jan 13 20:35:04.887638 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:35:04.887892 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:35:04.887892 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 13 20:35:04.887892 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 13 20:35:04.887892 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jan 13 20:35:04.887892 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO [amazon-ssm-agent] Starting Core Agent Jan 13 20:35:04.887892 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 13 20:35:04.887892 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO [Registrar] Starting registrar module Jan 13 20:35:04.887892 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 13 20:35:04.887892 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO [EC2Identity] EC2 registration was successful. Jan 13 20:35:04.887892 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO [CredentialRefresher] credentialRefresher has started Jan 13 20:35:04.887892 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO [CredentialRefresher] Starting credentials refresher loop Jan 13 20:35:04.887892 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 13 20:35:04.913313 sshd[2085]: Accepted publickey for core from 139.178.89.65 port 45626 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:35:04.916743 sshd-session[2085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:04.934233 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:35:04.945736 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:35:04.965683 systemd-logind[1867]: New session 1 of user core. Jan 13 20:35:04.969517 amazon-ssm-agent[2038]: 2025-01-13 20:35:04 INFO [CredentialRefresher] Next credential rotation will be in 30.841624995016666 minutes Jan 13 20:35:04.983490 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:35:04.997001 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:35:05.010044 (systemd)[2100]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:35:05.169312 tar[1873]: linux-amd64/LICENSE Jan 13 20:35:05.172084 tar[1873]: linux-amd64/README.md Jan 13 20:35:05.191565 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:35:05.208167 systemd[2100]: Queued start job for default target default.target. Jan 13 20:35:05.224776 systemd[2100]: Created slice app.slice - User Application Slice. 
Jan 13 20:35:05.224821 systemd[2100]: Reached target paths.target - Paths. Jan 13 20:35:05.224844 systemd[2100]: Reached target timers.target - Timers. Jan 13 20:35:05.226678 systemd[2100]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:35:05.245041 systemd[2100]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:35:05.245188 systemd[2100]: Reached target sockets.target - Sockets. Jan 13 20:35:05.245209 systemd[2100]: Reached target basic.target - Basic System. Jan 13 20:35:05.245260 systemd[2100]: Reached target default.target - Main User Target. Jan 13 20:35:05.245298 systemd[2100]: Startup finished in 224ms. Jan 13 20:35:05.245554 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:35:05.253995 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:35:05.407860 systemd[1]: Started sshd@1-172.31.24.144:22-139.178.89.65:45638.service - OpenSSH per-connection server daemon (139.178.89.65:45638). Jan 13 20:35:05.573855 sshd[2114]: Accepted publickey for core from 139.178.89.65 port 45638 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:35:05.575105 sshd-session[2114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:05.581147 systemd-logind[1867]: New session 2 of user core. Jan 13 20:35:05.584650 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:35:05.708413 sshd[2116]: Connection closed by 139.178.89.65 port 45638 Jan 13 20:35:05.710201 sshd-session[2114]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:05.713991 systemd[1]: sshd@1-172.31.24.144:22-139.178.89.65:45638.service: Deactivated successfully. Jan 13 20:35:05.716272 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:35:05.718956 systemd-logind[1867]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:35:05.720251 systemd-logind[1867]: Removed session 2. Jan 13 20:35:05.747832 systemd[1]: Started sshd@2-172.31.24.144:22-139.178.89.65:45654.service - OpenSSH per-connection server daemon (139.178.89.65:45654). Jan 13 20:35:05.912565 amazon-ssm-agent[2038]: 2025-01-13 20:35:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 13 20:35:05.937989 sshd[2121]: Accepted publickey for core from 139.178.89.65 port 45654 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:35:05.939737 sshd-session[2121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:05.952623 systemd-logind[1867]: New session 3 of user core. Jan 13 20:35:05.956126 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:35:06.014091 amazon-ssm-agent[2038]: 2025-01-13 20:35:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2124) started Jan 13 20:35:06.092527 sshd[2125]: Connection closed by 139.178.89.65 port 45654 Jan 13 20:35:06.093186 sshd-session[2121]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:06.114883 amazon-ssm-agent[2038]: 2025-01-13 20:35:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 13 20:35:06.114546 systemd[1]: sshd@2-172.31.24.144:22-139.178.89.65:45654.service: Deactivated successfully. Jan 13 20:35:06.126908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:35:06.130082 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:35:06.136908 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:35:06.137310 systemd-logind[1867]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:35:06.140080 systemd[1]: Startup finished in 744ms (kernel) + 11.070s (initrd) + 8.378s (userspace) = 20.193s. Jan 13 20:35:06.144136 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:35:06.145164 systemd-logind[1867]: Removed session 3. Jan 13 20:35:06.350708 ntpd[1860]: Listen normally on 6 eth0 [fe80::4f2:f8ff:fe94:f393%2]:123 Jan 13 20:35:06.351264 ntpd[1860]: 13 Jan 20:35:06 ntpd[1860]: Listen normally on 6 eth0 [fe80::4f2:f8ff:fe94:f393%2]:123 Jan 13 20:35:06.394547 agetty[2093]: failed to open credentials directory Jan 13 20:35:06.394548 agetty[2094]: failed to open credentials directory Jan 13 20:35:07.449437 kubelet[2138]: E0113 20:35:07.449350 2138 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:35:07.452317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:35:07.452553 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:35:07.453067 systemd[1]: kubelet.service: Consumed 1.030s CPU time. Jan 13 20:35:11.597592 systemd-resolved[1770]: Clock change detected. Flushing caches. Jan 13 20:35:17.373834 systemd[1]: Started sshd@3-172.31.24.144:22-139.178.89.65:36690.service - OpenSSH per-connection server daemon (139.178.89.65:36690). Jan 13 20:35:17.535533 sshd[2156]: Accepted publickey for core from 139.178.89.65 port 36690 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:35:17.537039 sshd-session[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:17.542864 systemd-logind[1867]: New session 4 of user core. Jan 13 20:35:17.555802 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:35:17.694948 sshd[2158]: Connection closed by 139.178.89.65 port 36690 Jan 13 20:35:17.696506 sshd-session[2156]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:17.701781 systemd[1]: sshd@3-172.31.24.144:22-139.178.89.65:36690.service: Deactivated successfully. Jan 13 20:35:17.705112 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:35:17.707378 systemd-logind[1867]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:35:17.708938 systemd-logind[1867]: Removed session 4. Jan 13 20:35:17.743299 systemd[1]: Started sshd@4-172.31.24.144:22-139.178.89.65:36692.service - OpenSSH per-connection server daemon (139.178.89.65:36692). Jan 13 20:35:17.964034 sshd[2163]: Accepted publickey for core from 139.178.89.65 port 36692 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:35:17.966969 sshd-session[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:17.985378 systemd-logind[1867]: New session 5 of user core. Jan 13 20:35:17.998088 systemd[1]: Started session-5.scope - Session 5 of User core. 
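systemd's own accounting above puts boot at 744 ms kernel + 11.070 s initrd + 8.378 s userspace; the standard tooling breaks that down per unit if the userspace share ever needs explaining:

systemd-analyze                                   # same three-way split as the log line
systemd-analyze blame                             # per-unit initialization time, slowest first
systemd-analyze critical-chain multi-user.target  # the dependency path that gated boot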
Jan 13 20:35:18.121189 sshd[2165]: Connection closed by 139.178.89.65 port 36692 Jan 13 20:35:18.121917 sshd-session[2163]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:18.130448 systemd[1]: sshd@4-172.31.24.144:22-139.178.89.65:36692.service: Deactivated successfully. Jan 13 20:35:18.134337 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:35:18.136919 systemd-logind[1867]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:35:18.139000 systemd-logind[1867]: Removed session 5. Jan 13 20:35:18.164239 systemd[1]: Started sshd@5-172.31.24.144:22-139.178.89.65:36708.service - OpenSSH per-connection server daemon (139.178.89.65:36708). Jan 13 20:35:18.335529 sshd[2170]: Accepted publickey for core from 139.178.89.65 port 36708 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:35:18.336549 sshd-session[2170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:18.344752 systemd-logind[1867]: New session 6 of user core. Jan 13 20:35:18.355035 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:35:18.474378 sshd[2172]: Connection closed by 139.178.89.65 port 36708 Jan 13 20:35:18.475120 sshd-session[2170]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:18.479596 systemd[1]: sshd@5-172.31.24.144:22-139.178.89.65:36708.service: Deactivated successfully. Jan 13 20:35:18.481741 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:35:18.483332 systemd-logind[1867]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:35:18.485129 systemd-logind[1867]: Removed session 6. Jan 13 20:35:18.510278 systemd[1]: Started sshd@6-172.31.24.144:22-139.178.89.65:36716.service - OpenSSH per-connection server daemon (139.178.89.65:36716). Jan 13 20:35:18.688869 sshd[2177]: Accepted publickey for core from 139.178.89.65 port 36716 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:35:18.690533 sshd-session[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:35:18.695685 systemd-logind[1867]: New session 7 of user core. Jan 13 20:35:18.706010 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:35:18.708077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:35:18.716095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:35:18.834218 sudo[2183]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:35:18.835169 sudo[2183]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:35:18.966055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:35:18.966307 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:35:19.059848 kubelet[2197]: E0113 20:35:19.059117 2197 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:35:19.064244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:35:19.064448 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
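The kubelet crash loop above is expected on a node that has not been joined to a cluster yet: /var/lib/kubelet/config.yaml is written by kubeadm init or kubeadm join, not shipped in the image. For illustration only, a minimal hand-written KubeletConfiguration that would satisfy the config-file load (the kubelet would still need the kubeconfig and certificates kubeadm provides):

cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # matches containerd's SystemdCgroup=true seen earlier
EOF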
Jan 13 20:35:19.301177 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:35:19.304004 (dockerd)[2215]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:35:19.693855 dockerd[2215]: time="2025-01-13T20:35:19.693708535Z" level=info msg="Starting up" Jan 13 20:35:19.821671 systemd[1]: var-lib-docker-metacopy\x2dcheck2471339893-merged.mount: Deactivated successfully. Jan 13 20:35:19.844533 dockerd[2215]: time="2025-01-13T20:35:19.844486642Z" level=info msg="Loading containers: start." Jan 13 20:35:20.035090 kernel: Initializing XFRM netlink socket Jan 13 20:35:20.074227 (udev-worker)[2238]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:35:20.146910 systemd-networkd[1763]: docker0: Link UP Jan 13 20:35:20.177437 dockerd[2215]: time="2025-01-13T20:35:20.177392170Z" level=info msg="Loading containers: done." Jan 13 20:35:20.198678 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck932336617-merged.mount: Deactivated successfully. Jan 13 20:35:20.205855 dockerd[2215]: time="2025-01-13T20:35:20.205781675Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:35:20.206061 dockerd[2215]: time="2025-01-13T20:35:20.205931022Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 13 20:35:20.206110 dockerd[2215]: time="2025-01-13T20:35:20.206085124Z" level=info msg="Daemon has completed initialization" Jan 13 20:35:20.246932 dockerd[2215]: time="2025-01-13T20:35:20.246230956Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:35:20.246487 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:35:21.684835 containerd[1891]: time="2025-01-13T20:35:21.684390945Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:35:22.321305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1489123199.mount: Deactivated successfully. 
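Docker's "Not using native diff for overlay2" warning concerns image build/commit performance only; the daemon still selected the overlay2 storage driver, and running containers are unaffected. Confirming what was negotiated, assuming the docker CLI can reach this daemon:

docker info --format '{{.Driver}}'                # expect: overlay2
docker info 2>/dev/null | grep -i 'native overlay diff'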
Jan 13 20:35:24.953080 containerd[1891]: time="2025-01-13T20:35:24.953023622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:24.954378 containerd[1891]: time="2025-01-13T20:35:24.954311112Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Jan 13 20:35:24.956299 containerd[1891]: time="2025-01-13T20:35:24.955468845Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:24.959627 containerd[1891]: time="2025-01-13T20:35:24.959571224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:24.961682 containerd[1891]: time="2025-01-13T20:35:24.960860900Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 3.276410718s" Jan 13 20:35:24.961682 containerd[1891]: time="2025-01-13T20:35:24.960906080Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Jan 13 20:35:24.988328 containerd[1891]: time="2025-01-13T20:35:24.988285847Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:35:28.473310 containerd[1891]: time="2025-01-13T20:35:28.473261354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:28.474649 containerd[1891]: time="2025-01-13T20:35:28.474601377Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Jan 13 20:35:28.476897 containerd[1891]: time="2025-01-13T20:35:28.476769201Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:28.481759 containerd[1891]: time="2025-01-13T20:35:28.480436262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:28.481759 containerd[1891]: time="2025-01-13T20:35:28.481600427Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 3.493273956s" Jan 13 20:35:28.481759 containerd[1891]: time="2025-01-13T20:35:28.481639241Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Jan 13 
20:35:28.511568 containerd[1891]: time="2025-01-13T20:35:28.511532236Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:35:29.314795 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:35:29.320089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:35:29.753046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:35:29.755399 (kubelet)[2489]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:35:29.846099 kubelet[2489]: E0113 20:35:29.846045 2489 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:35:29.850581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:35:29.850781 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:35:30.224012 containerd[1891]: time="2025-01-13T20:35:30.223563655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:30.225388 containerd[1891]: time="2025-01-13T20:35:30.225323337Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Jan 13 20:35:30.226959 containerd[1891]: time="2025-01-13T20:35:30.226904772Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:30.230404 containerd[1891]: time="2025-01-13T20:35:30.230346295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:30.231616 containerd[1891]: time="2025-01-13T20:35:30.231448463Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.719875863s" Jan 13 20:35:30.231616 containerd[1891]: time="2025-01-13T20:35:30.231487139Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Jan 13 20:35:30.264481 containerd[1891]: time="2025-01-13T20:35:30.264440184Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:35:31.638601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963577413.mount: Deactivated successfully. 
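The PullImage/ImageCreate pairs come from containerd's CRI plugin, so the images land in the k8s.io namespace rather than ctr's default. Pulling or listing them out-of-band, assuming the stock ctr binary that ships with containerd 1.7:

ctr -n k8s.io images pull registry.k8s.io/kube-scheduler:v1.29.12
ctr -n k8s.io images ls -q | grep kube-scheduler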
Jan 13 20:35:32.369576 containerd[1891]: time="2025-01-13T20:35:32.369520566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:32.400831 containerd[1891]: time="2025-01-13T20:35:32.400733041Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Jan 13 20:35:32.436691 containerd[1891]: time="2025-01-13T20:35:32.436589627Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:32.462739 containerd[1891]: time="2025-01-13T20:35:32.462638004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:32.463873 containerd[1891]: time="2025-01-13T20:35:32.463661783Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.199179925s" Jan 13 20:35:32.463873 containerd[1891]: time="2025-01-13T20:35:32.463707641Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Jan 13 20:35:32.531949 containerd[1891]: time="2025-01-13T20:35:32.531889069Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:35:33.272544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount86513734.mount: Deactivated successfully. 
Jan 13 20:35:35.048981 containerd[1891]: time="2025-01-13T20:35:35.048920604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:35.059093 containerd[1891]: time="2025-01-13T20:35:35.059011344Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 13 20:35:35.064586 containerd[1891]: time="2025-01-13T20:35:35.064511763Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:35.072609 containerd[1891]: time="2025-01-13T20:35:35.072535794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:35.074174 containerd[1891]: time="2025-01-13T20:35:35.073996958Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.542060846s" Jan 13 20:35:35.074174 containerd[1891]: time="2025-01-13T20:35:35.074047128Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 13 20:35:35.109537 containerd[1891]: time="2025-01-13T20:35:35.108801684Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:35:35.605749 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 13 20:35:35.982080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3581072182.mount: Deactivated successfully. 
Jan 13 20:35:35.994415 containerd[1891]: time="2025-01-13T20:35:35.994372529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:35.996618 containerd[1891]: time="2025-01-13T20:35:35.996551681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 13 20:35:35.998741 containerd[1891]: time="2025-01-13T20:35:35.997912989Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:36.001929 containerd[1891]: time="2025-01-13T20:35:36.001881276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:36.004182 containerd[1891]: time="2025-01-13T20:35:36.003888389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 894.995774ms" Jan 13 20:35:36.004182 containerd[1891]: time="2025-01-13T20:35:36.003961651Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 13 20:35:36.058917 containerd[1891]: time="2025-01-13T20:35:36.058864228Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:35:36.767448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount766035972.mount: Deactivated successfully. Jan 13 20:35:40.101212 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:35:40.109006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
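[Editorial note: the pull records above report both a byte count and a wall-clock duration for each image, so effective pull throughput can be estimated directly from the log. A minimal Go sketch, with sizes and durations copied verbatim from the containerd messages; this is an illustration, not part of the boot log.]

```go
// Back-of-the-envelope pull throughput from the containerd records above.
package main

import (
	"fmt"
	"time"
)

func main() {
	pulls := []struct {
		image string
		bytes float64
		took  time.Duration
	}{
		{"registry.k8s.io/kube-proxy:v1.29.12", 28618977, 2199179925 * time.Nanosecond},
		{"registry.k8s.io/coredns/coredns:v1.11.1", 18182961, 2542060846 * time.Nanosecond},
		{"registry.k8s.io/pause:3.9", 321520, 894995774 * time.Nanosecond},
	}
	for _, p := range pulls {
		mibps := p.bytes / p.took.Seconds() / (1 << 20) // bytes/s -> MiB/s
		fmt.Printf("%-42s %9.0f B in %-14s = %5.1f MiB/s\n", p.image, p.bytes, p.took, mibps)
	}
}
```

This works out to roughly 12.4 MiB/s for kube-proxy and 6.8 MiB/s for CoreDNS; the tiny pause image is dominated by registry round-trips rather than transfer time.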
Jan 13 20:35:40.483665 containerd[1891]: time="2025-01-13T20:35:40.483527995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:40.493080 containerd[1891]: time="2025-01-13T20:35:40.493008693Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 13 20:35:40.498835 containerd[1891]: time="2025-01-13T20:35:40.497705150Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:40.507657 containerd[1891]: time="2025-01-13T20:35:40.507604395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:40.510858 containerd[1891]: time="2025-01-13T20:35:40.510648360Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.451668822s" Jan 13 20:35:40.510858 containerd[1891]: time="2025-01-13T20:35:40.510703983Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 13 20:35:40.675110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:35:40.675542 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:35:40.789416 kubelet[2638]: E0113 20:35:40.789276 2638 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:35:40.793459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:35:40.794196 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:35:44.697011 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:35:44.706788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:35:44.755203 systemd[1]: Reloading requested from client PID 2703 ('systemctl') (unit session-7.scope)... Jan 13 20:35:44.755536 systemd[1]: Reloading... Jan 13 20:35:44.999856 zram_generator::config[2755]: No configuration found. Jan 13 20:35:45.116554 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:35:45.206131 systemd[1]: Reloading finished in 449 ms. Jan 13 20:35:45.274980 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:35:45.275099 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:35:45.275418 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:35:45.289371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
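[Editorial note: the kubelet exit above (status=1, restart counter at 3) is the expected crash-loop on a node that has not yet been initialized: /var/lib/kubelet/config.yaml is typically written by kubeadm during init/join, and until it exists the kubelet fails its config load and systemd restarts the unit per its Restart= policy. A minimal sketch of the failing preflight, assuming nothing beyond the path in the log; the real kubelet additionally parses, validates, and defaults the file.]

```go
// Sketch of the config-file check that fails in the log above.
package main

import (
	"fmt"
	"os"
)

const kubeletConfig = "/var/lib/kubelet/config.yaml"

func main() {
	data, err := os.ReadFile(kubeletConfig)
	if err != nil {
		// Mirrors the logged failure mode: "open ...: no such file or directory",
		// after which systemd schedules a restart of the unit.
		fmt.Fprintf(os.Stderr, "failed to load kubelet config file %q: %v\n", kubeletConfig, err)
		os.Exit(1)
	}
	fmt.Printf("loaded %d bytes of kubelet configuration\n", len(data))
}
```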
Jan 13 20:35:45.690427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:35:45.698527 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:35:45.785464 kubelet[2803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:35:45.785464 kubelet[2803]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:35:45.785464 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:35:45.785464 kubelet[2803]: I0113 20:35:45.784884 2803 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:35:46.156822 kubelet[2803]: I0113 20:35:46.156772 2803 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:35:46.156822 kubelet[2803]: I0113 20:35:46.156806 2803 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:35:46.157114 kubelet[2803]: I0113 20:35:46.157092 2803 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:35:46.191091 kubelet[2803]: I0113 20:35:46.190740 2803 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:35:46.191091 kubelet[2803]: E0113 20:35:46.191047 2803 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:46.211711 kubelet[2803]: I0113 20:35:46.211256 2803 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:35:46.211711 kubelet[2803]: I0113 20:35:46.211544 2803 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:35:46.212982 kubelet[2803]: I0113 20:35:46.212943 2803 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:35:46.212982 kubelet[2803]: I0113 20:35:46.212982 2803 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:35:46.213174 kubelet[2803]: I0113 20:35:46.213001 2803 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:35:46.213174 kubelet[2803]: I0113 20:35:46.213129 2803 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:35:46.213262 kubelet[2803]: I0113 20:35:46.213250 2803 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:35:46.213313 kubelet[2803]: I0113 20:35:46.213269 2803 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:35:46.213349 kubelet[2803]: I0113 20:35:46.213313 2803 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:35:46.213349 kubelet[2803]: I0113 20:35:46.213334 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:35:46.221781 kubelet[2803]: W0113 20:35:46.220765 2803 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.24.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-144&limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:46.221781 kubelet[2803]: E0113 20:35:46.220860 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-144&limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:46.221781 kubelet[2803]: I0113 20:35:46.220999 2803 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:35:46.221781 kubelet[2803]: W0113 20:35:46.221688 2803 
reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.24.144:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:46.221781 kubelet[2803]: E0113 20:35:46.221726 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.144:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:46.227379 kubelet[2803]: I0113 20:35:46.227332 2803 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:35:46.229091 kubelet[2803]: W0113 20:35:46.229057 2803 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:35:46.232830 kubelet[2803]: I0113 20:35:46.231024 2803 server.go:1256] "Started kubelet" Jan 13 20:35:46.232830 kubelet[2803]: I0113 20:35:46.231126 2803 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:35:46.232830 kubelet[2803]: I0113 20:35:46.231903 2803 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:35:46.234750 kubelet[2803]: I0113 20:35:46.234714 2803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:35:46.237558 kubelet[2803]: I0113 20:35:46.237519 2803 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:35:46.237736 kubelet[2803]: I0113 20:35:46.237714 2803 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:35:46.241006 kubelet[2803]: E0113 20:35:46.239743 2803 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.144:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.144:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-144.181a5ae513cd84da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-144,UID:ip-172-31-24-144,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-144,},FirstTimestamp:2025-01-13 20:35:46.230994138 +0000 UTC m=+0.527095398,LastTimestamp:2025-01-13 20:35:46.230994138 +0000 UTC m=+0.527095398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-144,}" Jan 13 20:35:46.247006 kubelet[2803]: E0113 20:35:46.246979 2803 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-24-144\" not found" Jan 13 20:35:46.247142 kubelet[2803]: I0113 20:35:46.247036 2803 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:35:46.247142 kubelet[2803]: I0113 20:35:46.247138 2803 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:35:46.247226 kubelet[2803]: I0113 20:35:46.247200 2803 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:35:46.247954 kubelet[2803]: W0113 20:35:46.247646 2803 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.24.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 
20:35:46.247954 kubelet[2803]: E0113 20:35:46.247704 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:46.249124 kubelet[2803]: E0113 20:35:46.249098 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-144?timeout=10s\": dial tcp 172.31.24.144:6443: connect: connection refused" interval="200ms" Jan 13 20:35:46.254459 kubelet[2803]: E0113 20:35:46.254420 2803 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:35:46.260835 kubelet[2803]: I0113 20:35:46.258879 2803 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:35:46.260835 kubelet[2803]: I0113 20:35:46.258912 2803 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:35:46.260835 kubelet[2803]: I0113 20:35:46.259028 2803 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:35:46.279875 kubelet[2803]: I0113 20:35:46.279842 2803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:35:46.282631 kubelet[2803]: I0113 20:35:46.282552 2803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:35:46.282631 kubelet[2803]: I0113 20:35:46.282606 2803 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:35:46.282799 kubelet[2803]: I0113 20:35:46.282654 2803 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:35:46.282799 kubelet[2803]: E0113 20:35:46.282711 2803 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:35:46.287755 kubelet[2803]: W0113 20:35:46.287676 2803 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.24.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:46.287755 kubelet[2803]: E0113 20:35:46.287754 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:46.290018 kubelet[2803]: I0113 20:35:46.289990 2803 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:35:46.290018 kubelet[2803]: I0113 20:35:46.290012 2803 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:35:46.290244 kubelet[2803]: I0113 20:35:46.290033 2803 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:35:46.294022 kubelet[2803]: I0113 20:35:46.293979 2803 policy_none.go:49] "None policy: Start" Jan 13 20:35:46.294887 kubelet[2803]: I0113 20:35:46.294864 2803 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:35:46.294887 kubelet[2803]: I0113 20:35:46.294892 2803 state_mem.go:35] "Initializing 
new in-memory state store" Jan 13 20:35:46.310529 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:35:46.319098 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:35:46.322784 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:35:46.327694 kubelet[2803]: I0113 20:35:46.327664 2803 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:35:46.328356 kubelet[2803]: I0113 20:35:46.328333 2803 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:35:46.331577 kubelet[2803]: E0113 20:35:46.331435 2803 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-144\" not found" Jan 13 20:35:46.349711 kubelet[2803]: I0113 20:35:46.349242 2803 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-144" Jan 13 20:35:46.349711 kubelet[2803]: E0113 20:35:46.349630 2803 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.144:6443/api/v1/nodes\": dial tcp 172.31.24.144:6443: connect: connection refused" node="ip-172-31-24-144" Jan 13 20:35:46.383246 kubelet[2803]: I0113 20:35:46.383201 2803 topology_manager.go:215] "Topology Admit Handler" podUID="8a08e657e20e69bf2b55c0049bbc6525" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-144" Jan 13 20:35:46.385314 kubelet[2803]: I0113 20:35:46.384960 2803 topology_manager.go:215] "Topology Admit Handler" podUID="4c20b61328533d06a55d0036bc99d051" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-144" Jan 13 20:35:46.387465 kubelet[2803]: I0113 20:35:46.386987 2803 topology_manager.go:215] "Topology Admit Handler" podUID="dd64092ecb1f38c67e2490d06a02b197" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-144" Jan 13 20:35:46.396044 systemd[1]: Created slice kubepods-burstable-pod8a08e657e20e69bf2b55c0049bbc6525.slice - libcontainer container kubepods-burstable-pod8a08e657e20e69bf2b55c0049bbc6525.slice. Jan 13 20:35:46.411783 systemd[1]: Created slice kubepods-burstable-pod4c20b61328533d06a55d0036bc99d051.slice - libcontainer container kubepods-burstable-pod4c20b61328533d06a55d0036bc99d051.slice. Jan 13 20:35:46.422490 systemd[1]: Created slice kubepods-burstable-poddd64092ecb1f38c67e2490d06a02b197.slice - libcontainer container kubepods-burstable-poddd64092ecb1f38c67e2490d06a02b197.slice. 
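[Editorial note: with CgroupDriver "systemd" (see the nodeConfig dump above), the kubelet places pods in slices under kubepods.slice, one tier per QoS class, and embeds the pod UID in the per-pod slice name with "-" mapped to "_" because systemd treats "-" as a hierarchy separator inside slice names. A sketch of that naming convention, derived from the slice names visible in this log; this is illustrative, not kubelet code.]

```go
// Reproduce the pod slice names created in the records above.
package main

import (
	"fmt"
	"strings"
)

// podSlice builds the systemd slice name for a pod given its QoS class
// ("burstable", "besteffort", or "guaranteed") and its UID.
func podSlice(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	if qos == "guaranteed" {
		// Guaranteed pods sit directly under kubepods.slice, with no QoS tier.
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	// UIDs taken from the log: a static control-plane pod and kube-proxy.
	fmt.Println(podSlice("burstable", "8a08e657e20e69bf2b55c0049bbc6525"))
	fmt.Println(podSlice("besteffort", "28dbdf09-2fcd-4a5e-928e-dd8938a3ea51"))
}
```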
Jan 13 20:35:46.450300 kubelet[2803]: E0113 20:35:46.450259 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-144?timeout=10s\": dial tcp 172.31.24.144:6443: connect: connection refused" interval="400ms" Jan 13 20:35:46.551706 kubelet[2803]: I0113 20:35:46.551287 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a08e657e20e69bf2b55c0049bbc6525-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-144\" (UID: \"8a08e657e20e69bf2b55c0049bbc6525\") " pod="kube-system/kube-apiserver-ip-172-31-24-144" Jan 13 20:35:46.551706 kubelet[2803]: I0113 20:35:46.551361 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a08e657e20e69bf2b55c0049bbc6525-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-144\" (UID: \"8a08e657e20e69bf2b55c0049bbc6525\") " pod="kube-system/kube-apiserver-ip-172-31-24-144" Jan 13 20:35:46.551706 kubelet[2803]: I0113 20:35:46.551413 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c20b61328533d06a55d0036bc99d051-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-144\" (UID: \"4c20b61328533d06a55d0036bc99d051\") " pod="kube-system/kube-controller-manager-ip-172-31-24-144" Jan 13 20:35:46.551706 kubelet[2803]: I0113 20:35:46.551454 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c20b61328533d06a55d0036bc99d051-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-144\" (UID: \"4c20b61328533d06a55d0036bc99d051\") " pod="kube-system/kube-controller-manager-ip-172-31-24-144" Jan 13 20:35:46.551706 kubelet[2803]: I0113 20:35:46.551481 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c20b61328533d06a55d0036bc99d051-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-144\" (UID: \"4c20b61328533d06a55d0036bc99d051\") " pod="kube-system/kube-controller-manager-ip-172-31-24-144" Jan 13 20:35:46.552042 kubelet[2803]: I0113 20:35:46.551522 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c20b61328533d06a55d0036bc99d051-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-144\" (UID: \"4c20b61328533d06a55d0036bc99d051\") " pod="kube-system/kube-controller-manager-ip-172-31-24-144" Jan 13 20:35:46.552042 kubelet[2803]: I0113 20:35:46.551555 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd64092ecb1f38c67e2490d06a02b197-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-144\" (UID: \"dd64092ecb1f38c67e2490d06a02b197\") " pod="kube-system/kube-scheduler-ip-172-31-24-144" Jan 13 20:35:46.552042 kubelet[2803]: I0113 20:35:46.551585 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c20b61328533d06a55d0036bc99d051-kubeconfig\") pod 
\"kube-controller-manager-ip-172-31-24-144\" (UID: \"4c20b61328533d06a55d0036bc99d051\") " pod="kube-system/kube-controller-manager-ip-172-31-24-144" Jan 13 20:35:46.552042 kubelet[2803]: I0113 20:35:46.551670 2803 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a08e657e20e69bf2b55c0049bbc6525-ca-certs\") pod \"kube-apiserver-ip-172-31-24-144\" (UID: \"8a08e657e20e69bf2b55c0049bbc6525\") " pod="kube-system/kube-apiserver-ip-172-31-24-144" Jan 13 20:35:46.552286 kubelet[2803]: I0113 20:35:46.552237 2803 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-144" Jan 13 20:35:46.552622 kubelet[2803]: E0113 20:35:46.552598 2803 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.144:6443/api/v1/nodes\": dial tcp 172.31.24.144:6443: connect: connection refused" node="ip-172-31-24-144" Jan 13 20:35:46.709602 containerd[1891]: time="2025-01-13T20:35:46.709466801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-144,Uid:8a08e657e20e69bf2b55c0049bbc6525,Namespace:kube-system,Attempt:0,}" Jan 13 20:35:46.725274 containerd[1891]: time="2025-01-13T20:35:46.725178143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-144,Uid:4c20b61328533d06a55d0036bc99d051,Namespace:kube-system,Attempt:0,}" Jan 13 20:35:46.726137 containerd[1891]: time="2025-01-13T20:35:46.726104311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-144,Uid:dd64092ecb1f38c67e2490d06a02b197,Namespace:kube-system,Attempt:0,}" Jan 13 20:35:46.851478 kubelet[2803]: E0113 20:35:46.851440 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-144?timeout=10s\": dial tcp 172.31.24.144:6443: connect: connection refused" interval="800ms" Jan 13 20:35:46.954457 kubelet[2803]: I0113 20:35:46.954419 2803 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-144" Jan 13 20:35:46.954783 kubelet[2803]: E0113 20:35:46.954760 2803 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.144:6443/api/v1/nodes\": dial tcp 172.31.24.144:6443: connect: connection refused" node="ip-172-31-24-144" Jan 13 20:35:47.153221 kubelet[2803]: W0113 20:35:47.153180 2803 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.24.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:47.153221 kubelet[2803]: E0113 20:35:47.153229 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:47.237475 kubelet[2803]: W0113 20:35:47.237435 2803 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.24.144:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:47.237475 kubelet[2803]: E0113 20:35:47.237481 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch 
*v1.Service: failed to list *v1.Service: Get "https://172.31.24.144:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:47.302198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount375789653.mount: Deactivated successfully. Jan 13 20:35:47.308158 containerd[1891]: time="2025-01-13T20:35:47.308112144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:35:47.312352 containerd[1891]: time="2025-01-13T20:35:47.311961507Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 20:35:47.313514 containerd[1891]: time="2025-01-13T20:35:47.313476118Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:35:47.314885 containerd[1891]: time="2025-01-13T20:35:47.314849480Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:35:47.316864 containerd[1891]: time="2025-01-13T20:35:47.316831598Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:35:47.318014 containerd[1891]: time="2025-01-13T20:35:47.317967862Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:35:47.319253 containerd[1891]: time="2025-01-13T20:35:47.318957890Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:35:47.323781 containerd[1891]: time="2025-01-13T20:35:47.323736760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:35:47.325020 containerd[1891]: time="2025-01-13T20:35:47.324887092Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 613.650137ms" Jan 13 20:35:47.326980 kubelet[2803]: W0113 20:35:47.326613 2803 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.24.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:47.326980 kubelet[2803]: E0113 20:35:47.326680 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:47.328490 containerd[1891]: time="2025-01-13T20:35:47.328455171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 602.133409ms" Jan 13 20:35:47.336839 containerd[1891]: time="2025-01-13T20:35:47.335721673Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 610.433588ms" Jan 13 20:35:47.622943 containerd[1891]: time="2025-01-13T20:35:47.618365396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:47.622943 containerd[1891]: time="2025-01-13T20:35:47.618438914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:47.622943 containerd[1891]: time="2025-01-13T20:35:47.618461575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:47.622943 containerd[1891]: time="2025-01-13T20:35:47.618569643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:47.623587 containerd[1891]: time="2025-01-13T20:35:47.614225925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:47.623587 containerd[1891]: time="2025-01-13T20:35:47.614335514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:47.623587 containerd[1891]: time="2025-01-13T20:35:47.614358339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:47.623587 containerd[1891]: time="2025-01-13T20:35:47.614482991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:47.630284 containerd[1891]: time="2025-01-13T20:35:47.630181906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:47.630676 containerd[1891]: time="2025-01-13T20:35:47.630604369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:47.630889 containerd[1891]: time="2025-01-13T20:35:47.630843927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:47.635247 containerd[1891]: time="2025-01-13T20:35:47.635168008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:47.653438 kubelet[2803]: E0113 20:35:47.653352 2803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-144?timeout=10s\": dial tcp 172.31.24.144:6443: connect: connection refused" interval="1.6s" Jan 13 20:35:47.670075 systemd[1]: Started cri-containerd-22ee1af457fd66b236038f9ac41982dccd991b90d7581bf9a77a979699852cfe.scope - libcontainer container 22ee1af457fd66b236038f9ac41982dccd991b90d7581bf9a77a979699852cfe. Jan 13 20:35:47.672878 systemd[1]: Started cri-containerd-82450a4de7422f08cd8471ddca161b272368172ee565ce271ddf8478dddb3972.scope - libcontainer container 82450a4de7422f08cd8471ddca161b272368172ee565ce271ddf8478dddb3972. Jan 13 20:35:47.680022 systemd[1]: Started cri-containerd-d509b960d39be3946ed7d1e64770796571719b3b7bf6da3c538b57aba1d0b66a.scope - libcontainer container d509b960d39be3946ed7d1e64770796571719b3b7bf6da3c538b57aba1d0b66a. Jan 13 20:35:47.752499 kubelet[2803]: W0113 20:35:47.752020 2803 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.24.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-144&limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:47.752499 kubelet[2803]: E0113 20:35:47.752502 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-144&limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:47.765831 kubelet[2803]: I0113 20:35:47.764094 2803 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-144" Jan 13 20:35:47.765831 kubelet[2803]: E0113 20:35:47.765211 2803 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.144:6443/api/v1/nodes\": dial tcp 172.31.24.144:6443: connect: connection refused" node="ip-172-31-24-144" Jan 13 20:35:47.780085 containerd[1891]: time="2025-01-13T20:35:47.780018084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-144,Uid:4c20b61328533d06a55d0036bc99d051,Namespace:kube-system,Attempt:0,} returns sandbox id \"d509b960d39be3946ed7d1e64770796571719b3b7bf6da3c538b57aba1d0b66a\"" Jan 13 20:35:47.794134 containerd[1891]: time="2025-01-13T20:35:47.793948837Z" level=info msg="CreateContainer within sandbox \"d509b960d39be3946ed7d1e64770796571719b3b7bf6da3c538b57aba1d0b66a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:35:47.799854 containerd[1891]: time="2025-01-13T20:35:47.799711943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-144,Uid:8a08e657e20e69bf2b55c0049bbc6525,Namespace:kube-system,Attempt:0,} returns sandbox id \"22ee1af457fd66b236038f9ac41982dccd991b90d7581bf9a77a979699852cfe\"" Jan 13 20:35:47.808380 containerd[1891]: time="2025-01-13T20:35:47.808338527Z" level=info msg="CreateContainer within sandbox \"22ee1af457fd66b236038f9ac41982dccd991b90d7581bf9a77a979699852cfe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:35:47.846115 containerd[1891]: time="2025-01-13T20:35:47.846070629Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-144,Uid:dd64092ecb1f38c67e2490d06a02b197,Namespace:kube-system,Attempt:0,} returns sandbox id \"82450a4de7422f08cd8471ddca161b272368172ee565ce271ddf8478dddb3972\"" Jan 13 20:35:47.853427 containerd[1891]: time="2025-01-13T20:35:47.853387474Z" level=info msg="CreateContainer within sandbox \"82450a4de7422f08cd8471ddca161b272368172ee565ce271ddf8478dddb3972\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:35:47.853999 containerd[1891]: time="2025-01-13T20:35:47.853829137Z" level=info msg="CreateContainer within sandbox \"d509b960d39be3946ed7d1e64770796571719b3b7bf6da3c538b57aba1d0b66a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a2c5f6b8256c404893b5b096db0e5451b8de55677f536f1adc3c43c791ea8b9b\"" Jan 13 20:35:47.856211 containerd[1891]: time="2025-01-13T20:35:47.856180427Z" level=info msg="StartContainer for \"a2c5f6b8256c404893b5b096db0e5451b8de55677f536f1adc3c43c791ea8b9b\"" Jan 13 20:35:47.875194 containerd[1891]: time="2025-01-13T20:35:47.875087857Z" level=info msg="CreateContainer within sandbox \"22ee1af457fd66b236038f9ac41982dccd991b90d7581bf9a77a979699852cfe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d1ed53d5d04797981ea4bb6c489e6d2ec5d023aa260ecacdd7c800ac06570779\"" Jan 13 20:35:47.877832 containerd[1891]: time="2025-01-13T20:35:47.877249688Z" level=info msg="StartContainer for \"d1ed53d5d04797981ea4bb6c489e6d2ec5d023aa260ecacdd7c800ac06570779\"" Jan 13 20:35:47.889562 containerd[1891]: time="2025-01-13T20:35:47.889504501Z" level=info msg="CreateContainer within sandbox \"82450a4de7422f08cd8471ddca161b272368172ee565ce271ddf8478dddb3972\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e8ad9ac2b81de376b7d0e92f592cee6b6ce4049e59792a2db835181f3cc027f9\"" Jan 13 20:35:47.890259 containerd[1891]: time="2025-01-13T20:35:47.890226892Z" level=info msg="StartContainer for \"e8ad9ac2b81de376b7d0e92f592cee6b6ce4049e59792a2db835181f3cc027f9\"" Jan 13 20:35:47.909076 systemd[1]: Started cri-containerd-a2c5f6b8256c404893b5b096db0e5451b8de55677f536f1adc3c43c791ea8b9b.scope - libcontainer container a2c5f6b8256c404893b5b096db0e5451b8de55677f536f1adc3c43c791ea8b9b. Jan 13 20:35:47.946005 systemd[1]: Started cri-containerd-d1ed53d5d04797981ea4bb6c489e6d2ec5d023aa260ecacdd7c800ac06570779.scope - libcontainer container d1ed53d5d04797981ea4bb6c489e6d2ec5d023aa260ecacdd7c800ac06570779. Jan 13 20:35:47.956174 systemd[1]: Started cri-containerd-e8ad9ac2b81de376b7d0e92f592cee6b6ce4049e59792a2db835181f3cc027f9.scope - libcontainer container e8ad9ac2b81de376b7d0e92f592cee6b6ce4049e59792a2db835181f3cc027f9. 
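[Editorial note: the records above and the StartContainer results just below show the three-step CRI flow for each static pod: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, then StartContainer runs it. A minimal gRPC client sketch of the same sequence against containerd's CRI socket; it assumes the k8s.io/cri-api and google.golang.org/grpc modules, and elides the image, mounts, and error details a real call would need.]

```go
// Sketch of RunPodSandbox -> CreateContainer -> StartContainer over CRI.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := pb.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox metadata copied from the kube-scheduler record above.
	sandboxCfg := &pb.PodSandboxConfig{
		Metadata: &pb.PodSandboxMetadata{
			Name:      "kube-scheduler-ip-172-31-24-144",
			Uid:       "dd64092ecb1f38c67e2490d06a02b197",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &pb.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// A real ContainerConfig also needs Image, Command, Mounts, etc.
	ctr, err := rt.CreateContainer(ctx, &pb.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &pb.ContainerConfig{Metadata: &pb.ContainerMetadata{Name: "kube-scheduler", Attempt: 0}},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	if _, err := rt.StartContainer(ctx, &pb.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s in sandbox %s", ctr.ContainerId, sb.PodSandboxId)
}
```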
Jan 13 20:35:48.039266 containerd[1891]: time="2025-01-13T20:35:48.039224514Z" level=info msg="StartContainer for \"d1ed53d5d04797981ea4bb6c489e6d2ec5d023aa260ecacdd7c800ac06570779\" returns successfully" Jan 13 20:35:48.040005 containerd[1891]: time="2025-01-13T20:35:48.039473231Z" level=info msg="StartContainer for \"a2c5f6b8256c404893b5b096db0e5451b8de55677f536f1adc3c43c791ea8b9b\" returns successfully" Jan 13 20:35:48.077839 containerd[1891]: time="2025-01-13T20:35:48.077685991Z" level=info msg="StartContainer for \"e8ad9ac2b81de376b7d0e92f592cee6b6ce4049e59792a2db835181f3cc027f9\" returns successfully" Jan 13 20:35:48.363231 kubelet[2803]: E0113 20:35:48.363195 2803 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:48.977407 kubelet[2803]: E0113 20:35:48.977367 2803 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.144:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.144:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-144.181a5ae513cd84da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-144,UID:ip-172-31-24-144,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-144,},FirstTimestamp:2025-01-13 20:35:46.230994138 +0000 UTC m=+0.527095398,LastTimestamp:2025-01-13 20:35:46.230994138 +0000 UTC m=+0.527095398,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-144,}" Jan 13 20:35:48.977935 kubelet[2803]: W0113 20:35:48.977868 2803 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.24.144:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:48.978014 kubelet[2803]: E0113 20:35:48.977949 2803 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.144:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.144:6443: connect: connection refused Jan 13 20:35:49.368190 kubelet[2803]: I0113 20:35:49.368078 2803 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-144" Jan 13 20:35:50.199726 update_engine[1868]: I20250113 20:35:50.198851 1868 update_attempter.cc:509] Updating boot flags... 
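[Editorial note: while the API server at 172.31.24.144:6443 still refuses connections, the "Failed to ensure lease exists, will retry" lines above back off with a doubling interval: 200ms, then 400ms, 800ms, and 1.6s. The kubelet keeps retrying until the static kube-apiserver pod it is simultaneously starting comes up. A sketch of that doubling backoff; the ceiling below is illustrative, not the kubelet's exact constant.]

```go
// Doubling retry interval matching the lease messages above
// (200ms -> 400ms -> 800ms -> 1.6s), capped at an illustrative ceiling.
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxInterval = 7 * time.Second // illustrative cap
	interval := 200 * time.Millisecond
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: retry in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```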
Jan 13 20:35:50.324838 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (3094) Jan 13 20:35:50.648837 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (3085) Jan 13 20:35:51.024116 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 44 scanned by (udev-worker) (3085) Jan 13 20:35:51.440864 kubelet[2803]: E0113 20:35:51.440831 2803 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-144\" not found" node="ip-172-31-24-144" Jan 13 20:35:51.498325 kubelet[2803]: I0113 20:35:51.498267 2803 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-144" Jan 13 20:35:52.221794 kubelet[2803]: I0113 20:35:52.221512 2803 apiserver.go:52] "Watching apiserver" Jan 13 20:35:52.247487 kubelet[2803]: I0113 20:35:52.247443 2803 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:35:53.902412 systemd[1]: Reloading requested from client PID 3348 ('systemctl') (unit session-7.scope)... Jan 13 20:35:53.902429 systemd[1]: Reloading... Jan 13 20:35:54.073841 zram_generator::config[3394]: No configuration found. Jan 13 20:35:54.253851 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:35:54.391884 systemd[1]: Reloading finished in 487 ms. Jan 13 20:35:54.445938 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:35:54.446851 kubelet[2803]: I0113 20:35:54.446600 2803 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:35:54.461432 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:35:54.461774 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:35:54.480179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:35:54.777140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:35:54.792650 (kubelet)[3445]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:35:54.949338 kubelet[3445]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:35:54.949338 kubelet[3445]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:35:54.949338 kubelet[3445]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:35:54.950253 kubelet[3445]: I0113 20:35:54.949439 3445 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:35:54.962102 kubelet[3445]: I0113 20:35:54.961567 3445 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:35:54.962102 kubelet[3445]: I0113 20:35:54.961607 3445 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:35:54.962102 kubelet[3445]: I0113 20:35:54.962019 3445 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:35:54.965907 kubelet[3445]: I0113 20:35:54.965311 3445 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:35:54.970682 kubelet[3445]: I0113 20:35:54.970218 3445 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:35:54.987428 kubelet[3445]: I0113 20:35:54.987396 3445 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:35:54.988057 kubelet[3445]: I0113 20:35:54.987908 3445 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:35:54.988302 kubelet[3445]: I0113 20:35:54.988284 3445 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:35:54.988879 kubelet[3445]: I0113 20:35:54.988445 3445 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:35:54.988879 kubelet[3445]: I0113 20:35:54.988466 3445 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:35:54.988879 kubelet[3445]: I0113 20:35:54.988516 3445 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:35:54.988879 kubelet[3445]: I0113 20:35:54.988636 3445 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:35:54.988879 kubelet[3445]: I0113 20:35:54.988653 3445 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:35:54.988879 kubelet[3445]: I0113 20:35:54.988754 3445 kubelet.go:312] "Adding apiserver pod source" Jan 13 
20:35:54.988879 kubelet[3445]: I0113 20:35:54.988790 3445 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:35:54.998907 kubelet[3445]: I0113 20:35:54.998874 3445 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:35:54.999116 kubelet[3445]: I0113 20:35:54.999101 3445 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:35:54.999760 kubelet[3445]: I0113 20:35:54.999687 3445 server.go:1256] "Started kubelet" Jan 13 20:35:55.001505 kubelet[3445]: I0113 20:35:55.001489 3445 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:35:55.002376 kubelet[3445]: I0113 20:35:55.002361 3445 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:35:55.002574 kubelet[3445]: I0113 20:35:55.002534 3445 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:35:55.003803 kubelet[3445]: I0113 20:35:55.003775 3445 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:35:55.013693 kubelet[3445]: I0113 20:35:55.013630 3445 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:35:55.021124 kubelet[3445]: I0113 20:35:55.020001 3445 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:35:55.021124 kubelet[3445]: I0113 20:35:55.020705 3445 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:35:55.021124 kubelet[3445]: I0113 20:35:55.021081 3445 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:35:55.079941 kubelet[3445]: I0113 20:35:55.079321 3445 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:35:55.084583 kubelet[3445]: E0113 20:35:55.084554 3445 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:35:55.085671 kubelet[3445]: I0113 20:35:55.085630 3445 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:35:55.085957 kubelet[3445]: I0113 20:35:55.085943 3445 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:35:55.086111 kubelet[3445]: I0113 20:35:55.086099 3445 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:35:55.087549 kubelet[3445]: E0113 20:35:55.086281 3445 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:35:55.101330 kubelet[3445]: I0113 20:35:55.101273 3445 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:35:55.101330 kubelet[3445]: I0113 20:35:55.101296 3445 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:35:55.106538 kubelet[3445]: I0113 20:35:55.106337 3445 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:35:55.143367 kubelet[3445]: I0113 20:35:55.142347 3445 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-144" Jan 13 20:35:55.170137 kubelet[3445]: I0113 20:35:55.170105 3445 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-24-144" Jan 13 20:35:55.170370 kubelet[3445]: I0113 20:35:55.170296 3445 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-144" Jan 13 20:35:55.188020 kubelet[3445]: E0113 20:35:55.187985 3445 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:35:55.217848 kubelet[3445]: I0113 20:35:55.217359 3445 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:35:55.217848 kubelet[3445]: I0113 20:35:55.217385 3445 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:35:55.217848 kubelet[3445]: I0113 20:35:55.217407 3445 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:35:55.217848 kubelet[3445]: I0113 20:35:55.217625 3445 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:35:55.217848 kubelet[3445]: I0113 20:35:55.217655 3445 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:35:55.217848 kubelet[3445]: I0113 20:35:55.217665 3445 policy_none.go:49] "None policy: Start" Jan 13 20:35:55.219014 kubelet[3445]: I0113 20:35:55.218994 3445 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:35:55.219139 kubelet[3445]: I0113 20:35:55.219131 3445 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:35:55.219479 kubelet[3445]: I0113 20:35:55.219468 3445 state_mem.go:75] "Updated machine memory state" Jan 13 20:35:55.228601 kubelet[3445]: I0113 20:35:55.228574 3445 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:35:55.229396 kubelet[3445]: I0113 20:35:55.229377 3445 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:35:55.389407 kubelet[3445]: I0113 20:35:55.388367 3445 topology_manager.go:215] "Topology Admit Handler" podUID="dd64092ecb1f38c67e2490d06a02b197" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-144" Jan 13 20:35:55.389407 kubelet[3445]: I0113 20:35:55.388477 3445 topology_manager.go:215] "Topology Admit Handler" podUID="8a08e657e20e69bf2b55c0049bbc6525" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-144" Jan 13 20:35:55.389407 kubelet[3445]: I0113 20:35:55.388527 
3445 topology_manager.go:215] "Topology Admit Handler" podUID="4c20b61328533d06a55d0036bc99d051" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-144" Jan 13 20:35:55.399963 kubelet[3445]: E0113 20:35:55.399930 3445 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-24-144\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-144" Jan 13 20:35:55.438237 kubelet[3445]: I0113 20:35:55.437365 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd64092ecb1f38c67e2490d06a02b197-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-144\" (UID: \"dd64092ecb1f38c67e2490d06a02b197\") " pod="kube-system/kube-scheduler-ip-172-31-24-144" Jan 13 20:35:55.438237 kubelet[3445]: I0113 20:35:55.437425 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a08e657e20e69bf2b55c0049bbc6525-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-144\" (UID: \"8a08e657e20e69bf2b55c0049bbc6525\") " pod="kube-system/kube-apiserver-ip-172-31-24-144" Jan 13 20:35:55.438237 kubelet[3445]: I0113 20:35:55.437461 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4c20b61328533d06a55d0036bc99d051-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-144\" (UID: \"4c20b61328533d06a55d0036bc99d051\") " pod="kube-system/kube-controller-manager-ip-172-31-24-144" Jan 13 20:35:55.438237 kubelet[3445]: I0113 20:35:55.437499 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c20b61328533d06a55d0036bc99d051-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-144\" (UID: \"4c20b61328533d06a55d0036bc99d051\") " pod="kube-system/kube-controller-manager-ip-172-31-24-144" Jan 13 20:35:55.438237 kubelet[3445]: I0113 20:35:55.437533 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a08e657e20e69bf2b55c0049bbc6525-ca-certs\") pod \"kube-apiserver-ip-172-31-24-144\" (UID: \"8a08e657e20e69bf2b55c0049bbc6525\") " pod="kube-system/kube-apiserver-ip-172-31-24-144" Jan 13 20:35:55.438482 kubelet[3445]: I0113 20:35:55.437563 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a08e657e20e69bf2b55c0049bbc6525-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-144\" (UID: \"8a08e657e20e69bf2b55c0049bbc6525\") " pod="kube-system/kube-apiserver-ip-172-31-24-144" Jan 13 20:35:55.438482 kubelet[3445]: I0113 20:35:55.437593 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c20b61328533d06a55d0036bc99d051-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-144\" (UID: \"4c20b61328533d06a55d0036bc99d051\") " pod="kube-system/kube-controller-manager-ip-172-31-24-144" Jan 13 20:35:55.438482 kubelet[3445]: I0113 20:35:55.437627 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c20b61328533d06a55d0036bc99d051-k8s-certs\") 
pod \"kube-controller-manager-ip-172-31-24-144\" (UID: \"4c20b61328533d06a55d0036bc99d051\") " pod="kube-system/kube-controller-manager-ip-172-31-24-144" Jan 13 20:35:55.438482 kubelet[3445]: I0113 20:35:55.437662 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4c20b61328533d06a55d0036bc99d051-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-144\" (UID: \"4c20b61328533d06a55d0036bc99d051\") " pod="kube-system/kube-controller-manager-ip-172-31-24-144" Jan 13 20:35:55.999107 kubelet[3445]: I0113 20:35:55.997442 3445 apiserver.go:52] "Watching apiserver" Jan 13 20:35:56.021745 kubelet[3445]: I0113 20:35:56.021693 3445 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:35:56.219692 kubelet[3445]: I0113 20:35:56.219445 3445 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-144" podStartSLOduration=1.219361005 podStartE2EDuration="1.219361005s" podCreationTimestamp="2025-01-13 20:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:35:56.2083312 +0000 UTC m=+1.387056284" watchObservedRunningTime="2025-01-13 20:35:56.219361005 +0000 UTC m=+1.398086085" Jan 13 20:35:56.220615 kubelet[3445]: I0113 20:35:56.220587 3445 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-144" podStartSLOduration=1.22044295 podStartE2EDuration="1.22044295s" podCreationTimestamp="2025-01-13 20:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:35:56.220092724 +0000 UTC m=+1.398817807" watchObservedRunningTime="2025-01-13 20:35:56.22044295 +0000 UTC m=+1.399168034" Jan 13 20:35:56.248102 kubelet[3445]: I0113 20:35:56.248068 3445 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-144" podStartSLOduration=4.248017216 podStartE2EDuration="4.248017216s" podCreationTimestamp="2025-01-13 20:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:35:56.233172185 +0000 UTC m=+1.411897270" watchObservedRunningTime="2025-01-13 20:35:56.248017216 +0000 UTC m=+1.426742296" Jan 13 20:35:56.336839 sudo[2183]: pam_unix(sudo:session): session closed for user root Jan 13 20:35:56.362137 sshd[2180]: Connection closed by 139.178.89.65 port 36716 Jan 13 20:35:56.363664 sshd-session[2177]: pam_unix(sshd:session): session closed for user core Jan 13 20:35:56.366968 systemd[1]: sshd@6-172.31.24.144:22-139.178.89.65:36716.service: Deactivated successfully. Jan 13 20:35:56.369431 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:35:56.369666 systemd[1]: session-7.scope: Consumed 4.251s CPU time, 186.9M memory peak, 0B memory swap peak. Jan 13 20:35:56.371295 systemd-logind[1867]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:35:56.372837 systemd-logind[1867]: Removed session 7. 
Jan 13 20:36:07.385567 kubelet[3445]: I0113 20:36:07.385523 3445 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:36:07.386357 containerd[1891]: time="2025-01-13T20:36:07.386238044Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:36:07.386677 kubelet[3445]: I0113 20:36:07.386458 3445 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:36:07.677827 kubelet[3445]: I0113 20:36:07.677370 3445 topology_manager.go:215] "Topology Admit Handler" podUID="28dbdf09-2fcd-4a5e-928e-dd8938a3ea51" podNamespace="kube-system" podName="kube-proxy-84kqd" Jan 13 20:36:07.687510 kubelet[3445]: I0113 20:36:07.687480 3445 topology_manager.go:215] "Topology Admit Handler" podUID="a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74" podNamespace="kube-flannel" podName="kube-flannel-ds-mnmq6" Jan 13 20:36:07.716874 systemd[1]: Created slice kubepods-besteffort-pod28dbdf09_2fcd_4a5e_928e_dd8938a3ea51.slice - libcontainer container kubepods-besteffort-pod28dbdf09_2fcd_4a5e_928e_dd8938a3ea51.slice. Jan 13 20:36:07.748872 kubelet[3445]: I0113 20:36:07.747021 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28dbdf09-2fcd-4a5e-928e-dd8938a3ea51-lib-modules\") pod \"kube-proxy-84kqd\" (UID: \"28dbdf09-2fcd-4a5e-928e-dd8938a3ea51\") " pod="kube-system/kube-proxy-84kqd" Jan 13 20:36:07.748872 kubelet[3445]: I0113 20:36:07.747128 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74-cni\") pod \"kube-flannel-ds-mnmq6\" (UID: \"a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74\") " pod="kube-flannel/kube-flannel-ds-mnmq6" Jan 13 20:36:07.748872 kubelet[3445]: I0113 20:36:07.747161 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq56h\" (UniqueName: \"kubernetes.io/projected/28dbdf09-2fcd-4a5e-928e-dd8938a3ea51-kube-api-access-lq56h\") pod \"kube-proxy-84kqd\" (UID: \"28dbdf09-2fcd-4a5e-928e-dd8938a3ea51\") " pod="kube-system/kube-proxy-84kqd" Jan 13 20:36:07.748872 kubelet[3445]: I0113 20:36:07.747191 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74-cni-plugin\") pod \"kube-flannel-ds-mnmq6\" (UID: \"a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74\") " pod="kube-flannel/kube-flannel-ds-mnmq6" Jan 13 20:36:07.748872 kubelet[3445]: I0113 20:36:07.747219 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28dbdf09-2fcd-4a5e-928e-dd8938a3ea51-xtables-lock\") pod \"kube-proxy-84kqd\" (UID: \"28dbdf09-2fcd-4a5e-928e-dd8938a3ea51\") " pod="kube-system/kube-proxy-84kqd" Jan 13 20:36:07.749192 kubelet[3445]: I0113 20:36:07.747249 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74-xtables-lock\") pod \"kube-flannel-ds-mnmq6\" (UID: \"a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74\") " pod="kube-flannel/kube-flannel-ds-mnmq6" Jan 13 20:36:07.749192 kubelet[3445]: I0113 20:36:07.747277 3445 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c598f\" (UniqueName: \"kubernetes.io/projected/a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74-kube-api-access-c598f\") pod \"kube-flannel-ds-mnmq6\" (UID: \"a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74\") " pod="kube-flannel/kube-flannel-ds-mnmq6" Jan 13 20:36:07.749192 kubelet[3445]: I0113 20:36:07.747303 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74-run\") pod \"kube-flannel-ds-mnmq6\" (UID: \"a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74\") " pod="kube-flannel/kube-flannel-ds-mnmq6" Jan 13 20:36:07.749192 kubelet[3445]: I0113 20:36:07.747331 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74-flannel-cfg\") pod \"kube-flannel-ds-mnmq6\" (UID: \"a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74\") " pod="kube-flannel/kube-flannel-ds-mnmq6" Jan 13 20:36:07.749192 kubelet[3445]: I0113 20:36:07.747359 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/28dbdf09-2fcd-4a5e-928e-dd8938a3ea51-kube-proxy\") pod \"kube-proxy-84kqd\" (UID: \"28dbdf09-2fcd-4a5e-928e-dd8938a3ea51\") " pod="kube-system/kube-proxy-84kqd" Jan 13 20:36:07.755872 systemd[1]: Created slice kubepods-burstable-poda4d4e97b_eccf_4d09_87f5_ca2f4c3a1a74.slice - libcontainer container kubepods-burstable-poda4d4e97b_eccf_4d09_87f5_ca2f4c3a1a74.slice. Jan 13 20:36:07.860882 kubelet[3445]: E0113 20:36:07.860783 3445 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 13 20:36:07.860882 kubelet[3445]: E0113 20:36:07.860885 3445 projected.go:200] Error preparing data for projected volume kube-api-access-lq56h for pod kube-system/kube-proxy-84kqd: configmap "kube-root-ca.crt" not found Jan 13 20:36:07.861121 kubelet[3445]: E0113 20:36:07.860968 3445 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/28dbdf09-2fcd-4a5e-928e-dd8938a3ea51-kube-api-access-lq56h podName:28dbdf09-2fcd-4a5e-928e-dd8938a3ea51 nodeName:}" failed. No retries permitted until 2025-01-13 20:36:08.360942438 +0000 UTC m=+13.539667519 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lq56h" (UniqueName: "kubernetes.io/projected/28dbdf09-2fcd-4a5e-928e-dd8938a3ea51-kube-api-access-lq56h") pod "kube-proxy-84kqd" (UID: "28dbdf09-2fcd-4a5e-928e-dd8938a3ea51") : configmap "kube-root-ca.crt" not found Jan 13 20:36:07.862879 kubelet[3445]: E0113 20:36:07.862840 3445 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 13 20:36:07.863013 kubelet[3445]: E0113 20:36:07.862886 3445 projected.go:200] Error preparing data for projected volume kube-api-access-c598f for pod kube-flannel/kube-flannel-ds-mnmq6: configmap "kube-root-ca.crt" not found Jan 13 20:36:07.863013 kubelet[3445]: E0113 20:36:07.862953 3445 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74-kube-api-access-c598f podName:a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74 nodeName:}" failed. No retries permitted until 2025-01-13 20:36:08.362931128 +0000 UTC m=+13.541656206 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-c598f" (UniqueName: "kubernetes.io/projected/a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74-kube-api-access-c598f") pod "kube-flannel-ds-mnmq6" (UID: "a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74") : configmap "kube-root-ca.crt" not found Jan 13 20:36:08.638031 containerd[1891]: time="2025-01-13T20:36:08.637976513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-84kqd,Uid:28dbdf09-2fcd-4a5e-928e-dd8938a3ea51,Namespace:kube-system,Attempt:0,}" Jan 13 20:36:08.664617 containerd[1891]: time="2025-01-13T20:36:08.664181115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mnmq6,Uid:a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74,Namespace:kube-flannel,Attempt:0,}" Jan 13 20:36:08.676593 containerd[1891]: time="2025-01-13T20:36:08.676387557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:36:08.676593 containerd[1891]: time="2025-01-13T20:36:08.676537700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:36:08.676593 containerd[1891]: time="2025-01-13T20:36:08.676565065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:08.682183 containerd[1891]: time="2025-01-13T20:36:08.682024783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:08.748607 containerd[1891]: time="2025-01-13T20:36:08.748516251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:36:08.748617 systemd[1]: Started cri-containerd-afe83d63cb690e6f43dcf4be079b7efe40fafcdbda30f35ca2ed4861738a2b00.scope - libcontainer container afe83d63cb690e6f43dcf4be079b7efe40fafcdbda30f35ca2ed4861738a2b00. Jan 13 20:36:08.749557 containerd[1891]: time="2025-01-13T20:36:08.748876248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:36:08.749557 containerd[1891]: time="2025-01-13T20:36:08.749276195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:08.750292 containerd[1891]: time="2025-01-13T20:36:08.749730905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:08.792419 systemd[1]: Started cri-containerd-add853bf63e97367bae0d34426258b358aa67fa6fc8ea929dcc55c3cb9d6ec2c.scope - libcontainer container add853bf63e97367bae0d34426258b358aa67fa6fc8ea929dcc55c3cb9d6ec2c. 
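The `configmap "kube-root-ca.crt" not found` failures above are the normal first-boot ordering race: every projected kube-api-access volume bundles the cluster root CA ConfigMap, which kube-controller-manager only publishes into each namespace once it is running, and at this point the control plane itself is still coming up. The kubelet does not fail the pod; it backs off ("No retries permitted until ... durationBeforeRetry 500ms") and retries, and half a second later both sandboxes start. A minimal sketch of that doubling-backoff pattern, assuming nothing about kubelet internals beyond what the log shows:

```go
// Minimal sketch of the retry pattern visible above: attempt a volume
// SetUp, and on failure wait an exponentially growing delay (the log
// shows the first retry gated at 500ms). This is an illustration, not
// the kubelet's actual nestedpendingoperations implementation.
package main

import (
	"errors"
	"fmt"
	"time"
)

func mountWithBackoff(mount func() error, initial, max time.Duration) error {
	delay := initial
	for {
		err := mount()
		if err == nil {
			return nil
		}
		if delay > max {
			return errors.New("giving up: " + err.Error())
		}
		fmt.Printf("mount failed, no retries permitted for %v\n", delay)
		time.Sleep(delay)
		delay *= 2 // double the wait between failed SetUp attempts
	}
}

func main() {
	attempts := 0
	_ = mountWithBackoff(func() error {
		attempts++
		if attempts < 3 {
			return errors.New(`configmap "kube-root-ca.crt" not found`)
		}
		return nil
	}, 500*time.Millisecond, 30*time.Second)
}
```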
Jan 13 20:36:08.818132 containerd[1891]: time="2025-01-13T20:36:08.817658732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-84kqd,Uid:28dbdf09-2fcd-4a5e-928e-dd8938a3ea51,Namespace:kube-system,Attempt:0,} returns sandbox id \"afe83d63cb690e6f43dcf4be079b7efe40fafcdbda30f35ca2ed4861738a2b00\"" Jan 13 20:36:08.826992 containerd[1891]: time="2025-01-13T20:36:08.826908891Z" level=info msg="CreateContainer within sandbox \"afe83d63cb690e6f43dcf4be079b7efe40fafcdbda30f35ca2ed4861738a2b00\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:36:08.872218 containerd[1891]: time="2025-01-13T20:36:08.872153279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mnmq6,Uid:a4d4e97b-eccf-4d09-87f5-ca2f4c3a1a74,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"add853bf63e97367bae0d34426258b358aa67fa6fc8ea929dcc55c3cb9d6ec2c\"" Jan 13 20:36:08.874919 containerd[1891]: time="2025-01-13T20:36:08.874878321Z" level=info msg="CreateContainer within sandbox \"afe83d63cb690e6f43dcf4be079b7efe40fafcdbda30f35ca2ed4861738a2b00\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7cd7a32811048fd3861cf496456189b41227f9eeb35de384badf9d413ab348c7\"" Jan 13 20:36:08.875269 containerd[1891]: time="2025-01-13T20:36:08.875238402Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 13 20:36:08.876386 containerd[1891]: time="2025-01-13T20:36:08.876268086Z" level=info msg="StartContainer for \"7cd7a32811048fd3861cf496456189b41227f9eeb35de384badf9d413ab348c7\"" Jan 13 20:36:08.914448 systemd[1]: Started cri-containerd-7cd7a32811048fd3861cf496456189b41227f9eeb35de384badf9d413ab348c7.scope - libcontainer container 7cd7a32811048fd3861cf496456189b41227f9eeb35de384badf9d413ab348c7. Jan 13 20:36:08.964367 containerd[1891]: time="2025-01-13T20:36:08.964297439Z" level=info msg="StartContainer for \"7cd7a32811048fd3861cf496456189b41227f9eeb35de384badf9d413ab348c7\" returns successfully" Jan 13 20:36:09.225914 kubelet[3445]: I0113 20:36:09.225407 3445 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-84kqd" podStartSLOduration=2.225356436 podStartE2EDuration="2.225356436s" podCreationTimestamp="2025-01-13 20:36:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:36:09.22500242 +0000 UTC m=+14.403727503" watchObservedRunningTime="2025-01-13 20:36:09.225356436 +0000 UTC m=+14.404081518" Jan 13 20:36:09.463390 systemd[1]: run-containerd-runc-k8s.io-afe83d63cb690e6f43dcf4be079b7efe40fafcdbda30f35ca2ed4861738a2b00-runc.XawByI.mount: Deactivated successfully. Jan 13 20:36:10.676722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1998320562.mount: Deactivated successfully. 
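The sequence above is the standard CRI flow the kubelet drives through containerd: RunPodSandbox returns a sandbox ID, CreateContainer builds the kube-proxy container inside it, StartContainer runs it, and in parallel a PullImage request goes out for flannel's init-container image. A sketch of issuing that pull directly over the CRI socket; the types are from the real k8s.io/cri-api module, but the socket path is containerd's conventional default, assumed rather than taken from this log.

```go
// Sketch: pulling an image over CRI the way the kubelet drives
// containerd above. The socket path is the containerd default and is
// an assumption of this sketch.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	images := runtimeapi.NewImageServiceClient(conn)
	resp, err := images.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "docker.io/flannel/flannel-cni-plugin:v1.1.2"},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", resp.ImageRef) // containerd logs the digest on completion
}
```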
Jan 13 20:36:10.735622 containerd[1891]: time="2025-01-13T20:36:10.735567550Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:10.737058 containerd[1891]: time="2025-01-13T20:36:10.736970361Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 13 20:36:10.739849 containerd[1891]: time="2025-01-13T20:36:10.738203978Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:10.742655 containerd[1891]: time="2025-01-13T20:36:10.742612056Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:10.744332 containerd[1891]: time="2025-01-13T20:36:10.744290727Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.869019583s" Jan 13 20:36:10.744481 containerd[1891]: time="2025-01-13T20:36:10.744462732Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 13 20:36:10.747516 containerd[1891]: time="2025-01-13T20:36:10.747479911Z" level=info msg="CreateContainer within sandbox \"add853bf63e97367bae0d34426258b358aa67fa6fc8ea929dcc55c3cb9d6ec2c\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 13 20:36:10.765101 containerd[1891]: time="2025-01-13T20:36:10.764987838Z" level=info msg="CreateContainer within sandbox \"add853bf63e97367bae0d34426258b358aa67fa6fc8ea929dcc55c3cb9d6ec2c\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"bd6b9461d9f47096647386fc9c8bbb626ffede6d5d5449c035cfd204b926b35d\"" Jan 13 20:36:10.766086 containerd[1891]: time="2025-01-13T20:36:10.766034112Z" level=info msg="StartContainer for \"bd6b9461d9f47096647386fc9c8bbb626ffede6d5d5449c035cfd204b926b35d\"" Jan 13 20:36:10.813995 systemd[1]: Started cri-containerd-bd6b9461d9f47096647386fc9c8bbb626ffede6d5d5449c035cfd204b926b35d.scope - libcontainer container bd6b9461d9f47096647386fc9c8bbb626ffede6d5d5449c035cfd204b926b35d. Jan 13 20:36:10.852053 systemd[1]: cri-containerd-bd6b9461d9f47096647386fc9c8bbb626ffede6d5d5449c035cfd204b926b35d.scope: Deactivated successfully. 
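Container bd6b9461... is flannel's install-cni-plugin init container: it copies the flannel CNI binary from the image into the hostPath-mounted /opt/cni/bin (the "cni-plugin" volume attached earlier) and exits, which is why systemd reports its scope deactivated almost as soon as it starts. Note also that the reported pull duration is internally consistent: the PullImage request was logged at 20:36:08.875 and completion at 20:36:10.744, matching the stated "in 1.869019583s". Upstream kube-flannel performs the copy with a shell cp; the Go sketch below is an equivalent illustration, with paths taken from the upstream manifest as an assumption, not from this log.

```go
// What flannel's install-cni-plugin init container amounts to: copy
// the flannel CNI binary from the image into /opt/cni/bin on the
// host. Upstream uses a shell cp; this is an equivalent sketch with
// assumed paths.
package main

import (
	"io"
	"log"
	"os"
)

func copyFile(src, dst string, mode os.FileMode) error {
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, mode)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Paths from the upstream kube-flannel manifest (assumed here).
	if err := copyFile("/flannel", "/opt/cni/bin/flannel", 0o755); err != nil {
		log.Fatal(err)
	}
}
```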
Jan 13 20:36:10.859734 containerd[1891]: time="2025-01-13T20:36:10.859634833Z" level=info msg="StartContainer for \"bd6b9461d9f47096647386fc9c8bbb626ffede6d5d5449c035cfd204b926b35d\" returns successfully" Jan 13 20:36:10.908046 containerd[1891]: time="2025-01-13T20:36:10.907962445Z" level=info msg="shim disconnected" id=bd6b9461d9f47096647386fc9c8bbb626ffede6d5d5449c035cfd204b926b35d namespace=k8s.io Jan 13 20:36:10.908363 containerd[1891]: time="2025-01-13T20:36:10.908339484Z" level=warning msg="cleaning up after shim disconnected" id=bd6b9461d9f47096647386fc9c8bbb626ffede6d5d5449c035cfd204b926b35d namespace=k8s.io Jan 13 20:36:10.908476 containerd[1891]: time="2025-01-13T20:36:10.908451038Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:36:11.224213 containerd[1891]: time="2025-01-13T20:36:11.223934514Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 13 20:36:11.577537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd6b9461d9f47096647386fc9c8bbb626ffede6d5d5449c035cfd204b926b35d-rootfs.mount: Deactivated successfully. Jan 13 20:36:13.200341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1698392492.mount: Deactivated successfully. Jan 13 20:36:15.282022 containerd[1891]: time="2025-01-13T20:36:15.281634296Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:15.290847 containerd[1891]: time="2025-01-13T20:36:15.286951628Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Jan 13 20:36:15.296854 containerd[1891]: time="2025-01-13T20:36:15.295663737Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:15.302378 containerd[1891]: time="2025-01-13T20:36:15.302333341Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:36:15.303471 containerd[1891]: time="2025-01-13T20:36:15.303174085Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 4.079197352s" Jan 13 20:36:15.303471 containerd[1891]: time="2025-01-13T20:36:15.303211816Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 13 20:36:15.306455 containerd[1891]: time="2025-01-13T20:36:15.306421037Z" level=info msg="CreateContainer within sandbox \"add853bf63e97367bae0d34426258b358aa67fa6fc8ea929dcc55c3cb9d6ec2c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:36:15.379991 containerd[1891]: time="2025-01-13T20:36:15.379356215Z" level=info msg="CreateContainer within sandbox \"add853bf63e97367bae0d34426258b358aa67fa6fc8ea929dcc55c3cb9d6ec2c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"69dd3cae562119bffacd6140ff7112dc75d7f7738457be1e7624a845a5416928\"" Jan 13 20:36:15.384034 containerd[1891]: time="2025-01-13T20:36:15.380788646Z" level=info msg="StartContainer for 
\"69dd3cae562119bffacd6140ff7112dc75d7f7738457be1e7624a845a5416928\"" Jan 13 20:36:15.393212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2436383131.mount: Deactivated successfully. Jan 13 20:36:15.439064 systemd[1]: Started cri-containerd-69dd3cae562119bffacd6140ff7112dc75d7f7738457be1e7624a845a5416928.scope - libcontainer container 69dd3cae562119bffacd6140ff7112dc75d7f7738457be1e7624a845a5416928. Jan 13 20:36:15.479597 systemd[1]: cri-containerd-69dd3cae562119bffacd6140ff7112dc75d7f7738457be1e7624a845a5416928.scope: Deactivated successfully. Jan 13 20:36:15.482875 containerd[1891]: time="2025-01-13T20:36:15.482837435Z" level=info msg="StartContainer for \"69dd3cae562119bffacd6140ff7112dc75d7f7738457be1e7624a845a5416928\" returns successfully" Jan 13 20:36:15.505155 kubelet[3445]: I0113 20:36:15.505125 3445 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:36:15.527818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69dd3cae562119bffacd6140ff7112dc75d7f7738457be1e7624a845a5416928-rootfs.mount: Deactivated successfully. Jan 13 20:36:15.544069 containerd[1891]: time="2025-01-13T20:36:15.543900405Z" level=info msg="shim disconnected" id=69dd3cae562119bffacd6140ff7112dc75d7f7738457be1e7624a845a5416928 namespace=k8s.io Jan 13 20:36:15.544069 containerd[1891]: time="2025-01-13T20:36:15.543985337Z" level=warning msg="cleaning up after shim disconnected" id=69dd3cae562119bffacd6140ff7112dc75d7f7738457be1e7624a845a5416928 namespace=k8s.io Jan 13 20:36:15.544069 containerd[1891]: time="2025-01-13T20:36:15.544012206Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:36:15.557230 kubelet[3445]: I0113 20:36:15.557189 3445 topology_manager.go:215] "Topology Admit Handler" podUID="ce29a6d0-bfdc-4202-9962-b36f5beb0539" podNamespace="kube-system" podName="coredns-76f75df574-nnks6" Jan 13 20:36:15.558975 kubelet[3445]: I0113 20:36:15.557707 3445 topology_manager.go:215] "Topology Admit Handler" podUID="b9140be9-1ccd-4961-8015-ba225ab44dbe" podNamespace="kube-system" podName="coredns-76f75df574-gfqqp" Jan 13 20:36:15.593291 systemd[1]: Created slice kubepods-burstable-podce29a6d0_bfdc_4202_9962_b36f5beb0539.slice - libcontainer container kubepods-burstable-podce29a6d0_bfdc_4202_9962_b36f5beb0539.slice. Jan 13 20:36:15.611480 systemd[1]: Created slice kubepods-burstable-podb9140be9_1ccd_4961_8015_ba225ab44dbe.slice - libcontainer container kubepods-burstable-podb9140be9_1ccd_4961_8015_ba225ab44dbe.slice. 
Jan 13 20:36:15.613284 kubelet[3445]: I0113 20:36:15.612448 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpm69\" (UniqueName: \"kubernetes.io/projected/ce29a6d0-bfdc-4202-9962-b36f5beb0539-kube-api-access-wpm69\") pod \"coredns-76f75df574-nnks6\" (UID: \"ce29a6d0-bfdc-4202-9962-b36f5beb0539\") " pod="kube-system/coredns-76f75df574-nnks6" Jan 13 20:36:15.613284 kubelet[3445]: I0113 20:36:15.612501 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9140be9-1ccd-4961-8015-ba225ab44dbe-config-volume\") pod \"coredns-76f75df574-gfqqp\" (UID: \"b9140be9-1ccd-4961-8015-ba225ab44dbe\") " pod="kube-system/coredns-76f75df574-gfqqp" Jan 13 20:36:15.613284 kubelet[3445]: I0113 20:36:15.612769 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kzn7\" (UniqueName: \"kubernetes.io/projected/b9140be9-1ccd-4961-8015-ba225ab44dbe-kube-api-access-2kzn7\") pod \"coredns-76f75df574-gfqqp\" (UID: \"b9140be9-1ccd-4961-8015-ba225ab44dbe\") " pod="kube-system/coredns-76f75df574-gfqqp" Jan 13 20:36:15.613872 kubelet[3445]: I0113 20:36:15.612823 3445 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce29a6d0-bfdc-4202-9962-b36f5beb0539-config-volume\") pod \"coredns-76f75df574-nnks6\" (UID: \"ce29a6d0-bfdc-4202-9962-b36f5beb0539\") " pod="kube-system/coredns-76f75df574-nnks6" Jan 13 20:36:15.907999 containerd[1891]: time="2025-01-13T20:36:15.907943219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nnks6,Uid:ce29a6d0-bfdc-4202-9962-b36f5beb0539,Namespace:kube-system,Attempt:0,}" Jan 13 20:36:15.919435 containerd[1891]: time="2025-01-13T20:36:15.919019555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gfqqp,Uid:b9140be9-1ccd-4961-8015-ba225ab44dbe,Namespace:kube-system,Attempt:0,}" Jan 13 20:36:16.001025 containerd[1891]: time="2025-01-13T20:36:16.000966103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gfqqp,Uid:b9140be9-1ccd-4961-8015-ba225ab44dbe,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4383b6f5ac5d7ca13198ec3f0f1748f7df94b7266a3b8853a42eac9cb489d22\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 20:36:16.001517 kubelet[3445]: E0113 20:36:16.001460 3445 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4383b6f5ac5d7ca13198ec3f0f1748f7df94b7266a3b8853a42eac9cb489d22\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 20:36:16.001837 kubelet[3445]: E0113 20:36:16.001561 3445 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4383b6f5ac5d7ca13198ec3f0f1748f7df94b7266a3b8853a42eac9cb489d22\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-gfqqp" Jan 13 20:36:16.001837 kubelet[3445]: E0113 20:36:16.001621 3445 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4383b6f5ac5d7ca13198ec3f0f1748f7df94b7266a3b8853a42eac9cb489d22\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-gfqqp" Jan 13 20:36:16.001837 kubelet[3445]: E0113 20:36:16.001831 3445 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-gfqqp_kube-system(b9140be9-1ccd-4961-8015-ba225ab44dbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-gfqqp_kube-system(b9140be9-1ccd-4961-8015-ba225ab44dbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4383b6f5ac5d7ca13198ec3f0f1748f7df94b7266a3b8853a42eac9cb489d22\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-gfqqp" podUID="b9140be9-1ccd-4961-8015-ba225ab44dbe" Jan 13 20:36:16.002372 containerd[1891]: time="2025-01-13T20:36:16.002178463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nnks6,Uid:ce29a6d0-bfdc-4202-9962-b36f5beb0539,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"889df4b943d6165b8e796b9a27969c4b60f2d40de87604ca22bbe51766d17ef8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 20:36:16.003076 kubelet[3445]: E0113 20:36:16.003053 3445 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"889df4b943d6165b8e796b9a27969c4b60f2d40de87604ca22bbe51766d17ef8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 20:36:16.003172 kubelet[3445]: E0113 20:36:16.003104 3445 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"889df4b943d6165b8e796b9a27969c4b60f2d40de87604ca22bbe51766d17ef8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-nnks6" Jan 13 20:36:16.003172 kubelet[3445]: E0113 20:36:16.003131 3445 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"889df4b943d6165b8e796b9a27969c4b60f2d40de87604ca22bbe51766d17ef8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-nnks6" Jan 13 20:36:16.003422 kubelet[3445]: E0113 20:36:16.003198 3445 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-nnks6_kube-system(ce29a6d0-bfdc-4202-9962-b36f5beb0539)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-nnks6_kube-system(ce29a6d0-bfdc-4202-9962-b36f5beb0539)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"889df4b943d6165b8e796b9a27969c4b60f2d40de87604ca22bbe51766d17ef8\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-nnks6" 
podUID="ce29a6d0-bfdc-4202-9962-b36f5beb0539" Jan 13 20:36:16.242399 containerd[1891]: time="2025-01-13T20:36:16.241798615Z" level=info msg="CreateContainer within sandbox \"add853bf63e97367bae0d34426258b358aa67fa6fc8ea929dcc55c3cb9d6ec2c\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 13 20:36:16.291058 containerd[1891]: time="2025-01-13T20:36:16.291010545Z" level=info msg="CreateContainer within sandbox \"add853bf63e97367bae0d34426258b358aa67fa6fc8ea929dcc55c3cb9d6ec2c\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"9fab015da9dadf977fef69f35864d147d616e6f472d787fa7c70b71bc7b5fc79\"" Jan 13 20:36:16.293216 containerd[1891]: time="2025-01-13T20:36:16.292039084Z" level=info msg="StartContainer for \"9fab015da9dadf977fef69f35864d147d616e6f472d787fa7c70b71bc7b5fc79\"" Jan 13 20:36:16.326097 systemd[1]: Started cri-containerd-9fab015da9dadf977fef69f35864d147d616e6f472d787fa7c70b71bc7b5fc79.scope - libcontainer container 9fab015da9dadf977fef69f35864d147d616e6f472d787fa7c70b71bc7b5fc79. Jan 13 20:36:16.377397 containerd[1891]: time="2025-01-13T20:36:16.377239470Z" level=info msg="StartContainer for \"9fab015da9dadf977fef69f35864d147d616e6f472d787fa7c70b71bc7b5fc79\" returns successfully" Jan 13 20:36:17.261978 kubelet[3445]: I0113 20:36:17.261244 3445 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-mnmq6" podStartSLOduration=3.831567112 podStartE2EDuration="10.261190442s" podCreationTimestamp="2025-01-13 20:36:07 +0000 UTC" firstStartedPulling="2025-01-13 20:36:08.874086068 +0000 UTC m=+14.052811132" lastFinishedPulling="2025-01-13 20:36:15.30370939 +0000 UTC m=+20.482434462" observedRunningTime="2025-01-13 20:36:17.260943161 +0000 UTC m=+22.439668245" watchObservedRunningTime="2025-01-13 20:36:17.261190442 +0000 UTC m=+22.439915527" Jan 13 20:36:17.442501 (udev-worker)[3982]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:36:17.466465 systemd-networkd[1763]: flannel.1: Link UP Jan 13 20:36:17.466476 systemd-networkd[1763]: flannel.1: Gained carrier Jan 13 20:36:19.158089 systemd-networkd[1763]: flannel.1: Gained IPv6LL Jan 13 20:36:21.597428 ntpd[1860]: Listen normally on 7 flannel.1 192.168.0.0:123 Jan 13 20:36:21.597522 ntpd[1860]: Listen normally on 8 flannel.1 [fe80::b864:c0ff:feb6:c121%4]:123 Jan 13 20:36:21.599281 ntpd[1860]: 13 Jan 20:36:21 ntpd[1860]: Listen normally on 7 flannel.1 192.168.0.0:123 Jan 13 20:36:21.599281 ntpd[1860]: 13 Jan 20:36:21 ntpd[1860]: Listen normally on 8 flannel.1 [fe80::b864:c0ff:feb6:c121%4]:123 Jan 13 20:36:30.088563 containerd[1891]: time="2025-01-13T20:36:30.088129689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gfqqp,Uid:b9140be9-1ccd-4961-8015-ba225ab44dbe,Namespace:kube-system,Attempt:0,}" Jan 13 20:36:30.088563 containerd[1891]: time="2025-01-13T20:36:30.088187773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nnks6,Uid:ce29a6d0-bfdc-4202-9962-b36f5beb0539,Namespace:kube-system,Attempt:0,}" Jan 13 20:36:30.180326 systemd-networkd[1763]: cni0: Link UP Jan 13 20:36:30.180373 systemd-networkd[1763]: cni0: Gained carrier Jan 13 20:36:30.185852 (udev-worker)[4137]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:36:30.189108 systemd-networkd[1763]: cni0: Lost carrier Jan 13 20:36:30.204567 (udev-worker)[4139]: Network interface NamePolicy= disabled on kernel command line. 
Jan 13 20:36:30.204588 systemd-networkd[1763]: vethb63d9b02: Link UP Jan 13 20:36:30.207673 kernel: cni0: port 1(vethb63d9b02) entered blocking state Jan 13 20:36:30.207754 kernel: cni0: port 1(vethb63d9b02) entered disabled state Jan 13 20:36:30.215937 kernel: vethb63d9b02: entered allmulticast mode Jan 13 20:36:30.216046 kernel: vethb63d9b02: entered promiscuous mode Jan 13 20:36:30.218188 kernel: cni0: port 1(vethb63d9b02) entered blocking state Jan 13 20:36:30.218278 kernel: cni0: port 1(vethb63d9b02) entered forwarding state Jan 13 20:36:30.218311 kernel: cni0: port 1(vethb63d9b02) entered disabled state Jan 13 20:36:30.220316 (udev-worker)[4141]: Network interface NamePolicy= disabled on kernel command line. Jan 13 20:36:30.225049 kernel: cni0: port 2(vetha634d6b5) entered blocking state Jan 13 20:36:30.225288 kernel: cni0: port 2(vetha634d6b5) entered disabled state Jan 13 20:36:30.233862 kernel: vetha634d6b5: entered allmulticast mode Jan 13 20:36:30.235840 kernel: vetha634d6b5: entered promiscuous mode Jan 13 20:36:30.238045 kernel: cni0: port 2(vetha634d6b5) entered blocking state Jan 13 20:36:30.238230 kernel: cni0: port 2(vetha634d6b5) entered forwarding state Jan 13 20:36:30.238265 kernel: cni0: port 2(vetha634d6b5) entered disabled state Jan 13 20:36:30.238915 systemd-networkd[1763]: vetha634d6b5: Link UP Jan 13 20:36:30.245324 kernel: cni0: port 1(vethb63d9b02) entered blocking state Jan 13 20:36:30.245427 kernel: cni0: port 1(vethb63d9b02) entered forwarding state Jan 13 20:36:30.246301 systemd-networkd[1763]: vethb63d9b02: Gained carrier Jan 13 20:36:30.248116 systemd-networkd[1763]: cni0: Gained carrier Jan 13 20:36:30.260222 containerd[1891]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000022938), "name":"cbr0", "type":"bridge"} Jan 13 20:36:30.260222 containerd[1891]: delegateAdd: netconf sent to delegate plugin: Jan 13 20:36:30.263955 kernel: cni0: port 2(vetha634d6b5) entered blocking state Jan 13 20:36:30.264176 kernel: cni0: port 2(vetha634d6b5) entered forwarding state Jan 13 20:36:30.264498 systemd-networkd[1763]: vetha634d6b5: Gained carrier Jan 13 20:36:30.273029 containerd[1891]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"} Jan 13 20:36:30.273029 containerd[1891]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018938), "name":"cbr0", "type":"bridge"} Jan 13 20:36:30.273029 containerd[1891]: delegateAdd: netconf sent to delegate plugin: Jan 13 20:36:30.312657 containerd[1891]: 
{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-01-13T20:36:30.312121717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:36:30.312657 containerd[1891]: time="2025-01-13T20:36:30.312214586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:36:30.312657 containerd[1891]: time="2025-01-13T20:36:30.312238663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:30.313951 containerd[1891]: time="2025-01-13T20:36:30.312347352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:30.334436 containerd[1891]: time="2025-01-13T20:36:30.334273552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:36:30.336306 containerd[1891]: time="2025-01-13T20:36:30.336031616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:36:30.336306 containerd[1891]: time="2025-01-13T20:36:30.336090590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:30.336306 containerd[1891]: time="2025-01-13T20:36:30.336199094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:36:30.357062 systemd[1]: Started cri-containerd-456aabb108fba9d12777fab7c0e53920bf46fc5cdf4f5005c5f7dc3f409e500f.scope - libcontainer container 456aabb108fba9d12777fab7c0e53920bf46fc5cdf4f5005c5f7dc3f409e500f. Jan 13 20:36:30.382089 systemd[1]: Started cri-containerd-5e13be225723f0273b7c946e1c19741cc5aff8e679653c43a4c179bff8835a95.scope - libcontainer container 5e13be225723f0273b7c946e1c19741cc5aff8e679653c43a4c179bff8835a95. 
Jan 13 20:36:30.470497 containerd[1891]: time="2025-01-13T20:36:30.470453882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gfqqp,Uid:b9140be9-1ccd-4961-8015-ba225ab44dbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e13be225723f0273b7c946e1c19741cc5aff8e679653c43a4c179bff8835a95\"" Jan 13 20:36:30.475310 containerd[1891]: time="2025-01-13T20:36:30.475272991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-nnks6,Uid:ce29a6d0-bfdc-4202-9962-b36f5beb0539,Namespace:kube-system,Attempt:0,} returns sandbox id \"456aabb108fba9d12777fab7c0e53920bf46fc5cdf4f5005c5f7dc3f409e500f\"" Jan 13 20:36:30.481571 containerd[1891]: time="2025-01-13T20:36:30.476681507Z" level=info msg="CreateContainer within sandbox \"5e13be225723f0273b7c946e1c19741cc5aff8e679653c43a4c179bff8835a95\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:36:30.526214 containerd[1891]: time="2025-01-13T20:36:30.525965857Z" level=info msg="CreateContainer within sandbox \"456aabb108fba9d12777fab7c0e53920bf46fc5cdf4f5005c5f7dc3f409e500f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:36:30.535886 containerd[1891]: time="2025-01-13T20:36:30.535759593Z" level=info msg="CreateContainer within sandbox \"5e13be225723f0273b7c946e1c19741cc5aff8e679653c43a4c179bff8835a95\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"40735eb1cc8331c5e542220fb2656eb40c082b6b09dc9fac7d39802d6dbfb60b\"" Jan 13 20:36:30.539856 containerd[1891]: time="2025-01-13T20:36:30.539601515Z" level=info msg="StartContainer for \"40735eb1cc8331c5e542220fb2656eb40c082b6b09dc9fac7d39802d6dbfb60b\"" Jan 13 20:36:30.586738 containerd[1891]: time="2025-01-13T20:36:30.586598208Z" level=info msg="CreateContainer within sandbox \"456aabb108fba9d12777fab7c0e53920bf46fc5cdf4f5005c5f7dc3f409e500f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04200da623df513f56595d3778ad71864762c154bf3dc57f0c30b7fedee2936e\"" Jan 13 20:36:30.595958 containerd[1891]: time="2025-01-13T20:36:30.595507824Z" level=info msg="StartContainer for \"04200da623df513f56595d3778ad71864762c154bf3dc57f0c30b7fedee2936e\"" Jan 13 20:36:30.626608 systemd[1]: Started cri-containerd-40735eb1cc8331c5e542220fb2656eb40c082b6b09dc9fac7d39802d6dbfb60b.scope - libcontainer container 40735eb1cc8331c5e542220fb2656eb40c082b6b09dc9fac7d39802d6dbfb60b. Jan 13 20:36:30.681329 systemd[1]: Started cri-containerd-04200da623df513f56595d3778ad71864762c154bf3dc57f0c30b7fedee2936e.scope - libcontainer container 04200da623df513f56595d3778ad71864762c154bf3dc57f0c30b7fedee2936e. Jan 13 20:36:30.686780 containerd[1891]: time="2025-01-13T20:36:30.686648617Z" level=info msg="StartContainer for \"40735eb1cc8331c5e542220fb2656eb40c082b6b09dc9fac7d39802d6dbfb60b\" returns successfully" Jan 13 20:36:30.729357 containerd[1891]: time="2025-01-13T20:36:30.729228091Z" level=info msg="StartContainer for \"04200da623df513f56595d3778ad71864762c154bf3dc57f0c30b7fedee2936e\" returns successfully" Jan 13 20:36:31.111672 systemd[1]: run-containerd-runc-k8s.io-456aabb108fba9d12777fab7c0e53920bf46fc5cdf4f5005c5f7dc3f409e500f-runc.qAZZq2.mount: Deactivated successfully. 
Jan 13 20:36:31.386500 systemd-networkd[1763]: vetha634d6b5: Gained IPv6LL Jan 13 20:36:31.435798 kubelet[3445]: I0113 20:36:31.435107 3445 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-nnks6" podStartSLOduration=23.42171773 podStartE2EDuration="23.42171773s" podCreationTimestamp="2025-01-13 20:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:36:31.387434557 +0000 UTC m=+36.566159638" watchObservedRunningTime="2025-01-13 20:36:31.42171773 +0000 UTC m=+36.600442795" Jan 13 20:36:31.472789 kubelet[3445]: I0113 20:36:31.472750 3445 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-gfqqp" podStartSLOduration=23.472706261 podStartE2EDuration="23.472706261s" podCreationTimestamp="2025-01-13 20:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:36:31.471450176 +0000 UTC m=+36.650175271" watchObservedRunningTime="2025-01-13 20:36:31.472706261 +0000 UTC m=+36.651431327" Jan 13 20:36:31.701999 systemd-networkd[1763]: vethb63d9b02: Gained IPv6LL Jan 13 20:36:32.022042 systemd-networkd[1763]: cni0: Gained IPv6LL Jan 13 20:36:34.597516 ntpd[1860]: Listen normally on 9 cni0 192.168.0.1:123 Jan 13 20:36:34.597612 ntpd[1860]: Listen normally on 10 cni0 [fe80::586f:9fff:fe24:be7d%5]:123 Jan 13 20:36:34.598640 ntpd[1860]: 13 Jan 20:36:34 ntpd[1860]: Listen normally on 9 cni0 192.168.0.1:123 Jan 13 20:36:34.598640 ntpd[1860]: 13 Jan 20:36:34 ntpd[1860]: Listen normally on 10 cni0 [fe80::586f:9fff:fe24:be7d%5]:123 Jan 13 20:36:34.598640 ntpd[1860]: 13 Jan 20:36:34 ntpd[1860]: Listen normally on 11 vethb63d9b02 [fe80::30c3:39ff:fed4:35ac%6]:123 Jan 13 20:36:34.598640 ntpd[1860]: 13 Jan 20:36:34 ntpd[1860]: Listen normally on 12 vetha634d6b5 [fe80::481d:57ff:fed0:39a7%7]:123 Jan 13 20:36:34.597671 ntpd[1860]: Listen normally on 11 vethb63d9b02 [fe80::30c3:39ff:fed4:35ac%6]:123 Jan 13 20:36:34.597711 ntpd[1860]: Listen normally on 12 vetha634d6b5 [fe80::481d:57ff:fed0:39a7%7]:123 Jan 13 20:36:39.357948 systemd[1]: Started sshd@7-172.31.24.144:22-139.178.89.65:49304.service - OpenSSH per-connection server daemon (139.178.89.65:49304). Jan 13 20:36:39.561008 sshd[4384]: Accepted publickey for core from 139.178.89.65 port 49304 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:36:39.562721 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:39.568723 systemd-logind[1867]: New session 8 of user core. Jan 13 20:36:39.577040 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:36:39.812456 sshd[4386]: Connection closed by 139.178.89.65 port 49304 Jan 13 20:36:39.815109 sshd-session[4384]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:39.822956 systemd[1]: sshd@7-172.31.24.144:22-139.178.89.65:49304.service: Deactivated successfully. Jan 13 20:36:39.827275 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:36:39.829111 systemd-logind[1867]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:36:39.832687 systemd-logind[1867]: Removed session 8. Jan 13 20:36:44.847436 systemd[1]: Started sshd@8-172.31.24.144:22-139.178.89.65:46586.service - OpenSSH per-connection server daemon (139.178.89.65:46586). 
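A quick consistency check on the podStartSLOduration figures just above: coredns-76f75df574-nnks6 was created at 20:36:08 and observed running at 20:36:31.42, and 31.42 - 8.00 = 23.42 s, which is exactly the reported podStartSLOduration=23.42171773. Almost all of that window was spent waiting for flannel's CNI config to appear, per the failed sandbox attempts at 20:36:16 and the successful retries at 20:36:30.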
Jan 13 20:36:45.023198 sshd[4419]: Accepted publickey for core from 139.178.89.65 port 46586 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:36:45.023928 sshd-session[4419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:45.029494 systemd-logind[1867]: New session 9 of user core. Jan 13 20:36:45.037551 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:36:45.234302 sshd[4421]: Connection closed by 139.178.89.65 port 46586 Jan 13 20:36:45.236002 sshd-session[4419]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:45.240366 systemd-logind[1867]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:36:45.241541 systemd[1]: sshd@8-172.31.24.144:22-139.178.89.65:46586.service: Deactivated successfully. Jan 13 20:36:45.245035 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:36:45.246488 systemd-logind[1867]: Removed session 9. Jan 13 20:36:50.278252 systemd[1]: Started sshd@9-172.31.24.144:22-139.178.89.65:46594.service - OpenSSH per-connection server daemon (139.178.89.65:46594). Jan 13 20:36:50.448850 sshd[4456]: Accepted publickey for core from 139.178.89.65 port 46594 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:36:50.450255 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:50.456379 systemd-logind[1867]: New session 10 of user core. Jan 13 20:36:50.465021 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:36:50.718444 sshd[4458]: Connection closed by 139.178.89.65 port 46594 Jan 13 20:36:50.719068 sshd-session[4456]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:50.723489 systemd-logind[1867]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:36:50.724525 systemd[1]: sshd@9-172.31.24.144:22-139.178.89.65:46594.service: Deactivated successfully. Jan 13 20:36:50.726977 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:36:50.728417 systemd-logind[1867]: Removed session 10. Jan 13 20:36:50.756597 systemd[1]: Started sshd@10-172.31.24.144:22-139.178.89.65:46598.service - OpenSSH per-connection server daemon (139.178.89.65:46598). Jan 13 20:36:50.916827 sshd[4470]: Accepted publickey for core from 139.178.89.65 port 46598 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:36:50.917720 sshd-session[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:50.923472 systemd-logind[1867]: New session 11 of user core. Jan 13 20:36:50.927994 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:36:51.178625 sshd[4472]: Connection closed by 139.178.89.65 port 46598 Jan 13 20:36:51.182139 sshd-session[4470]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:51.188348 systemd-logind[1867]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:36:51.189794 systemd[1]: sshd@10-172.31.24.144:22-139.178.89.65:46598.service: Deactivated successfully. Jan 13 20:36:51.195916 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:36:51.216609 systemd-logind[1867]: Removed session 11. Jan 13 20:36:51.228278 systemd[1]: Started sshd@11-172.31.24.144:22-139.178.89.65:38218.service - OpenSSH per-connection server daemon (139.178.89.65:38218). 
Jan 13 20:36:51.418793 sshd[4481]: Accepted publickey for core from 139.178.89.65 port 38218 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:36:51.419564 sshd-session[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:51.431393 systemd-logind[1867]: New session 12 of user core. Jan 13 20:36:51.438026 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:36:51.717413 sshd[4483]: Connection closed by 139.178.89.65 port 38218 Jan 13 20:36:51.719055 sshd-session[4481]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:51.722692 systemd[1]: sshd@11-172.31.24.144:22-139.178.89.65:38218.service: Deactivated successfully. Jan 13 20:36:51.726142 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:36:51.729123 systemd-logind[1867]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:36:51.730603 systemd-logind[1867]: Removed session 12. Jan 13 20:36:56.776033 systemd[1]: Started sshd@12-172.31.24.144:22-139.178.89.65:38230.service - OpenSSH per-connection server daemon (139.178.89.65:38230). Jan 13 20:36:56.988135 sshd[4516]: Accepted publickey for core from 139.178.89.65 port 38230 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:36:56.988943 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:36:57.007168 systemd-logind[1867]: New session 13 of user core. Jan 13 20:36:57.010101 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:36:57.315233 sshd[4518]: Connection closed by 139.178.89.65 port 38230 Jan 13 20:36:57.315727 sshd-session[4516]: pam_unix(sshd:session): session closed for user core Jan 13 20:36:57.323045 systemd-logind[1867]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:36:57.324250 systemd[1]: sshd@12-172.31.24.144:22-139.178.89.65:38230.service: Deactivated successfully. Jan 13 20:36:57.328943 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:36:57.330578 systemd-logind[1867]: Removed session 13. Jan 13 20:37:02.362189 systemd[1]: Started sshd@13-172.31.24.144:22-139.178.89.65:59972.service - OpenSSH per-connection server daemon (139.178.89.65:59972). Jan 13 20:37:02.538884 sshd[4552]: Accepted publickey for core from 139.178.89.65 port 59972 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:37:02.541949 sshd-session[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:02.551636 systemd-logind[1867]: New session 14 of user core. Jan 13 20:37:02.555113 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:37:02.751050 sshd[4554]: Connection closed by 139.178.89.65 port 59972 Jan 13 20:37:02.752644 sshd-session[4552]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:02.756862 systemd-logind[1867]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:37:02.758628 systemd[1]: sshd@13-172.31.24.144:22-139.178.89.65:59972.service: Deactivated successfully. Jan 13 20:37:02.761759 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:37:02.763113 systemd-logind[1867]: Removed session 14. Jan 13 20:37:02.791462 systemd[1]: Started sshd@14-172.31.24.144:22-139.178.89.65:59978.service - OpenSSH per-connection server daemon (139.178.89.65:59978). 
Jan 13 20:37:02.960469 sshd[4571]: Accepted publickey for core from 139.178.89.65 port 59978 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:37:02.962199 sshd-session[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:02.968007 systemd-logind[1867]: New session 15 of user core. Jan 13 20:37:02.975035 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:37:03.587390 sshd[4588]: Connection closed by 139.178.89.65 port 59978 Jan 13 20:37:03.588310 sshd-session[4571]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:03.592288 systemd[1]: sshd@14-172.31.24.144:22-139.178.89.65:59978.service: Deactivated successfully. Jan 13 20:37:03.594970 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:37:03.596541 systemd-logind[1867]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:37:03.599301 systemd-logind[1867]: Removed session 15. Jan 13 20:37:03.622572 systemd[1]: Started sshd@15-172.31.24.144:22-139.178.89.65:59990.service - OpenSSH per-connection server daemon (139.178.89.65:59990). Jan 13 20:37:03.834597 sshd[4597]: Accepted publickey for core from 139.178.89.65 port 59990 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:37:03.835353 sshd-session[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:03.841122 systemd-logind[1867]: New session 16 of user core. Jan 13 20:37:03.845016 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:37:05.525263 sshd[4599]: Connection closed by 139.178.89.65 port 59990 Jan 13 20:37:05.527585 sshd-session[4597]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:05.533710 systemd-logind[1867]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:37:05.534180 systemd[1]: sshd@15-172.31.24.144:22-139.178.89.65:59990.service: Deactivated successfully. Jan 13 20:37:05.539195 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:37:05.544290 systemd-logind[1867]: Removed session 16. Jan 13 20:37:05.561169 systemd[1]: Started sshd@16-172.31.24.144:22-139.178.89.65:60006.service - OpenSSH per-connection server daemon (139.178.89.65:60006). Jan 13 20:37:05.736640 sshd[4615]: Accepted publickey for core from 139.178.89.65 port 60006 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY Jan 13 20:37:05.738259 sshd-session[4615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:37:05.747897 systemd-logind[1867]: New session 17 of user core. Jan 13 20:37:05.755116 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:37:06.083953 sshd[4617]: Connection closed by 139.178.89.65 port 60006 Jan 13 20:37:06.085723 sshd-session[4615]: pam_unix(sshd:session): session closed for user core Jan 13 20:37:06.089370 systemd-logind[1867]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:37:06.090034 systemd[1]: sshd@16-172.31.24.144:22-139.178.89.65:60006.service: Deactivated successfully. Jan 13 20:37:06.092536 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:37:06.094850 systemd-logind[1867]: Removed session 17. Jan 13 20:37:06.117359 systemd[1]: Started sshd@17-172.31.24.144:22-139.178.89.65:60022.service - OpenSSH per-connection server daemon (139.178.89.65:60022). 
Jan 13 20:37:06.296924 sshd[4626]: Accepted publickey for core from 139.178.89.65 port 60022 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY
Jan 13 20:37:06.297613 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:37:06.305485 systemd-logind[1867]: New session 18 of user core.
Jan 13 20:37:06.308997 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 20:37:06.557295 sshd[4628]: Connection closed by 139.178.89.65 port 60022
Jan 13 20:37:06.558223 sshd-session[4626]: pam_unix(sshd:session): session closed for user core
Jan 13 20:37:06.568869 systemd[1]: sshd@17-172.31.24.144:22-139.178.89.65:60022.service: Deactivated successfully.
Jan 13 20:37:06.575590 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 20:37:06.583294 systemd-logind[1867]: Session 18 logged out. Waiting for processes to exit.
Jan 13 20:37:06.588673 systemd-logind[1867]: Removed session 18.
Jan 13 20:37:11.594455 systemd[1]: Started sshd@18-172.31.24.144:22-139.178.89.65:45902.service - OpenSSH per-connection server daemon (139.178.89.65:45902).
Jan 13 20:37:11.758211 sshd[4663]: Accepted publickey for core from 139.178.89.65 port 45902 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY
Jan 13 20:37:11.760445 sshd-session[4663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:37:11.766074 systemd-logind[1867]: New session 19 of user core.
Jan 13 20:37:11.770416 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 20:37:11.961502 sshd[4665]: Connection closed by 139.178.89.65 port 45902
Jan 13 20:37:11.962537 sshd-session[4663]: pam_unix(sshd:session): session closed for user core
Jan 13 20:37:11.966078 systemd[1]: sshd@18-172.31.24.144:22-139.178.89.65:45902.service: Deactivated successfully.
Jan 13 20:37:11.968415 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 20:37:11.970789 systemd-logind[1867]: Session 19 logged out. Waiting for processes to exit.
Jan 13 20:37:11.972333 systemd-logind[1867]: Removed session 19.
Jan 13 20:37:17.029303 systemd[1]: Started sshd@19-172.31.24.144:22-139.178.89.65:45910.service - OpenSSH per-connection server daemon (139.178.89.65:45910).
Jan 13 20:37:17.242682 sshd[4699]: Accepted publickey for core from 139.178.89.65 port 45910 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY
Jan 13 20:37:17.244700 sshd-session[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:37:17.254591 systemd-logind[1867]: New session 20 of user core.
Jan 13 20:37:17.260323 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 20:37:17.469802 sshd[4701]: Connection closed by 139.178.89.65 port 45910
Jan 13 20:37:17.471973 sshd-session[4699]: pam_unix(sshd:session): session closed for user core
Jan 13 20:37:17.476736 systemd-logind[1867]: Session 20 logged out. Waiting for processes to exit.
Jan 13 20:37:17.477727 systemd[1]: sshd@19-172.31.24.144:22-139.178.89.65:45910.service: Deactivated successfully.
Jan 13 20:37:17.481173 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 20:37:17.482627 systemd-logind[1867]: Removed session 20.
Jan 13 20:37:22.511624 systemd[1]: Started sshd@20-172.31.24.144:22-139.178.89.65:49920.service - OpenSSH per-connection server daemon (139.178.89.65:49920).
Jan 13 20:37:22.710020 sshd[4733]: Accepted publickey for core from 139.178.89.65 port 49920 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY
Jan 13 20:37:22.711716 sshd-session[4733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:37:22.721020 systemd-logind[1867]: New session 21 of user core.
Jan 13 20:37:22.724111 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 20:37:22.941975 sshd[4735]: Connection closed by 139.178.89.65 port 49920
Jan 13 20:37:22.942745 sshd-session[4733]: pam_unix(sshd:session): session closed for user core
Jan 13 20:37:22.948017 systemd-logind[1867]: Session 21 logged out. Waiting for processes to exit.
Jan 13 20:37:22.950550 systemd[1]: sshd@20-172.31.24.144:22-139.178.89.65:49920.service: Deactivated successfully.
Jan 13 20:37:22.953791 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 20:37:22.956661 systemd-logind[1867]: Removed session 21.
Jan 13 20:37:27.981156 systemd[1]: Started sshd@21-172.31.24.144:22-139.178.89.65:49934.service - OpenSSH per-connection server daemon (139.178.89.65:49934).
Jan 13 20:37:28.158398 sshd[4773]: Accepted publickey for core from 139.178.89.65 port 49934 ssh2: RSA SHA256:EuSc9fTRQXLwCQZEkDl5fiJPvgrOIGSulDG6+Z++tMY
Jan 13 20:37:28.161209 sshd-session[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:37:28.166394 systemd-logind[1867]: New session 22 of user core.
Jan 13 20:37:28.171989 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 20:37:28.386921 sshd[4790]: Connection closed by 139.178.89.65 port 49934
Jan 13 20:37:28.385555 sshd-session[4773]: pam_unix(sshd:session): session closed for user core
Jan 13 20:37:28.391220 systemd-logind[1867]: Session 22 logged out. Waiting for processes to exit.
Jan 13 20:37:28.391761 systemd[1]: sshd@21-172.31.24.144:22-139.178.89.65:49934.service: Deactivated successfully.
Jan 13 20:37:28.394183 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 20:37:28.397762 systemd-logind[1867]: Removed session 22.
Jan 13 20:38:04.055154 systemd[1]: cri-containerd-a2c5f6b8256c404893b5b096db0e5451b8de55677f536f1adc3c43c791ea8b9b.scope: Deactivated successfully.
Jan 13 20:38:04.056199 systemd[1]: cri-containerd-a2c5f6b8256c404893b5b096db0e5451b8de55677f536f1adc3c43c791ea8b9b.scope: Consumed 2.406s CPU time, 25.6M memory peak, 0B memory swap peak.
Jan 13 20:38:04.091733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2c5f6b8256c404893b5b096db0e5451b8de55677f536f1adc3c43c791ea8b9b-rootfs.mount: Deactivated successfully.
Jan 13 20:38:04.095060 containerd[1891]: time="2025-01-13T20:38:04.094986967Z" level=info msg="shim disconnected" id=a2c5f6b8256c404893b5b096db0e5451b8de55677f536f1adc3c43c791ea8b9b namespace=k8s.io
Jan 13 20:38:04.095060 containerd[1891]: time="2025-01-13T20:38:04.095056500Z" level=warning msg="cleaning up after shim disconnected" id=a2c5f6b8256c404893b5b096db0e5451b8de55677f536f1adc3c43c791ea8b9b namespace=k8s.io
Jan 13 20:38:04.095774 containerd[1891]: time="2025-01-13T20:38:04.095068515Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:38:04.573073 kubelet[3445]: I0113 20:38:04.573034 3445 scope.go:117] "RemoveContainer" containerID="a2c5f6b8256c404893b5b096db0e5451b8de55677f536f1adc3c43c791ea8b9b"
Jan 13 20:38:04.579778 containerd[1891]: time="2025-01-13T20:38:04.579731593Z" level=info msg="CreateContainer within sandbox \"d509b960d39be3946ed7d1e64770796571719b3b7bf6da3c538b57aba1d0b66a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 13 20:38:04.598088 containerd[1891]: time="2025-01-13T20:38:04.598046485Z" level=info msg="CreateContainer within sandbox \"d509b960d39be3946ed7d1e64770796571719b3b7bf6da3c538b57aba1d0b66a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8872871a98cf05693844bb144a6059bf2e9fd2b25d596c726cbab988bdaf44d6\""
Jan 13 20:38:04.598681 containerd[1891]: time="2025-01-13T20:38:04.598653232Z" level=info msg="StartContainer for \"8872871a98cf05693844bb144a6059bf2e9fd2b25d596c726cbab988bdaf44d6\""
Jan 13 20:38:04.641317 systemd[1]: Started cri-containerd-8872871a98cf05693844bb144a6059bf2e9fd2b25d596c726cbab988bdaf44d6.scope - libcontainer container 8872871a98cf05693844bb144a6059bf2e9fd2b25d596c726cbab988bdaf44d6.
Jan 13 20:38:04.702502 containerd[1891]: time="2025-01-13T20:38:04.702406818Z" level=info msg="StartContainer for \"8872871a98cf05693844bb144a6059bf2e9fd2b25d596c726cbab988bdaf44d6\" returns successfully"
Jan 13 20:38:05.091544 systemd[1]: run-containerd-runc-k8s.io-8872871a98cf05693844bb144a6059bf2e9fd2b25d596c726cbab988bdaf44d6-runc.IDbWVG.mount: Deactivated successfully.
Jan 13 20:38:07.062386 kubelet[3445]: E0113 20:38:07.062347 3445 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-144?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 13 20:38:08.883859 systemd[1]: cri-containerd-e8ad9ac2b81de376b7d0e92f592cee6b6ce4049e59792a2db835181f3cc027f9.scope: Deactivated successfully.
Jan 13 20:38:08.884151 systemd[1]: cri-containerd-e8ad9ac2b81de376b7d0e92f592cee6b6ce4049e59792a2db835181f3cc027f9.scope: Consumed 1.473s CPU time, 18.8M memory peak, 0B memory swap peak.
Jan 13 20:38:08.918355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8ad9ac2b81de376b7d0e92f592cee6b6ce4049e59792a2db835181f3cc027f9-rootfs.mount: Deactivated successfully.
Jan 13 20:38:08.927625 containerd[1891]: time="2025-01-13T20:38:08.927478275Z" level=info msg="shim disconnected" id=e8ad9ac2b81de376b7d0e92f592cee6b6ce4049e59792a2db835181f3cc027f9 namespace=k8s.io
Jan 13 20:38:08.927625 containerd[1891]: time="2025-01-13T20:38:08.927538878Z" level=warning msg="cleaning up after shim disconnected" id=e8ad9ac2b81de376b7d0e92f592cee6b6ce4049e59792a2db835181f3cc027f9 namespace=k8s.io
Jan 13 20:38:08.927625 containerd[1891]: time="2025-01-13T20:38:08.927556213Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:38:09.585794 kubelet[3445]: I0113 20:38:09.585765 3445 scope.go:117] "RemoveContainer" containerID="e8ad9ac2b81de376b7d0e92f592cee6b6ce4049e59792a2db835181f3cc027f9"
Jan 13 20:38:09.588613 containerd[1891]: time="2025-01-13T20:38:09.588577695Z" level=info msg="CreateContainer within sandbox \"82450a4de7422f08cd8471ddca161b272368172ee565ce271ddf8478dddb3972\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 13 20:38:09.608358 containerd[1891]: time="2025-01-13T20:38:09.608318540Z" level=info msg="CreateContainer within sandbox \"82450a4de7422f08cd8471ddca161b272368172ee565ce271ddf8478dddb3972\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2c834c24537f8832a33e81f91bb060c0ece62bc06ddfad5b1324b4bed2c28905\""
Jan 13 20:38:09.609298 containerd[1891]: time="2025-01-13T20:38:09.609263832Z" level=info msg="StartContainer for \"2c834c24537f8832a33e81f91bb060c0ece62bc06ddfad5b1324b4bed2c28905\""
Jan 13 20:38:09.670065 systemd[1]: Started cri-containerd-2c834c24537f8832a33e81f91bb060c0ece62bc06ddfad5b1324b4bed2c28905.scope - libcontainer container 2c834c24537f8832a33e81f91bb060c0ece62bc06ddfad5b1324b4bed2c28905.
Jan 13 20:38:09.724538 containerd[1891]: time="2025-01-13T20:38:09.724475917Z" level=info msg="StartContainer for \"2c834c24537f8832a33e81f91bb060c0ece62bc06ddfad5b1324b4bed2c28905\" returns successfully"
Jan 13 20:38:17.063962 kubelet[3445]: E0113 20:38:17.063580 3445 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-144?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"