Feb 13 19:50:44.207582 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025
Feb 13 19:50:44.207630 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:50:44.207648 kernel: BIOS-provided physical RAM map:
Feb 13 19:50:44.207660 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:50:44.207671 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:50:44.207683 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:50:44.207701 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 19:50:44.207714 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 19:50:44.207726 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 19:50:44.207738 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:50:44.207751 kernel: NX (Execute Disable) protection: active
Feb 13 19:50:44.207763 kernel: APIC: Static calls initialized
Feb 13 19:50:44.207776 kernel: SMBIOS 2.7 present.
Feb 13 19:50:44.207789 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 19:50:44.207808 kernel: Hypervisor detected: KVM
Feb 13 19:50:44.207822 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:50:44.207836 kernel: kvm-clock: using sched offset of 9266711855 cycles
Feb 13 19:50:44.207851 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:50:44.207880 kernel: tsc: Detected 2499.998 MHz processor
Feb 13 19:50:44.208319 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:50:44.208333 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:50:44.208351 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 19:50:44.208366 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:50:44.208379 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:50:44.208393 kernel: Using GB pages for direct mapping
Feb 13 19:50:44.208406 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:50:44.208420 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 19:50:44.208433 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 19:50:44.208447 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 19:50:44.208460 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 19:50:44.208476 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 19:50:44.208490 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 19:50:44.208502 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 19:50:44.209155 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 19:50:44.209170 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 19:50:44.209183 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 19:50:44.209197 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 19:50:44.209210 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 19:50:44.209224 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 19:50:44.209246 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 19:50:44.209265 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 19:50:44.209280 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 19:50:44.209294 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 19:50:44.209308 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 19:50:44.209326 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 19:50:44.209340 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 19:50:44.209354 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 19:50:44.209369 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 19:50:44.209383 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 19:50:44.213936 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 19:50:44.213963 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 19:50:44.213978 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 19:50:44.213993 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 19:50:44.214015 kernel: Zone ranges:
Feb 13 19:50:44.214030 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:50:44.214044 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 19:50:44.214059 kernel: Normal empty
Feb 13 19:50:44.214073 kernel: Movable zone start for each node
Feb 13 19:50:44.214087 kernel: Early memory node ranges
Feb 13 19:50:44.214102 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:50:44.214116 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 19:50:44.214130 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 19:50:44.214148 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:50:44.214162 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:50:44.214176 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 19:50:44.214191 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 19:50:44.214205 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:50:44.214219 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 19:50:44.214233 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:50:44.214248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:50:44.214262 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:50:44.217318 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:50:44.217360 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:50:44.217375 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:50:44.217390 kernel: TSC deadline timer available
Feb 13 19:50:44.217405 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 19:50:44.217419 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:50:44.217434 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 19:50:44.217448 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:50:44.217463 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:50:44.217478 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 19:50:44.217496 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 19:50:44.217510 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 19:50:44.217525 kernel: pcpu-alloc: [0] 0 1
Feb 13 19:50:44.217539 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:50:44.217553 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:50:44.217570 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:50:44.217585 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:50:44.217599 kernel: random: crng init done
Feb 13 19:50:44.217616 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:50:44.217631 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 19:50:44.217645 kernel: Fallback order for Node 0: 0
Feb 13 19:50:44.217659 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 13 19:50:44.217673 kernel: Policy zone: DMA32
Feb 13 19:50:44.217686 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:50:44.217700 kernel: Memory: 1932348K/2057760K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved)
Feb 13 19:50:44.217713 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:50:44.217730 kernel: Kernel/User page tables isolation: enabled
Feb 13 19:50:44.217742 kernel: ftrace: allocating 37923 entries in 149 pages
Feb 13 19:50:44.217756 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:50:44.217769 kernel: Dynamic Preempt: voluntary
Feb 13 19:50:44.217791 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:50:44.217805 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:50:44.217817 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:50:44.217829 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:50:44.217843 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:50:44.217858 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:50:44.218190 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:50:44.218204 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:50:44.218218 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 19:50:44.218231 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:50:44.218244 kernel: Console: colour VGA+ 80x25
Feb 13 19:50:44.218257 kernel: printk: console [ttyS0] enabled
Feb 13 19:50:44.218270 kernel: ACPI: Core revision 20230628
Feb 13 19:50:44.218284 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 19:50:44.218297 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:50:44.218314 kernel: x2apic enabled
Feb 13 19:50:44.218327 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:50:44.218366 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Feb 13 19:50:44.218382 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Feb 13 19:50:44.218396 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 19:50:44.218409 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 19:50:44.218423 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:50:44.218436 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:50:44.218449 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:50:44.218463 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:50:44.218477 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 19:50:44.218490 kernel: RETBleed: Vulnerable
Feb 13 19:50:44.218508 kernel: Speculative Store Bypass: Vulnerable
Feb 13 19:50:44.218522 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:50:44.218536 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:50:44.218549 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 19:50:44.218562 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:50:44.218576 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:50:44.218589 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:50:44.218605 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 19:50:44.218619 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 19:50:44.218633 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 19:50:44.218646 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 19:50:44.218660 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 19:50:44.218674 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 19:50:44.218687 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:50:44.218700 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 19:50:44.218714 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 19:50:44.218727 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 13 19:50:44.218743 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 13 19:50:44.218756 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 19:50:44.218770 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 13 19:50:44.218783 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 19:50:44.218796 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:50:44.218809 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:50:44.218823 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:50:44.218836 kernel: landlock: Up and running.
Feb 13 19:50:44.218849 kernel: SELinux: Initializing.
Feb 13 19:50:44.219491 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:50:44.219519 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:50:44.219535 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 19:50:44.219558 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:50:44.221111 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:50:44.221132 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:50:44.221149 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 19:50:44.221166 kernel: signal: max sigframe size: 3632
Feb 13 19:50:44.221182 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:50:44.221200 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:50:44.221217 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 19:50:44.221232 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:50:44.224366 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:50:44.224383 kernel: .... node #0, CPUs: #1
Feb 13 19:50:44.224401 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 19:50:44.224419 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 19:50:44.224435 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:50:44.224452 kernel: smpboot: Max logical packages: 1
Feb 13 19:50:44.224468 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Feb 13 19:50:44.224485 kernel: devtmpfs: initialized
Feb 13 19:50:44.224505 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:50:44.224520 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:50:44.224537 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:50:44.224553 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:50:44.224570 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:50:44.224586 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:50:44.224603 kernel: audit: type=2000 audit(1739476243.473:1): state=initialized audit_enabled=0 res=1
Feb 13 19:50:44.224620 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:50:44.224636 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:50:44.224657 kernel: cpuidle: using governor menu
Feb 13 19:50:44.224673 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:50:44.224690 kernel: dca service started, version 1.12.1
Feb 13 19:50:44.224707 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:50:44.224723 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:50:44.224740 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:50:44.224756 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:50:44.224773 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:50:44.224789 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:50:44.224809 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:50:44.224826 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:50:44.224842 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:50:44.224859 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:50:44.224903 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 19:50:44.224919 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:50:44.224935 kernel: ACPI: Interpreter enabled
Feb 13 19:50:44.224952 kernel: ACPI: PM: (supports S0 S5)
Feb 13 19:50:44.225256 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:50:44.225283 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:50:44.225300 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:50:44.225317 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 19:50:44.225331 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:50:44.235102 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:50:44.235285 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 19:50:44.235425 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 19:50:44.235446 kernel: acpiphp: Slot [3] registered
Feb 13 19:50:44.235469 kernel: acpiphp: Slot [4] registered
Feb 13 19:50:44.235486 kernel: acpiphp: Slot [5] registered
Feb 13 19:50:44.235503 kernel: acpiphp: Slot [6] registered
Feb 13 19:50:44.235519 kernel: acpiphp: Slot [7] registered
Feb 13 19:50:44.235534 kernel: acpiphp: Slot [8] registered
Feb 13 19:50:44.235550 kernel: acpiphp: Slot [9] registered
Feb 13 19:50:44.240928 kernel: acpiphp: Slot [10] registered
Feb 13 19:50:44.241149 kernel: acpiphp: Slot [11] registered
Feb 13 19:50:44.241171 kernel: acpiphp: Slot [12] registered
Feb 13 19:50:44.241196 kernel: acpiphp: Slot [13] registered
Feb 13 19:50:44.241212 kernel: acpiphp: Slot [14] registered
Feb 13 19:50:44.241229 kernel: acpiphp: Slot [15] registered
Feb 13 19:50:44.241245 kernel: acpiphp: Slot [16] registered
Feb 13 19:50:44.241262 kernel: acpiphp: Slot [17] registered
Feb 13 19:50:44.241278 kernel: acpiphp: Slot [18] registered
Feb 13 19:50:44.241294 kernel: acpiphp: Slot [19] registered
Feb 13 19:50:44.241311 kernel: acpiphp: Slot [20] registered
Feb 13 19:50:44.241327 kernel: acpiphp: Slot [21] registered
Feb 13 19:50:44.241347 kernel: acpiphp: Slot [22] registered
Feb 13 19:50:44.241363 kernel: acpiphp: Slot [23] registered
Feb 13 19:50:44.241380 kernel: acpiphp: Slot [24] registered
Feb 13 19:50:44.241397 kernel: acpiphp: Slot [25] registered
Feb 13 19:50:44.241413 kernel: acpiphp: Slot [26] registered
Feb 13 19:50:44.241429 kernel: acpiphp: Slot [27] registered
Feb 13 19:50:44.241446 kernel: acpiphp: Slot [28] registered
Feb 13 19:50:44.241462 kernel: acpiphp: Slot [29] registered
Feb 13 19:50:44.241479 kernel: acpiphp: Slot [30] registered
Feb 13 19:50:44.241495 kernel: acpiphp: Slot [31] registered
Feb 13 19:50:44.241514 kernel: PCI host bridge to bus 0000:00
Feb 13 19:50:44.241730 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:50:44.242907 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:50:44.243105 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:50:44.243319 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 19:50:44.243450 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:50:44.243739 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 19:50:44.246577 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 19:50:44.246842 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 19:50:44.247141 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 19:50:44.247282 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 19:50:44.247417 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 19:50:44.247750 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 19:50:44.253016 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 19:50:44.253216 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 19:50:44.253357 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 19:50:44.253492 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 19:50:44.253639 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 19:50:44.253779 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 19:50:44.253949 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 19:50:44.254077 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:50:44.254225 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 19:50:44.254360 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 19:50:44.254494 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 19:50:44.254622 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 19:50:44.254643 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:50:44.254661 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:50:44.254681 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:50:44.254696 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:50:44.254712 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 19:50:44.254727 kernel: iommu: Default domain type: Translated
Feb 13 19:50:44.254743 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:50:44.254759 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:50:44.254774 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:50:44.254787 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:50:44.254801 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 19:50:44.257587 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 19:50:44.257747 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 19:50:44.257915 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:50:44.257935 kernel: vgaarb: loaded
Feb 13 19:50:44.257951 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 19:50:44.257966 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 19:50:44.257981 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:50:44.257995 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:50:44.258010 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:50:44.258032 kernel: pnp: PnP ACPI init
Feb 13 19:50:44.258046 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 19:50:44.258061 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:50:44.258076 kernel: NET: Registered PF_INET protocol family
Feb 13 19:50:44.258091 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:50:44.258105 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 19:50:44.258120 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:50:44.258134 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 19:50:44.258152 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 19:50:44.258167 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 19:50:44.258181 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 19:50:44.258196 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 19:50:44.258211 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:50:44.258225 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:50:44.258415 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:50:44.261596 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:50:44.261839 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:50:44.262004 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 19:50:44.262149 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 19:50:44.262171 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:50:44.262189 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 19:50:44.262206 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Feb 13 19:50:44.262222 kernel: clocksource: Switched to clocksource tsc
Feb 13 19:50:44.262239 kernel: Initialise system trusted keyrings
Feb 13 19:50:44.262255 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 19:50:44.262276 kernel: Key type asymmetric registered
Feb 13 19:50:44.262293 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:50:44.262309 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:50:44.262326 kernel: io scheduler mq-deadline registered
Feb 13 19:50:44.262352 kernel: io scheduler kyber registered
Feb 13 19:50:44.262368 kernel: io scheduler bfq registered
Feb 13 19:50:44.262385 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:50:44.262401 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:50:44.262418 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:50:44.262438 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:50:44.262454 kernel: i8042: Warning: Keylock active
Feb 13 19:50:44.262471 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:50:44.262487 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:50:44.262628 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 19:50:44.262753 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 19:50:44.264939 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T19:50:43 UTC (1739476243)
Feb 13 19:50:44.265191 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 19:50:44.265223 kernel: intel_pstate: CPU model not supported
Feb 13 19:50:44.265241 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:50:44.265258 kernel: Segment Routing with IPv6
Feb 13 19:50:44.265274 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:50:44.265291 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:50:44.265307 kernel: Key type dns_resolver registered
Feb 13 19:50:44.265324 kernel: IPI shorthand broadcast: enabled
Feb 13 19:50:44.265340 kernel: sched_clock: Marking stable (694001731, 270194081)->(1086510602, -122314790)
Feb 13 19:50:44.265356 kernel: registered taskstats version 1
Feb 13 19:50:44.265376 kernel: Loading compiled-in X.509 certificates
Feb 13 19:50:44.265392 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b'
Feb 13 19:50:44.265408 kernel: Key type .fscrypt registered
Feb 13 19:50:44.265424 kernel: Key type fscrypt-provisioning registered
Feb 13 19:50:44.265440 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:50:44.265457 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:50:44.265473 kernel: ima: No architecture policies found
Feb 13 19:50:44.265489 kernel: clk: Disabling unused clocks
Feb 13 19:50:44.265508 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 19:50:44.265524 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 19:50:44.265540 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 19:50:44.265557 kernel: Run /init as init process
Feb 13 19:50:44.265573 kernel: with arguments:
Feb 13 19:50:44.265590 kernel: /init
Feb 13 19:50:44.265605 kernel: with environment:
Feb 13 19:50:44.265621 kernel: HOME=/
Feb 13 19:50:44.265637 kernel: TERM=linux
Feb 13 19:50:44.265652 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:50:44.265679 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:50:44.265713 systemd[1]: Detected virtualization amazon.
Feb 13 19:50:44.265734 systemd[1]: Detected architecture x86-64.
Feb 13 19:50:44.265751 systemd[1]: Running in initrd.
Feb 13 19:50:44.265771 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:50:44.265788 systemd[1]: Hostname set to .
Feb 13 19:50:44.265807 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:50:44.265824 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:50:44.265842 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:50:44.265860 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:50:44.265891 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:50:44.265909 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:50:44.265930 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:50:44.265948 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:50:44.265969 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:50:44.265987 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:50:44.266005 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:50:44.266023 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:50:44.266041 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:50:44.266062 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:50:44.266080 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:50:44.266098 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:50:44.266115 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:50:44.266133 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:50:44.266151 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:50:44.266169 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:50:44.266187 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:50:44.266208 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:50:44.266229 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:50:44.266247 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:50:44.266268 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:50:44.266286 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:50:44.266304 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:50:44.266322 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:50:44.266348 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:50:44.266371 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:50:44.266390 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:50:44.266408 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:50:44.266426 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:50:44.266471 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 19:50:44.266514 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:50:44.266532 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:50:44.266552 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:50:44.266572 systemd-journald[179]: Journal started
Feb 13 19:50:44.266611 systemd-journald[179]: Runtime Journal (/run/log/journal/ec21cb49a938cbfc273fb7daf31aa0dd) is 4.8M, max 38.6M, 33.7M free.
Feb 13 19:50:44.277435 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:50:44.277502 kernel: Bridge firewalling registered
Feb 13 19:50:44.207407 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 19:50:44.401606 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:50:44.276454 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 19:50:44.406431 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:50:44.411207 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:44.417351 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:50:44.427181 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:50:44.458752 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:50:44.500092 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:50:44.509812 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:50:44.562139 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:50:44.573621 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:50:44.578667 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:50:44.590268 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:50:44.604111 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:50:44.618856 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:50:44.631299 dracut-cmdline[211]: dracut-dracut-053
Feb 13 19:50:44.636434 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:50:44.672581 systemd-resolved[214]: Positive Trust Anchors:
Feb 13 19:50:44.672597 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:50:44.672659 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:50:44.677349 systemd-resolved[214]: Defaulting to hostname 'linux'.
Feb 13 19:50:44.690973 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:50:44.728247 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:50:44.810909 kernel: SCSI subsystem initialized
Feb 13 19:50:44.822894 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:50:44.836888 kernel: iscsi: registered transport (tcp)
Feb 13 19:50:44.859099 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:50:44.859179 kernel: QLogic iSCSI HBA Driver
Feb 13 19:50:44.909422 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:50:44.916075 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:50:44.947130 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:50:44.947207 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:50:44.947230 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:50:45.001945 kernel: raid6: avx512x4 gen() 7631 MB/s
Feb 13 19:50:45.019929 kernel: raid6: avx512x2 gen() 8674 MB/s
Feb 13 19:50:45.036918 kernel: raid6: avx512x1 gen() 10419 MB/s
Feb 13 19:50:45.053900 kernel: raid6: avx2x4 gen() 11821 MB/s
Feb 13 19:50:45.070902 kernel: raid6: avx2x2 gen() 8726 MB/s
Feb 13 19:50:45.088391 kernel: raid6: avx2x1 gen() 10437 MB/s
Feb 13 19:50:45.088466 kernel: raid6: using algorithm avx2x4 gen() 11821 MB/s
Feb 13 19:50:45.106813 kernel: raid6: .... xor() 3264 MB/s, rmw enabled
Feb 13 19:50:45.106914 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 19:50:45.144894 kernel: xor: automatically using best checksumming function avx
Feb 13 19:50:45.395934 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:50:45.422066 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:50:45.435194 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:50:45.476083 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Feb 13 19:50:45.499704 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:50:45.547175 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:50:45.581914 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Feb 13 19:50:45.627953 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:50:45.638295 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:50:45.728460 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:50:45.741632 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:50:45.787962 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:50:45.792885 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:50:45.796923 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:50:45.799922 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:50:45.814072 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:50:45.835125 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:50:45.857013 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 19:50:45.888737 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 19:50:45.889611 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 19:50:45.889857 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:50:45.889927 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:95:1e:c9:f0:1f
Feb 13 19:50:45.896889 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 19:50:45.897142 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 19:50:45.910944 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 19:50:45.914039 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:50:45.914429 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:50:45.918604 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:50:45.928574 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:50:45.928599 kernel: GPT:9289727 != 16777215
Feb 13 19:50:45.928612 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:50:45.928624 kernel: GPT:9289727 != 16777215
Feb 13 19:50:45.928635 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:50:45.928647 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:50:45.919891 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:50:45.920048 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:45.921537 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:50:45.934224 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:50:45.944589 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:50:45.944726 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:50:45.943895 (udev-worker)[442]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:50:46.122915 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (449)
Feb 13 19:50:46.141050 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (444)
Feb 13 19:50:46.156222 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:46.168335 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:50:46.218158 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:50:46.267471 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 19:50:46.285531 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:50:46.297435 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 19:50:46.316113 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 19:50:46.316346 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 19:50:46.342636 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:50:46.366573 disk-uuid[626]: Primary Header is updated.
Feb 13 19:50:46.366573 disk-uuid[626]: Secondary Entries is updated.
Feb 13 19:50:46.366573 disk-uuid[626]: Secondary Header is updated.
Feb 13 19:50:46.374078 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:50:47.399895 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 19:50:47.408004 disk-uuid[627]: The operation has completed successfully.
Feb 13 19:50:47.652538 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:50:47.652666 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:50:47.681108 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:50:47.704085 sh[802]: Success
Feb 13 19:50:47.730209 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 19:50:47.860377 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:50:47.875163 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:50:47.877648 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:50:47.925923 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6
Feb 13 19:50:47.930124 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:50:47.930201 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:50:47.930223 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:50:47.935840 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:50:47.998894 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:50:48.002076 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:50:48.004710 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:50:48.013212 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:50:48.026933 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:50:48.068377 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:50:48.068442 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:50:48.068462 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:50:48.083404 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:50:48.104967 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:50:48.105902 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:50:48.121851 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:50:48.136004 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:50:48.289380 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:50:48.296360 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:50:48.342309 systemd-networkd[995]: lo: Link UP
Feb 13 19:50:48.342323 systemd-networkd[995]: lo: Gained carrier
Feb 13 19:50:48.344385 systemd-networkd[995]: Enumeration completed
Feb 13 19:50:48.345502 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:50:48.345559 systemd-networkd[995]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:50:48.345563 systemd-networkd[995]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:50:48.353147 systemd-networkd[995]: eth0: Link UP
Feb 13 19:50:48.353152 systemd-networkd[995]: eth0: Gained carrier
Feb 13 19:50:48.353164 systemd-networkd[995]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:50:48.355135 systemd[1]: Reached target network.target - Network.
Feb 13 19:50:48.373093 systemd-networkd[995]: eth0: DHCPv4 address 172.31.23.227/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:50:48.462919 ignition[917]: Ignition 2.20.0
Feb 13 19:50:48.463244 ignition[917]: Stage: fetch-offline
Feb 13 19:50:48.464413 ignition[917]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:50:48.464430 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:50:48.466982 ignition[917]: Ignition finished successfully
Feb 13 19:50:48.469293 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:50:48.475410 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:50:48.496898 ignition[1004]: Ignition 2.20.0
Feb 13 19:50:48.496913 ignition[1004]: Stage: fetch
Feb 13 19:50:48.497321 ignition[1004]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:50:48.497332 ignition[1004]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:50:48.497607 ignition[1004]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:50:48.508705 ignition[1004]: PUT result: OK
Feb 13 19:50:48.511263 ignition[1004]: parsed url from cmdline: ""
Feb 13 19:50:48.511271 ignition[1004]: no config URL provided
Feb 13 19:50:48.511277 ignition[1004]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:50:48.511289 ignition[1004]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:50:48.511306 ignition[1004]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:50:48.514896 ignition[1004]: PUT result: OK
Feb 13 19:50:48.515044 ignition[1004]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 19:50:48.519661 ignition[1004]: GET result: OK
Feb 13 19:50:48.519874 ignition[1004]: parsing config with SHA512: b586a6531eaa3efaebbf67acf0f76b7b019edcb66cc929aee6766fb9fd8ccbccd434771dc2fec4897aaae4abaf527f195e5e2971dc8f4f53f9e1d086dd5f6859
Feb 13 19:50:48.530725 unknown[1004]: fetched base config from "system"
Feb 13 19:50:48.530736 unknown[1004]: fetched base config from "system"
Feb 13 19:50:48.530742 unknown[1004]: fetched user config from "aws"
Feb 13 19:50:48.534311 ignition[1004]: fetch: fetch complete
Feb 13 19:50:48.534324 ignition[1004]: fetch: fetch passed
Feb 13 19:50:48.534484 ignition[1004]: Ignition finished successfully
Feb 13 19:50:48.538378 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:50:48.550768 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:50:48.571803 ignition[1011]: Ignition 2.20.0
Feb 13 19:50:48.571818 ignition[1011]: Stage: kargs
Feb 13 19:50:48.572351 ignition[1011]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:50:48.572363 ignition[1011]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:50:48.572470 ignition[1011]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:50:48.578140 ignition[1011]: PUT result: OK
Feb 13 19:50:48.585045 ignition[1011]: kargs: kargs passed
Feb 13 19:50:48.586438 ignition[1011]: Ignition finished successfully
Feb 13 19:50:48.588946 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:50:48.604319 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:50:48.623395 ignition[1017]: Ignition 2.20.0
Feb 13 19:50:48.623409 ignition[1017]: Stage: disks
Feb 13 19:50:48.623847 ignition[1017]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:50:48.623861 ignition[1017]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:50:48.624007 ignition[1017]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:50:48.625566 ignition[1017]: PUT result: OK
Feb 13 19:50:48.636474 ignition[1017]: disks: disks passed
Feb 13 19:50:48.636563 ignition[1017]: Ignition finished successfully
Feb 13 19:50:48.639897 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:50:48.643709 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:50:48.643812 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:50:48.647715 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:50:48.649324 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:50:48.652537 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:50:48.663181 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:50:48.707759 systemd-fsck[1025]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:50:48.714784 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:50:48.742098 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:50:48.902885 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none.
Feb 13 19:50:48.903481 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:50:48.904191 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:50:48.913027 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:50:48.926545 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:50:48.929789 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:50:48.929854 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:50:48.929900 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:50:48.941468 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:50:48.951220 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:50:48.958936 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1044)
Feb 13 19:50:48.958976 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:50:48.958998 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:50:48.959020 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:50:48.964904 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:50:48.965200 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:50:49.189450 initrd-setup-root[1068]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:50:49.223466 initrd-setup-root[1075]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:50:49.234267 initrd-setup-root[1082]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:50:49.243481 initrd-setup-root[1089]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:50:49.516839 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:50:49.531622 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:50:49.537994 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:50:49.579225 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:50:49.580974 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:50:49.669505 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:50:49.672296 ignition[1156]: INFO : Ignition 2.20.0
Feb 13 19:50:49.672296 ignition[1156]: INFO : Stage: mount
Feb 13 19:50:49.682682 ignition[1156]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:50:49.682682 ignition[1156]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:50:49.682682 ignition[1156]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:50:49.689527 ignition[1156]: INFO : PUT result: OK
Feb 13 19:50:49.692342 ignition[1156]: INFO : mount: mount passed
Feb 13 19:50:49.693549 ignition[1156]: INFO : Ignition finished successfully
Feb 13 19:50:49.696883 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:50:49.704024 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:50:49.739227 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:50:49.762087 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1170)
Feb 13 19:50:49.762153 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:50:49.766155 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:50:49.766298 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 19:50:49.773901 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 19:50:49.775790 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:50:49.817115 ignition[1187]: INFO : Ignition 2.20.0
Feb 13 19:50:49.817115 ignition[1187]: INFO : Stage: files
Feb 13 19:50:49.820331 ignition[1187]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:50:49.820331 ignition[1187]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:50:49.820331 ignition[1187]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:50:49.820331 ignition[1187]: INFO : PUT result: OK
Feb 13 19:50:49.833546 ignition[1187]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:50:49.837277 ignition[1187]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:50:49.837277 ignition[1187]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:50:49.848256 ignition[1187]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:50:49.852577 ignition[1187]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:50:49.855615 ignition[1187]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:50:49.855559 unknown[1187]: wrote ssh authorized keys file for user: core
Feb 13 19:50:49.860192 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:50:49.860192 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 19:50:49.860192 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:50:49.875228 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 19:50:49.874314 systemd-networkd[995]: eth0: Gained IPv6LL
Feb 13 19:50:49.972986 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:50:50.154497 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:50:50.157943 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:50:50.157943 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:50:50.157943 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:50:50.157943 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:50:50.157943 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:50:50.157943 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:50:50.157943 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:50:50.157943 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:50:50.157943 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:50:50.157943 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:50:50.157943 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:50:50.157943 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:50:50.157943 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:50:50.157943 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Feb 13 19:50:50.653140 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:50:51.172143 ignition[1187]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 19:50:51.172143 ignition[1187]: INFO : files: op(c): [started] processing unit "containerd.service"
Feb 13 19:50:51.184725 ignition[1187]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 19:50:51.188218 ignition[1187]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 19:50:51.188218 ignition[1187]: INFO : files: op(c): [finished] processing unit "containerd.service"
Feb 13 19:50:51.188218 ignition[1187]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Feb 13 19:50:51.188218 ignition[1187]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:50:51.188218 ignition[1187]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:50:51.188218 ignition[1187]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Feb 13 19:50:51.188218 ignition[1187]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:50:51.188218 ignition[1187]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:50:51.188218 ignition[1187]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:50:51.188218 ignition[1187]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:50:51.188218 ignition[1187]: INFO : files: files passed
Feb 13 19:50:51.188218 ignition[1187]: INFO : Ignition finished successfully
Feb 13 19:50:51.198396 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:50:51.213932 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:50:51.221897 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:50:51.232413 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:50:51.232569 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:50:51.253899 initrd-setup-root-after-ignition[1216]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:50:51.253899 initrd-setup-root-after-ignition[1216]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:50:51.259507 initrd-setup-root-after-ignition[1220]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:50:51.263178 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:50:51.264053 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:50:51.274363 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:50:51.315457 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:50:51.315594 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:50:51.319089 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:50:51.323827 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:50:51.327687 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:50:51.337533 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:50:51.368695 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:50:51.378714 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:50:51.403020 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:50:51.403313 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:50:51.403638 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:50:51.403846 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:50:51.404100 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:50:51.404677 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:50:51.405471 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:50:51.406056 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:50:51.406638 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:50:51.407499 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:50:51.408186 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:50:51.408503 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:50:51.408694 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:50:51.409204 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:50:51.409512 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:50:51.409640 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:50:51.409829 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:50:51.410511 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:50:51.410841 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:50:51.411071 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:50:51.430434 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:50:51.439661 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:50:51.440392 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:50:51.449938 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:50:51.450816 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:50:51.460891 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:50:51.461056 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:50:51.478501 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:50:51.479776 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:50:51.479939 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:50:51.513250 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:50:51.525614 ignition[1240]: INFO : Ignition 2.20.0
Feb 13 19:50:51.525614 ignition[1240]: INFO : Stage: umount
Feb 13 19:50:51.525614 ignition[1240]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:50:51.525614 ignition[1240]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 19:50:51.533122 ignition[1240]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 19:50:51.533122 ignition[1240]: INFO : PUT result: OK
Feb 13 19:50:51.531509 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:50:51.545684 ignition[1240]: INFO : umount: umount passed
Feb 13 19:50:51.545684 ignition[1240]: INFO : Ignition finished successfully
Feb 13 19:50:51.531967 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:50:51.538668 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:50:51.538809 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:50:51.553331 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:50:51.553435 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:50:51.555361 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:50:51.555447 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:50:51.558660 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:50:51.558752 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:50:51.560159 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:50:51.560275 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:50:51.561726 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:50:51.561777 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:50:51.563022 systemd[1]: Stopped target network.target - Network.
Feb 13 19:50:51.564797 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:50:51.565937 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:50:51.569112 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:50:51.570405 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:50:51.576199 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:50:51.578881 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:50:51.583402 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:50:51.588723 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:50:51.588817 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:50:51.596915 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:50:51.596980 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:50:51.599286 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:50:51.599371 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:50:51.611382 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:50:51.611473 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:50:51.614655 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:50:51.616517 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:50:51.623494 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:50:51.624197 systemd-networkd[995]: eth0: DHCPv6 lease lost
Feb 13 19:50:51.626138 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:50:51.626292 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:50:51.630006 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:50:51.630091 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:50:51.649733 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:50:51.650991 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:50:51.651080 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:50:51.654164 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:50:51.660897 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:50:51.661029 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:50:51.674296 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:50:51.675647 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:50:51.686631 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:50:51.686729 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:50:51.691686 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:50:51.691746 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:50:51.697882 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:50:51.698775 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:50:51.717654 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:50:51.717756 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:50:51.722919 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:50:51.723078 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:50:51.741169 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:50:51.741288 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:50:51.741817 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:50:51.748263 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:50:51.748334 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:50:51.750359 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:50:51.750441 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:50:51.752259 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:50:51.752331 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:50:51.754759 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:50:51.754823 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:51.756921 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:50:51.757059 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:50:51.784231 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:50:51.784475 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:50:51.788998 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:50:51.789136 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:50:51.799127 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:50:51.799254 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:50:51.803427 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:50:51.810082 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:50:51.843196 systemd[1]: Switching root.
Feb 13 19:50:51.871068 systemd-journald[179]: Journal stopped
Feb 13 19:50:54.292693 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:50:54.292783 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:50:54.305005 kernel: SELinux: policy capability open_perms=1
Feb 13 19:50:54.305036 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:50:54.305062 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:50:54.305080 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:50:54.305100 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:50:54.305191 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:50:54.305216 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:50:54.305234 kernel: audit: type=1403 audit(1739476252.346:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:50:54.305273 systemd[1]: Successfully loaded SELinux policy in 87.024ms.
Feb 13 19:50:54.305297 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.310ms.
Feb 13 19:50:54.305317 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:50:54.305414 systemd[1]: Detected virtualization amazon.
Feb 13 19:50:54.305436 systemd[1]: Detected architecture x86-64.
Feb 13 19:50:54.305460 systemd[1]: Detected first boot.
Feb 13 19:50:54.305479 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:50:54.305498 zram_generator::config[1300]: No configuration found.
Feb 13 19:50:54.305526 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:50:54.305545 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:50:54.305565 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 19:50:54.305586 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:50:54.305605 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:50:54.305624 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:50:54.305645 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:50:54.305664 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:50:54.305687 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:50:54.305710 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:50:54.305730 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:50:54.305748 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:50:54.305766 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:50:54.305784 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:50:54.305806 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:50:54.305824 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:50:54.305841 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:50:54.307461 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:50:54.307516 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:50:54.307537 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:50:54.307556 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:50:54.307576 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:50:54.307719 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:50:54.307747 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:50:54.307766 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:50:54.307786 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:50:54.307807 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:50:54.307826 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:50:54.307844 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:50:54.310898 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:50:54.310963 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:50:54.310985 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:50:54.311012 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:50:54.311160 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:50:54.311184 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:50:54.311204 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:50:54.311223 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:50:54.311241 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:50:54.311261 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:50:54.311279 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:50:54.311302 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:50:54.311321 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:50:54.311340 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:50:54.311358 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:50:54.311376 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:50:54.311396 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:50:54.311415 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:50:54.311433 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:50:54.311451 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:50:54.311473 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 13 19:50:54.311494 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Feb 13 19:50:54.311512 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:50:54.311529 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:50:54.311547 kernel: fuse: init (API version 7.39)
Feb 13 19:50:54.311567 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:50:54.311585 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:50:54.311604 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:50:54.311623 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:50:54.311688 systemd-journald[1404]: Collecting audit messages is disabled.
Feb 13 19:50:54.311732 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:50:54.311750 kernel: loop: module loaded
Feb 13 19:50:54.311767 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:50:54.311786 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:50:54.311804 kernel: ACPI: bus type drm_connector registered
Feb 13 19:50:54.311822 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:50:54.311844 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:50:54.314153 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:50:54.314212 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:50:54.314237 systemd-journald[1404]: Journal started
Feb 13 19:50:54.314282 systemd-journald[1404]: Runtime Journal (/run/log/journal/ec21cb49a938cbfc273fb7daf31aa0dd) is 4.8M, max 38.6M, 33.7M free.
Feb 13 19:50:54.319655 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:50:54.323109 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:50:54.325901 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:50:54.326170 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:50:54.328705 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:50:54.331219 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:50:54.333238 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:50:54.333510 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:50:54.337614 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:50:54.337851 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:50:54.340382 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:50:54.340771 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:50:54.342533 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:50:54.343020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:50:54.345319 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:50:54.347277 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:50:54.350140 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:50:54.366835 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:50:54.379066 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:50:54.391058 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:50:54.396322 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:50:54.420656 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:50:54.436581 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:50:54.439768 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:50:54.457091 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:50:54.460729 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:50:54.476066 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:50:54.517745 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:50:54.539757 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:50:54.542075 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:50:54.555241 systemd-journald[1404]: Time spent on flushing to /var/log/journal/ec21cb49a938cbfc273fb7daf31aa0dd is 134.148ms for 945 entries.
Feb 13 19:50:54.555241 systemd-journald[1404]: System Journal (/var/log/journal/ec21cb49a938cbfc273fb7daf31aa0dd) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:50:54.699394 systemd-journald[1404]: Received client request to flush runtime journal.
Feb 13 19:50:54.565783 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:50:54.567678 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:50:54.619857 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:50:54.630266 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:50:54.645333 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:50:54.696022 systemd-tmpfiles[1449]: ACLs are not supported, ignoring.
Feb 13 19:50:54.696216 systemd-tmpfiles[1449]: ACLs are not supported, ignoring.
Feb 13 19:50:54.699027 udevadm[1458]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:50:54.702598 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:50:54.715824 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:50:54.739103 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:50:54.855389 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:50:54.866232 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:50:54.900778 systemd-tmpfiles[1471]: ACLs are not supported, ignoring.
Feb 13 19:50:54.901282 systemd-tmpfiles[1471]: ACLs are not supported, ignoring.
Feb 13 19:50:54.908702 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:50:55.886407 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:50:55.897546 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:50:55.962151 systemd-udevd[1477]: Using default interface naming scheme 'v255'.
Feb 13 19:50:56.016955 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:50:56.032011 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:50:56.092328 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:50:56.175279 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Feb 13 19:50:56.245295 (udev-worker)[1493]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:50:56.270616 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:50:56.411594 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 19:50:56.442980 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:50:56.446933 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Feb 13 19:50:56.450906 kernel: ACPI: button: Sleep Button [SLPF]
Feb 13 19:50:56.477994 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 13 19:50:56.485363 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Feb 13 19:50:56.504805 systemd-networkd[1485]: lo: Link UP
Feb 13 19:50:56.505271 systemd-networkd[1485]: lo: Gained carrier
Feb 13 19:50:56.509216 systemd-networkd[1485]: Enumeration completed
Feb 13 19:50:56.511239 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:50:56.521407 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:50:56.524779 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:50:56.529415 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:50:56.539744 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:50:56.540970 systemd-networkd[1485]: eth0: Link UP
Feb 13 19:50:56.541374 systemd-networkd[1485]: eth0: Gained carrier
Feb 13 19:50:56.544274 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:50:56.549892 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1483)
Feb 13 19:50:56.556537 systemd-networkd[1485]: eth0: DHCPv4 address 172.31.23.227/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 19:50:56.606948 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:50:56.656294 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:50:56.776910 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:50:56.816144 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 19:50:56.824413 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:50:56.861901 lvm[1599]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:50:56.889482 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:50:57.008007 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:50:57.032246 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:50:57.034422 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:50:57.049526 lvm[1603]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:50:57.090244 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:50:57.095837 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:50:57.097575 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:50:57.097745 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:50:57.099027 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:50:57.103976 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:50:57.111167 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:50:57.115587 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:50:57.117764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:50:57.133769 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:50:57.144521 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:50:57.152261 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:50:57.156656 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:50:57.191928 kernel: loop0: detected capacity change from 0 to 210664
Feb 13 19:50:57.193294 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:50:57.197228 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:50:57.204523 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:50:57.271901 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:50:57.313986 kernel: loop1: detected capacity change from 0 to 140992
Feb 13 19:50:57.458894 kernel: loop2: detected capacity change from 0 to 138184
Feb 13 19:50:57.563193 kernel: loop3: detected capacity change from 0 to 62848
Feb 13 19:50:57.679312 systemd-networkd[1485]: eth0: Gained IPv6LL
Feb 13 19:50:57.686281 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 19:50:57.695031 kernel: loop4: detected capacity change from 0 to 210664
Feb 13 19:50:57.728899 kernel: loop5: detected capacity change from 0 to 140992
Feb 13 19:50:57.763901 kernel: loop6: detected capacity change from 0 to 138184
Feb 13 19:50:57.786991 kernel: loop7: detected capacity change from 0 to 62848
Feb 13 19:50:57.802822 (sd-merge)[1628]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 19:50:57.803755 (sd-merge)[1628]: Merged extensions into '/usr'.
Feb 13 19:50:57.812146 systemd[1]: Reloading requested from client PID 1612 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:50:57.812163 systemd[1]: Reloading...
Feb 13 19:50:57.917945 zram_generator::config[1655]: No configuration found.
Feb 13 19:50:58.216462 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:50:58.308324 systemd[1]: Reloading finished in 495 ms.
Feb 13 19:50:58.327218 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:50:58.345497 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:50:58.373907 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:50:58.384981 systemd[1]: Reloading requested from client PID 1709 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:50:58.385006 systemd[1]: Reloading... Feb 13 19:50:58.419920 systemd-tmpfiles[1710]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:50:58.421225 systemd-tmpfiles[1710]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:50:58.423047 systemd-tmpfiles[1710]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:50:58.423645 systemd-tmpfiles[1710]: ACLs are not supported, ignoring. Feb 13 19:50:58.423842 systemd-tmpfiles[1710]: ACLs are not supported, ignoring. Feb 13 19:50:58.430455 systemd-tmpfiles[1710]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:50:58.430470 systemd-tmpfiles[1710]: Skipping /boot Feb 13 19:50:58.469051 systemd-tmpfiles[1710]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:50:58.469507 systemd-tmpfiles[1710]: Skipping /boot Feb 13 19:50:58.488896 zram_generator::config[1734]: No configuration found. Feb 13 19:50:58.569119 ldconfig[1608]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:50:58.742419 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:50:58.824906 systemd[1]: Reloading finished in 439 ms. Feb 13 19:50:58.844155 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:50:58.851816 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:50:58.863249 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:50:58.868036 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Feb 13 19:50:58.878084 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:50:58.883082 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:50:58.896022 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:50:58.918501 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:50:58.918978 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:50:58.928015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:50:58.939338 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:50:58.950309 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:50:58.951609 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:50:58.951940 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:50:58.953721 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:50:58.957562 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:50:58.969885 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:50:58.971223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:50:58.992029 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:50:58.992301 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:50:59.000482 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Feb 13 19:50:59.011743 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:50:59.015554 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:50:59.075321 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:50:59.087649 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:50:59.092070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:50:59.108169 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:50:59.110001 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:50:59.111717 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:50:59.119441 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:50:59.119700 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:50:59.131670 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:50:59.135144 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:50:59.141880 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:50:59.152202 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:50:59.173028 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:50:59.173570 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 19:50:59.182513 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:50:59.191260 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:50:59.203066 systemd-resolved[1801]: Positive Trust Anchors: Feb 13 19:50:59.203322 systemd-resolved[1801]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:50:59.203388 systemd-resolved[1801]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:50:59.206285 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:50:59.208677 augenrules[1850]: No rules Feb 13 19:50:59.213629 systemd-resolved[1801]: Defaulting to hostname 'linux'. Feb 13 19:50:59.218477 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:50:59.220265 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:50:59.220560 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:50:59.222175 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:50:59.222342 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 13 19:50:59.225647 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:50:59.230824 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:50:59.231230 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:50:59.234833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:50:59.235284 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:50:59.238734 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:50:59.240184 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:50:59.242466 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:50:59.242724 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:50:59.244674 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:50:59.244926 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:50:59.250442 systemd[1]: Finished ensure-sysext.service. Feb 13 19:50:59.259007 systemd[1]: Reached target network.target - Network. Feb 13 19:50:59.260144 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:50:59.263446 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:50:59.268428 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:50:59.268481 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:50:59.274050 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:50:59.278463 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:50:59.283169 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Feb 13 19:50:59.286740 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:50:59.289557 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:50:59.291464 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:50:59.291519 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:50:59.292762 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:50:59.300436 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:50:59.312771 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:50:59.328485 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:50:59.339991 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:50:59.340664 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:50:59.355393 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:50:59.362566 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:50:59.364406 systemd[1]: System is tainted: cgroupsv1 Feb 13 19:50:59.364473 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:50:59.364502 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:50:59.390035 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:50:59.414468 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:50:59.451852 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:50:59.519077 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Feb 13 19:50:59.536372 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:50:59.540162 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:50:59.558460 jq[1876]: false Feb 13 19:50:59.577035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:50:59.578953 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:50:59.582694 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:50:59.600741 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:50:59.623058 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:50:59.644046 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:50:59.644327 extend-filesystems[1877]: Found loop4 Feb 13 19:50:59.645881 extend-filesystems[1877]: Found loop5 Feb 13 19:50:59.645881 extend-filesystems[1877]: Found loop6 Feb 13 19:50:59.645881 extend-filesystems[1877]: Found loop7 Feb 13 19:50:59.645881 extend-filesystems[1877]: Found nvme0n1 Feb 13 19:50:59.645881 extend-filesystems[1877]: Found nvme0n1p1 Feb 13 19:50:59.645881 extend-filesystems[1877]: Found nvme0n1p2 Feb 13 19:50:59.645881 extend-filesystems[1877]: Found nvme0n1p3 Feb 13 19:50:59.645881 extend-filesystems[1877]: Found usr Feb 13 19:50:59.645881 extend-filesystems[1877]: Found nvme0n1p4 Feb 13 19:50:59.645881 extend-filesystems[1877]: Found nvme0n1p6 Feb 13 19:50:59.645881 extend-filesystems[1877]: Found nvme0n1p7 Feb 13 19:50:59.645881 extend-filesystems[1877]: Found nvme0n1p9 Feb 13 19:50:59.645881 extend-filesystems[1877]: Checking size of /dev/nvme0n1p9 Feb 13 19:50:59.670897 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:50:59.688223 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Feb 13 19:50:59.702573 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:50:59.705809 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:50:59.722102 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:50:59.724500 ntpd[1883]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:07:00 UTC 2025 (1): Starting Feb 13 19:50:59.729751 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:07:00 UTC 2025 (1): Starting Feb 13 19:50:59.729751 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:50:59.729751 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: ---------------------------------------------------- Feb 13 19:50:59.729751 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:50:59.729751 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:50:59.729751 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: corporation. Support and training for ntp-4 are Feb 13 19:50:59.729751 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: available at https://www.nwtime.org/support Feb 13 19:50:59.729751 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: ---------------------------------------------------- Feb 13 19:50:59.728075 ntpd[1883]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:50:59.738188 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:50:59.728087 ntpd[1883]: ---------------------------------------------------- Feb 13 19:50:59.752596 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 19:50:59.760883 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: proto: precision = 0.069 usec (-24) Feb 13 19:50:59.728097 ntpd[1883]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:50:59.764325 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: basedate set to 2025-02-01 Feb 13 19:50:59.764325 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: gps base set to 2025-02-02 (week 2352) Feb 13 19:50:59.728107 ntpd[1883]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:50:59.728118 ntpd[1883]: corporation. Support and training for ntp-4 are Feb 13 19:50:59.728130 ntpd[1883]: available at https://www.nwtime.org/support Feb 13 19:50:59.728139 ntpd[1883]: ---------------------------------------------------- Feb 13 19:50:59.745540 ntpd[1883]: proto: precision = 0.069 usec (-24) Feb 13 19:50:59.749770 dbus-daemon[1875]: [system] SELinux support is enabled Feb 13 19:50:59.763182 ntpd[1883]: basedate set to 2025-02-01 Feb 13 19:50:59.774887 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:50:59.775429 jq[1905]: true Feb 13 19:50:59.763421 ntpd[1883]: gps base set to 2025-02-02 (week 2352) Feb 13 19:50:59.775341 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Feb 13 19:50:59.789020 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:50:59.789020 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:50:59.789020 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:50:59.789020 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: Listen normally on 3 eth0 172.31.23.227:123 Feb 13 19:50:59.789020 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: Listen normally on 4 lo [::1]:123 Feb 13 19:50:59.789020 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: Listen normally on 5 eth0 [fe80::495:1eff:fec9:f01f%2]:123 Feb 13 19:50:59.789020 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: Listening on routing socket on fd #22 for interface updates Feb 13 19:50:59.784678 ntpd[1883]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:50:59.784747 ntpd[1883]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:50:59.785747 dbus-daemon[1875]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1485 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:50:59.786017 ntpd[1883]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:50:59.786067 ntpd[1883]: Listen normally on 3 eth0 172.31.23.227:123 Feb 13 19:50:59.786110 ntpd[1883]: Listen normally on 4 lo [::1]:123 Feb 13 19:50:59.786160 ntpd[1883]: Listen normally on 5 eth0 [fe80::495:1eff:fec9:f01f%2]:123 Feb 13 19:50:59.786203 ntpd[1883]: Listening on routing socket on fd #22 for interface updates Feb 13 19:50:59.790805 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:50:59.793273 systemd[1]: motdgen.service: Deactivated successfully. 
Feb 13 19:50:59.799162 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:50:59.799162 ntpd[1883]: 13 Feb 19:50:59 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:50:59.790849 ntpd[1883]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:50:59.793643 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:50:59.810422 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:50:59.810956 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:50:59.860770 extend-filesystems[1877]: Resized partition /dev/nvme0n1p9 Feb 13 19:50:59.885626 extend-filesystems[1927]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:50:59.903287 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 19:50:59.941390 update_engine[1903]: I20250213 19:50:59.941252 1903 main.cc:92] Flatcar Update Engine starting Feb 13 19:50:59.947314 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:50:59.960633 dbus-daemon[1875]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:50:59.966401 (ntainerd)[1928]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:51:00.019707 update_engine[1903]: I20250213 19:50:59.987756 1903 update_check_scheduler.cc:74] Next update check in 8m14s Feb 13 19:51:00.019818 tar[1912]: linux-amd64/helm Feb 13 19:51:00.004609 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:51:00.020487 jq[1919]: true Feb 13 19:51:00.014306 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Feb 13 19:51:00.021479 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:51:00.021524 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:51:00.037856 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:51:00.039359 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:51:00.039427 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:51:00.050375 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:51:00.065315 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:51:00.077938 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 19:51:00.096283 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:51:00.115643 extend-filesystems[1927]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 19:51:00.115643 extend-filesystems[1927]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:51:00.115643 extend-filesystems[1927]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 19:51:00.146165 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 19:51:00.160498 extend-filesystems[1877]: Resized filesystem in /dev/nvme0n1p9 Feb 13 19:51:00.169329 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:51:00.169680 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Feb 13 19:51:00.203751 coreos-metadata[1873]: Feb 13 19:51:00.199 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:51:00.211545 coreos-metadata[1873]: Feb 13 19:51:00.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 19:51:00.217826 coreos-metadata[1873]: Feb 13 19:51:00.217 INFO Fetch successful Feb 13 19:51:00.217826 coreos-metadata[1873]: Feb 13 19:51:00.217 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 19:51:00.219939 coreos-metadata[1873]: Feb 13 19:51:00.219 INFO Fetch successful Feb 13 19:51:00.219939 coreos-metadata[1873]: Feb 13 19:51:00.219 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 19:51:00.222066 coreos-metadata[1873]: Feb 13 19:51:00.221 INFO Fetch successful Feb 13 19:51:00.222066 coreos-metadata[1873]: Feb 13 19:51:00.221 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 19:51:00.245577 coreos-metadata[1873]: Feb 13 19:51:00.243 INFO Fetch successful Feb 13 19:51:00.245577 coreos-metadata[1873]: Feb 13 19:51:00.243 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 19:51:00.245577 coreos-metadata[1873]: Feb 13 19:51:00.245 INFO Fetch failed with 404: resource not found Feb 13 19:51:00.245577 coreos-metadata[1873]: Feb 13 19:51:00.245 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 19:51:00.255726 systemd-logind[1900]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:51:00.258164 coreos-metadata[1873]: Feb 13 19:51:00.256 INFO Fetch successful Feb 13 19:51:00.258164 coreos-metadata[1873]: Feb 13 19:51:00.256 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 19:51:00.255761 systemd-logind[1900]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 19:51:00.255789 
systemd-logind[1900]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:51:00.267454 coreos-metadata[1873]: Feb 13 19:51:00.258 INFO Fetch successful Feb 13 19:51:00.267454 coreos-metadata[1873]: Feb 13 19:51:00.258 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 19:51:00.267454 coreos-metadata[1873]: Feb 13 19:51:00.265 INFO Fetch successful Feb 13 19:51:00.267454 coreos-metadata[1873]: Feb 13 19:51:00.266 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 19:51:00.262689 systemd-logind[1900]: New seat seat0. Feb 13 19:51:00.279672 coreos-metadata[1873]: Feb 13 19:51:00.271 INFO Fetch successful Feb 13 19:51:00.279672 coreos-metadata[1873]: Feb 13 19:51:00.271 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 19:51:00.279672 coreos-metadata[1873]: Feb 13 19:51:00.277 INFO Fetch successful Feb 13 19:51:00.272261 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:51:00.362893 bash[1981]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:51:00.356691 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:51:00.378049 systemd[1]: Starting sshkeys.service... Feb 13 19:51:00.439429 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1991) Feb 13 19:51:00.448042 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:51:00.451991 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:51:00.516076 dbus-daemon[1875]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:51:00.519460 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Feb 13 19:51:00.527337 dbus-daemon[1875]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1949 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:51:00.531332 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:51:00.533278 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:51:00.557380 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:51:00.701049 locksmithd[1951]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:51:00.975221 polkitd[2041]: Started polkitd version 121 Feb 13 19:51:01.021575 amazon-ssm-agent[1958]: Initializing new seelog logger Feb 13 19:51:01.022065 amazon-ssm-agent[1958]: New Seelog Logger Creation Complete Feb 13 19:51:01.022065 amazon-ssm-agent[1958]: 2025/02/13 19:51:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:51:01.022065 amazon-ssm-agent[1958]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:51:01.024145 amazon-ssm-agent[1958]: 2025/02/13 19:51:01 processing appconfig overrides Feb 13 19:51:01.046679 sshd_keygen[1925]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:51:01.066121 amazon-ssm-agent[1958]: 2025/02/13 19:51:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:51:01.066121 amazon-ssm-agent[1958]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:51:01.066654 amazon-ssm-agent[1958]: 2025/02/13 19:51:01 processing appconfig overrides Feb 13 19:51:01.068642 amazon-ssm-agent[1958]: 2025/02/13 19:51:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:51:01.068642 amazon-ssm-agent[1958]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 19:51:01.068642 amazon-ssm-agent[1958]: 2025/02/13 19:51:01 processing appconfig overrides Feb 13 19:51:01.068642 amazon-ssm-agent[1958]: 2025-02-13 19:51:01 INFO Proxy environment variables: Feb 13 19:51:01.124859 amazon-ssm-agent[1958]: 2025/02/13 19:51:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:51:01.124859 amazon-ssm-agent[1958]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 19:51:01.134443 amazon-ssm-agent[1958]: 2025/02/13 19:51:01 processing appconfig overrides Feb 13 19:51:01.170290 amazon-ssm-agent[1958]: 2025-02-13 19:51:01 INFO https_proxy: Feb 13 19:51:01.171075 polkitd[2041]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:51:01.171206 polkitd[2041]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:51:01.172815 polkitd[2041]: Finished loading, compiling and executing 2 rules Feb 13 19:51:01.180233 dbus-daemon[1875]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:51:01.180447 systemd[1]: Started polkit.service - Authorization Manager. 
Feb 13 19:51:01.187494 coreos-metadata[2024]: Feb 13 19:51:01.180 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 19:51:01.192313 polkitd[2041]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:51:01.205518 coreos-metadata[2024]: Feb 13 19:51:01.205 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 19:51:01.252001 coreos-metadata[2024]: Feb 13 19:51:01.251 INFO Fetch successful Feb 13 19:51:01.252001 coreos-metadata[2024]: Feb 13 19:51:01.251 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 19:51:01.275761 amazon-ssm-agent[1958]: 2025-02-13 19:51:01 INFO http_proxy: Feb 13 19:51:01.276363 coreos-metadata[2024]: Feb 13 19:51:01.276 INFO Fetch successful Feb 13 19:51:01.330281 unknown[2024]: wrote ssh authorized keys file for user: core Feb 13 19:51:01.468372 amazon-ssm-agent[1958]: 2025-02-13 19:51:01 INFO no_proxy: Feb 13 19:51:01.479855 update-ssh-keys[2110]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:51:01.490855 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:51:01.553832 systemd[1]: Finished sshkeys.service. Feb 13 19:51:01.626297 amazon-ssm-agent[1958]: 2025-02-13 19:51:01 INFO Checking if agent identity type OnPrem can be assumed Feb 13 19:51:01.660048 systemd-hostnamed[1949]: Hostname set to (transient) Feb 13 19:51:01.660178 systemd-resolved[1801]: System hostname changed to 'ip-172-31-23-227'. Feb 13 19:51:01.708805 amazon-ssm-agent[1958]: 2025-02-13 19:51:01 INFO Checking if agent identity type EC2 can be assumed Feb 13 19:51:01.686298 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:51:01.713462 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:51:01.759657 systemd[1]: Started sshd@0-172.31.23.227:22-139.178.89.65:44138.service - OpenSSH per-connection server daemon (139.178.89.65:44138). 
Feb 13 19:51:01.867055 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:51:01.867398 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:51:01.950141 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:51:02.189488 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:51:02.249261 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:51:02.275379 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:51:02.280139 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:51:02.374563 amazon-ssm-agent[1958]: 2025-02-13 19:51:02 INFO Agent will take identity from EC2 Feb 13 19:51:02.483911 amazon-ssm-agent[1958]: 2025-02-13 19:51:02 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:51:02.508623 containerd[1928]: time="2025-02-13T19:51:02.508297684Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:51:02.582859 amazon-ssm-agent[1958]: 2025-02-13 19:51:02 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:51:02.689886 amazon-ssm-agent[1958]: 2025-02-13 19:51:02 INFO [amazon-ssm-agent] using named pipe channel for IPC Feb 13 19:51:02.751240 sshd[2125]: Accepted publickey for core from 139.178.89.65 port 44138 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8 Feb 13 19:51:02.759459 sshd-session[2125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:02.803911 amazon-ssm-agent[1958]: 2025-02-13 19:51:02 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Feb 13 19:51:02.825997 containerd[1928]: time="2025-02-13T19:51:02.816718393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:02.819106 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Feb 13 19:51:02.833236 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 19:51:02.841389 containerd[1928]: time="2025-02-13T19:51:02.841335001Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:51:02.841389 containerd[1928]: time="2025-02-13T19:51:02.841388562Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:51:02.841531 containerd[1928]: time="2025-02-13T19:51:02.841412275Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:51:02.841771 containerd[1928]: time="2025-02-13T19:51:02.841620715Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:51:02.841771 containerd[1928]: time="2025-02-13T19:51:02.841646458Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:51:02.841771 containerd[1928]: time="2025-02-13T19:51:02.841720061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:51:02.841771 containerd[1928]: time="2025-02-13T19:51:02.841736655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:51:02.852020 containerd[1928]: time="2025-02-13T19:51:02.842044613Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:51:02.852020 containerd[1928]: time="2025-02-13T19:51:02.842074771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:51:02.852020 containerd[1928]: time="2025-02-13T19:51:02.842096512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:51:02.852020 containerd[1928]: time="2025-02-13T19:51:02.842111190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:51:02.852020 containerd[1928]: time="2025-02-13T19:51:02.842221564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:51:02.852020 containerd[1928]: time="2025-02-13T19:51:02.842502656Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:51:02.860462 containerd[1928]: time="2025-02-13T19:51:02.858074520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:51:02.860462 containerd[1928]: time="2025-02-13T19:51:02.858120209Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:51:02.860462 containerd[1928]: time="2025-02-13T19:51:02.858295825Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:51:02.860462 containerd[1928]: time="2025-02-13T19:51:02.858364741Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:51:02.866888 systemd-logind[1900]: New session 1 of user core.
Feb 13 19:51:02.881439 containerd[1928]: time="2025-02-13T19:51:02.874628261Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:51:02.881439 containerd[1928]: time="2025-02-13T19:51:02.875749613Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:51:02.881439 containerd[1928]: time="2025-02-13T19:51:02.877083739Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:51:02.881439 containerd[1928]: time="2025-02-13T19:51:02.877128911Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:51:02.881439 containerd[1928]: time="2025-02-13T19:51:02.877153851Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:51:02.881439 containerd[1928]: time="2025-02-13T19:51:02.877378680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:51:02.881439 containerd[1928]: time="2025-02-13T19:51:02.877824970Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:51:02.881439 containerd[1928]: time="2025-02-13T19:51:02.878000081Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:51:02.881439 containerd[1928]: time="2025-02-13T19:51:02.878024247Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:51:02.881439 containerd[1928]: time="2025-02-13T19:51:02.878049262Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:51:02.881439 containerd[1928]: time="2025-02-13T19:51:02.878071730Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:51:02.881439 containerd[1928]: time="2025-02-13T19:51:02.878094273Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:51:02.881439 containerd[1928]: time="2025-02-13T19:51:02.878114193Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:51:02.881439 containerd[1928]: time="2025-02-13T19:51:02.878136843Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:51:02.882090 containerd[1928]: time="2025-02-13T19:51:02.878157883Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:51:02.882090 containerd[1928]: time="2025-02-13T19:51:02.878178999Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:51:02.882090 containerd[1928]: time="2025-02-13T19:51:02.878199795Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:51:02.882090 containerd[1928]: time="2025-02-13T19:51:02.878216946Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:51:02.882090 containerd[1928]: time="2025-02-13T19:51:02.878253059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882090 containerd[1928]: time="2025-02-13T19:51:02.878274389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882090 containerd[1928]: time="2025-02-13T19:51:02.878294324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882090 containerd[1928]: time="2025-02-13T19:51:02.878328962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882090 containerd[1928]: time="2025-02-13T19:51:02.878348199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882090 containerd[1928]: time="2025-02-13T19:51:02.878370599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882090 containerd[1928]: time="2025-02-13T19:51:02.878387966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882090 containerd[1928]: time="2025-02-13T19:51:02.878408402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882090 containerd[1928]: time="2025-02-13T19:51:02.878429325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882090 containerd[1928]: time="2025-02-13T19:51:02.878450965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882573 containerd[1928]: time="2025-02-13T19:51:02.878467374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882573 containerd[1928]: time="2025-02-13T19:51:02.878482677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882573 containerd[1928]: time="2025-02-13T19:51:02.878500204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882573 containerd[1928]: time="2025-02-13T19:51:02.878521120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:51:02.882573 containerd[1928]: time="2025-02-13T19:51:02.878553496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882573 containerd[1928]: time="2025-02-13T19:51:02.878572438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882573 containerd[1928]: time="2025-02-13T19:51:02.878600796Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:51:02.882573 containerd[1928]: time="2025-02-13T19:51:02.878660254Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:51:02.882573 containerd[1928]: time="2025-02-13T19:51:02.878686818Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:51:02.882573 containerd[1928]: time="2025-02-13T19:51:02.878702274Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:51:02.882573 containerd[1928]: time="2025-02-13T19:51:02.878721628Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:51:02.882573 containerd[1928]: time="2025-02-13T19:51:02.878736415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.882573 containerd[1928]: time="2025-02-13T19:51:02.878756658Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:51:02.882573 containerd[1928]: time="2025-02-13T19:51:02.878773254Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:51:02.895837 containerd[1928]: time="2025-02-13T19:51:02.878789407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:51:02.904751 amazon-ssm-agent[1958]: 2025-02-13 19:51:02 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Feb 13 19:51:02.905456 containerd[1928]: time="2025-02-13T19:51:02.905232164Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 19:51:02.905456 containerd[1928]: time="2025-02-13T19:51:02.905328221Z" level=info msg="Connect containerd service"
Feb 13 19:51:02.905456 containerd[1928]: time="2025-02-13T19:51:02.905389216Z" level=info msg="using legacy CRI server"
Feb 13 19:51:02.905456 containerd[1928]: time="2025-02-13T19:51:02.905400905Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 19:51:02.906536 containerd[1928]: time="2025-02-13T19:51:02.905580122Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 19:51:02.940427 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 19:51:02.972432 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 19:51:02.984892 containerd[1928]: time="2025-02-13T19:51:02.983448979Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:51:02.984892 containerd[1928]: time="2025-02-13T19:51:02.983970597Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 19:51:02.984892 containerd[1928]: time="2025-02-13T19:51:02.984039955Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 19:51:02.984892 containerd[1928]: time="2025-02-13T19:51:02.984076468Z" level=info msg="Start subscribing containerd event"
Feb 13 19:51:02.984892 containerd[1928]: time="2025-02-13T19:51:02.984123753Z" level=info msg="Start recovering state"
Feb 13 19:51:02.984892 containerd[1928]: time="2025-02-13T19:51:02.984205302Z" level=info msg="Start event monitor"
Feb 13 19:51:02.984892 containerd[1928]: time="2025-02-13T19:51:02.984218603Z" level=info msg="Start snapshots syncer"
Feb 13 19:51:02.984892 containerd[1928]: time="2025-02-13T19:51:02.984231101Z" level=info msg="Start cni network conf syncer for default"
Feb 13 19:51:02.984892 containerd[1928]: time="2025-02-13T19:51:02.984244147Z" level=info msg="Start streaming server"
Feb 13 19:51:02.984892 containerd[1928]: time="2025-02-13T19:51:02.984313118Z" level=info msg="containerd successfully booted in 0.488699s"
Feb 13 19:51:02.985056 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 19:51:03.012345 amazon-ssm-agent[1958]: 2025-02-13 19:51:02 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 19:51:03.035946 (systemd)[2155]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 19:51:03.108235 amazon-ssm-agent[1958]: 2025-02-13 19:51:02 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 19:51:03.208497 amazon-ssm-agent[1958]: 2025-02-13 19:51:02 INFO [Registrar] Starting registrar module
Feb 13 19:51:03.325117 amazon-ssm-agent[1958]: 2025-02-13 19:51:02 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 19:51:03.694696 systemd[2155]: Queued start job for default target default.target.
Feb 13 19:51:03.697496 systemd[2155]: Created slice app.slice - User Application Slice.
Feb 13 19:51:03.697532 systemd[2155]: Reached target paths.target - Paths.
Feb 13 19:51:03.697554 systemd[2155]: Reached target timers.target - Timers.
Feb 13 19:51:03.710006 systemd[2155]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 19:51:03.851094 systemd[2155]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 19:51:03.851592 systemd[2155]: Reached target sockets.target - Sockets.
Feb 13 19:51:03.851747 systemd[2155]: Reached target basic.target - Basic System.
Feb 13 19:51:03.851815 systemd[2155]: Reached target default.target - Main User Target.
Feb 13 19:51:03.851857 systemd[2155]: Startup finished in 771ms.
Feb 13 19:51:03.852256 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 19:51:03.875337 amazon-ssm-agent[1958]: 2025-02-13 19:51:03 INFO [EC2Identity] EC2 registration was successful.
Feb 13 19:51:03.884359 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 19:51:03.945856 amazon-ssm-agent[1958]: 2025-02-13 19:51:03 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 19:51:03.946130 amazon-ssm-agent[1958]: 2025-02-13 19:51:03 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 19:51:03.946331 amazon-ssm-agent[1958]: 2025-02-13 19:51:03 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 19:51:03.977250 amazon-ssm-agent[1958]: 2025-02-13 19:51:03 INFO [CredentialRefresher] Next credential rotation will be in 30.6833216027 minutes
Feb 13 19:51:04.121913 systemd[1]: Started sshd@1-172.31.23.227:22-139.178.89.65:54638.service - OpenSSH per-connection server daemon (139.178.89.65:54638).
Feb 13 19:51:04.521161 sshd[2168]: Accepted publickey for core from 139.178.89.65 port 54638 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:51:04.534600 sshd-session[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:04.545858 tar[1912]: linux-amd64/LICENSE
Feb 13 19:51:04.546941 tar[1912]: linux-amd64/README.md
Feb 13 19:51:04.596394 systemd-logind[1900]: New session 2 of user core.
Feb 13 19:51:04.604607 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 19:51:04.625544 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 19:51:04.784951 sshd[2176]: Connection closed by 139.178.89.65 port 54638
Feb 13 19:51:04.788810 sshd-session[2168]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:04.819575 systemd-logind[1900]: Session 2 logged out. Waiting for processes to exit.
Feb 13 19:51:04.824733 systemd[1]: sshd@1-172.31.23.227:22-139.178.89.65:54638.service: Deactivated successfully.
Feb 13 19:51:04.864986 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 19:51:04.892158 systemd[1]: Started sshd@2-172.31.23.227:22-139.178.89.65:54654.service - OpenSSH per-connection server daemon (139.178.89.65:54654).
Feb 13 19:51:04.911132 systemd-logind[1900]: Removed session 2.
Feb 13 19:51:05.117316 amazon-ssm-agent[1958]: 2025-02-13 19:51:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 19:51:05.219801 amazon-ssm-agent[1958]: 2025-02-13 19:51:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2184) started
Feb 13 19:51:05.272812 sshd[2181]: Accepted publickey for core from 139.178.89.65 port 54654 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:51:05.283006 sshd-session[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:05.319651 amazon-ssm-agent[1958]: 2025-02-13 19:51:05 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 19:51:05.368998 systemd-logind[1900]: New session 3 of user core.
Feb 13 19:51:05.399490 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 19:51:05.672948 sshd[2190]: Connection closed by 139.178.89.65 port 54654
Feb 13 19:51:05.675330 sshd-session[2181]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:05.723663 systemd[1]: sshd@2-172.31.23.227:22-139.178.89.65:54654.service: Deactivated successfully.
Feb 13 19:51:05.746801 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 19:51:05.766953 systemd-logind[1900]: Session 3 logged out. Waiting for processes to exit.
Feb 13 19:51:05.777212 systemd-logind[1900]: Removed session 3.
Feb 13 19:51:06.479173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:51:06.482614 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 19:51:06.486511 systemd[1]: Startup finished in 9.435s (kernel) + 14.224s (userspace) = 23.659s.
Feb 13 19:51:06.495167 (kubelet)[2209]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:51:07.712427 systemd-resolved[1801]: Clock change detected. Flushing caches.
Feb 13 19:51:09.038595 kubelet[2209]: E0213 19:51:09.038511 2209 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:51:09.047166 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:51:09.047495 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:51:16.689497 systemd[1]: Started sshd@3-172.31.23.227:22-139.178.89.65:40108.service - OpenSSH per-connection server daemon (139.178.89.65:40108).
Feb 13 19:51:16.926533 sshd[2222]: Accepted publickey for core from 139.178.89.65 port 40108 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:51:16.931808 sshd-session[2222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:16.957337 systemd-logind[1900]: New session 4 of user core.
Feb 13 19:51:16.963830 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 19:51:17.107842 sshd[2225]: Connection closed by 139.178.89.65 port 40108
Feb 13 19:51:17.109684 sshd-session[2222]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:17.137204 systemd[1]: sshd@3-172.31.23.227:22-139.178.89.65:40108.service: Deactivated successfully.
Feb 13 19:51:17.138455 systemd-logind[1900]: Session 4 logged out. Waiting for processes to exit.
Feb 13 19:51:17.144962 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 19:51:17.146143 systemd-logind[1900]: Removed session 4.
Feb 13 19:51:17.150823 systemd[1]: Started sshd@4-172.31.23.227:22-139.178.89.65:40120.service - OpenSSH per-connection server daemon (139.178.89.65:40120).
Feb 13 19:51:17.319160 sshd[2230]: Accepted publickey for core from 139.178.89.65 port 40120 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:51:17.320706 sshd-session[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:17.333448 systemd-logind[1900]: New session 5 of user core.
Feb 13 19:51:17.338075 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 19:51:17.478346 sshd[2233]: Connection closed by 139.178.89.65 port 40120
Feb 13 19:51:17.480252 sshd-session[2230]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:17.486059 systemd[1]: sshd@4-172.31.23.227:22-139.178.89.65:40120.service: Deactivated successfully.
Feb 13 19:51:17.498652 systemd-logind[1900]: Session 5 logged out. Waiting for processes to exit.
Feb 13 19:51:17.513228 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 19:51:17.524830 systemd[1]: Started sshd@5-172.31.23.227:22-139.178.89.65:40122.service - OpenSSH per-connection server daemon (139.178.89.65:40122).
Feb 13 19:51:17.526700 systemd-logind[1900]: Removed session 5.
Feb 13 19:51:17.706258 sshd[2238]: Accepted publickey for core from 139.178.89.65 port 40122 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:51:17.708453 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:17.725563 systemd-logind[1900]: New session 6 of user core.
Feb 13 19:51:17.739285 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 19:51:17.893287 sshd[2241]: Connection closed by 139.178.89.65 port 40122
Feb 13 19:51:17.894030 sshd-session[2238]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:17.901173 systemd[1]: sshd@5-172.31.23.227:22-139.178.89.65:40122.service: Deactivated successfully.
Feb 13 19:51:17.906626 systemd-logind[1900]: Session 6 logged out. Waiting for processes to exit.
Feb 13 19:51:17.907528 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 19:51:17.909783 systemd-logind[1900]: Removed session 6.
Feb 13 19:51:17.926245 systemd[1]: Started sshd@6-172.31.23.227:22-139.178.89.65:40134.service - OpenSSH per-connection server daemon (139.178.89.65:40134).
Feb 13 19:51:18.161847 sshd[2246]: Accepted publickey for core from 139.178.89.65 port 40134 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:51:18.162802 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:51:18.184369 systemd-logind[1900]: New session 7 of user core.
Feb 13 19:51:18.189820 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 19:51:18.369658 sudo[2250]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 19:51:18.370131 sudo[2250]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:51:19.096752 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:51:19.122011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:51:19.323762 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 19:51:19.333573 (dockerd)[2272]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 19:51:19.523886 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:51:19.569156 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:51:19.699775 kubelet[2281]: E0213 19:51:19.699708 2281 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:51:19.709814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:51:19.713301 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:51:20.239138 dockerd[2272]: time="2025-02-13T19:51:20.235684969Z" level=info msg="Starting up"
Feb 13 19:51:20.434343 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport348065050-merged.mount: Deactivated successfully.
Feb 13 19:51:20.564220 systemd[1]: var-lib-docker-metacopy\x2dcheck192053487-merged.mount: Deactivated successfully.
Feb 13 19:51:20.591624 dockerd[2272]: time="2025-02-13T19:51:20.591343762Z" level=info msg="Loading containers: start."
Feb 13 19:51:21.069472 kernel: Initializing XFRM netlink socket
Feb 13 19:51:21.175354 (udev-worker)[2309]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:51:21.282892 systemd-networkd[1485]: docker0: Link UP
Feb 13 19:51:21.349400 dockerd[2272]: time="2025-02-13T19:51:21.349327650Z" level=info msg="Loading containers: done."
Feb 13 19:51:21.423562 dockerd[2272]: time="2025-02-13T19:51:21.423274312Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 19:51:21.423776 dockerd[2272]: time="2025-02-13T19:51:21.423641792Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Feb 13 19:51:21.423841 dockerd[2272]: time="2025-02-13T19:51:21.423796693Z" level=info msg="Daemon has completed initialization"
Feb 13 19:51:21.536668 dockerd[2272]: time="2025-02-13T19:51:21.536330996Z" level=info msg="API listen on /run/docker.sock"
Feb 13 19:51:21.537139 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 19:51:23.506990 containerd[1928]: time="2025-02-13T19:51:23.506954832Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\""
Feb 13 19:51:24.228279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount393345181.mount: Deactivated successfully.
Feb 13 19:51:26.995384 containerd[1928]: time="2025-02-13T19:51:26.995325667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:27.001346 containerd[1928]: time="2025-02-13T19:51:27.001259888Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214"
Feb 13 19:51:27.002540 containerd[1928]: time="2025-02-13T19:51:27.002468223Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:27.023670 containerd[1928]: time="2025-02-13T19:51:27.023621551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:27.025719 containerd[1928]: time="2025-02-13T19:51:27.025405529Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 3.518405087s"
Feb 13 19:51:27.025719 containerd[1928]: time="2025-02-13T19:51:27.025455442Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\""
Feb 13 19:51:27.078031 containerd[1928]: time="2025-02-13T19:51:27.077992280Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\""
Feb 13 19:51:29.373409 containerd[1928]: time="2025-02-13T19:51:29.373346259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:29.375338 containerd[1928]: time="2025-02-13T19:51:29.375142128Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545"
Feb 13 19:51:29.376998 containerd[1928]: time="2025-02-13T19:51:29.376937050Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:29.379991 containerd[1928]: time="2025-02-13T19:51:29.379942790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:29.388134 containerd[1928]: time="2025-02-13T19:51:29.387932248Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 2.30989713s"
Feb 13 19:51:29.388134 containerd[1928]: time="2025-02-13T19:51:29.387983317Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\""
Feb 13 19:51:29.465244 containerd[1928]: time="2025-02-13T19:51:29.465090699Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\""
Feb 13 19:51:29.846331 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 19:51:29.857678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:51:30.333843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:51:30.347892 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:51:30.487381 kubelet[2567]: E0213 19:51:30.487237 2567 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:51:30.494402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:51:30.494700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:51:31.277027 containerd[1928]: time="2025-02-13T19:51:31.276980299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:31.278670 containerd[1928]: time="2025-02-13T19:51:31.278617702Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130"
Feb 13 19:51:31.281070 containerd[1928]: time="2025-02-13T19:51:31.279593889Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:31.282724 containerd[1928]: time="2025-02-13T19:51:31.282689366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:31.284223 containerd[1928]: time="2025-02-13T19:51:31.284184019Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.818934097s"
Feb 13 19:51:31.284399 containerd[1928]: time="2025-02-13T19:51:31.284356292Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\""
Feb 13 19:51:31.318291 containerd[1928]: time="2025-02-13T19:51:31.318255162Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 19:51:32.594606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3966203920.mount: Deactivated successfully.
Feb 13 19:51:32.681793 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 19:51:33.182792 containerd[1928]: time="2025-02-13T19:51:33.182733451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:33.185297 containerd[1928]: time="2025-02-13T19:51:33.184273650Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858"
Feb 13 19:51:33.186768 containerd[1928]: time="2025-02-13T19:51:33.186397720Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:33.193642 containerd[1928]: time="2025-02-13T19:51:33.193576994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:33.194690 containerd[1928]: time="2025-02-13T19:51:33.194648417Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 1.876100012s"
Feb 13 19:51:33.194844 containerd[1928]: time="2025-02-13T19:51:33.194825295Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\""
Feb 13 19:51:33.222913 containerd[1928]: time="2025-02-13T19:51:33.222868461Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 19:51:33.798758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount385131548.mount: Deactivated successfully.
Feb 13 19:51:35.423968 containerd[1928]: time="2025-02-13T19:51:35.423648900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:35.426891 containerd[1928]: time="2025-02-13T19:51:35.426629931Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Feb 13 19:51:35.429627 containerd[1928]: time="2025-02-13T19:51:35.429578739Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:35.437388 containerd[1928]: time="2025-02-13T19:51:35.435822523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:35.437388 containerd[1928]: time="2025-02-13T19:51:35.437243313Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.214330613s"
Feb 13 19:51:35.437388 containerd[1928]: time="2025-02-13T19:51:35.437288644Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Feb 13 19:51:35.498265 containerd[1928]: time="2025-02-13T19:51:35.498217653Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 19:51:36.051421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540751913.mount: Deactivated successfully.
Feb 13 19:51:36.068832 containerd[1928]: time="2025-02-13T19:51:36.068728152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:36.071077 containerd[1928]: time="2025-02-13T19:51:36.070524333Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Feb 13 19:51:36.073474 containerd[1928]: time="2025-02-13T19:51:36.073397086Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:36.079255 containerd[1928]: time="2025-02-13T19:51:36.078254250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:36.079255 containerd[1928]: time="2025-02-13T19:51:36.079105626Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 580.833248ms"
Feb 13 19:51:36.079255 containerd[1928]: time="2025-02-13T19:51:36.079147844Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 13 19:51:36.158978 containerd[1928]: time="2025-02-13T19:51:36.158862591Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Feb 13 19:51:36.706848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3090029458.mount: Deactivated successfully.
Feb 13 19:51:39.911312 containerd[1928]: time="2025-02-13T19:51:39.911247818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:39.914105 containerd[1928]: time="2025-02-13T19:51:39.913603244Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Feb 13 19:51:39.916860 containerd[1928]: time="2025-02-13T19:51:39.916124266Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:39.923505 containerd[1928]: time="2025-02-13T19:51:39.923455882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:39.925445 containerd[1928]: time="2025-02-13T19:51:39.925402077Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.766499107s"
Feb 13 19:51:39.925570 containerd[1928]: time="2025-02-13T19:51:39.925453265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Feb 13 19:51:40.501754 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 13 19:51:40.509742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:51:40.935735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:51:40.987863 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:51:41.104302 kubelet[2747]: E0213 19:51:41.104255 2747 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:51:41.112244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:51:41.112597 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:51:44.064864 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:51:44.071819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:51:44.115773 systemd[1]: Reloading requested from client PID 2789 ('systemctl') (unit session-7.scope)...
Feb 13 19:51:44.115796 systemd[1]: Reloading...
Feb 13 19:51:44.228394 zram_generator::config[2826]: No configuration found.
Feb 13 19:51:44.507260 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:51:44.714797 systemd[1]: Reloading finished in 598 ms.
Feb 13 19:51:44.808498 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 19:51:44.811277 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 19:51:44.811760 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:51:44.825000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:51:45.136765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:51:45.149230 (kubelet)[2901]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:51:45.227162 kubelet[2901]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:51:45.227162 kubelet[2901]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:51:45.227162 kubelet[2901]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:51:45.229019 kubelet[2901]: I0213 19:51:45.228950 2901 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:51:45.826039 update_engine[1903]: I20250213 19:51:45.822582 1903 update_attempter.cc:509] Updating boot flags...
Feb 13 19:51:45.986118 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2922)
Feb 13 19:51:46.297386 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2922)
Feb 13 19:51:46.696402 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2922)
Feb 13 19:51:47.164988 kubelet[2901]: I0213 19:51:47.164925 2901 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 19:51:47.164988 kubelet[2901]: I0213 19:51:47.164978 2901 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:51:47.165828 kubelet[2901]: I0213 19:51:47.165621 2901 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 19:51:47.216395 kubelet[2901]: I0213 19:51:47.215911 2901 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:51:47.220664 kubelet[2901]: E0213 19:51:47.220627 2901 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.227:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.227:6443: connect: connection refused
Feb 13 19:51:47.245142 kubelet[2901]: I0213 19:51:47.245109 2901 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:51:47.247702 kubelet[2901]: I0213 19:51:47.247636 2901 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:51:47.247927 kubelet[2901]: I0213 19:51:47.247698 2901 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-227","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 19:51:47.248204 kubelet[2901]: I0213 19:51:47.247950 2901 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:51:47.248204 kubelet[2901]: I0213 19:51:47.247966 2901 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 19:51:47.253790 kubelet[2901]: I0213 19:51:47.253749 2901 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:51:47.258975 kubelet[2901]: I0213 19:51:47.258564 2901 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 19:51:47.258975 kubelet[2901]: W0213 19:51:47.258547 2901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.227:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-227&limit=500&resourceVersion=0": dial tcp 172.31.23.227:6443: connect: connection refused
Feb 13 19:51:47.258975 kubelet[2901]: I0213 19:51:47.258600 2901 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:51:47.258975 kubelet[2901]: E0213 19:51:47.258612 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.227:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-227&limit=500&resourceVersion=0": dial tcp 172.31.23.227:6443: connect: connection refused
Feb 13 19:51:47.258975 kubelet[2901]: I0213 19:51:47.258633 2901 kubelet.go:312] "Adding apiserver pod source"
Feb 13 19:51:47.258975 kubelet[2901]: I0213 19:51:47.258655 2901 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:51:47.265717 kubelet[2901]: W0213 19:51:47.265605 2901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.227:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.227:6443: connect: connection refused
Feb 13 19:51:47.265717 kubelet[2901]: E0213 19:51:47.265674 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.227:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.227:6443: connect: connection refused
Feb 13 19:51:47.266178 kubelet[2901]: I0213 19:51:47.266065 2901 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 19:51:47.268732 kubelet[2901]: I0213 19:51:47.268704 2901 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:51:47.270388 kubelet[2901]: W0213 19:51:47.268936 2901 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 19:51:47.270388 kubelet[2901]: I0213 19:51:47.269746 2901 server.go:1264] "Started kubelet"
Feb 13 19:51:47.292387 kubelet[2901]: I0213 19:51:47.289462 2901 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 19:51:47.293106 kubelet[2901]: I0213 19:51:47.293086 2901 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 19:51:47.294702 kubelet[2901]: I0213 19:51:47.294638 2901 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 19:51:47.300312 kubelet[2901]: I0213 19:51:47.300274 2901 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 19:51:47.300847 kubelet[2901]: I0213 19:51:47.300827 2901 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 19:51:47.300948 kubelet[2901]: E0213 19:51:47.300761 2901 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.227:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.227:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-227.1823dc7c3e051100 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-227,UID:ip-172-31-23-227,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-227,},FirstTimestamp:2025-02-13 19:51:47.269718272 +0000 UTC m=+2.113931352,LastTimestamp:2025-02-13 19:51:47.269718272 +0000 UTC m=+2.113931352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-227,}"
Feb 13 19:51:47.316700 kubelet[2901]: E0213 19:51:47.316636 2901 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:51:47.317028 kubelet[2901]: E0213 19:51:47.317005 2901 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-23-227\" not found"
Feb 13 19:51:47.317094 kubelet[2901]: I0213 19:51:47.317055 2901 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 19:51:47.317667 kubelet[2901]: I0213 19:51:47.317636 2901 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 19:51:47.317841 kubelet[2901]: I0213 19:51:47.317724 2901 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 19:51:47.318542 kubelet[2901]: W0213 19:51:47.318486 2901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.227:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.227:6443: connect: connection refused
Feb 13 19:51:47.319009 kubelet[2901]: E0213 19:51:47.318558 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.227:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.227:6443: connect: connection refused
Feb 13 19:51:47.320477 kubelet[2901]: E0213 19:51:47.320150 2901 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-227?timeout=10s\": dial tcp 172.31.23.227:6443: connect: connection refused" interval="200ms"
Feb 13 19:51:47.322430 kubelet[2901]: I0213 19:51:47.322091 2901 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:51:47.322430 kubelet[2901]: I0213 19:51:47.322341 2901 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:51:47.324328 kubelet[2901]: I0213 19:51:47.324314 2901 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:51:47.363209 kubelet[2901]: I0213 19:51:47.363155 2901 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:51:47.364827 kubelet[2901]: I0213 19:51:47.364764 2901 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:51:47.364827 kubelet[2901]: I0213 19:51:47.364807 2901 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 19:51:47.364827 kubelet[2901]: I0213 19:51:47.364828 2901 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 19:51:47.365008 kubelet[2901]: E0213 19:51:47.364878 2901 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 19:51:47.371058 kubelet[2901]: W0213 19:51:47.371014 2901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.227:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.227:6443: connect: connection refused
Feb 13 19:51:47.371058 kubelet[2901]: E0213 19:51:47.371055 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.227:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.227:6443: connect: connection refused
Feb 13 19:51:47.372702 kubelet[2901]: I0213 19:51:47.372668 2901 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 19:51:47.372702 kubelet[2901]: I0213 19:51:47.372688 2901 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 19:51:47.372702 kubelet[2901]: I0213 19:51:47.372706 2901 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:51:47.376260 kubelet[2901]: I0213 19:51:47.376151 2901 policy_none.go:49] "None policy: Start"
Feb 13 19:51:47.377655 kubelet[2901]: I0213 19:51:47.377624 2901 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 19:51:47.378324 kubelet[2901]: I0213 19:51:47.377914 2901 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:51:47.388743 kubelet[2901]: I0213 19:51:47.388704 2901 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:51:47.390027 kubelet[2901]: I0213 19:51:47.389896 2901 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:51:47.390153 kubelet[2901]: I0213 19:51:47.390137 2901 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:51:47.397669 kubelet[2901]: E0213 19:51:47.397591 2901 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-227\" not found"
Feb 13 19:51:47.421265 kubelet[2901]: I0213 19:51:47.420854 2901 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-227"
Feb 13 19:51:47.423511 kubelet[2901]: E0213 19:51:47.423475 2901 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.227:6443/api/v1/nodes\": dial tcp 172.31.23.227:6443: connect: connection refused" node="ip-172-31-23-227"
Feb 13 19:51:47.465993 kubelet[2901]: I0213 19:51:47.465865 2901 topology_manager.go:215] "Topology Admit Handler" podUID="cdc0d348615f7bda555a0a352169fe5f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-227"
Feb 13 19:51:47.467684 kubelet[2901]: I0213 19:51:47.467644 2901 topology_manager.go:215] "Topology Admit Handler" podUID="98e840f8adcf2c594e6e0b38f4507839" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-227"
Feb 13 19:51:47.475310 kubelet[2901]: I0213 19:51:47.474492 2901 topology_manager.go:215] "Topology Admit Handler" podUID="64be28fb96332db90c14c7115eabc4e9" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-227"
Feb 13 19:51:47.518569 kubelet[2901]: I0213 19:51:47.518534 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdc0d348615f7bda555a0a352169fe5f-ca-certs\") pod \"kube-apiserver-ip-172-31-23-227\" (UID: \"cdc0d348615f7bda555a0a352169fe5f\") " pod="kube-system/kube-apiserver-ip-172-31-23-227"
Feb 13 19:51:47.518833 kubelet[2901]: I0213 19:51:47.518803 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdc0d348615f7bda555a0a352169fe5f-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-227\" (UID: \"cdc0d348615f7bda555a0a352169fe5f\") " pod="kube-system/kube-apiserver-ip-172-31-23-227"
Feb 13 19:51:47.519131 kubelet[2901]: I0213 19:51:47.518846 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98e840f8adcf2c594e6e0b38f4507839-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-227\" (UID: \"98e840f8adcf2c594e6e0b38f4507839\") " pod="kube-system/kube-controller-manager-ip-172-31-23-227"
Feb 13 19:51:47.519131 kubelet[2901]: I0213 19:51:47.518874 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98e840f8adcf2c594e6e0b38f4507839-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-227\" (UID: \"98e840f8adcf2c594e6e0b38f4507839\") " pod="kube-system/kube-controller-manager-ip-172-31-23-227"
Feb 13 19:51:47.519131 kubelet[2901]: I0213 19:51:47.518899 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98e840f8adcf2c594e6e0b38f4507839-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-227\" (UID: \"98e840f8adcf2c594e6e0b38f4507839\") " pod="kube-system/kube-controller-manager-ip-172-31-23-227"
Feb 13 19:51:47.519131 kubelet[2901]: I0213 19:51:47.518926 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/64be28fb96332db90c14c7115eabc4e9-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-227\" (UID: \"64be28fb96332db90c14c7115eabc4e9\") " pod="kube-system/kube-scheduler-ip-172-31-23-227"
Feb 13 19:51:47.519131 kubelet[2901]: I0213 19:51:47.518950 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdc0d348615f7bda555a0a352169fe5f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-227\" (UID: \"cdc0d348615f7bda555a0a352169fe5f\") " pod="kube-system/kube-apiserver-ip-172-31-23-227"
Feb 13 19:51:47.519447 kubelet[2901]: I0213 19:51:47.518975 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98e840f8adcf2c594e6e0b38f4507839-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-227\" (UID: \"98e840f8adcf2c594e6e0b38f4507839\") " pod="kube-system/kube-controller-manager-ip-172-31-23-227"
Feb 13 19:51:47.519447 kubelet[2901]: I0213 19:51:47.518998 2901 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98e840f8adcf2c594e6e0b38f4507839-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-227\" (UID: \"98e840f8adcf2c594e6e0b38f4507839\") " pod="kube-system/kube-controller-manager-ip-172-31-23-227"
Feb 13 19:51:47.521149 kubelet[2901]: E0213 19:51:47.521056 2901 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-227?timeout=10s\": dial tcp 172.31.23.227:6443: connect: connection refused" interval="400ms"
Feb 13 19:51:47.625610 kubelet[2901]: I0213 19:51:47.625570 2901 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-227"
Feb 13 19:51:47.626007 kubelet[2901]: E0213 19:51:47.625908 2901 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.227:6443/api/v1/nodes\": dial tcp 172.31.23.227:6443: connect: connection refused" node="ip-172-31-23-227"
Feb 13 19:51:47.778216 containerd[1928]: time="2025-02-13T19:51:47.777996341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-227,Uid:cdc0d348615f7bda555a0a352169fe5f,Namespace:kube-system,Attempt:0,}"
Feb 13 19:51:47.800240 containerd[1928]: time="2025-02-13T19:51:47.800093370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-227,Uid:98e840f8adcf2c594e6e0b38f4507839,Namespace:kube-system,Attempt:0,}"
Feb 13 19:51:47.808966 containerd[1928]: time="2025-02-13T19:51:47.808837304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-227,Uid:64be28fb96332db90c14c7115eabc4e9,Namespace:kube-system,Attempt:0,}"
Feb 13 19:51:47.921760 kubelet[2901]: E0213 19:51:47.921705 2901 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-227?timeout=10s\": dial tcp 172.31.23.227:6443: connect: connection refused" interval="800ms"
Feb 13 19:51:48.029922 kubelet[2901]: I0213 19:51:48.029837 2901 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-227"
Feb 13 19:51:48.030722 kubelet[2901]: E0213 19:51:48.030440 2901 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.227:6443/api/v1/nodes\": dial tcp 172.31.23.227:6443: connect: connection refused" node="ip-172-31-23-227"
Feb 13 19:51:48.295666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1230930431.mount: Deactivated successfully.
Feb 13 19:51:48.298325 containerd[1928]: time="2025-02-13T19:51:48.298286544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:51:48.305478 containerd[1928]: time="2025-02-13T19:51:48.305412030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 19:51:48.308198 containerd[1928]: time="2025-02-13T19:51:48.308135612Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:51:48.311874 containerd[1928]: time="2025-02-13T19:51:48.311780229Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:51:48.313434 containerd[1928]: time="2025-02-13T19:51:48.313246850Z" level=info msg="stop pulling image
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:51:48.313434 containerd[1928]: time="2025-02-13T19:51:48.313352274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:48.315777 containerd[1928]: time="2025-02-13T19:51:48.315550506Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:51:48.315777 containerd[1928]: time="2025-02-13T19:51:48.315724339Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:51:48.316747 containerd[1928]: time="2025-02-13T19:51:48.316709963Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 538.48249ms" Feb 13 19:51:48.340467 containerd[1928]: time="2025-02-13T19:51:48.339638820Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 530.614307ms" Feb 13 19:51:48.349252 containerd[1928]: time="2025-02-13T19:51:48.349202137Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 548.923841ms" Feb 13 19:51:48.376592 kubelet[2901]: W0213 19:51:48.375758 2901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.227:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.227:6443: connect: connection refused Feb 13 19:51:48.376592 kubelet[2901]: E0213 19:51:48.375839 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.227:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.227:6443: connect: connection refused Feb 13 19:51:48.624640 kubelet[2901]: W0213 19:51:48.624499 2901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.227:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.227:6443: connect: connection refused Feb 13 19:51:48.625015 kubelet[2901]: E0213 19:51:48.624996 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.227:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.227:6443: connect: connection refused Feb 13 19:51:48.660935 kubelet[2901]: W0213 19:51:48.660854 2901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.227:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.227:6443: connect: connection refused Feb 13 19:51:48.661168 kubelet[2901]: E0213 19:51:48.661154 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.227:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
172.31.23.227:6443: connect: connection refused Feb 13 19:51:48.709887 kubelet[2901]: W0213 19:51:48.709817 2901 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.227:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-227&limit=500&resourceVersion=0": dial tcp 172.31.23.227:6443: connect: connection refused Feb 13 19:51:48.710307 kubelet[2901]: E0213 19:51:48.710261 2901 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.227:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-227&limit=500&resourceVersion=0": dial tcp 172.31.23.227:6443: connect: connection refused Feb 13 19:51:48.724092 kubelet[2901]: E0213 19:51:48.723729 2901 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-227?timeout=10s\": dial tcp 172.31.23.227:6443: connect: connection refused" interval="1.6s" Feb 13 19:51:48.748671 containerd[1928]: time="2025-02-13T19:51:48.732076321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:48.748671 containerd[1928]: time="2025-02-13T19:51:48.748255031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:48.748671 containerd[1928]: time="2025-02-13T19:51:48.748317993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:48.748950 containerd[1928]: time="2025-02-13T19:51:48.748769063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:48.756481 containerd[1928]: time="2025-02-13T19:51:48.754946742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:48.756481 containerd[1928]: time="2025-02-13T19:51:48.755156572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:48.756481 containerd[1928]: time="2025-02-13T19:51:48.755211715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:48.756481 containerd[1928]: time="2025-02-13T19:51:48.755379950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:48.759257 containerd[1928]: time="2025-02-13T19:51:48.757596137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:48.759257 containerd[1928]: time="2025-02-13T19:51:48.757689320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:48.759257 containerd[1928]: time="2025-02-13T19:51:48.757710175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:48.759257 containerd[1928]: time="2025-02-13T19:51:48.757879084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:48.841185 kubelet[2901]: I0213 19:51:48.841040 2901 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-227" Feb 13 19:51:48.842044 kubelet[2901]: E0213 19:51:48.841813 2901 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.227:6443/api/v1/nodes\": dial tcp 172.31.23.227:6443: connect: connection refused" node="ip-172-31-23-227" Feb 13 19:51:48.954661 containerd[1928]: time="2025-02-13T19:51:48.954437286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-227,Uid:98e840f8adcf2c594e6e0b38f4507839,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff0a740109f17b022640f7f0808920cc6735b0e343645e3d32ee06f8a3173b32\"" Feb 13 19:51:48.962912 containerd[1928]: time="2025-02-13T19:51:48.962794195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-227,Uid:cdc0d348615f7bda555a0a352169fe5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"11651dbe65d63949a9eaf020e3845f13abd5bd04b9ead5d490785e40de609ff4\"" Feb 13 19:51:48.969089 containerd[1928]: time="2025-02-13T19:51:48.969029977Z" level=info msg="CreateContainer within sandbox \"11651dbe65d63949a9eaf020e3845f13abd5bd04b9ead5d490785e40de609ff4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:51:48.969629 containerd[1928]: time="2025-02-13T19:51:48.969599048Z" level=info msg="CreateContainer within sandbox \"ff0a740109f17b022640f7f0808920cc6735b0e343645e3d32ee06f8a3173b32\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:51:48.983255 containerd[1928]: time="2025-02-13T19:51:48.983210173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-227,Uid:64be28fb96332db90c14c7115eabc4e9,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"3c627e745046c403677fa610cac9c95e2c07f9b646c9b226208fa8ce9b3a2bc3\"" Feb 13 19:51:48.989546 containerd[1928]: time="2025-02-13T19:51:48.989503469Z" level=info msg="CreateContainer within sandbox \"3c627e745046c403677fa610cac9c95e2c07f9b646c9b226208fa8ce9b3a2bc3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:51:49.023339 containerd[1928]: time="2025-02-13T19:51:49.023016806Z" level=info msg="CreateContainer within sandbox \"11651dbe65d63949a9eaf020e3845f13abd5bd04b9ead5d490785e40de609ff4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a3a344d9806edead860418baaa618f50e21d756bd542b2d2f7090bad5ce03d6b\"" Feb 13 19:51:49.024124 containerd[1928]: time="2025-02-13T19:51:49.024089604Z" level=info msg="StartContainer for \"a3a344d9806edead860418baaa618f50e21d756bd542b2d2f7090bad5ce03d6b\"" Feb 13 19:51:49.028727 containerd[1928]: time="2025-02-13T19:51:49.028690753Z" level=info msg="CreateContainer within sandbox \"ff0a740109f17b022640f7f0808920cc6735b0e343645e3d32ee06f8a3173b32\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6da8110bdfbc7ecc8bc9e0696fd5a8c08969ca3d96342daa1bede8dd2d77f763\"" Feb 13 19:51:49.031272 containerd[1928]: time="2025-02-13T19:51:49.030464710Z" level=info msg="StartContainer for \"6da8110bdfbc7ecc8bc9e0696fd5a8c08969ca3d96342daa1bede8dd2d77f763\"" Feb 13 19:51:49.050781 containerd[1928]: time="2025-02-13T19:51:49.050737500Z" level=info msg="CreateContainer within sandbox \"3c627e745046c403677fa610cac9c95e2c07f9b646c9b226208fa8ce9b3a2bc3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"77e5fbf92c4438a81a9e00d004e34fbb33acd1f8099100016625d4d190491fb6\"" Feb 13 19:51:49.052682 containerd[1928]: time="2025-02-13T19:51:49.052637196Z" level=info msg="StartContainer for \"77e5fbf92c4438a81a9e00d004e34fbb33acd1f8099100016625d4d190491fb6\"" Feb 13 19:51:49.256098 containerd[1928]: time="2025-02-13T19:51:49.253748307Z" 
level=info msg="StartContainer for \"6da8110bdfbc7ecc8bc9e0696fd5a8c08969ca3d96342daa1bede8dd2d77f763\" returns successfully" Feb 13 19:51:49.290659 containerd[1928]: time="2025-02-13T19:51:49.289382228Z" level=info msg="StartContainer for \"a3a344d9806edead860418baaa618f50e21d756bd542b2d2f7090bad5ce03d6b\" returns successfully" Feb 13 19:51:49.321799 containerd[1928]: time="2025-02-13T19:51:49.321300321Z" level=info msg="StartContainer for \"77e5fbf92c4438a81a9e00d004e34fbb33acd1f8099100016625d4d190491fb6\" returns successfully" Feb 13 19:51:49.345750 kubelet[2901]: E0213 19:51:49.345710 2901 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.227:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.227:6443: connect: connection refused Feb 13 19:51:50.449500 kubelet[2901]: I0213 19:51:50.446623 2901 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-227" Feb 13 19:51:53.268480 kubelet[2901]: I0213 19:51:53.268443 2901 apiserver.go:52] "Watching apiserver" Feb 13 19:51:53.319519 kubelet[2901]: I0213 19:51:53.319485 2901 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:51:53.423974 kubelet[2901]: E0213 19:51:53.423927 2901 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-227\" not found" node="ip-172-31-23-227" Feb 13 19:51:53.512582 kubelet[2901]: I0213 19:51:53.512538 2901 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-227" Feb 13 19:51:53.514013 kubelet[2901]: E0213 19:51:53.513898 2901 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-227.1823dc7c3e051100 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-227,UID:ip-172-31-23-227,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-227,},FirstTimestamp:2025-02-13 19:51:47.269718272 +0000 UTC m=+2.113931352,LastTimestamp:2025-02-13 19:51:47.269718272 +0000 UTC m=+2.113931352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-227,}" Feb 13 19:51:53.580407 kubelet[2901]: E0213 19:51:53.578709 2901 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-227.1823dc7c40ce5b68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-227,UID:ip-172-31-23-227,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-23-227,},FirstTimestamp:2025-02-13 19:51:47.316464488 +0000 UTC m=+2.160677569,LastTimestamp:2025-02-13 19:51:47.316464488 +0000 UTC m=+2.160677569,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-227,}" Feb 13 19:51:56.206668 systemd[1]: Reloading requested from client PID 3449 ('systemctl') (unit session-7.scope)... Feb 13 19:51:56.206688 systemd[1]: Reloading... Feb 13 19:51:56.473405 zram_generator::config[3492]: No configuration found. Feb 13 19:51:56.790925 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:57.046875 systemd[1]: Reloading finished in 839 ms. 
Feb 13 19:51:57.108415 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:51:57.144607 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 19:51:57.145178 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:51:57.157812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:51:57.492847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:51:57.524893 (kubelet)[3554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:51:57.664047 kubelet[3554]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:51:57.664858 kubelet[3554]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:51:57.664858 kubelet[3554]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
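The deprecation warnings above say those flags should move into the file passed to the kubelet's --config flag. A minimal, illustrative KubeletConfiguration fragment showing the config-file equivalents; the socket and directory paths are example values, not read from this node:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# config-file equivalent of --container-runtime-endpoint (path is illustrative)
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# config-file equivalent of --volume-plugin-dir (path is illustrative)
volumePluginDir: /var/lib/kubelet/volumeplugins
```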
Feb 13 19:51:57.671734 kubelet[3554]: I0213 19:51:57.670661 3554 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:51:57.691465 kubelet[3554]: I0213 19:51:57.691345 3554 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 19:51:57.691728 kubelet[3554]: I0213 19:51:57.691700 3554 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:51:57.692003 kubelet[3554]: I0213 19:51:57.691954 3554 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 19:51:57.707728 kubelet[3554]: I0213 19:51:57.707470 3554 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 19:51:57.712435 kubelet[3554]: I0213 19:51:57.711459 3554 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:51:57.753701 kubelet[3554]: I0213 19:51:57.753588 3554 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:51:57.757603 kubelet[3554]: I0213 19:51:57.757542 3554 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:51:57.759699 kubelet[3554]: I0213 19:51:57.757602 3554 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-227","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 19:51:57.759699 kubelet[3554]: I0213 19:51:57.758005 3554 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:51:57.759699 kubelet[3554]: I0213 19:51:57.758024 3554 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 19:51:57.761382 kubelet[3554]: I0213 19:51:57.760722 3554 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:51:57.761382 kubelet[3554]: I0213 19:51:57.760959 3554 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 19:51:57.766908 kubelet[3554]: I0213 19:51:57.761644 3554 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:51:57.766908 kubelet[3554]: I0213 19:51:57.761714 3554 kubelet.go:312] "Adding apiserver pod source"
Feb 13 19:51:57.766908 kubelet[3554]: I0213 19:51:57.761760 3554 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:51:57.808392 kubelet[3554]: I0213 19:51:57.808342 3554 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 19:51:57.808725 kubelet[3554]: I0213 19:51:57.808699 3554 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:51:57.816397 kubelet[3554]: I0213 19:51:57.816031 3554 server.go:1264] "Started kubelet"
Feb 13 19:51:57.834509 kubelet[3554]: I0213 19:51:57.832680 3554 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 19:51:57.849032 kubelet[3554]: I0213 19:51:57.845912 3554 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 19:51:57.864707 kubelet[3554]: I0213 19:51:57.864666 3554 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 19:51:57.865503 kubelet[3554]: I0213 19:51:57.865184 3554 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 19:51:57.866665 kubelet[3554]: I0213 19:51:57.866637 3554 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 19:51:57.878788 kubelet[3554]: I0213 19:51:57.878204 3554 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 19:51:57.879796 kubelet[3554]: I0213 19:51:57.879503 3554 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 19:51:57.882967 kubelet[3554]: I0213 19:51:57.879949 3554 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 19:51:57.900087 kubelet[3554]: I0213 19:51:57.898312 3554 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:51:57.905614 kubelet[3554]: I0213 19:51:57.905270 3554 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:51:57.925462 kubelet[3554]: I0213 19:51:57.925350 3554 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:51:57.936730 kubelet[3554]: I0213 19:51:57.936539 3554 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:51:57.938666 kubelet[3554]: E0213 19:51:57.938634 3554 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:51:57.950115 kubelet[3554]: I0213 19:51:57.949580 3554 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:51:57.950115 kubelet[3554]: I0213 19:51:57.949624 3554 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 19:51:57.950115 kubelet[3554]: I0213 19:51:57.949648 3554 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 19:51:57.950115 kubelet[3554]: E0213 19:51:57.949706 3554 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 19:51:57.991261 kubelet[3554]: I0213 19:51:57.991234 3554 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-227"
Feb 13 19:51:58.017710 kubelet[3554]: I0213 19:51:58.015646 3554 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-227"
Feb 13 19:51:58.017710 kubelet[3554]: I0213 19:51:58.015776 3554 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-227"
Feb 13 19:51:58.050066 kubelet[3554]: E0213 19:51:58.049913 3554 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 19:51:58.109558 kubelet[3554]: I0213 19:51:58.109511 3554 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 19:51:58.109558 kubelet[3554]: I0213 19:51:58.109532 3554 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 19:51:58.109558 kubelet[3554]: I0213 19:51:58.109557 3554 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:51:58.109793 kubelet[3554]: I0213 19:51:58.109771 3554 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 19:51:58.109834 kubelet[3554]: I0213 19:51:58.109785 3554 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 19:51:58.109834 kubelet[3554]: I0213 19:51:58.109811 3554 policy_none.go:49] "None policy: Start"
Feb 13 19:51:58.111754 kubelet[3554]: I0213 19:51:58.111722 3554 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 19:51:58.111893 kubelet[3554]: I0213 19:51:58.111796 3554 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:51:58.112077 kubelet[3554]: I0213 19:51:58.112058 3554 state_mem.go:75] "Updated machine memory state"
Feb 13 19:51:58.113881 kubelet[3554]: I0213 19:51:58.113852 3554 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:51:58.114091 kubelet[3554]: I0213 19:51:58.114053 3554 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:51:58.115464 kubelet[3554]: I0213 19:51:58.115385 3554 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:51:58.255018 kubelet[3554]: I0213 19:51:58.254319 3554 topology_manager.go:215] "Topology Admit Handler" podUID="98e840f8adcf2c594e6e0b38f4507839" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-227"
Feb 13 19:51:58.255018 kubelet[3554]: I0213 19:51:58.254456 3554 topology_manager.go:215] "Topology Admit Handler" podUID="64be28fb96332db90c14c7115eabc4e9" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-227"
Feb 13 19:51:58.255018 kubelet[3554]: I0213 19:51:58.254526 3554 topology_manager.go:215] "Topology Admit Handler" podUID="cdc0d348615f7bda555a0a352169fe5f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-227"
Feb 13 19:51:58.290379 kubelet[3554]: I0213 19:51:58.289113 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98e840f8adcf2c594e6e0b38f4507839-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-227\" (UID: \"98e840f8adcf2c594e6e0b38f4507839\") " pod="kube-system/kube-controller-manager-ip-172-31-23-227"
Feb 13 19:51:58.290379 kubelet[3554]: I0213 19:51:58.289168 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cdc0d348615f7bda555a0a352169fe5f-ca-certs\") pod \"kube-apiserver-ip-172-31-23-227\" (UID: \"cdc0d348615f7bda555a0a352169fe5f\") " pod="kube-system/kube-apiserver-ip-172-31-23-227"
Feb 13 19:51:58.290379 kubelet[3554]: I0213 19:51:58.289197 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cdc0d348615f7bda555a0a352169fe5f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-227\" (UID: \"cdc0d348615f7bda555a0a352169fe5f\") " pod="kube-system/kube-apiserver-ip-172-31-23-227"
Feb 13 19:51:58.290379 kubelet[3554]: I0213 19:51:58.289227 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98e840f8adcf2c594e6e0b38f4507839-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-227\" (UID: \"98e840f8adcf2c594e6e0b38f4507839\") " pod="kube-system/kube-controller-manager-ip-172-31-23-227"
Feb 13 19:51:58.290379 kubelet[3554]: I0213 19:51:58.289251 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98e840f8adcf2c594e6e0b38f4507839-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-227\" (UID: \"98e840f8adcf2c594e6e0b38f4507839\") " pod="kube-system/kube-controller-manager-ip-172-31-23-227"
Feb 13 19:51:58.290606 kubelet[3554]: I0213 19:51:58.289274 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/64be28fb96332db90c14c7115eabc4e9-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-227\" (UID: \"64be28fb96332db90c14c7115eabc4e9\") " pod="kube-system/kube-scheduler-ip-172-31-23-227"
Feb 13 19:51:58.290606 kubelet[3554]: I0213 19:51:58.289297 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cdc0d348615f7bda555a0a352169fe5f-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-227\" (UID: \"cdc0d348615f7bda555a0a352169fe5f\") " pod="kube-system/kube-apiserver-ip-172-31-23-227"
Feb 13 19:51:58.290606 kubelet[3554]: I0213 19:51:58.289322 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98e840f8adcf2c594e6e0b38f4507839-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-227\" (UID: \"98e840f8adcf2c594e6e0b38f4507839\") " pod="kube-system/kube-controller-manager-ip-172-31-23-227"
Feb 13 19:51:58.290606 kubelet[3554]: I0213 19:51:58.289349 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98e840f8adcf2c594e6e0b38f4507839-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-227\" (UID: \"98e840f8adcf2c594e6e0b38f4507839\") " pod="kube-system/kube-controller-manager-ip-172-31-23-227"
Feb 13 19:51:58.806726 kubelet[3554]: I0213 19:51:58.806683 3554 apiserver.go:52] "Watching apiserver"
Feb 13 19:51:58.890490 kubelet[3554]: I0213 19:51:58.882882 3554 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 19:51:59.131515 kubelet[3554]: I0213 19:51:59.131449 3554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-227" podStartSLOduration=1.13140778 podStartE2EDuration="1.13140778s" podCreationTimestamp="2025-02-13 19:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:59.125297695 +0000 UTC m=+1.590860434" watchObservedRunningTime="2025-02-13 19:51:59.13140778 +0000 UTC m=+1.596970517"
Feb 13 19:51:59.164701 kubelet[3554]: I0213
19:51:59.162318 3554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-227" podStartSLOduration=1.162289096 podStartE2EDuration="1.162289096s" podCreationTimestamp="2025-02-13 19:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:59.147242659 +0000 UTC m=+1.612805397" watchObservedRunningTime="2025-02-13 19:51:59.162289096 +0000 UTC m=+1.627851832" Feb 13 19:51:59.186701 kubelet[3554]: I0213 19:51:59.186639 3554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-227" podStartSLOduration=1.186617493 podStartE2EDuration="1.186617493s" podCreationTimestamp="2025-02-13 19:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:51:59.162883852 +0000 UTC m=+1.628446590" watchObservedRunningTime="2025-02-13 19:51:59.186617493 +0000 UTC m=+1.652180232" Feb 13 19:51:59.800792 sudo[2250]: pam_unix(sudo:session): session closed for user root Feb 13 19:51:59.826913 sshd[2249]: Connection closed by 139.178.89.65 port 40134 Feb 13 19:51:59.826788 sshd-session[2246]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:59.831973 systemd[1]: sshd@6-172.31.23.227:22-139.178.89.65:40134.service: Deactivated successfully. Feb 13 19:51:59.841009 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:51:59.842860 systemd-logind[1900]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:51:59.845411 systemd-logind[1900]: Removed session 7. 
Feb 13 19:52:09.665666 kubelet[3554]: I0213 19:52:09.663978 3554 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 19:52:09.671395 containerd[1928]: time="2025-02-13T19:52:09.669618047Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 19:52:09.671959 kubelet[3554]: I0213 19:52:09.670886 3554 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 19:52:10.246832 kubelet[3554]: I0213 19:52:10.243559 3554 topology_manager.go:215] "Topology Admit Handler" podUID="db5283d0-f1f0-463f-b7d4-3a7bcf5599c4" podNamespace="kube-system" podName="kube-proxy-d85wp"
Feb 13 19:52:10.246832 kubelet[3554]: I0213 19:52:10.243890 3554 topology_manager.go:215] "Topology Admit Handler" podUID="1b0b2c9b-cb9c-4378-b13b-917bd7f71bf5" podNamespace="kube-flannel" podName="kube-flannel-ds-mm2cl"
Feb 13 19:52:10.285180 kubelet[3554]: I0213 19:52:10.283665 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/1b0b2c9b-cb9c-4378-b13b-917bd7f71bf5-flannel-cfg\") pod \"kube-flannel-ds-mm2cl\" (UID: \"1b0b2c9b-cb9c-4378-b13b-917bd7f71bf5\") " pod="kube-flannel/kube-flannel-ds-mm2cl"
Feb 13 19:52:10.285180 kubelet[3554]: I0213 19:52:10.283721 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b0b2c9b-cb9c-4378-b13b-917bd7f71bf5-xtables-lock\") pod \"kube-flannel-ds-mm2cl\" (UID: \"1b0b2c9b-cb9c-4378-b13b-917bd7f71bf5\") " pod="kube-flannel/kube-flannel-ds-mm2cl"
Feb 13 19:52:10.285180 kubelet[3554]: I0213 19:52:10.283750 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db5283d0-f1f0-463f-b7d4-3a7bcf5599c4-xtables-lock\") pod \"kube-proxy-d85wp\" (UID: \"db5283d0-f1f0-463f-b7d4-3a7bcf5599c4\") " pod="kube-system/kube-proxy-d85wp"
Feb 13 19:52:10.285180 kubelet[3554]: I0213 19:52:10.283775 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cbwt\" (UniqueName: \"kubernetes.io/projected/db5283d0-f1f0-463f-b7d4-3a7bcf5599c4-kube-api-access-9cbwt\") pod \"kube-proxy-d85wp\" (UID: \"db5283d0-f1f0-463f-b7d4-3a7bcf5599c4\") " pod="kube-system/kube-proxy-d85wp"
Feb 13 19:52:10.285180 kubelet[3554]: I0213 19:52:10.283801 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1b0b2c9b-cb9c-4378-b13b-917bd7f71bf5-run\") pod \"kube-flannel-ds-mm2cl\" (UID: \"1b0b2c9b-cb9c-4378-b13b-917bd7f71bf5\") " pod="kube-flannel/kube-flannel-ds-mm2cl"
Feb 13 19:52:10.285804 kubelet[3554]: I0213 19:52:10.283840 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db5283d0-f1f0-463f-b7d4-3a7bcf5599c4-kube-proxy\") pod \"kube-proxy-d85wp\" (UID: \"db5283d0-f1f0-463f-b7d4-3a7bcf5599c4\") " pod="kube-system/kube-proxy-d85wp"
Feb 13 19:52:10.285804 kubelet[3554]: I0213 19:52:10.283863 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db5283d0-f1f0-463f-b7d4-3a7bcf5599c4-lib-modules\") pod \"kube-proxy-d85wp\" (UID: \"db5283d0-f1f0-463f-b7d4-3a7bcf5599c4\") " pod="kube-system/kube-proxy-d85wp"
Feb 13 19:52:10.285804 kubelet[3554]: I0213 19:52:10.283889 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/1b0b2c9b-cb9c-4378-b13b-917bd7f71bf5-cni\") pod \"kube-flannel-ds-mm2cl\" (UID: \"1b0b2c9b-cb9c-4378-b13b-917bd7f71bf5\") " pod="kube-flannel/kube-flannel-ds-mm2cl"
Feb 13 19:52:10.285804 kubelet[3554]: I0213 19:52:10.283923 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/1b0b2c9b-cb9c-4378-b13b-917bd7f71bf5-cni-plugin\") pod \"kube-flannel-ds-mm2cl\" (UID: \"1b0b2c9b-cb9c-4378-b13b-917bd7f71bf5\") " pod="kube-flannel/kube-flannel-ds-mm2cl"
Feb 13 19:52:10.285804 kubelet[3554]: I0213 19:52:10.283947 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5msgj\" (UniqueName: \"kubernetes.io/projected/1b0b2c9b-cb9c-4378-b13b-917bd7f71bf5-kube-api-access-5msgj\") pod \"kube-flannel-ds-mm2cl\" (UID: \"1b0b2c9b-cb9c-4378-b13b-917bd7f71bf5\") " pod="kube-flannel/kube-flannel-ds-mm2cl"
Feb 13 19:52:10.556037 containerd[1928]: time="2025-02-13T19:52:10.555905270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d85wp,Uid:db5283d0-f1f0-463f-b7d4-3a7bcf5599c4,Namespace:kube-system,Attempt:0,}"
Feb 13 19:52:10.562637 containerd[1928]: time="2025-02-13T19:52:10.562518119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mm2cl,Uid:1b0b2c9b-cb9c-4378-b13b-917bd7f71bf5,Namespace:kube-flannel,Attempt:0,}"
Feb 13 19:52:10.627591 containerd[1928]: time="2025-02-13T19:52:10.627388830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:52:10.627591 containerd[1928]: time="2025-02-13T19:52:10.627514930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:52:10.628112 containerd[1928]: time="2025-02-13T19:52:10.627577244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:52:10.629561 containerd[1928]: time="2025-02-13T19:52:10.629428419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:52:10.637523 containerd[1928]: time="2025-02-13T19:52:10.634418911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:52:10.637523 containerd[1928]: time="2025-02-13T19:52:10.635385544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:52:10.637523 containerd[1928]: time="2025-02-13T19:52:10.635418458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:52:10.637523 containerd[1928]: time="2025-02-13T19:52:10.635535116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:52:10.774398 containerd[1928]: time="2025-02-13T19:52:10.773244995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d85wp,Uid:db5283d0-f1f0-463f-b7d4-3a7bcf5599c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"36f7dce45b340a8c46769777cad132892aeea503a9d92817b808220ac793426e\""
Feb 13 19:52:10.779845 containerd[1928]: time="2025-02-13T19:52:10.779805046Z" level=info msg="CreateContainer within sandbox \"36f7dce45b340a8c46769777cad132892aeea503a9d92817b808220ac793426e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:52:10.823296 containerd[1928]: time="2025-02-13T19:52:10.823086038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mm2cl,Uid:1b0b2c9b-cb9c-4378-b13b-917bd7f71bf5,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"0827a82e5029a69ab6a2b794b9883b48a925bc5643e55209e4c4a8d8b815ea0e\""
Feb 13 19:52:10.831582 containerd[1928]: time="2025-02-13T19:52:10.831113770Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Feb 13 19:52:10.847849 containerd[1928]: time="2025-02-13T19:52:10.847816485Z" level=info msg="CreateContainer within sandbox \"36f7dce45b340a8c46769777cad132892aeea503a9d92817b808220ac793426e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8fba8d2598962a815317391f12675914ba5beb9ae9df1f295e0dc48321ad5e86\""
Feb 13 19:52:10.851570 containerd[1928]: time="2025-02-13T19:52:10.850212200Z" level=info msg="StartContainer for \"8fba8d2598962a815317391f12675914ba5beb9ae9df1f295e0dc48321ad5e86\""
Feb 13 19:52:10.954019 containerd[1928]: time="2025-02-13T19:52:10.953982302Z" level=info msg="StartContainer for \"8fba8d2598962a815317391f12675914ba5beb9ae9df1f295e0dc48321ad5e86\" returns successfully"
Feb 13 19:52:13.381117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount562770351.mount: Deactivated successfully.
Feb 13 19:52:13.454979 containerd[1928]: time="2025-02-13T19:52:13.454926812Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:52:13.456940 containerd[1928]: time="2025-02-13T19:52:13.456580669Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937"
Feb 13 19:52:13.459314 containerd[1928]: time="2025-02-13T19:52:13.459232654Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:52:13.465089 containerd[1928]: time="2025-02-13T19:52:13.463589670Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:52:13.465089 containerd[1928]: time="2025-02-13T19:52:13.464614686Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.633455452s"
Feb 13 19:52:13.465089 containerd[1928]: time="2025-02-13T19:52:13.464655341Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Feb 13 19:52:13.467707 containerd[1928]: time="2025-02-13T19:52:13.467673769Z" level=info msg="CreateContainer within sandbox \"0827a82e5029a69ab6a2b794b9883b48a925bc5643e55209e4c4a8d8b815ea0e\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Feb 13 19:52:13.495890 containerd[1928]: time="2025-02-13T19:52:13.495841890Z" level=info msg="CreateContainer within sandbox \"0827a82e5029a69ab6a2b794b9883b48a925bc5643e55209e4c4a8d8b815ea0e\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"55f6c7c69f68fe06d7fdcc0ae569391a34e46dc0762ed267d6810270c8910307\""
Feb 13 19:52:13.498402 containerd[1928]: time="2025-02-13T19:52:13.497332126Z" level=info msg="StartContainer for \"55f6c7c69f68fe06d7fdcc0ae569391a34e46dc0762ed267d6810270c8910307\""
Feb 13 19:52:13.580491 containerd[1928]: time="2025-02-13T19:52:13.580440002Z" level=info msg="StartContainer for \"55f6c7c69f68fe06d7fdcc0ae569391a34e46dc0762ed267d6810270c8910307\" returns successfully"
Feb 13 19:52:13.694256 containerd[1928]: time="2025-02-13T19:52:13.694072059Z" level=info msg="shim disconnected" id=55f6c7c69f68fe06d7fdcc0ae569391a34e46dc0762ed267d6810270c8910307 namespace=k8s.io
Feb 13 19:52:13.694856 containerd[1928]: time="2025-02-13T19:52:13.694656785Z" level=warning msg="cleaning up after shim disconnected" id=55f6c7c69f68fe06d7fdcc0ae569391a34e46dc0762ed267d6810270c8910307 namespace=k8s.io
Feb 13 19:52:13.694856 containerd[1928]: time="2025-02-13T19:52:13.694684920Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:52:14.095168 containerd[1928]: time="2025-02-13T19:52:14.093810493Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Feb 13 19:52:14.140972 kubelet[3554]: I0213 19:52:14.140902 3554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d85wp" podStartSLOduration=4.140877832 podStartE2EDuration="4.140877832s" podCreationTimestamp="2025-02-13 19:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:52:11.11337612 +0000 UTC m=+13.578938858" watchObservedRunningTime="2025-02-13 19:52:14.140877832 +0000 UTC m=+16.606440573"
Feb 13 19:52:14.155116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55f6c7c69f68fe06d7fdcc0ae569391a34e46dc0762ed267d6810270c8910307-rootfs.mount: Deactivated successfully.
Feb 13 19:52:17.500604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4036992477.mount: Deactivated successfully.
Feb 13 19:52:19.118358 containerd[1928]: time="2025-02-13T19:52:19.118277290Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:52:19.121469 containerd[1928]: time="2025-02-13T19:52:19.121399052Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358"
Feb 13 19:52:19.123972 containerd[1928]: time="2025-02-13T19:52:19.123894404Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:52:19.128618 containerd[1928]: time="2025-02-13T19:52:19.128549348Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:52:19.131301 containerd[1928]: time="2025-02-13T19:52:19.130408597Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 5.035550736s"
Feb 13 19:52:19.131301 containerd[1928]: time="2025-02-13T19:52:19.130457059Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Feb 13 19:52:19.134510 containerd[1928]: time="2025-02-13T19:52:19.134130359Z" level=info msg="CreateContainer within sandbox \"0827a82e5029a69ab6a2b794b9883b48a925bc5643e55209e4c4a8d8b815ea0e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 19:52:19.162501 containerd[1928]: time="2025-02-13T19:52:19.162451805Z" level=info msg="CreateContainer within sandbox \"0827a82e5029a69ab6a2b794b9883b48a925bc5643e55209e4c4a8d8b815ea0e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7ae76af566772fae43c33db13e7f08b60bf52ded2a008280363708bdced55500\""
Feb 13 19:52:19.163489 containerd[1928]: time="2025-02-13T19:52:19.163395200Z" level=info msg="StartContainer for \"7ae76af566772fae43c33db13e7f08b60bf52ded2a008280363708bdced55500\""
Feb 13 19:52:19.254433 containerd[1928]: time="2025-02-13T19:52:19.254389760Z" level=info msg="StartContainer for \"7ae76af566772fae43c33db13e7f08b60bf52ded2a008280363708bdced55500\" returns successfully"
Feb 13 19:52:19.298762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ae76af566772fae43c33db13e7f08b60bf52ded2a008280363708bdced55500-rootfs.mount: Deactivated successfully.
Feb 13 19:52:19.319527 kubelet[3554]: I0213 19:52:19.319492 3554 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 19:52:19.452335 kubelet[3554]: I0213 19:52:19.452214 3554 topology_manager.go:215] "Topology Admit Handler" podUID="3b3538f5-8309-4877-a2eb-53e2b62f3a14" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tghpl"
Feb 13 19:52:19.452536 kubelet[3554]: I0213 19:52:19.452464 3554 topology_manager.go:215] "Topology Admit Handler" podUID="075730b6-4bbc-4c42-a382-aa19f6c5842c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wd9gw"
Feb 13 19:52:19.467302 containerd[1928]: time="2025-02-13T19:52:19.467228305Z" level=info msg="shim disconnected" id=7ae76af566772fae43c33db13e7f08b60bf52ded2a008280363708bdced55500 namespace=k8s.io
Feb 13 19:52:19.467302 containerd[1928]: time="2025-02-13T19:52:19.467284203Z" level=warning msg="cleaning up after shim disconnected" id=7ae76af566772fae43c33db13e7f08b60bf52ded2a008280363708bdced55500 namespace=k8s.io
Feb 13 19:52:19.467302 containerd[1928]: time="2025-02-13T19:52:19.467298767Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:52:19.498165 containerd[1928]: time="2025-02-13T19:52:19.498092967Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:52:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:52:19.513283 kubelet[3554]: I0213 19:52:19.513202 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b3538f5-8309-4877-a2eb-53e2b62f3a14-config-volume\") pod \"coredns-7db6d8ff4d-tghpl\" (UID: \"3b3538f5-8309-4877-a2eb-53e2b62f3a14\") " pod="kube-system/coredns-7db6d8ff4d-tghpl"
Feb 13 19:52:19.513494 kubelet[3554]: I0213 19:52:19.513316 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg5fh\" (UniqueName: \"kubernetes.io/projected/3b3538f5-8309-4877-a2eb-53e2b62f3a14-kube-api-access-dg5fh\") pod \"coredns-7db6d8ff4d-tghpl\" (UID: \"3b3538f5-8309-4877-a2eb-53e2b62f3a14\") " pod="kube-system/coredns-7db6d8ff4d-tghpl"
Feb 13 19:52:19.513494 kubelet[3554]: I0213 19:52:19.513349 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/075730b6-4bbc-4c42-a382-aa19f6c5842c-config-volume\") pod \"coredns-7db6d8ff4d-wd9gw\" (UID: \"075730b6-4bbc-4c42-a382-aa19f6c5842c\") " pod="kube-system/coredns-7db6d8ff4d-wd9gw"
Feb 13 19:52:19.513494 kubelet[3554]: I0213 19:52:19.513445 3554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksbrd\" (UniqueName: \"kubernetes.io/projected/075730b6-4bbc-4c42-a382-aa19f6c5842c-kube-api-access-ksbrd\") pod \"coredns-7db6d8ff4d-wd9gw\" (UID: \"075730b6-4bbc-4c42-a382-aa19f6c5842c\") " pod="kube-system/coredns-7db6d8ff4d-wd9gw"
Feb 13 19:52:19.795716 containerd[1928]: time="2025-02-13T19:52:19.795591314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tghpl,Uid:3b3538f5-8309-4877-a2eb-53e2b62f3a14,Namespace:kube-system,Attempt:0,}"
Feb 13 19:52:19.797816 containerd[1928]: time="2025-02-13T19:52:19.797772053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wd9gw,Uid:075730b6-4bbc-4c42-a382-aa19f6c5842c,Namespace:kube-system,Attempt:0,}"
Feb 13 19:52:19.897127 containerd[1928]: time="2025-02-13T19:52:19.897072029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wd9gw,Uid:075730b6-4bbc-4c42-a382-aa19f6c5842c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc21380f84d2a1bc3809653d5ee716428d30f6986c80b1156a21fe6e3560feb4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 19:52:19.897458 kubelet[3554]: E0213 19:52:19.897416 3554 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc21380f84d2a1bc3809653d5ee716428d30f6986c80b1156a21fe6e3560feb4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 19:52:19.897713 kubelet[3554]: E0213 19:52:19.897488 3554 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc21380f84d2a1bc3809653d5ee716428d30f6986c80b1156a21fe6e3560feb4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-wd9gw"
Feb 13 19:52:19.897713 kubelet[3554]: E0213 19:52:19.897668 3554 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc21380f84d2a1bc3809653d5ee716428d30f6986c80b1156a21fe6e3560feb4\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-wd9gw"
Feb 13 19:52:19.897812 kubelet[3554]: E0213 19:52:19.897742 3554 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wd9gw_kube-system(075730b6-4bbc-4c42-a382-aa19f6c5842c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wd9gw_kube-system(075730b6-4bbc-4c42-a382-aa19f6c5842c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc21380f84d2a1bc3809653d5ee716428d30f6986c80b1156a21fe6e3560feb4\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-wd9gw" podUID="075730b6-4bbc-4c42-a382-aa19f6c5842c"
Feb 13 19:52:19.899782 containerd[1928]: time="2025-02-13T19:52:19.899173016Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tghpl,Uid:3b3538f5-8309-4877-a2eb-53e2b62f3a14,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"221a25bd0aa9333635249573a376cedcb1784abbff472ea6292eb8bb7ffaa9fb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 19:52:19.900127 kubelet[3554]: E0213 19:52:19.900076 3554 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"221a25bd0aa9333635249573a376cedcb1784abbff472ea6292eb8bb7ffaa9fb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Feb 13 19:52:19.900231 kubelet[3554]: E0213 19:52:19.900150 3554 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"221a25bd0aa9333635249573a376cedcb1784abbff472ea6292eb8bb7ffaa9fb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-tghpl"
Feb 13 19:52:19.900231 kubelet[3554]: E0213 19:52:19.900177 3554 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"221a25bd0aa9333635249573a376cedcb1784abbff472ea6292eb8bb7ffaa9fb\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-tghpl"
Feb 13 19:52:19.900311 kubelet[3554]: E0213 19:52:19.900230 3554 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tghpl_kube-system(3b3538f5-8309-4877-a2eb-53e2b62f3a14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tghpl_kube-system(3b3538f5-8309-4877-a2eb-53e2b62f3a14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"221a25bd0aa9333635249573a376cedcb1784abbff472ea6292eb8bb7ffaa9fb\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-tghpl" podUID="3b3538f5-8309-4877-a2eb-53e2b62f3a14"
Feb 13 19:52:20.167540 containerd[1928]: time="2025-02-13T19:52:20.161587203Z" level=info msg="CreateContainer within sandbox \"0827a82e5029a69ab6a2b794b9883b48a925bc5643e55209e4c4a8d8b815ea0e\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Feb 13 19:52:20.215261 containerd[1928]: time="2025-02-13T19:52:20.215209745Z" level=info msg="CreateContainer within sandbox \"0827a82e5029a69ab6a2b794b9883b48a925bc5643e55209e4c4a8d8b815ea0e\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"dbe75c5bccf00994f87e5b08bc231da565607b0584ae572d4a546c6b13d7e100\""
Feb 13 19:52:20.216978 containerd[1928]: time="2025-02-13T19:52:20.216533189Z" level=info msg="StartContainer for \"dbe75c5bccf00994f87e5b08bc231da565607b0584ae572d4a546c6b13d7e100\""
Feb 13 19:52:20.314625 containerd[1928]: time="2025-02-13T19:52:20.314445189Z" level=info msg="StartContainer for \"dbe75c5bccf00994f87e5b08bc231da565607b0584ae572d4a546c6b13d7e100\" returns successfully"
Feb 13 19:52:21.215382 kubelet[3554]: I0213 19:52:21.210323 3554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-mm2cl" podStartSLOduration=2.906007026 podStartE2EDuration="11.210302082s" podCreationTimestamp="2025-02-13 19:52:10 +0000 UTC" firstStartedPulling="2025-02-13 19:52:10.827800619 +0000 UTC m=+13.293363336" lastFinishedPulling="2025-02-13 19:52:19.132095673 +0000 UTC m=+21.597658392" observedRunningTime="2025-02-13 19:52:21.210002074 +0000 UTC m=+23.675564813" watchObservedRunningTime="2025-02-13 19:52:21.210302082 +0000 UTC m=+23.675864836"
Feb 13 19:52:21.401223 (udev-worker)[4089]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:52:21.427254 systemd-networkd[1485]: flannel.1: Link UP
Feb 13 19:52:21.427268 systemd-networkd[1485]: flannel.1: Gained carrier
Feb 13 19:52:23.465767 systemd-networkd[1485]: flannel.1: Gained IPv6LL
Feb 13 19:52:25.711005 ntpd[1883]: Listen normally on 6 flannel.1 192.168.0.0:123
Feb 13 19:52:25.711549 ntpd[1883]: 13 Feb 19:52:25 ntpd[1883]: Listen normally on 6 flannel.1 192.168.0.0:123
Feb 13 19:52:25.711549 ntpd[1883]: 13 Feb 19:52:25 ntpd[1883]: Listen normally on 7 flannel.1 [fe80::5423:95ff:fe6a:88f%4]:123
Feb 13 19:52:25.711097 ntpd[1883]: Listen normally on 7 flannel.1 [fe80::5423:95ff:fe6a:88f%4]:123
Feb 13 19:52:30.960740 containerd[1928]: time="2025-02-13T19:52:30.960687029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tghpl,Uid:3b3538f5-8309-4877-a2eb-53e2b62f3a14,Namespace:kube-system,Attempt:0,}"
Feb 13 19:52:31.038695 systemd-networkd[1485]: cni0: Link UP
Feb 13 19:52:31.038706 systemd-networkd[1485]: cni0: Gained carrier
Feb 13 19:52:31.049747 (udev-worker)[4206]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:52:31.050646 systemd-networkd[1485]: cni0: Lost carrier
Feb 13 19:52:31.059677 systemd-networkd[1485]: veth63728238: Link UP
Feb 13 19:52:31.064501 kernel: cni0: port 1(veth63728238) entered blocking state
Feb 13 19:52:31.064682 kernel: cni0: port 1(veth63728238) entered disabled state
Feb 13 19:52:31.071431 kernel: veth63728238: entered allmulticast mode
Feb 13 19:52:31.074540 kernel: veth63728238: entered promiscuous mode
Feb 13 19:52:31.074898 kernel: cni0: port 1(veth63728238) entered blocking state
Feb 13 19:52:31.074933 kernel: cni0: port 1(veth63728238) entered forwarding state
Feb 13 19:52:31.075508 kernel: cni0: port 1(veth63728238) entered disabled state
Feb 13 19:52:31.085526 (udev-worker)[4211]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:52:31.087870 kernel: cni0: port 1(veth63728238) entered blocking state
Feb 13 19:52:31.087915 kernel: cni0: port 1(veth63728238) entered forwarding state
Feb 13 19:52:31.088554 systemd-networkd[1485]: veth63728238: Gained carrier
Feb 13 19:52:31.088815 systemd-networkd[1485]: cni0: Gained carrier
Feb 13 19:52:31.139053 containerd[1928]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000106628), "name":"cbr0", "type":"bridge"}
Feb 13 19:52:31.139053 containerd[1928]: delegateAdd: netconf sent to delegate plugin:
Feb 13 19:52:31.171168 containerd[1928]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}
time="2025-02-13T19:52:31.171043558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:52:31.171617 containerd[1928]: time="2025-02-13T19:52:31.171308685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:52:31.171884 containerd[1928]: time="2025-02-13T19:52:31.171818194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:52:31.173344 containerd[1928]: time="2025-02-13T19:52:31.173279712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:52:31.215767 systemd[1]: run-containerd-runc-k8s.io-eaab80de19b9b46d6cc8eafcc157f57420049214588c65ba4d1d5be999b6539c-runc.DeBxVY.mount: Deactivated successfully.
Feb 13 19:52:31.310434 containerd[1928]: time="2025-02-13T19:52:31.309967444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tghpl,Uid:3b3538f5-8309-4877-a2eb-53e2b62f3a14,Namespace:kube-system,Attempt:0,} returns sandbox id \"eaab80de19b9b46d6cc8eafcc157f57420049214588c65ba4d1d5be999b6539c\""
Feb 13 19:52:31.329120 containerd[1928]: time="2025-02-13T19:52:31.329074212Z" level=info msg="CreateContainer within sandbox \"eaab80de19b9b46d6cc8eafcc157f57420049214588c65ba4d1d5be999b6539c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:52:31.360650 containerd[1928]: time="2025-02-13T19:52:31.360611269Z" level=info msg="CreateContainer within sandbox \"eaab80de19b9b46d6cc8eafcc157f57420049214588c65ba4d1d5be999b6539c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8c004ebff4c49c7a2b447ae4d3ce39b4ddd6e14a44843e4a03e64635e6e6fe1\""
Feb 13 19:52:31.361971 containerd[1928]: time="2025-02-13T19:52:31.361937226Z" level=info msg="StartContainer for \"a8c004ebff4c49c7a2b447ae4d3ce39b4ddd6e14a44843e4a03e64635e6e6fe1\""
Feb 13 19:52:31.449286 containerd[1928]: time="2025-02-13T19:52:31.449168848Z" level=info msg="StartContainer for \"a8c004ebff4c49c7a2b447ae4d3ce39b4ddd6e14a44843e4a03e64635e6e6fe1\" returns successfully"
Feb 13 19:52:32.232794 kubelet[3554]: I0213 19:52:32.232694 3554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tghpl" podStartSLOduration=22.232674468 podStartE2EDuration="22.232674468s" podCreationTimestamp="2025-02-13 19:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:52:32.232463381 +0000 UTC m=+34.698026117" watchObservedRunningTime="2025-02-13 19:52:32.232674468 +0000 UTC m=+34.698237198"
Feb 13 19:52:32.421689 systemd-networkd[1485]: cni0: Gained IPv6LL
Feb 13 19:52:32.998678 systemd-networkd[1485]: veth63728238: Gained IPv6LL
Feb 13 19:52:33.953210 containerd[1928]: time="2025-02-13T19:52:33.952850981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wd9gw,Uid:075730b6-4bbc-4c42-a382-aa19f6c5842c,Namespace:kube-system,Attempt:0,}"
Feb 13 19:52:34.002609 (udev-worker)[4210]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 19:52:34.005413 kernel: cni0: port 2(vethb1abf02f) entered blocking state
Feb 13 19:52:34.005476 kernel: cni0: port 2(vethb1abf02f) entered disabled state
Feb 13 19:52:34.004636 systemd-networkd[1485]: vethb1abf02f: Link UP
Feb 13 19:52:34.007390 kernel: vethb1abf02f: entered allmulticast mode
Feb 13 19:52:34.008499 kernel: vethb1abf02f: entered promiscuous mode
Feb 13 19:52:34.040119 kernel: cni0: port 2(vethb1abf02f) entered blocking state
Feb 13 19:52:34.040250 kernel: cni0: port 2(vethb1abf02f) entered forwarding state
Feb 13 19:52:34.040504 systemd-networkd[1485]: vethb1abf02f: Gained carrier
Feb 13 19:52:34.042489 containerd[1928]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Feb 13 19:52:34.042489 containerd[1928]: delegateAdd: netconf sent to delegate plugin:
Feb 13 19:52:34.073110 containerd[1928]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T19:52:34.072748932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:52:34.073110 containerd[1928]: time="2025-02-13T19:52:34.072848302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:52:34.073110 containerd[1928]: time="2025-02-13T19:52:34.072868409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:52:34.073110 containerd[1928]: time="2025-02-13T19:52:34.072998064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:52:34.182381 containerd[1928]: time="2025-02-13T19:52:34.182064786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wd9gw,Uid:075730b6-4bbc-4c42-a382-aa19f6c5842c,Namespace:kube-system,Attempt:0,} returns sandbox id \"42bf179b311be2c293a3d636572f4714a2c28b1d05ada540c2524bbedf681540\""
Feb 13 19:52:34.190669 containerd[1928]: time="2025-02-13T19:52:34.190593266Z" level=info msg="CreateContainer within sandbox \"42bf179b311be2c293a3d636572f4714a2c28b1d05ada540c2524bbedf681540\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:52:34.243326 containerd[1928]: time="2025-02-13T19:52:34.243059022Z" level=info msg="CreateContainer within sandbox \"42bf179b311be2c293a3d636572f4714a2c28b1d05ada540c2524bbedf681540\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"562edb533f7119f9fff31089f7d8b113b50fdf76d6e1f8a1f0d1dab0c6ce3873\""
Feb 13 19:52:34.246340 containerd[1928]: time="2025-02-13T19:52:34.246304765Z" level=info msg="StartContainer for \"562edb533f7119f9fff31089f7d8b113b50fdf76d6e1f8a1f0d1dab0c6ce3873\""
Feb 13 19:52:34.317638 containerd[1928]: time="2025-02-13T19:52:34.317400666Z" level=info msg="StartContainer for \"562edb533f7119f9fff31089f7d8b113b50fdf76d6e1f8a1f0d1dab0c6ce3873\" returns successfully"
Feb 13 19:52:35.301384 kubelet[3554]: I0213 19:52:35.300209 3554 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wd9gw" podStartSLOduration=25.300187691 podStartE2EDuration="25.300187691s" podCreationTimestamp="2025-02-13 19:52:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:52:35.275609104 +0000 UTC m=+37.741171843" watchObservedRunningTime="2025-02-13 19:52:35.300187691 +0000 UTC m=+37.765750433"
Feb 13 19:52:35.685625 systemd-networkd[1485]: vethb1abf02f: Gained IPv6LL
Feb 13 19:52:37.711577 ntpd[1883]: Listen normally on 8 cni0 192.168.0.1:123
Feb 13 19:52:37.711684 ntpd[1883]: Listen normally on 9 cni0 [fe80::455:56ff:fe23:4ab9%5]:123
Feb 13 19:52:37.712150 ntpd[1883]: 13 Feb 19:52:37 ntpd[1883]: Listen normally on 8 cni0 192.168.0.1:123
Feb 13 19:52:37.712150 ntpd[1883]: 13 Feb 19:52:37 ntpd[1883]: Listen normally on 9 cni0 [fe80::455:56ff:fe23:4ab9%5]:123
Feb 13 19:52:37.712150 ntpd[1883]: 13 Feb 19:52:37 ntpd[1883]: Listen normally on 10 veth63728238 [fe80::94e6:e4ff:fe6e:ff7d%6]:123
Feb 13 19:52:37.712150 ntpd[1883]: 13 Feb 19:52:37 ntpd[1883]: Listen normally on 11 vethb1abf02f [fe80::fcc3:5eff:fea5:e1bd%7]:123
Feb 13 19:52:37.711745 ntpd[1883]: Listen normally on 10 veth63728238 [fe80::94e6:e4ff:fe6e:ff7d%6]:123
Feb 13 19:52:37.711792 ntpd[1883]: Listen normally on 11 vethb1abf02f [fe80::fcc3:5eff:fea5:e1bd%7]:123
Feb 13 19:52:50.362702 systemd[1]: Started sshd@7-172.31.23.227:22-139.178.89.65:51836.service - OpenSSH per-connection server daemon (139.178.89.65:51836).
Feb 13 19:52:50.569223 sshd[4501]: Accepted publickey for core from 139.178.89.65 port 51836 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:52:50.570839 sshd-session[4501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:50.592115 systemd-logind[1900]: New session 8 of user core.
Feb 13 19:52:50.597285 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 19:52:51.014350 sshd[4504]: Connection closed by 139.178.89.65 port 51836
Feb 13 19:52:51.015031 sshd-session[4501]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:51.020466 systemd-logind[1900]: Session 8 logged out. Waiting for processes to exit.
Feb 13 19:52:51.022900 systemd[1]: sshd@7-172.31.23.227:22-139.178.89.65:51836.service: Deactivated successfully.
Feb 13 19:52:51.029836 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 19:52:51.031925 systemd-logind[1900]: Removed session 8.
Feb 13 19:52:56.055555 systemd[1]: Started sshd@8-172.31.23.227:22-139.178.89.65:44878.service - OpenSSH per-connection server daemon (139.178.89.65:44878).
Feb 13 19:52:56.292028 sshd[4543]: Accepted publickey for core from 139.178.89.65 port 44878 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:52:56.295400 sshd-session[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:52:56.307836 systemd-logind[1900]: New session 9 of user core.
Feb 13 19:52:56.316743 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 19:52:56.600648 sshd[4546]: Connection closed by 139.178.89.65 port 44878
Feb 13 19:52:56.609546 sshd-session[4543]: pam_unix(sshd:session): session closed for user core
Feb 13 19:52:56.622887 systemd[1]: sshd@8-172.31.23.227:22-139.178.89.65:44878.service: Deactivated successfully.
Feb 13 19:52:56.628256 systemd-logind[1900]: Session 9 logged out. Waiting for processes to exit.
Feb 13 19:52:56.628896 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 19:52:56.634749 systemd-logind[1900]: Removed session 9.
Feb 13 19:53:01.635284 systemd[1]: Started sshd@9-172.31.23.227:22-139.178.89.65:44882.service - OpenSSH per-connection server daemon (139.178.89.65:44882).
Feb 13 19:53:01.847216 sshd[4581]: Accepted publickey for core from 139.178.89.65 port 44882 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:53:01.850971 sshd-session[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:53:01.869868 systemd-logind[1900]: New session 10 of user core.
Feb 13 19:53:01.903202 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 19:53:02.389174 sshd[4605]: Connection closed by 139.178.89.65 port 44882
Feb 13 19:53:02.389974 sshd-session[4581]: pam_unix(sshd:session): session closed for user core
Feb 13 19:53:02.420552 systemd[1]: sshd@9-172.31.23.227:22-139.178.89.65:44882.service: Deactivated successfully.
Feb 13 19:53:02.430695 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 19:53:02.454751 systemd-logind[1900]: Session 10 logged out. Waiting for processes to exit.
Feb 13 19:53:02.467612 systemd-logind[1900]: Removed session 10.
Feb 13 19:53:07.421152 systemd[1]: Started sshd@10-172.31.23.227:22-139.178.89.65:49258.service - OpenSSH per-connection server daemon (139.178.89.65:49258).
Feb 13 19:53:07.625266 sshd[4638]: Accepted publickey for core from 139.178.89.65 port 49258 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:53:07.626017 sshd-session[4638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:53:07.637455 systemd-logind[1900]: New session 11 of user core.
Feb 13 19:53:07.648858 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:53:07.895231 sshd[4641]: Connection closed by 139.178.89.65 port 49258
Feb 13 19:53:07.897931 sshd-session[4638]: pam_unix(sshd:session): session closed for user core
Feb 13 19:53:07.905159 systemd[1]: sshd@10-172.31.23.227:22-139.178.89.65:49258.service: Deactivated successfully.
Feb 13 19:53:07.917762 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 19:53:07.920676 systemd-logind[1900]: Session 11 logged out. Waiting for processes to exit.
Feb 13 19:53:07.935870 systemd[1]: Started sshd@11-172.31.23.227:22-139.178.89.65:49274.service - OpenSSH per-connection server daemon (139.178.89.65:49274).
Feb 13 19:53:07.940029 systemd-logind[1900]: Removed session 11.
Feb 13 19:53:08.156489 sshd[4653]: Accepted publickey for core from 139.178.89.65 port 49274 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:53:08.157863 sshd-session[4653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:53:08.164949 systemd-logind[1900]: New session 12 of user core.
Feb 13 19:53:08.170703 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 19:53:08.483440 sshd[4656]: Connection closed by 139.178.89.65 port 49274
Feb 13 19:53:08.484909 sshd-session[4653]: pam_unix(sshd:session): session closed for user core
Feb 13 19:53:08.496770 systemd[1]: sshd@11-172.31.23.227:22-139.178.89.65:49274.service: Deactivated successfully.
Feb 13 19:53:08.508340 systemd-logind[1900]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:53:08.509796 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:53:08.527563 systemd[1]: Started sshd@12-172.31.23.227:22-139.178.89.65:49278.service - OpenSSH per-connection server daemon (139.178.89.65:49278).
Feb 13 19:53:08.530859 systemd-logind[1900]: Removed session 12.
Feb 13 19:53:08.702411 sshd[4666]: Accepted publickey for core from 139.178.89.65 port 49278 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:53:08.704516 sshd-session[4666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:53:08.714569 systemd-logind[1900]: New session 13 of user core.
Feb 13 19:53:08.730857 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:53:08.963593 sshd[4669]: Connection closed by 139.178.89.65 port 49278
Feb 13 19:53:08.965579 sshd-session[4666]: pam_unix(sshd:session): session closed for user core
Feb 13 19:53:08.970324 systemd[1]: sshd@12-172.31.23.227:22-139.178.89.65:49278.service: Deactivated successfully.
Feb 13 19:53:08.976216 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 19:53:08.977826 systemd-logind[1900]: Session 13 logged out. Waiting for processes to exit.
Feb 13 19:53:08.979078 systemd-logind[1900]: Removed session 13.
Feb 13 19:53:14.005184 systemd[1]: Started sshd@13-172.31.23.227:22-139.178.89.65:49294.service - OpenSSH per-connection server daemon (139.178.89.65:49294).
Feb 13 19:53:14.191086 sshd[4704]: Accepted publickey for core from 139.178.89.65 port 49294 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:53:14.193201 sshd-session[4704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:53:14.210937 systemd-logind[1900]: New session 14 of user core.
Feb 13 19:53:14.220158 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:53:14.458671 sshd[4707]: Connection closed by 139.178.89.65 port 49294
Feb 13 19:53:14.459882 sshd-session[4704]: pam_unix(sshd:session): session closed for user core
Feb 13 19:53:14.467056 systemd[1]: sshd@13-172.31.23.227:22-139.178.89.65:49294.service: Deactivated successfully.
Feb 13 19:53:14.467694 systemd-logind[1900]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:53:14.474254 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:53:14.475733 systemd-logind[1900]: Removed session 14.
Feb 13 19:53:19.497871 systemd[1]: Started sshd@14-172.31.23.227:22-139.178.89.65:57560.service - OpenSSH per-connection server daemon (139.178.89.65:57560).
Feb 13 19:53:19.719469 sshd[4739]: Accepted publickey for core from 139.178.89.65 port 57560 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:53:19.721052 sshd-session[4739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:53:19.729048 systemd-logind[1900]: New session 15 of user core.
Feb 13 19:53:19.737486 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:53:19.969020 sshd[4742]: Connection closed by 139.178.89.65 port 57560
Feb 13 19:53:19.970610 sshd-session[4739]: pam_unix(sshd:session): session closed for user core
Feb 13 19:53:19.975566 systemd-logind[1900]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:53:19.977422 systemd[1]: sshd@14-172.31.23.227:22-139.178.89.65:57560.service: Deactivated successfully.
Feb 13 19:53:19.984050 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:53:19.985700 systemd-logind[1900]: Removed session 15.
Feb 13 19:53:20.004454 systemd[1]: Started sshd@15-172.31.23.227:22-139.178.89.65:57570.service - OpenSSH per-connection server daemon (139.178.89.65:57570).
Feb 13 19:53:20.170281 sshd[4753]: Accepted publickey for core from 139.178.89.65 port 57570 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:53:20.171475 sshd-session[4753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:53:20.179514 systemd-logind[1900]: New session 16 of user core.
Feb 13 19:53:20.184201 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:53:20.867598 sshd[4756]: Connection closed by 139.178.89.65 port 57570
Feb 13 19:53:20.871274 sshd-session[4753]: pam_unix(sshd:session): session closed for user core
Feb 13 19:53:20.887512 systemd[1]: sshd@15-172.31.23.227:22-139.178.89.65:57570.service: Deactivated successfully.
Feb 13 19:53:20.896835 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:53:20.899636 systemd-logind[1900]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:53:20.913444 systemd[1]: Started sshd@16-172.31.23.227:22-139.178.89.65:57586.service - OpenSSH per-connection server daemon (139.178.89.65:57586).
Feb 13 19:53:20.914496 systemd-logind[1900]: Removed session 16.
Feb 13 19:53:21.096187 sshd[4765]: Accepted publickey for core from 139.178.89.65 port 57586 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:53:21.097849 sshd-session[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:53:21.103435 systemd-logind[1900]: New session 17 of user core.
Feb 13 19:53:21.110805 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:53:23.196392 sshd[4768]: Connection closed by 139.178.89.65 port 57586
Feb 13 19:53:23.197661 sshd-session[4765]: pam_unix(sshd:session): session closed for user core
Feb 13 19:53:23.207611 systemd[1]: sshd@16-172.31.23.227:22-139.178.89.65:57586.service: Deactivated successfully.
Feb 13 19:53:23.244186 systemd-logind[1900]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:53:23.244710 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:53:23.259034 systemd[1]: Started sshd@17-172.31.23.227:22-139.178.89.65:57598.service - OpenSSH per-connection server daemon (139.178.89.65:57598).
Feb 13 19:53:23.261836 systemd-logind[1900]: Removed session 17.
Feb 13 19:53:23.455087 sshd[4805]: Accepted publickey for core from 139.178.89.65 port 57598 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:53:23.457150 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:53:23.463860 systemd-logind[1900]: New session 18 of user core.
Feb 13 19:53:23.468939 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:53:24.050990 sshd[4808]: Connection closed by 139.178.89.65 port 57598
Feb 13 19:53:24.055393 sshd-session[4805]: pam_unix(sshd:session): session closed for user core
Feb 13 19:53:24.062174 systemd-logind[1900]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:53:24.064113 systemd[1]: sshd@17-172.31.23.227:22-139.178.89.65:57598.service: Deactivated successfully.
Feb 13 19:53:24.073929 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:53:24.081264 systemd-logind[1900]: Removed session 18.
Feb 13 19:53:24.088908 systemd[1]: Started sshd@18-172.31.23.227:22-139.178.89.65:57614.service - OpenSSH per-connection server daemon (139.178.89.65:57614).
Feb 13 19:53:24.313351 sshd[4818]: Accepted publickey for core from 139.178.89.65 port 57614 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:53:24.320833 sshd-session[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:53:24.342626 systemd-logind[1900]: New session 19 of user core.
Feb 13 19:53:24.356186 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:53:24.611927 sshd[4821]: Connection closed by 139.178.89.65 port 57614
Feb 13 19:53:24.614393 sshd-session[4818]: pam_unix(sshd:session): session closed for user core
Feb 13 19:53:24.620760 systemd[1]: sshd@18-172.31.23.227:22-139.178.89.65:57614.service: Deactivated successfully.
Feb 13 19:53:24.630397 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:53:24.632012 systemd-logind[1900]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:53:24.644351 systemd-logind[1900]: Removed session 19.
Feb 13 19:53:29.649573 systemd[1]: Started sshd@19-172.31.23.227:22-139.178.89.65:55888.service - OpenSSH per-connection server daemon (139.178.89.65:55888).
Feb 13 19:53:29.852409 sshd[4853]: Accepted publickey for core from 139.178.89.65 port 55888 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:53:29.854486 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:53:29.861778 systemd-logind[1900]: New session 20 of user core.
Feb 13 19:53:29.872305 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:53:30.118604 sshd[4856]: Connection closed by 139.178.89.65 port 55888
Feb 13 19:53:30.119520 sshd-session[4853]: pam_unix(sshd:session): session closed for user core
Feb 13 19:53:30.125962 systemd[1]: sshd@19-172.31.23.227:22-139.178.89.65:55888.service: Deactivated successfully.
Feb 13 19:53:30.132296 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:53:30.132870 systemd-logind[1900]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:53:30.136490 systemd-logind[1900]: Removed session 20.
Feb 13 19:53:35.151756 systemd[1]: Started sshd@20-172.31.23.227:22-139.178.89.65:56324.service - OpenSSH per-connection server daemon (139.178.89.65:56324).
Feb 13 19:53:35.373825 sshd[4891]: Accepted publickey for core from 139.178.89.65 port 56324 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:53:35.386029 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:53:35.406946 systemd-logind[1900]: New session 21 of user core.
Feb 13 19:53:35.421241 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:53:35.656611 sshd[4894]: Connection closed by 139.178.89.65 port 56324
Feb 13 19:53:35.658631 sshd-session[4891]: pam_unix(sshd:session): session closed for user core
Feb 13 19:53:35.663739 systemd-logind[1900]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:53:35.667035 systemd[1]: sshd@20-172.31.23.227:22-139.178.89.65:56324.service: Deactivated successfully.
Feb 13 19:53:35.673408 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:53:35.674922 systemd-logind[1900]: Removed session 21.
Feb 13 19:53:40.691782 systemd[1]: Started sshd@21-172.31.23.227:22-139.178.89.65:56332.service - OpenSSH per-connection server daemon (139.178.89.65:56332).
Feb 13 19:53:40.875651 sshd[4926]: Accepted publickey for core from 139.178.89.65 port 56332 ssh2: RSA SHA256:8P+kPxi1I257RCRHId8CcpewLV4ndpYsy+CU1pFADU8
Feb 13 19:53:40.877781 sshd-session[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:53:40.884810 systemd-logind[1900]: New session 22 of user core.
Feb 13 19:53:40.892871 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:53:41.159395 sshd[4929]: Connection closed by 139.178.89.65 port 56332
Feb 13 19:53:41.160153 sshd-session[4926]: pam_unix(sshd:session): session closed for user core
Feb 13 19:53:41.167571 systemd[1]: sshd@21-172.31.23.227:22-139.178.89.65:56332.service: Deactivated successfully.
Feb 13 19:53:41.183113 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:53:41.189562 systemd-logind[1900]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:53:41.197513 systemd-logind[1900]: Removed session 22.
Feb 13 19:53:56.668534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6da8110bdfbc7ecc8bc9e0696fd5a8c08969ca3d96342daa1bede8dd2d77f763-rootfs.mount: Deactivated successfully.
Feb 13 19:53:56.672849 containerd[1928]: time="2025-02-13T19:53:56.672778230Z" level=info msg="shim disconnected" id=6da8110bdfbc7ecc8bc9e0696fd5a8c08969ca3d96342daa1bede8dd2d77f763 namespace=k8s.io
Feb 13 19:53:56.672849 containerd[1928]: time="2025-02-13T19:53:56.672842806Z" level=warning msg="cleaning up after shim disconnected" id=6da8110bdfbc7ecc8bc9e0696fd5a8c08969ca3d96342daa1bede8dd2d77f763 namespace=k8s.io
Feb 13 19:53:56.673620 containerd[1928]: time="2025-02-13T19:53:56.672854768Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:53:57.561018 kubelet[3554]: I0213 19:53:57.560835 3554 scope.go:117] "RemoveContainer" containerID="6da8110bdfbc7ecc8bc9e0696fd5a8c08969ca3d96342daa1bede8dd2d77f763"
Feb 13 19:53:57.566110 containerd[1928]: time="2025-02-13T19:53:57.565862059Z" level=info msg="CreateContainer within sandbox \"ff0a740109f17b022640f7f0808920cc6735b0e343645e3d32ee06f8a3173b32\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 19:53:57.616422 containerd[1928]: time="2025-02-13T19:53:57.616354959Z" level=info msg="CreateContainer within sandbox \"ff0a740109f17b022640f7f0808920cc6735b0e343645e3d32ee06f8a3173b32\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"52fbcd569b7468b850392e45fc983df347cf3f9b68e1b517edd92be77594066a\""
Feb 13 19:53:57.617142 containerd[1928]: time="2025-02-13T19:53:57.617091767Z" level=info msg="StartContainer for \"52fbcd569b7468b850392e45fc983df347cf3f9b68e1b517edd92be77594066a\""
Feb 13 19:53:57.748792 containerd[1928]: time="2025-02-13T19:53:57.748742988Z" level=info msg="StartContainer for \"52fbcd569b7468b850392e45fc983df347cf3f9b68e1b517edd92be77594066a\" returns successfully"
Feb 13 19:54:00.756565 kubelet[3554]: E0213 19:54:00.756308 3554 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-227?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 19:54:01.714870 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77e5fbf92c4438a81a9e00d004e34fbb33acd1f8099100016625d4d190491fb6-rootfs.mount: Deactivated successfully.
Feb 13 19:54:01.746369 containerd[1928]: time="2025-02-13T19:54:01.746265745Z" level=info msg="shim disconnected" id=77e5fbf92c4438a81a9e00d004e34fbb33acd1f8099100016625d4d190491fb6 namespace=k8s.io
Feb 13 19:54:01.746369 containerd[1928]: time="2025-02-13T19:54:01.746332659Z" level=warning msg="cleaning up after shim disconnected" id=77e5fbf92c4438a81a9e00d004e34fbb33acd1f8099100016625d4d190491fb6 namespace=k8s.io
Feb 13 19:54:01.746369 containerd[1928]: time="2025-02-13T19:54:01.746345963Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:54:02.590506 kubelet[3554]: I0213 19:54:02.589991 3554 scope.go:117] "RemoveContainer" containerID="77e5fbf92c4438a81a9e00d004e34fbb33acd1f8099100016625d4d190491fb6"
Feb 13 19:54:02.609089 containerd[1928]: time="2025-02-13T19:54:02.608721450Z" level=info msg="CreateContainer within sandbox \"3c627e745046c403677fa610cac9c95e2c07f9b646c9b226208fa8ce9b3a2bc3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 19:54:02.708622 containerd[1928]: time="2025-02-13T19:54:02.708402618Z" level=info msg="CreateContainer within sandbox \"3c627e745046c403677fa610cac9c95e2c07f9b646c9b226208fa8ce9b3a2bc3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"999b25569e859451459f147f3d8011a7e83fb502bf8df81afc57196026857f18\""
Feb 13 19:54:02.709135 containerd[1928]: time="2025-02-13T19:54:02.709112545Z" level=info msg="StartContainer for \"999b25569e859451459f147f3d8011a7e83fb502bf8df81afc57196026857f18\""
Feb 13 19:54:02.920212 systemd[1]: run-containerd-runc-k8s.io-999b25569e859451459f147f3d8011a7e83fb502bf8df81afc57196026857f18-runc.eGl1BV.mount: Deactivated successfully.
Feb 13 19:54:03.053082 containerd[1928]: time="2025-02-13T19:54:03.053000631Z" level=info msg="StartContainer for \"999b25569e859451459f147f3d8011a7e83fb502bf8df81afc57196026857f18\" returns successfully"
Feb 13 19:54:10.758078 kubelet[3554]: E0213 19:54:10.758035 3554 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-227?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"