Feb 13 15:51:49.061753 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:06:02 -00 2025
Feb 13 15:51:49.061791 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:51:49.061806 kernel: BIOS-provided physical RAM map:
Feb 13 15:51:49.061817 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 15:51:49.061827 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 15:51:49.061838 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 15:51:49.061854 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 15:51:49.061865 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 15:51:49.061875 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 15:51:49.061886 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 15:51:49.061897 kernel: NX (Execute Disable) protection: active
Feb 13 15:51:49.061907 kernel: APIC: Static calls initialized
Feb 13 15:51:49.061917 kernel: SMBIOS 2.7 present.
Feb 13 15:51:49.061929 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 15:51:49.062315 kernel: Hypervisor detected: KVM
Feb 13 15:51:49.062333 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:51:49.062348 kernel: kvm-clock: using sched offset of 7464138102 cycles
Feb 13 15:51:49.062363 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:51:49.062378 kernel: tsc: Detected 2500.006 MHz processor
Feb 13 15:51:49.062393 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:51:49.062408 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:51:49.062427 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 15:51:49.062442 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 15:51:49.062457 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:51:49.062471 kernel: Using GB pages for direct mapping
Feb 13 15:51:49.062486 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:51:49.062501 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 15:51:49.062516 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 15:51:49.062530 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:51:49.062545 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 15:51:49.062717 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 15:51:49.062734 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:51:49.062749 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:51:49.062763 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 15:51:49.062777 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:51:49.062792 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 15:51:49.062805 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 15:51:49.062819 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:51:49.062834 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 15:51:49.062853 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 15:51:49.062873 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 15:51:49.062889 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 15:51:49.062904 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 15:51:49.062920 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 15:51:49.062938 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 15:51:49.062979 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 15:51:49.062994 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 15:51:49.063010 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 15:51:49.063026 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:51:49.063040 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:51:49.063055 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 15:51:49.063071 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 15:51:49.063086 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 15:51:49.063105 kernel: Zone ranges:
Feb 13 15:51:49.063121 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:51:49.063136 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 15:51:49.063152 kernel: Normal empty
Feb 13 15:51:49.063166 kernel: Movable zone start for each node
Feb 13 15:51:49.063181 kernel: Early memory node ranges
Feb 13 15:51:49.063308 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 15:51:49.063324 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 15:51:49.063339 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 15:51:49.063355 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:51:49.063375 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 15:51:49.063390 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 15:51:49.063406 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 15:51:49.063421 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:51:49.063436 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 15:51:49.063451 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:51:49.063466 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:51:49.063482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:51:49.063498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:51:49.063517 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:51:49.063532 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:51:49.063547 kernel: TSC deadline timer available
Feb 13 15:51:49.063562 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:51:49.063577 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:51:49.063593 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 15:51:49.063609 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:51:49.063625 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:51:49.063640 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:51:49.063659 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:51:49.063674 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:51:49.063687 kernel: pcpu-alloc: [0] 0 1
Feb 13 15:51:49.063701 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:51:49.063716 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:51:49.063732 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:51:49.063748 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:51:49.063762 kernel: random: crng init done
Feb 13 15:51:49.063781 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:51:49.063795 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:51:49.063809 kernel: Fallback order for Node 0: 0
Feb 13 15:51:49.063822 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 13 15:51:49.063837 kernel: Policy zone: DMA32
Feb 13 15:51:49.063850 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:51:49.063864 kernel: Memory: 1930296K/2057760K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 127204K reserved, 0K cma-reserved)
Feb 13 15:51:49.063879 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:51:49.063893 kernel: Kernel/User page tables isolation: enabled
Feb 13 15:51:49.063910 kernel: ftrace: allocating 37890 entries in 149 pages
Feb 13 15:51:49.063924 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:51:49.063939 kernel: Dynamic Preempt: voluntary
Feb 13 15:51:49.063975 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:51:49.063990 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:51:49.064004 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:51:49.064018 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:51:49.064031 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:51:49.064045 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:51:49.064062 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:51:49.064076 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:51:49.064090 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 15:51:49.064104 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:51:49.064118 kernel: Console: colour VGA+ 80x25
Feb 13 15:51:49.064132 kernel: printk: console [ttyS0] enabled
Feb 13 15:51:49.064146 kernel: ACPI: Core revision 20230628
Feb 13 15:51:49.064160 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 15:51:49.064174 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:51:49.064190 kernel: x2apic enabled
Feb 13 15:51:49.064205 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:51:49.064231 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns
Feb 13 15:51:49.064249 kernel: Calibrating delay loop (skipped) preset value.. 5000.01 BogoMIPS (lpj=2500006)
Feb 13 15:51:49.064264 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 15:51:49.064279 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 15:51:49.064294 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:51:49.064309 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:51:49.064323 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:51:49.064338 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:51:49.064354 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 15:51:49.064369 kernel: RETBleed: Vulnerable
Feb 13 15:51:49.064384 kernel: Speculative Store Bypass: Vulnerable
Feb 13 15:51:49.064402 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:51:49.064418 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:51:49.064433 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 15:51:49.064448 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:51:49.064462 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:51:49.064478 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:51:49.064496 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 15:51:49.064511 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 15:51:49.064525 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 15:51:49.064540 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 15:51:49.064555 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 15:51:49.064570 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 15:51:49.064585 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:51:49.064602 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 13 15:51:49.064618 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 13 15:51:49.064634 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 13 15:51:49.064650 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 13 15:51:49.064669 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 15:51:49.064685 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 13 15:51:49.064703 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 15:51:49.064719 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:51:49.064735 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:51:49.064752 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:51:49.064767 kernel: landlock: Up and running.
Feb 13 15:51:49.064784 kernel: SELinux: Initializing.
Feb 13 15:51:49.064800 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:51:49.064817 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:51:49.064833 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 15:51:49.064853 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:51:49.064869 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:51:49.064885 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:51:49.064901 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 15:51:49.064916 kernel: signal: max sigframe size: 3632
Feb 13 15:51:49.064931 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:51:49.065511 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:51:49.065532 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:51:49.065641 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:51:49.065663 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:51:49.065680 kernel: .... node #0, CPUs: #1
Feb 13 15:51:49.065697 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 15:51:49.065717 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 15:51:49.065731 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:51:49.065745 kernel: smpboot: Max logical packages: 1
Feb 13 15:51:49.065760 kernel: smpboot: Total of 2 processors activated (10000.02 BogoMIPS)
Feb 13 15:51:49.065775 kernel: devtmpfs: initialized
Feb 13 15:51:49.065789 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:51:49.065808 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:51:49.065831 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:51:49.065845 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:51:49.065860 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:51:49.065874 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:51:49.067007 kernel: audit: type=2000 audit(1739461907.516:1): state=initialized audit_enabled=0 res=1
Feb 13 15:51:49.067024 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:51:49.067040 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:51:49.067055 kernel: cpuidle: using governor menu
Feb 13 15:51:49.067075 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:51:49.067090 kernel: dca service started, version 1.12.1
Feb 13 15:51:49.067105 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:51:49.067120 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:51:49.067136 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:51:49.067151 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:51:49.067166 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:51:49.067181 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:51:49.067196 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:51:49.067214 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:51:49.067229 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:51:49.067244 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:51:49.067260 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 15:51:49.067276 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:51:49.067291 kernel: ACPI: Interpreter enabled
Feb 13 15:51:49.067306 kernel: ACPI: PM: (supports S0 S5)
Feb 13 15:51:49.067321 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:51:49.067336 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:51:49.067355 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:51:49.067370 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 15:51:49.067385 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:51:49.067705 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:51:49.067858 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 15:51:49.068024 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 15:51:49.068043 kernel: acpiphp: Slot [3] registered
Feb 13 15:51:49.068062 kernel: acpiphp: Slot [4] registered
Feb 13 15:51:49.068077 kernel: acpiphp: Slot [5] registered
Feb 13 15:51:49.068091 kernel: acpiphp: Slot [6] registered
Feb 13 15:51:49.068106 kernel: acpiphp: Slot [7] registered
Feb 13 15:51:49.068120 kernel: acpiphp: Slot [8] registered
Feb 13 15:51:49.069799 kernel: acpiphp: Slot [9] registered
Feb 13 15:51:49.069817 kernel: acpiphp: Slot [10] registered
Feb 13 15:51:49.069832 kernel: acpiphp: Slot [11] registered
Feb 13 15:51:49.069847 kernel: acpiphp: Slot [12] registered
Feb 13 15:51:49.069866 kernel: acpiphp: Slot [13] registered
Feb 13 15:51:49.069879 kernel: acpiphp: Slot [14] registered
Feb 13 15:51:49.069892 kernel: acpiphp: Slot [15] registered
Feb 13 15:51:49.069907 kernel: acpiphp: Slot [16] registered
Feb 13 15:51:49.069921 kernel: acpiphp: Slot [17] registered
Feb 13 15:51:49.069934 kernel: acpiphp: Slot [18] registered
Feb 13 15:51:49.069966 kernel: acpiphp: Slot [19] registered
Feb 13 15:51:49.069982 kernel: acpiphp: Slot [20] registered
Feb 13 15:51:49.069996 kernel: acpiphp: Slot [21] registered
Feb 13 15:51:49.070010 kernel: acpiphp: Slot [22] registered
Feb 13 15:51:49.070030 kernel: acpiphp: Slot [23] registered
Feb 13 15:51:49.070044 kernel: acpiphp: Slot [24] registered
Feb 13 15:51:49.070059 kernel: acpiphp: Slot [25] registered
Feb 13 15:51:49.070075 kernel: acpiphp: Slot [26] registered
Feb 13 15:51:49.070091 kernel: acpiphp: Slot [27] registered
Feb 13 15:51:49.070107 kernel: acpiphp: Slot [28] registered
Feb 13 15:51:49.070123 kernel: acpiphp: Slot [29] registered
Feb 13 15:51:49.070138 kernel: acpiphp: Slot [30] registered
Feb 13 15:51:49.070154 kernel: acpiphp: Slot [31] registered
Feb 13 15:51:49.070174 kernel: PCI host bridge to bus 0000:00
Feb 13 15:51:49.070470 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:51:49.070614 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:51:49.070751 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:51:49.070888 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 15:51:49.074602 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:51:49.074858 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 15:51:49.075044 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 15:51:49.075209 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 15:51:49.075356 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 15:51:49.075488 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 15:51:49.075615 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 15:51:49.075746 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 15:51:49.075881 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 15:51:49.076063 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 15:51:49.076211 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 15:51:49.076357 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 15:51:49.076515 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 15:51:49.076666 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 15:51:49.076814 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 15:51:49.079674 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:51:49.079888 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:51:49.080066 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 15:51:49.080219 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:51:49.080369 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 15:51:49.080509 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:51:49.080530 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:51:49.080545 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:51:49.080565 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:51:49.080579 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 15:51:49.080595 kernel: iommu: Default domain type: Translated
Feb 13 15:51:49.080609 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:51:49.080624 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:51:49.080638 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:51:49.080654 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 15:51:49.080669 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 15:51:49.080823 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 15:51:49.080976 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 15:51:49.081118 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:51:49.081138 kernel: vgaarb: loaded
Feb 13 15:51:49.081154 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 15:51:49.081170 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 15:51:49.081186 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:51:49.081203 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:51:49.081224 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:51:49.081248 kernel: pnp: PnP ACPI init
Feb 13 15:51:49.081264 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 15:51:49.081277 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:51:49.081292 kernel: NET: Registered PF_INET protocol family
Feb 13 15:51:49.081306 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:51:49.081321 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 15:51:49.081337 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:51:49.081352 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:51:49.081390 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 15:51:49.081484 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 15:51:49.081502 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:51:49.081518 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:51:49.081534 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:51:49.081550 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:51:49.081702 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:51:49.081912 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:51:49.084111 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:51:49.084252 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 15:51:49.084396 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:51:49.084420 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:51:49.084439 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:51:49.084456 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093fa6a7c, max_idle_ns: 440795295209 ns
Feb 13 15:51:49.084473 kernel: clocksource: Switched to clocksource tsc
Feb 13 15:51:49.084489 kernel: Initialise system trusted keyrings
Feb 13 15:51:49.084505 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 15:51:49.084525 kernel: Key type asymmetric registered
Feb 13 15:51:49.084540 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:51:49.084556 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:51:49.084571 kernel: io scheduler mq-deadline registered
Feb 13 15:51:49.084587 kernel: io scheduler kyber registered
Feb 13 15:51:49.084603 kernel: io scheduler bfq registered
Feb 13 15:51:49.084619 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:51:49.084634 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:51:49.084650 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:51:49.084666 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:51:49.084684 kernel: i8042: Warning: Keylock active
Feb 13 15:51:49.084700 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:51:49.084716 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:51:49.084857 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 15:51:49.084997 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 15:51:49.085118 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:51:48 UTC (1739461908)
Feb 13 15:51:49.085236 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 15:51:49.085259 kernel: intel_pstate: CPU model not supported
Feb 13 15:51:49.085275 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:51:49.085291 kernel: Segment Routing with IPv6
Feb 13 15:51:49.085306 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:51:49.085322 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:51:49.085337 kernel: Key type dns_resolver registered
Feb 13 15:51:49.085352 kernel: IPI shorthand broadcast: enabled
Feb 13 15:51:49.085378 kernel: sched_clock: Marking stable (669003588, 231531466)->(996309486, -95774432)
Feb 13 15:51:49.085394 kernel: registered taskstats version 1
Feb 13 15:51:49.085410 kernel: Loading compiled-in X.509 certificates
Feb 13 15:51:49.085428 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 3d19ae6dcd850c11d55bf09bd44e00c45ed399eb'
Feb 13 15:51:49.085444 kernel: Key type .fscrypt registered
Feb 13 15:51:49.085459 kernel: Key type fscrypt-provisioning registered
Feb 13 15:51:49.085474 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:51:49.085490 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:51:49.085506 kernel: ima: No architecture policies found
Feb 13 15:51:49.085521 kernel: clk: Disabling unused clocks
Feb 13 15:51:49.085537 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 15:51:49.085556 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 15:51:49.085571 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 15:51:49.085587 kernel: Run /init as init process
Feb 13 15:51:49.085602 kernel: with arguments:
Feb 13 15:51:49.085618 kernel: /init
Feb 13 15:51:49.085633 kernel: with environment:
Feb 13 15:51:49.085648 kernel: HOME=/
Feb 13 15:51:49.085663 kernel: TERM=linux
Feb 13 15:51:49.085678 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:51:49.085701 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:51:49.085734 systemd[1]: Detected virtualization amazon.
Feb 13 15:51:49.085754 systemd[1]: Detected architecture x86-64.
Feb 13 15:51:49.085771 systemd[1]: Running in initrd.
Feb 13 15:51:49.085787 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:51:49.085807 systemd[1]: Hostname set to .
Feb 13 15:51:49.085824 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:51:49.085841 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:51:49.085858 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:51:49.085878 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:51:49.085897 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:51:49.085914 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:51:49.085931 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:51:49.094137 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:51:49.094169 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:51:49.094184 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:51:49.094201 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:51:49.094215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:51:49.094229 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:51:49.094244 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:51:49.094267 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:51:49.094281 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:51:49.094296 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:51:49.094311 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:51:49.094325 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:51:49.094340 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:51:49.094354 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:51:49.094368 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:51:49.094383 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:51:49.094399 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:51:49.094414 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:51:49.094433 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:51:49.094450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:51:49.094465 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:51:49.094480 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:51:49.094497 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:51:49.094514 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:51:49.094529 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:51:49.094544 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:51:49.094557 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:51:49.094572 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:51:49.094624 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 15:51:49.094661 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:51:49.094676 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:51:49.094694 systemd-journald[179]: Journal started
Feb 13 15:51:49.094724 systemd-journald[179]: Runtime Journal (/run/log/journal/ec299c2bb9bf775c6a79042e8398f9e3) is 4.8M, max 38.5M, 33.7M free.
Feb 13 15:51:49.100816 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 15:51:49.263901 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:51:49.264012 kernel: Bridge firewalling registered
Feb 13 15:51:49.142020 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 15:51:49.266968 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:51:49.267159 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:51:49.284239 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:51:49.294166 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:51:49.305491 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:51:49.310803 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:51:49.313543 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:51:49.316477 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:51:49.328151 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:51:49.336729 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:51:49.343282 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:51:49.368309 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:51:49.383274 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:51:49.409053 systemd-resolved[204]: Positive Trust Anchors:
Feb 13 15:51:49.409069 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:51:49.409132 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:51:49.432230 dracut-cmdline[215]: dracut-dracut-053
Feb 13 15:51:49.432230 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=85b856728ac62eb775b23688185fbd191f36059b11eac7a7eacb2da5f3555b05
Feb 13 15:51:49.448404 systemd-resolved[204]: Defaulting to hostname 'linux'.
Feb 13 15:51:49.453067 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:51:49.456775 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:51:49.576978 kernel: SCSI subsystem initialized
Feb 13 15:51:49.588994 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:51:49.609988 kernel: iscsi: registered transport (tcp)
Feb 13 15:51:49.660987 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:51:49.661066 kernel: QLogic iSCSI HBA Driver
Feb 13 15:51:49.748879 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:51:49.755153 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:51:49.786257 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:51:49.786340 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:51:49.786364 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:51:49.832999 kernel: raid6: avx512x4 gen() 16390 MB/s
Feb 13 15:51:49.849996 kernel: raid6: avx512x2 gen() 17107 MB/s
Feb 13 15:51:49.866997 kernel: raid6: avx512x1 gen() 17421 MB/s
Feb 13 15:51:49.886130 kernel: raid6: avx2x4 gen() 12648 MB/s
Feb 13 15:51:49.902999 kernel: raid6: avx2x2 gen() 11140 MB/s
Feb 13 15:51:49.920438 kernel: raid6: avx2x1 gen() 7434 MB/s
Feb 13 15:51:49.920516 kernel: raid6: using algorithm avx512x1 gen() 17421 MB/s
Feb 13 15:51:49.938376 kernel: raid6: .... xor() 18577 MB/s, rmw enabled
Feb 13 15:51:49.938462 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 15:51:49.961980 kernel: xor: automatically using best checksumming function   avx
Feb 13 15:51:50.132976 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:51:50.144621 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:51:50.153326 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:51:50.169881 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Feb 13 15:51:50.175980 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:51:50.187246 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:51:50.211035 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Feb 13 15:51:50.244323 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:51:50.250190 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:51:50.335665 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:51:50.350140 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:51:50.385584 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:51:50.391397 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:51:50.395906 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:51:50.400005 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:51:50.409105 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:51:50.447361 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:51:50.467006 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:51:50.496399 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:51:50.547748 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:51:50.547977 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:51:50.548005 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:51:50.548036 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 15:51:50.548224 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:54:3d:dc:f7:99
Feb 13 15:51:50.544091 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:51:50.544367 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:51:50.547581 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:51:50.562089 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:51:50.562346 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 15:51:50.551502 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:51:50.551709 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:51:50.555314 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:51:50.558280 (udev-worker)[449]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:51:50.566558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:51:50.591038 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:51:50.599040 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:51:50.599211 kernel: GPT:9289727 != 16777215
Feb 13 15:51:50.599245 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:51:50.599267 kernel: GPT:9289727 != 16777215
Feb 13 15:51:50.599286 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:51:50.599305 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:51:50.741989 kernel: BTRFS: device fsid 0e178e67-0100-48b1-87c9-422b9a68652a devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (452)
Feb 13 15:51:50.753007 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (455)
Feb 13 15:51:50.813875 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:51:50.823051 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:51:50.877475 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:51:50.879822 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:51:50.900551 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:51:50.914433 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:51:50.914571 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:51:50.956618 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:51:50.966188 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:51:50.989058 disk-uuid[630]: Primary Header is updated.
Feb 13 15:51:50.989058 disk-uuid[630]: Secondary Entries is updated.
Feb 13 15:51:50.989058 disk-uuid[630]: Secondary Header is updated.
Feb 13 15:51:50.998978 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:51:51.010975 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:51:52.017037 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:51:52.017707 disk-uuid[631]: The operation has completed successfully.
Feb 13 15:51:52.207802 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:51:52.208030 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:51:52.242302 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:51:52.259908 sh[889]: Success
Feb 13 15:51:52.283141 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:51:52.420842 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:51:52.434339 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:51:52.451676 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:51:52.492970 kernel: BTRFS info (device dm-0): first mount of filesystem 0e178e67-0100-48b1-87c9-422b9a68652a
Feb 13 15:51:52.493196 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:51:52.495648 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:51:52.495703 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:51:52.496888 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:51:52.622273 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:51:52.645497 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:51:52.649975 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:51:52.666332 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:51:52.699362 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:51:52.739954 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:51:52.740023 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:51:52.740043 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:51:52.749191 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:51:52.763391 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:51:52.765039 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:51:52.771114 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:51:52.789461 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:51:52.887431 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:51:52.912289 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:51:53.015309 systemd-networkd[1082]: lo: Link UP
Feb 13 15:51:53.015325 systemd-networkd[1082]: lo: Gained carrier
Feb 13 15:51:53.020273 systemd-networkd[1082]: Enumeration completed
Feb 13 15:51:53.022640 systemd-networkd[1082]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:51:53.022645 systemd-networkd[1082]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:51:53.023007 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:51:53.025152 systemd[1]: Reached target network.target - Network.
Feb 13 15:51:53.041347 systemd-networkd[1082]: eth0: Link UP
Feb 13 15:51:53.041360 systemd-networkd[1082]: eth0: Gained carrier
Feb 13 15:51:53.041382 systemd-networkd[1082]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:51:53.067200 systemd-networkd[1082]: eth0: DHCPv4 address 172.31.30.72/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:51:53.231588 ignition[1014]: Ignition 2.20.0
Feb 13 15:51:53.231609 ignition[1014]: Stage: fetch-offline
Feb 13 15:51:53.234529 ignition[1014]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:51:53.234543 ignition[1014]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:51:53.235068 ignition[1014]: Ignition finished successfully
Feb 13 15:51:53.241081 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:51:53.250257 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:51:53.273680 ignition[1090]: Ignition 2.20.0
Feb 13 15:51:53.273694 ignition[1090]: Stage: fetch
Feb 13 15:51:53.274372 ignition[1090]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:51:53.274401 ignition[1090]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:51:53.274542 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:51:53.334824 ignition[1090]: PUT result: OK
Feb 13 15:51:53.342393 ignition[1090]: parsed url from cmdline: ""
Feb 13 15:51:53.342407 ignition[1090]: no config URL provided
Feb 13 15:51:53.342418 ignition[1090]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:51:53.342434 ignition[1090]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:51:53.342462 ignition[1090]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:51:53.343601 ignition[1090]: PUT result: OK
Feb 13 15:51:53.343643 ignition[1090]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:51:53.349191 ignition[1090]: GET result: OK
Feb 13 15:51:53.349250 ignition[1090]: parsing config with SHA512: e5ee857a8d99a8e4cbcd22ed652389519ae64b040c4acca173f7f571ab7a6904261ffe6fc67915e70f8a9bbc4c56fe4509ec8e5fecd625874575067bb51753d3
Feb 13 15:51:53.359982 unknown[1090]: fetched base config from "system"
Feb 13 15:51:53.360000 unknown[1090]: fetched base config from "system"
Feb 13 15:51:53.360534 ignition[1090]: fetch: fetch complete
Feb 13 15:51:53.360008 unknown[1090]: fetched user config from "aws"
Feb 13 15:51:53.360540 ignition[1090]: fetch: fetch passed
Feb 13 15:51:53.360601 ignition[1090]: Ignition finished successfully
Feb 13 15:51:53.365480 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:51:53.375255 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:51:53.432609 ignition[1097]: Ignition 2.20.0
Feb 13 15:51:53.432628 ignition[1097]: Stage: kargs
Feb 13 15:51:53.438349 ignition[1097]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:51:53.438373 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:51:53.440032 ignition[1097]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:51:53.448093 ignition[1097]: PUT result: OK
Feb 13 15:51:53.453974 ignition[1097]: kargs: kargs passed
Feb 13 15:51:53.454075 ignition[1097]: Ignition finished successfully
Feb 13 15:51:53.456117 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:51:53.466325 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:51:53.486959 ignition[1103]: Ignition 2.20.0
Feb 13 15:51:53.487293 ignition[1103]: Stage: disks
Feb 13 15:51:53.487849 ignition[1103]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:51:53.487863 ignition[1103]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:51:53.487993 ignition[1103]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:51:53.489101 ignition[1103]: PUT result: OK
Feb 13 15:51:53.499511 ignition[1103]: disks: disks passed
Feb 13 15:51:53.499590 ignition[1103]: Ignition finished successfully
Feb 13 15:51:53.500735 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:51:53.503578 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:51:53.507377 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:51:53.510336 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:51:53.511677 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:51:53.514137 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:51:53.529174 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:51:53.570626 systemd-fsck[1111]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:51:53.576510 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:51:53.586113 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:51:53.775983 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e45e00fd-a630-4f0f-91bb-bc879e42a47e r/w with ordered data mode. Quota mode: none.
Feb 13 15:51:53.776574 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:51:53.779034 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:51:53.794103 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:51:53.802126 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:51:53.804640 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:51:53.804689 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:51:53.804716 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:51:53.818428 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:51:53.825223 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:51:53.836989 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1130)
Feb 13 15:51:53.840093 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:51:53.840166 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:51:53.840187 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:51:53.855275 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:51:53.856637 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:51:54.137148 systemd-networkd[1082]: eth0: Gained IPv6LL
Feb 13 15:51:54.156897 initrd-setup-root[1154]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:51:54.175440 initrd-setup-root[1161]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:51:54.187524 initrd-setup-root[1168]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:51:54.194010 initrd-setup-root[1175]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:51:54.510411 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:51:54.520094 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:51:54.526531 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:51:54.546026 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:51:54.546021 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:51:54.617259 ignition[1243]: INFO : Ignition 2.20.0
Feb 13 15:51:54.617259 ignition[1243]: INFO : Stage: mount
Feb 13 15:51:54.617259 ignition[1243]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:51:54.617259 ignition[1243]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:51:54.617259 ignition[1243]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:51:54.627135 ignition[1243]: INFO : PUT result: OK
Feb 13 15:51:54.630313 ignition[1243]: INFO : mount: mount passed
Feb 13 15:51:54.631666 ignition[1243]: INFO : Ignition finished successfully
Feb 13 15:51:54.636481 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:51:54.647263 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:51:54.652715 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:51:54.784576 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:51:54.836045 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1255)
Feb 13 15:51:54.838478 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem c26baa82-37e4-4435-b3ec-4748612bc475
Feb 13 15:51:54.838732 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:51:54.838800 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:51:54.846023 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:51:54.848038 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:51:54.877989 ignition[1272]: INFO : Ignition 2.20.0
Feb 13 15:51:54.877989 ignition[1272]: INFO : Stage: files
Feb 13 15:51:54.880315 ignition[1272]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:51:54.880315 ignition[1272]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:51:54.880315 ignition[1272]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:51:54.886286 ignition[1272]: INFO : PUT result: OK
Feb 13 15:51:54.890918 ignition[1272]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:51:54.893400 ignition[1272]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:51:54.893400 ignition[1272]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:51:54.918996 ignition[1272]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:51:54.923973 ignition[1272]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:51:54.927812 unknown[1272]: wrote ssh authorized keys file for user: core
Feb 13 15:51:54.930844 ignition[1272]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:51:54.955276 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:51:54.959296 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 15:51:55.050236 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:51:55.386364 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:51:55.386364 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:51:55.391211 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:51:55.391211 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:51:55.391211 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:51:55.391211 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:51:55.391211 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:51:55.391211 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:51:55.391211 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:51:55.391211 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:51:55.391211 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:51:55.391211 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:51:55.391211 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:51:55.391211 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:51:55.391211 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Feb 13 15:51:55.748882 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 15:51:56.274968 ignition[1272]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:51:56.274968 ignition[1272]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 15:51:56.302337 ignition[1272]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:51:56.302337 ignition[1272]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:51:56.302337 ignition[1272]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 15:51:56.302337 ignition[1272]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:51:56.302337 ignition[1272]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:51:56.302337 ignition[1272]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:51:56.302337 ignition[1272]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:51:56.302337 ignition[1272]: INFO : files: files passed
Feb 13 15:51:56.302337 ignition[1272]: INFO : Ignition finished successfully
Feb 13 15:51:56.312677 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:51:56.341896 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:51:56.357110 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:51:56.372422 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:51:56.372640 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:51:56.403794 initrd-setup-root-after-ignition[1301]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:51:56.403794 initrd-setup-root-after-ignition[1301]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:51:56.409333 initrd-setup-root-after-ignition[1305]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:51:56.414037 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:51:56.415448 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:51:56.429381 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:51:56.506905 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:51:56.507091 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:51:56.514729 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:51:56.516073 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:51:56.519652 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:51:56.528111 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:51:56.545646 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:51:56.563198 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:51:56.587378 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:51:56.587627 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:51:56.593200 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:51:56.595438 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:51:56.596607 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:51:56.601054 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:51:56.603315 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:51:56.604542 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:51:56.608032 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:51:56.608173 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:51:56.613520 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:51:56.613662 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:51:56.626871 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:51:56.630742 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:51:56.634726 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:51:56.638361 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:51:56.640625 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:51:56.644215 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:51:56.647188 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:51:56.651988 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:51:56.653462 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:51:56.656894 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:51:56.657046 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:51:56.660381 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:51:56.661880 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:51:56.667791 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:51:56.668182 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:51:56.682858 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:51:56.701431 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:51:56.707302 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:51:56.707543 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:51:56.722688 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:51:56.723132 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:51:56.735465 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:51:56.735580 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:51:56.753903 ignition[1325]: INFO : Ignition 2.20.0
Feb 13 15:51:56.757170 ignition[1325]: INFO : Stage: umount
Feb 13 15:51:56.757170 ignition[1325]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:51:56.757170 ignition[1325]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:51:56.762178 ignition[1325]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:51:56.762178 ignition[1325]: INFO : PUT result: OK
Feb 13 15:51:56.767980 ignition[1325]: INFO : umount: umount passed
Feb 13 15:51:56.767980 ignition[1325]: INFO : Ignition finished successfully
Feb 13 15:51:56.769288 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:51:56.773746 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:51:56.773875 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:51:56.778469 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:51:56.778592 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:51:56.784120 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:51:56.784203 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:51:56.801440 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:51:56.801536 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:51:56.804405 systemd[1]: Stopped target network.target - Network.
Feb 13 15:51:56.808068 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:51:56.808175 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:51:56.812872 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:51:56.815926 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:51:56.818433 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:51:56.825064 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:51:56.829356 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:51:56.833142 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:51:56.833219 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:51:56.835658 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:51:56.838337 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:51:56.846092 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:51:56.846179 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:51:56.848758 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:51:56.848841 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:51:56.851390 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:51:56.853648 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:51:56.863141 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:51:56.864001 systemd-networkd[1082]: eth0: DHCPv6 lease lost
Feb 13 15:51:56.864781 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:51:56.870888 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:51:56.871051 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:51:56.876321 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:51:56.877511 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:51:56.884203 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:51:56.884266 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:51:56.886985 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:51:56.887053 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:51:56.897099 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:51:56.898364 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:51:56.898522 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:51:56.900258 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:51:56.900327 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:51:56.905743 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:51:56.905850 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:51:56.913240 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:51:56.913414 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:51:56.916358 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:51:56.936121 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:51:56.938603 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:51:56.943230 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:51:56.943335 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:51:56.948577 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:51:56.948632 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:51:56.953841 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:51:56.953905 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:51:56.957923 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:51:56.958090 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:51:56.962134 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:51:56.962259 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:51:56.981292 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:51:56.984884 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:51:56.987222 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:51:56.990174 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:51:56.990259 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:51:56.997593 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:51:56.997746 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:51:57.002685 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:51:57.002772 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:51:57.015623 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:51:57.015991 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:51:57.019189 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:51:57.019316 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:51:57.026664 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:51:57.040381 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:51:57.095318 systemd[1]: Switching root.
Feb 13 15:51:57.152465 systemd-journald[179]: Journal stopped
Feb 13 15:51:59.490105 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:51:59.490187 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:51:59.490214 kernel: SELinux: policy capability open_perms=1
Feb 13 15:51:59.490231 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:51:59.490248 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:51:59.490265 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:51:59.490283 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:51:59.490302 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:51:59.490323 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:51:59.490389 kernel: audit: type=1403 audit(1739461917.797:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:51:59.490414 systemd[1]: Successfully loaded SELinux policy in 47.072ms.
Feb 13 15:51:59.490443 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.366ms.
Feb 13 15:51:59.490463 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:51:59.490482 systemd[1]: Detected virtualization amazon.
Feb 13 15:51:59.490501 systemd[1]: Detected architecture x86-64.
Feb 13 15:51:59.490519 systemd[1]: Detected first boot.
Feb 13 15:51:59.490540 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:51:59.490562 zram_generator::config[1368]: No configuration found.
Feb 13 15:51:59.490581 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:51:59.490600 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:51:59.490619 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:51:59.490637 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:51:59.490657 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:51:59.490675 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:51:59.490697 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:51:59.490719 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:51:59.490738 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:51:59.490756 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:51:59.490773 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:51:59.490792 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:51:59.490810 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:51:59.490828 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:51:59.490847 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:51:59.490865 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:51:59.490886 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:51:59.490905 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:51:59.490927 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:51:59.495142 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:51:59.497567 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:51:59.499377 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:51:59.499456 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:51:59.499488 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:51:59.499509 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:51:59.499530 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:51:59.499551 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:51:59.499571 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:51:59.499590 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:51:59.499607 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:51:59.499626 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:51:59.499647 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:51:59.499667 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:51:59.500124 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:51:59.500151 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:51:59.500171 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:51:59.500189 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:51:59.500212 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:51:59.500231 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:51:59.500250 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:51:59.500269 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:51:59.500292 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:51:59.500314 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:51:59.500332 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:51:59.500350 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:51:59.500370 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:51:59.500389 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:51:59.500408 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:51:59.500427 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:51:59.500478 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:51:59.500502 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:51:59.502020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:51:59.502058 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:51:59.502079 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:51:59.502108 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:51:59.502125 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:51:59.502148 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:51:59.502168 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:51:59.502189 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:51:59.502216 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:51:59.502236 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:51:59.502255 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:51:59.502275 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:51:59.502295 systemd[1]: Stopped verity-setup.service.
Feb 13 15:51:59.502387 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:51:59.502414 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:51:59.502436 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:51:59.502459 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:51:59.502479 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:51:59.502501 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:51:59.502523 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:51:59.502545 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:51:59.502570 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:51:59.502593 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:51:59.502615 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:51:59.502637 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:51:59.502658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:51:59.502682 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:51:59.502746 systemd-journald[1447]: Collecting audit messages is disabled.
Feb 13 15:51:59.502790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:51:59.502819 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:51:59.502842 kernel: loop: module loaded
Feb 13 15:51:59.502864 kernel: fuse: init (API version 7.39)
Feb 13 15:51:59.502885 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:51:59.502907 systemd-journald[1447]: Journal started
Feb 13 15:51:59.502967 systemd-journald[1447]: Runtime Journal (/run/log/journal/ec299c2bb9bf775c6a79042e8398f9e3) is 4.8M, max 38.5M, 33.7M free.
Feb 13 15:51:58.998970 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:51:59.031401 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 15:51:59.031871 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:51:59.506005 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:51:59.508300 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:51:59.510444 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:51:59.510679 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:51:59.512514 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:51:59.542651 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:51:59.554123 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:51:59.580163 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:51:59.585117 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:51:59.585168 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:51:59.592153 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:51:59.612403 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:51:59.623234 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:51:59.626152 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:51:59.663905 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:51:59.671337 kernel: ACPI: bus type drm_connector registered
Feb 13 15:51:59.671884 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:51:59.674142 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:51:59.680835 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:51:59.682416 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:51:59.703810 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:51:59.709924 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:51:59.718369 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:51:59.725463 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:51:59.733879 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:51:59.734235 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:51:59.740354 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:51:59.747444 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:51:59.749486 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:51:59.823292 systemd-journald[1447]: Time spent on flushing to /var/log/journal/ec299c2bb9bf775c6a79042e8398f9e3 is 101.829ms for 960 entries.
Feb 13 15:51:59.823292 systemd-journald[1447]: System Journal (/var/log/journal/ec299c2bb9bf775c6a79042e8398f9e3) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:51:59.953287 systemd-journald[1447]: Received client request to flush runtime journal.
Feb 13 15:51:59.953400 kernel: loop0: detected capacity change from 0 to 62848
Feb 13 15:51:59.815315 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:51:59.817848 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:51:59.821384 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:51:59.835363 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:51:59.853159 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:51:59.876903 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:51:59.909420 udevadm[1504]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:51:59.941058 systemd-tmpfiles[1493]: ACLs are not supported, ignoring.
Feb 13 15:51:59.941083 systemd-tmpfiles[1493]: ACLs are not supported, ignoring.
Feb 13 15:51:59.955541 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:51:59.975046 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:51:59.992551 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:52:00.000875 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:52:00.003976 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:52:00.010931 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:52:00.060204 kernel: loop1: detected capacity change from 0 to 141000
Feb 13 15:52:00.154326 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:52:00.172671 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:52:00.197338 kernel: loop2: detected capacity change from 0 to 138184
Feb 13 15:52:00.230463 systemd-tmpfiles[1518]: ACLs are not supported, ignoring.
Feb 13 15:52:00.231008 systemd-tmpfiles[1518]: ACLs are not supported, ignoring.
Feb 13 15:52:00.239601 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:52:00.353723 kernel: loop3: detected capacity change from 0 to 211296
Feb 13 15:52:00.491135 kernel: loop4: detected capacity change from 0 to 62848
Feb 13 15:52:00.503019 kernel: loop5: detected capacity change from 0 to 141000
Feb 13 15:52:00.552033 kernel: loop6: detected capacity change from 0 to 138184
Feb 13 15:52:00.586121 kernel: loop7: detected capacity change from 0 to 211296
Feb 13 15:52:00.635215 (sd-merge)[1523]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 15:52:00.639138 (sd-merge)[1523]: Merged extensions into '/usr'.
Feb 13 15:52:00.645095 systemd[1]: Reloading requested from client PID 1492 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:52:00.645201 systemd[1]: Reloading...
Feb 13 15:52:00.836979 zram_generator::config[1548]: No configuration found.
Feb 13 15:52:01.151651 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:52:01.306093 systemd[1]: Reloading finished in 659 ms.
Feb 13 15:52:01.360554 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:52:01.382190 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:52:01.388545 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:52:01.431288 systemd[1]: Reloading requested from client PID 1597 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:52:01.431314 systemd[1]: Reloading...
Feb 13 15:52:01.626402 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:52:01.626857 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:52:01.642855 systemd-tmpfiles[1598]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:52:01.643314 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Feb 13 15:52:01.643394 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Feb 13 15:52:01.786252 systemd-tmpfiles[1598]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:52:01.790911 systemd-tmpfiles[1598]: Skipping /boot
Feb 13 15:52:01.863785 systemd-tmpfiles[1598]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:52:01.864023 systemd-tmpfiles[1598]: Skipping /boot
Feb 13 15:52:02.013083 zram_generator::config[1622]: No configuration found.
Feb 13 15:52:02.356878 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:52:02.493523 ldconfig[1487]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:52:02.662348 systemd[1]: Reloading finished in 1204 ms.
Feb 13 15:52:02.697980 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:52:02.706576 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:52:02.715694 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:52:02.749223 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:52:02.787551 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:52:02.845492 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:52:02.869440 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:52:02.918369 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:52:02.951518 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:52:02.995014 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:52:03.006156 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:52:03.006458 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:52:03.020272 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:52:03.040460 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:52:03.059218 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:52:03.060643 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:52:03.060840 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:52:03.069499 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:52:03.069785 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:52:03.071137 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:52:03.071321 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:52:03.105744 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:52:03.112685 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:52:03.121677 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:52:03.124380 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:52:03.148701 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:52:03.153565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:52:03.153906 systemd[1]: Reached target time-set.target - System Time Set. 
Feb 13 15:52:03.164497 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:52:03.172000 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:52:03.179582 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:52:03.180227 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:52:03.182392 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:52:03.182616 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:52:03.197056 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:52:03.234348 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:52:03.234721 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:52:03.260258 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:52:03.264340 systemd[1]: Finished ensure-sysext.service. Feb 13 15:52:03.286314 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:52:03.286807 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:52:03.338189 systemd-udevd[1686]: Using default interface naming scheme 'v255'. Feb 13 15:52:03.349925 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:52:03.377574 augenrules[1717]: No rules Feb 13 15:52:03.398311 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:52:03.404981 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:52:03.414880 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Feb 13 15:52:03.438301 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:52:03.444408 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:52:03.546291 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:52:03.555332 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:52:03.659353 systemd-resolved[1682]: Positive Trust Anchors: Feb 13 15:52:03.659374 systemd-resolved[1682]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:52:03.659433 systemd-resolved[1682]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:52:03.678860 systemd-resolved[1682]: Defaulting to hostname 'linux'. Feb 13 15:52:03.683265 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:52:03.686348 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:52:03.765805 (udev-worker)[1742]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:52:03.767152 systemd-networkd[1733]: lo: Link UP Feb 13 15:52:03.767168 systemd-networkd[1733]: lo: Gained carrier Feb 13 15:52:03.769393 systemd-networkd[1733]: Enumeration completed Feb 13 15:52:03.769536 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:52:03.771159 systemd[1]: Reached target network.target - Network. Feb 13 15:52:03.774888 systemd-networkd[1733]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:52:03.774932 systemd-networkd[1733]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:52:03.779242 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:52:03.783303 systemd-networkd[1733]: eth0: Link UP Feb 13 15:52:03.783688 systemd-networkd[1733]: eth0: Gained carrier Feb 13 15:52:03.783726 systemd-networkd[1733]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:52:03.789446 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:52:03.799207 systemd-networkd[1733]: eth0: DHCPv4 address 172.31.30.72/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 13 15:52:03.862790 systemd-networkd[1733]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 15:52:03.902013 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 13 15:52:03.910058 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 15:52:03.913908 kernel: ACPI: button: Power Button [PWRF] Feb 13 15:52:03.914006 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Feb 13 15:52:03.916871 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 15:52:03.916958 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1742) Feb 13 15:52:03.925969 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Feb 13 15:52:04.088615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:52:04.096979 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:52:04.144325 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Feb 13 15:52:04.144832 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:52:04.153290 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:52:04.157540 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:52:04.191639 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:52:04.196788 lvm[1844]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:52:04.228623 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:52:04.230545 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:52:04.241593 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:52:04.259974 lvm[1850]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Feb 13 15:52:04.296139 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:52:04.387016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:52:04.389327 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:52:04.390877 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:52:04.392629 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:52:04.394348 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:52:04.396119 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:52:04.397831 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:52:04.400502 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:52:04.400578 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:52:04.402198 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:52:04.407647 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:52:04.411710 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:52:04.425275 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:52:04.432559 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:52:04.434461 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:52:04.436305 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:52:04.437770 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Feb 13 15:52:04.437805 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:52:04.442091 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:52:04.455216 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:52:04.459162 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:52:04.464160 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:52:04.481965 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:52:04.486604 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:52:04.496600 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:52:04.499355 jq[1859]: false Feb 13 15:52:04.506249 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 15:52:04.529884 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:52:04.539030 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 15:52:04.570171 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:52:04.573711 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:52:04.591211 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:52:04.593200 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:52:04.594057 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:52:04.595834 systemd[1]: Starting update-engine.service - Update Engine... 
Feb 13 15:52:04.606083 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:52:04.611471 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:52:04.611837 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:52:04.637306 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:52:04.637558 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:52:04.654165 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:52:04.655103 extend-filesystems[1860]: Found loop4 Feb 13 15:52:04.655103 extend-filesystems[1860]: Found loop5 Feb 13 15:52:04.655103 extend-filesystems[1860]: Found loop6 Feb 13 15:52:04.655103 extend-filesystems[1860]: Found loop7 Feb 13 15:52:04.655103 extend-filesystems[1860]: Found nvme0n1 Feb 13 15:52:04.655103 extend-filesystems[1860]: Found nvme0n1p1 Feb 13 15:52:04.655103 extend-filesystems[1860]: Found nvme0n1p2 Feb 13 15:52:04.655103 extend-filesystems[1860]: Found nvme0n1p3 Feb 13 15:52:04.655103 extend-filesystems[1860]: Found usr Feb 13 15:52:04.655103 extend-filesystems[1860]: Found nvme0n1p4 Feb 13 15:52:04.687686 extend-filesystems[1860]: Found nvme0n1p6 Feb 13 15:52:04.687686 extend-filesystems[1860]: Found nvme0n1p7 Feb 13 15:52:04.687686 extend-filesystems[1860]: Found nvme0n1p9 Feb 13 15:52:04.687686 extend-filesystems[1860]: Checking size of /dev/nvme0n1p9 Feb 13 15:52:04.741314 ntpd[1862]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:33:53 UTC 2025 (1): Starting Feb 13 15:52:04.746503 jq[1872]: true Feb 13 15:52:04.747735 ntpd[1862]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 15:52:04.750476 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:33:53 UTC 2025 (1): Starting Feb 13 15:52:04.750476 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: Command line: /usr/sbin/ntpd -g -n -u 
ntp:ntp Feb 13 15:52:04.750476 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: ---------------------------------------------------- Feb 13 15:52:04.750476 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:52:04.750476 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:52:04.750476 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: corporation. Support and training for ntp-4 are Feb 13 15:52:04.750476 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: available at https://www.nwtime.org/support Feb 13 15:52:04.750476 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: ---------------------------------------------------- Feb 13 15:52:04.747747 ntpd[1862]: ---------------------------------------------------- Feb 13 15:52:04.747758 ntpd[1862]: ntp-4 is maintained by Network Time Foundation, Feb 13 15:52:04.747768 ntpd[1862]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 15:52:04.747778 ntpd[1862]: corporation. Support and training for ntp-4 are Feb 13 15:52:04.747788 ntpd[1862]: available at https://www.nwtime.org/support Feb 13 15:52:04.747800 ntpd[1862]: ---------------------------------------------------- Feb 13 15:52:04.773939 dbus-daemon[1858]: [system] SELinux support is enabled Feb 13 15:52:04.783037 (ntainerd)[1894]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:52:04.787483 update_engine[1871]: I20250213 15:52:04.774868 1871 main.cc:92] Flatcar Update Engine starting Feb 13 15:52:04.787777 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: proto: precision = 0.058 usec (-24) Feb 13 15:52:04.787777 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: basedate set to 2025-02-01 Feb 13 15:52:04.787777 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: gps base set to 2025-02-02 (week 2352) Feb 13 15:52:04.775440 ntpd[1862]: proto: precision = 0.058 usec (-24) Feb 13 15:52:04.783338 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 15:52:04.792343 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:52:04.780190 ntpd[1862]: basedate set to 2025-02-01 Feb 13 15:52:04.780214 ntpd[1862]: gps base set to 2025-02-02 (week 2352) Feb 13 15:52:04.791909 ntpd[1862]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 15:52:04.795385 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:52:04.795649 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:52:04.802515 ntpd[1862]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:52:04.807133 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 15:52:04.807133 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:52:04.807133 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: Listen normally on 3 eth0 172.31.30.72:123 Feb 13 15:52:04.807133 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: Listen normally on 4 lo [::1]:123 Feb 13 15:52:04.807133 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: bind(21) AF_INET6 fe80::454:3dff:fedc:f799%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:52:04.807133 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: unable to create socket on eth0 (5) for fe80::454:3dff:fedc:f799%2#123 Feb 13 15:52:04.807133 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: failed to init interface for address fe80::454:3dff:fedc:f799%2 Feb 13 15:52:04.807133 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: Listening on routing socket on fd #21 for interface updates Feb 13 15:52:04.804142 ntpd[1862]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 15:52:04.807435 extend-filesystems[1860]: Resized partition /dev/nvme0n1p9 Feb 13 15:52:04.804191 ntpd[1862]: Listen normally on 3 eth0 172.31.30.72:123 Feb 13 15:52:04.807892 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check 
(ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:52:04.804235 ntpd[1862]: Listen normally on 4 lo [::1]:123 Feb 13 15:52:04.807936 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:52:04.804300 ntpd[1862]: bind(21) AF_INET6 fe80::454:3dff:fedc:f799%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 15:52:04.820440 tar[1876]: linux-amd64/helm Feb 13 15:52:04.812053 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:52:04.804325 ntpd[1862]: unable to create socket on eth0 (5) for fe80::454:3dff:fedc:f799%2#123 Feb 13 15:52:04.812082 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:52:04.804340 ntpd[1862]: failed to init interface for address fe80::454:3dff:fedc:f799%2 Feb 13 15:52:04.804377 ntpd[1862]: Listening on routing socket on fd #21 for interface updates Feb 13 15:52:04.830170 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 15:52:04.832736 update_engine[1871]: I20250213 15:52:04.823261 1871 update_check_scheduler.cc:74] Next update check in 10m24s Feb 13 15:52:04.832807 extend-filesystems[1906]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:52:04.808716 dbus-daemon[1858]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1733 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 15:52:04.831486 systemd[1]: Started update-engine.service - Update Engine. 
Feb 13 15:52:04.835268 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:52:04.835268 ntpd[1862]: 13 Feb 15:52:04 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:52:04.835465 jq[1896]: true Feb 13 15:52:04.809822 dbus-daemon[1858]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 15:52:04.834675 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:52:04.834709 ntpd[1862]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 15:52:04.844682 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 13 15:52:04.837147 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:52:04.928603 systemd-logind[1870]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:52:04.928628 systemd-logind[1870]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 15:52:04.928651 systemd-logind[1870]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:52:04.932036 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 15:52:04.948510 systemd-logind[1870]: New seat seat0. Feb 13 15:52:04.977758 systemd[1]: Started systemd-logind.service - User Login Management. 
Feb 13 15:52:05.029850 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 13 15:52:05.028090 systemd-networkd[1733]: eth0: Gained IPv6LL Feb 13 15:52:05.039640 coreos-metadata[1857]: Feb 13 15:52:05.031 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:52:05.039640 coreos-metadata[1857]: Feb 13 15:52:05.033 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Feb 13 15:52:05.039640 coreos-metadata[1857]: Feb 13 15:52:05.037 INFO Fetch successful Feb 13 15:52:05.039640 coreos-metadata[1857]: Feb 13 15:52:05.037 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Feb 13 15:52:05.039640 coreos-metadata[1857]: Feb 13 15:52:05.039 INFO Fetch successful Feb 13 15:52:05.039640 coreos-metadata[1857]: Feb 13 15:52:05.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Feb 13 15:52:05.046653 coreos-metadata[1857]: Feb 13 15:52:05.044 INFO Fetch successful Feb 13 15:52:05.046653 coreos-metadata[1857]: Feb 13 15:52:05.044 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Feb 13 15:52:05.054898 extend-filesystems[1906]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 13 15:52:05.054898 extend-filesystems[1906]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:52:05.054898 extend-filesystems[1906]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 13 15:52:05.052403 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Feb 13 15:52:05.095758 coreos-metadata[1857]: Feb 13 15:52:05.054 INFO Fetch successful Feb 13 15:52:05.095758 coreos-metadata[1857]: Feb 13 15:52:05.054 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Feb 13 15:52:05.095758 coreos-metadata[1857]: Feb 13 15:52:05.063 INFO Fetch failed with 404: resource not found Feb 13 15:52:05.095758 coreos-metadata[1857]: Feb 13 15:52:05.063 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Feb 13 15:52:05.095758 coreos-metadata[1857]: Feb 13 15:52:05.064 INFO Fetch successful Feb 13 15:52:05.095758 coreos-metadata[1857]: Feb 13 15:52:05.064 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Feb 13 15:52:05.095758 coreos-metadata[1857]: Feb 13 15:52:05.067 INFO Fetch successful Feb 13 15:52:05.095758 coreos-metadata[1857]: Feb 13 15:52:05.067 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Feb 13 15:52:05.095758 coreos-metadata[1857]: Feb 13 15:52:05.068 INFO Fetch successful Feb 13 15:52:05.095758 coreos-metadata[1857]: Feb 13 15:52:05.068 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Feb 13 15:52:05.095758 coreos-metadata[1857]: Feb 13 15:52:05.074 INFO Fetch successful Feb 13 15:52:05.095758 coreos-metadata[1857]: Feb 13 15:52:05.074 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Feb 13 15:52:05.095758 coreos-metadata[1857]: Feb 13 15:52:05.078 INFO Fetch successful Feb 13 15:52:05.097357 extend-filesystems[1860]: Resized filesystem in /dev/nvme0n1p9 Feb 13 15:52:05.053852 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:52:05.058137 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:52:05.070802 systemd[1]: Reached target network-online.target - Network is Online. 
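The coreos-metadata entries above show the IMDSv2 access pattern: a PUT to the token endpoint, then GETs against the versioned metadata tree with the token presented as a header. A minimal sketch of that request shape follows; the helper names are illustrative (this is not coreos-metadata's actual code), but the endpoint paths and header names are the documented IMDSv2 ones seen in the log:

```python
import urllib.request

IMDS = "http://169.254.169.254"

def token_request(ttl_seconds: int = 21600) -> urllib.request.Request:
    # IMDSv2 session tokens are obtained with a PUT and a TTL header.
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def metadata_request(path: str, token: str) -> urllib.request.Request:
    # Subsequent reads present the token; the log uses the 2021-01-03 tree.
    return urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )

# Build (but do not send) the same instance-id request the agent logs.
req = metadata_request("instance-id", "example-token")
print(req.full_url)
```

Actually sending these requests only works from inside an EC2 instance; a 404 on an optional path (like the `meta-data/ipv6` fetch above) is expected when the resource does not exist for the instance.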
Feb 13 15:52:05.080612 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Feb 13 15:52:05.101288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:52:05.121432 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:52:05.176516 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:52:05.179447 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:52:05.213690 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1744) Feb 13 15:52:05.222743 bash[1948]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:52:05.225583 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:52:05.239501 systemd[1]: Starting sshkeys.service... Feb 13 15:52:05.376550 dbus-daemon[1858]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 15:52:05.377165 dbus-daemon[1858]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1908 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 15:52:05.380722 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 15:52:05.389041 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:52:05.398901 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:52:05.408533 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 15:52:05.426644 polkitd[2004]: Started polkitd version 121 Feb 13 15:52:05.447408 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Feb 13 15:52:05.470440 polkitd[2004]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 15:52:05.471601 polkitd[2004]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 15:52:05.475307 polkitd[2004]: Finished loading, compiling and executing 2 rules Feb 13 15:52:05.476211 dbus-daemon[1858]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 15:52:05.476423 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 15:52:05.479124 polkitd[2004]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 15:52:05.481835 locksmithd[1909]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:52:05.578416 coreos-metadata[2000]: Feb 13 15:52:05.578 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 13 15:52:05.581421 coreos-metadata[2000]: Feb 13 15:52:05.579 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Feb 13 15:52:05.581421 coreos-metadata[2000]: Feb 13 15:52:05.581 INFO Fetch successful Feb 13 15:52:05.581421 coreos-metadata[2000]: Feb 13 15:52:05.581 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 15:52:05.586759 coreos-metadata[2000]: Feb 13 15:52:05.582 INFO Fetch successful Feb 13 15:52:05.584400 unknown[2000]: wrote ssh authorized keys file for user: core Feb 13 15:52:05.587196 amazon-ssm-agent[1934]: Initializing new seelog logger Feb 13 15:52:05.587196 amazon-ssm-agent[1934]: New Seelog Logger Creation Complete Feb 13 15:52:05.587196 amazon-ssm-agent[1934]: 2025/02/13 15:52:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:52:05.587196 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Feb 13 15:52:05.587196 amazon-ssm-agent[1934]: 2025/02/13 15:52:05 processing appconfig overrides Feb 13 15:52:05.601221 amazon-ssm-agent[1934]: 2025/02/13 15:52:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:52:05.601221 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:52:05.601221 amazon-ssm-agent[1934]: 2025/02/13 15:52:05 processing appconfig overrides Feb 13 15:52:05.610551 amazon-ssm-agent[1934]: 2025/02/13 15:52:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:52:05.610551 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:52:05.610551 amazon-ssm-agent[1934]: 2025/02/13 15:52:05 processing appconfig overrides Feb 13 15:52:05.610551 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO Proxy environment variables: Feb 13 15:52:05.616599 systemd-hostnamed[1908]: Hostname set to (transient) Feb 13 15:52:05.630127 systemd-resolved[1682]: System hostname changed to 'ip-172-31-30-72'. Feb 13 15:52:05.653628 amazon-ssm-agent[1934]: 2025/02/13 15:52:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:52:05.653628 amazon-ssm-agent[1934]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 13 15:52:05.653628 amazon-ssm-agent[1934]: 2025/02/13 15:52:05 processing appconfig overrides Feb 13 15:52:05.697737 update-ssh-keys[2040]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:52:05.697628 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:52:05.706642 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO https_proxy: Feb 13 15:52:05.712502 systemd[1]: Finished sshkeys.service. 
Feb 13 15:52:05.829029 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO http_proxy:
Feb 13 15:52:05.923112 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO no_proxy:
Feb 13 15:52:05.927789 containerd[1894]: time="2025-02-13T15:52:05.926132582Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:52:06.031719 sshd_keygen[1895]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:52:06.032097 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 15:52:06.064693 containerd[1894]: time="2025-02-13T15:52:06.064608462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:52:06.068463 containerd[1894]: time="2025-02-13T15:52:06.068403407Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:52:06.068630 containerd[1894]: time="2025-02-13T15:52:06.068609620Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:52:06.068718 containerd[1894]: time="2025-02-13T15:52:06.068701717Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:52:06.070977 containerd[1894]: time="2025-02-13T15:52:06.070156118Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:52:06.070977 containerd[1894]: time="2025-02-13T15:52:06.070199970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:52:06.070977 containerd[1894]: time="2025-02-13T15:52:06.070283961Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:52:06.070977 containerd[1894]: time="2025-02-13T15:52:06.070300437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:52:06.070977 containerd[1894]: time="2025-02-13T15:52:06.070520570Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:52:06.070977 containerd[1894]: time="2025-02-13T15:52:06.070541006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:52:06.070977 containerd[1894]: time="2025-02-13T15:52:06.070560849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:52:06.070977 containerd[1894]: time="2025-02-13T15:52:06.070576653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:52:06.070977 containerd[1894]: time="2025-02-13T15:52:06.070658476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:52:06.070977 containerd[1894]: time="2025-02-13T15:52:06.070896528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:52:06.071607 containerd[1894]: time="2025-02-13T15:52:06.071581685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:52:06.071683 containerd[1894]: time="2025-02-13T15:52:06.071669208Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:52:06.071977 containerd[1894]: time="2025-02-13T15:52:06.071933260Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:52:06.072115 containerd[1894]: time="2025-02-13T15:52:06.072100210Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:52:06.083055 containerd[1894]: time="2025-02-13T15:52:06.082348729Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:52:06.083055 containerd[1894]: time="2025-02-13T15:52:06.082427228Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:52:06.083055 containerd[1894]: time="2025-02-13T15:52:06.082452502Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:52:06.083055 containerd[1894]: time="2025-02-13T15:52:06.082476168Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:52:06.083055 containerd[1894]: time="2025-02-13T15:52:06.082496979Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:52:06.083055 containerd[1894]: time="2025-02-13T15:52:06.082691425Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:52:06.083340 containerd[1894]: time="2025-02-13T15:52:06.083061128Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:52:06.083340 containerd[1894]: time="2025-02-13T15:52:06.083209915Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:52:06.083340 containerd[1894]: time="2025-02-13T15:52:06.083234066Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:52:06.083340 containerd[1894]: time="2025-02-13T15:52:06.083272497Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:52:06.083340 containerd[1894]: time="2025-02-13T15:52:06.083294756Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:52:06.083340 containerd[1894]: time="2025-02-13T15:52:06.083315245Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:52:06.083340 containerd[1894]: time="2025-02-13T15:52:06.083334444Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:52:06.083565 containerd[1894]: time="2025-02-13T15:52:06.083355854Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:52:06.083565 containerd[1894]: time="2025-02-13T15:52:06.083377750Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:52:06.083565 containerd[1894]: time="2025-02-13T15:52:06.083406612Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:52:06.083565 containerd[1894]: time="2025-02-13T15:52:06.083426220Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:52:06.083565 containerd[1894]: time="2025-02-13T15:52:06.083445082Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:52:06.083565 containerd[1894]: time="2025-02-13T15:52:06.083473467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.083565 containerd[1894]: time="2025-02-13T15:52:06.083496102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.083565 containerd[1894]: time="2025-02-13T15:52:06.083514879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.083565 containerd[1894]: time="2025-02-13T15:52:06.083535991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.083565 containerd[1894]: time="2025-02-13T15:52:06.083554608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.085838 containerd[1894]: time="2025-02-13T15:52:06.083575812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.085838 containerd[1894]: time="2025-02-13T15:52:06.083595276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.085838 containerd[1894]: time="2025-02-13T15:52:06.083614572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.085838 containerd[1894]: time="2025-02-13T15:52:06.083650029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.085838 containerd[1894]: time="2025-02-13T15:52:06.083674345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.085838 containerd[1894]: time="2025-02-13T15:52:06.083693741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.085838 containerd[1894]: time="2025-02-13T15:52:06.083796993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.085838 containerd[1894]: time="2025-02-13T15:52:06.083815993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.085838 containerd[1894]: time="2025-02-13T15:52:06.083842478Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:52:06.085838 containerd[1894]: time="2025-02-13T15:52:06.083889301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.085838 containerd[1894]: time="2025-02-13T15:52:06.083910970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.085838 containerd[1894]: time="2025-02-13T15:52:06.083930259Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:52:06.085838 containerd[1894]: time="2025-02-13T15:52:06.084007794Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:52:06.085838 containerd[1894]: time="2025-02-13T15:52:06.084034061Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:52:06.086403 containerd[1894]: time="2025-02-13T15:52:06.084050018Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:52:06.086403 containerd[1894]: time="2025-02-13T15:52:06.084067905Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:52:06.086403 containerd[1894]: time="2025-02-13T15:52:06.084081746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.086403 containerd[1894]: time="2025-02-13T15:52:06.084101758Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:52:06.086403 containerd[1894]: time="2025-02-13T15:52:06.084116083Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:52:06.086403 containerd[1894]: time="2025-02-13T15:52:06.084132564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:52:06.086621 containerd[1894]: time="2025-02-13T15:52:06.084668433Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:52:06.086621 containerd[1894]: time="2025-02-13T15:52:06.084745659Z" level=info msg="Connect containerd service"
Feb 13 15:52:06.086621 containerd[1894]: time="2025-02-13T15:52:06.084801167Z" level=info msg="using legacy CRI server"
Feb 13 15:52:06.086621 containerd[1894]: time="2025-02-13T15:52:06.084812005Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:52:06.089393 containerd[1894]: time="2025-02-13T15:52:06.088719906Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:52:06.089910 containerd[1894]: time="2025-02-13T15:52:06.089872857Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:52:06.090435 containerd[1894]: time="2025-02-13T15:52:06.090412871Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:52:06.093817 containerd[1894]: time="2025-02-13T15:52:06.090567333Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:52:06.093817 containerd[1894]: time="2025-02-13T15:52:06.090616826Z" level=info msg="Start subscribing containerd event"
Feb 13 15:52:06.093817 containerd[1894]: time="2025-02-13T15:52:06.090662672Z" level=info msg="Start recovering state"
Feb 13 15:52:06.093817 containerd[1894]: time="2025-02-13T15:52:06.090739575Z" level=info msg="Start event monitor"
Feb 13 15:52:06.093817 containerd[1894]: time="2025-02-13T15:52:06.090756920Z" level=info msg="Start snapshots syncer"
Feb 13 15:52:06.093817 containerd[1894]: time="2025-02-13T15:52:06.090769210Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:52:06.093817 containerd[1894]: time="2025-02-13T15:52:06.090779142Z" level=info msg="Start streaming server"
Feb 13 15:52:06.090941 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:52:06.098234 containerd[1894]: time="2025-02-13T15:52:06.096768250Z" level=info msg="containerd successfully booted in 0.172591s"
Feb 13 15:52:06.126533 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:52:06.130426 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO Checking if agent identity type EC2 can be assumed
Feb 13 15:52:06.142464 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:52:06.152397 systemd[1]: Started sshd@0-172.31.30.72:22-139.178.89.65:49174.service - OpenSSH per-connection server daemon (139.178.89.65:49174).
Feb 13 15:52:06.191003 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:52:06.191268 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:52:06.201417 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:52:06.240064 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO Agent will take identity from EC2
Feb 13 15:52:06.249313 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:52:06.263489 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:52:06.275417 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:52:06.279292 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:52:06.339348 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:52:06.438567 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:52:06.503377 sshd[2091]: Accepted publickey for core from 139.178.89.65 port 49174 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:52:06.509009 sshd-session[2091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:06.538709 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:52:06.543988 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:52:06.559865 tar[1876]: linux-amd64/LICENSE
Feb 13 15:52:06.559865 tar[1876]: linux-amd64/README.md
Feb 13 15:52:06.561378 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:52:06.589787 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 15:52:06.589787 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Feb 13 15:52:06.589787 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 15:52:06.589787 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 15:52:06.589787 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO [Registrar] Starting registrar module
Feb 13 15:52:06.589787 amazon-ssm-agent[1934]: 2025-02-13 15:52:05 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 15:52:06.589787 amazon-ssm-agent[1934]: 2025-02-13 15:52:06 INFO [EC2Identity] EC2 registration was successful.
Feb 13 15:52:06.589787 amazon-ssm-agent[1934]: 2025-02-13 15:52:06 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 15:52:06.589787 amazon-ssm-agent[1934]: 2025-02-13 15:52:06 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 15:52:06.589787 amazon-ssm-agent[1934]: 2025-02-13 15:52:06 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 15:52:06.591818 systemd-logind[1870]: New session 1 of user core.
Feb 13 15:52:06.608853 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:52:06.614561 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 15:52:06.625812 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:52:06.640756 amazon-ssm-agent[1934]: 2025-02-13 15:52:06 INFO [CredentialRefresher] Next credential rotation will be in 31.841576942733333 minutes
Feb 13 15:52:06.641864 (systemd)[2106]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:52:06.836485 systemd[2106]: Queued start job for default target default.target.
Feb 13 15:52:06.846078 systemd[2106]: Created slice app.slice - User Application Slice.
Feb 13 15:52:06.846124 systemd[2106]: Reached target paths.target - Paths.
Feb 13 15:52:06.846146 systemd[2106]: Reached target timers.target - Timers.
Feb 13 15:52:06.848382 systemd[2106]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:52:06.868043 systemd[2106]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:52:06.869397 systemd[2106]: Reached target sockets.target - Sockets.
Feb 13 15:52:06.870097 systemd[2106]: Reached target basic.target - Basic System.
Feb 13 15:52:06.870194 systemd[2106]: Reached target default.target - Main User Target.
Feb 13 15:52:06.870237 systemd[2106]: Startup finished in 215ms.
Feb 13 15:52:06.870923 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:52:06.884180 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:52:07.054573 systemd[1]: Started sshd@1-172.31.30.72:22-139.178.89.65:35558.service - OpenSSH per-connection server daemon (139.178.89.65:35558).
Feb 13 15:52:07.265524 sshd[2117]: Accepted publickey for core from 139.178.89.65 port 35558 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:52:07.267799 sshd-session[2117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:07.274422 systemd-logind[1870]: New session 2 of user core.
Feb 13 15:52:07.284200 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:52:07.422444 sshd[2119]: Connection closed by 139.178.89.65 port 35558
Feb 13 15:52:07.423283 sshd-session[2117]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:07.427464 systemd[1]: sshd@1-172.31.30.72:22-139.178.89.65:35558.service: Deactivated successfully.
Feb 13 15:52:07.431353 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:52:07.432355 systemd-logind[1870]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:52:07.434107 systemd-logind[1870]: Removed session 2.
Feb 13 15:52:07.463231 systemd[1]: Started sshd@2-172.31.30.72:22-139.178.89.65:35560.service - OpenSSH per-connection server daemon (139.178.89.65:35560).
Feb 13 15:52:07.574901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:52:07.582056 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:52:07.645301 amazon-ssm-agent[1934]: 2025-02-13 15:52:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 15:52:07.598661 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:52:07.723041 systemd[1]: Startup finished in 851ms (kernel) + 9.034s (initrd) + 9.969s (userspace) = 19.855s.
Feb 13 15:52:07.726090 agetty[2100]: failed to open credentials directory
Feb 13 15:52:07.727910 amazon-ssm-agent[1934]: 2025-02-13 15:52:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2134) started
Feb 13 15:52:07.749644 ntpd[1862]: Listen normally on 6 eth0 [fe80::454:3dff:fedc:f799%2]:123
Feb 13 15:52:07.750615 ntpd[1862]: 13 Feb 15:52:07 ntpd[1862]: Listen normally on 6 eth0 [fe80::454:3dff:fedc:f799%2]:123
Feb 13 15:52:07.758605 agetty[2098]: failed to open credentials directory
Feb 13 15:52:07.824901 sshd[2124]: Accepted publickey for core from 139.178.89.65 port 35560 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:52:07.826854 sshd-session[2124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:07.828747 amazon-ssm-agent[1934]: 2025-02-13 15:52:07 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 15:52:07.840132 systemd-logind[1870]: New session 3 of user core.
Feb 13 15:52:07.847194 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:52:07.975059 sshd[2144]: Connection closed by 139.178.89.65 port 35560
Feb 13 15:52:07.975735 sshd-session[2124]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:07.987027 systemd[1]: sshd@2-172.31.30.72:22-139.178.89.65:35560.service: Deactivated successfully.
Feb 13 15:52:07.990253 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:52:07.991765 systemd-logind[1870]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:52:07.995517 systemd-logind[1870]: Removed session 3.
Feb 13 15:52:09.091938 kubelet[2131]: E0213 15:52:09.091725 2131 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:52:09.099083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:52:09.099581 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:52:09.100189 systemd[1]: kubelet.service: Consumed 1.057s CPU time.
Feb 13 15:52:18.015891 systemd[1]: Started sshd@3-172.31.30.72:22-139.178.89.65:41040.service - OpenSSH per-connection server daemon (139.178.89.65:41040).
Feb 13 15:52:18.185877 sshd[2161]: Accepted publickey for core from 139.178.89.65 port 41040 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:52:18.193077 sshd-session[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:18.220220 systemd-logind[1870]: New session 4 of user core.
Feb 13 15:52:18.224276 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:52:18.374922 sshd[2163]: Connection closed by 139.178.89.65 port 41040
Feb 13 15:52:18.376440 sshd-session[2161]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:18.380729 systemd[1]: sshd@3-172.31.30.72:22-139.178.89.65:41040.service: Deactivated successfully.
Feb 13 15:52:18.384511 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:52:18.387855 systemd-logind[1870]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:52:18.391561 systemd-logind[1870]: Removed session 4.
Feb 13 15:52:18.414398 systemd[1]: Started sshd@4-172.31.30.72:22-139.178.89.65:41052.service - OpenSSH per-connection server daemon (139.178.89.65:41052).
Feb 13 15:52:18.605395 sshd[2168]: Accepted publickey for core from 139.178.89.65 port 41052 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:52:18.607141 sshd-session[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:18.613256 systemd-logind[1870]: New session 5 of user core.
Feb 13 15:52:18.620179 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:52:18.734188 sshd[2170]: Connection closed by 139.178.89.65 port 41052
Feb 13 15:52:18.735355 sshd-session[2168]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:18.740614 systemd[1]: sshd@4-172.31.30.72:22-139.178.89.65:41052.service: Deactivated successfully.
Feb 13 15:52:18.743355 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:52:18.746982 systemd-logind[1870]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:52:18.748668 systemd-logind[1870]: Removed session 5.
Feb 13 15:52:18.773803 systemd[1]: Started sshd@5-172.31.30.72:22-139.178.89.65:41068.service - OpenSSH per-connection server daemon (139.178.89.65:41068).
Feb 13 15:52:18.956537 sshd[2175]: Accepted publickey for core from 139.178.89.65 port 41068 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:52:18.960417 sshd-session[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:18.970367 systemd-logind[1870]: New session 6 of user core.
Feb 13 15:52:18.978198 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:52:19.104252 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:52:19.131086 sshd[2177]: Connection closed by 139.178.89.65 port 41068
Feb 13 15:52:19.131719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:52:19.133386 sshd-session[2175]: pam_unix(sshd:session): session closed for user core
Feb 13 15:52:19.139903 systemd[1]: sshd@5-172.31.30.72:22-139.178.89.65:41068.service: Deactivated successfully.
Feb 13 15:52:19.150183 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:52:19.153225 systemd-logind[1870]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:52:19.172826 systemd[1]: Started sshd@6-172.31.30.72:22-139.178.89.65:41076.service - OpenSSH per-connection server daemon (139.178.89.65:41076).
Feb 13 15:52:19.176048 systemd-logind[1870]: Removed session 6.
Feb 13 15:52:19.369981 sshd[2185]: Accepted publickey for core from 139.178.89.65 port 41076 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:52:19.371189 sshd-session[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:52:19.389196 systemd-logind[1870]: New session 7 of user core.
Feb 13 15:52:19.397235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:52:19.397442 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:52:19.402164 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:52:19.473047 kubelet[2191]: E0213 15:52:19.472923 2191 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:52:19.478573 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:52:19.478770 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:52:19.555043 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:52:19.555707 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:52:20.243507 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:52:20.245183 (dockerd)[2219]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:52:20.950540 dockerd[2219]: time="2025-02-13T15:52:20.950477394Z" level=info msg="Starting up" Feb 13 15:52:21.173931 dockerd[2219]: time="2025-02-13T15:52:21.173875606Z" level=info msg="Loading containers: start." Feb 13 15:52:21.462105 kernel: Initializing XFRM netlink socket Feb 13 15:52:21.528035 (udev-worker)[2241]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:52:21.611857 systemd-networkd[1733]: docker0: Link UP Feb 13 15:52:21.655915 dockerd[2219]: time="2025-02-13T15:52:21.655865021Z" level=info msg="Loading containers: done." Feb 13 15:52:21.708741 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3009568961-merged.mount: Deactivated successfully. 
Feb 13 15:52:21.720229 dockerd[2219]: time="2025-02-13T15:52:21.719556777Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:52:21.720229 dockerd[2219]: time="2025-02-13T15:52:21.719734243Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:52:21.720229 dockerd[2219]: time="2025-02-13T15:52:21.720023126Z" level=info msg="Daemon has completed initialization" Feb 13 15:52:21.778799 dockerd[2219]: time="2025-02-13T15:52:21.776583100Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:52:21.778913 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:52:23.369307 containerd[1894]: time="2025-02-13T15:52:23.369256397Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:52:24.340786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount441605610.mount: Deactivated successfully. 
Feb 13 15:52:27.086063 containerd[1894]: time="2025-02-13T15:52:27.086015439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:27.087704 containerd[1894]: time="2025-02-13T15:52:27.087654713Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=35142283" Feb 13 15:52:27.088741 containerd[1894]: time="2025-02-13T15:52:27.088686230Z" level=info msg="ImageCreate event name:\"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:27.105310 containerd[1894]: time="2025-02-13T15:52:27.105235270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:52:27.108857 containerd[1894]: time="2025-02-13T15:52:27.108798518Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"35139083\" in 3.739485104s" Feb 13 15:52:27.108857 containerd[1894]: time="2025-02-13T15:52:27.108862884Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\"" Feb 13 15:52:27.167052 containerd[1894]: time="2025-02-13T15:52:27.167009280Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:52:29.734088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 15:52:29.758110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:52:30.061046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:52:30.068891 (kubelet)[2485]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:52:30.200545 kubelet[2485]: E0213 15:52:30.198933 2485 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:52:30.204997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:52:30.205203 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:52:30.404177 containerd[1894]: time="2025-02-13T15:52:30.403629715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:30.406611 containerd[1894]: time="2025-02-13T15:52:30.406202353Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=32213164"
Feb 13 15:52:30.409167 containerd[1894]: time="2025-02-13T15:52:30.407748355Z" level=info msg="ImageCreate event name:\"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:30.412448 containerd[1894]: time="2025-02-13T15:52:30.412406986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:30.413797 containerd[1894]: time="2025-02-13T15:52:30.413756498Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"33659710\" in 3.246705375s"
Feb 13 15:52:30.414091 containerd[1894]: time="2025-02-13T15:52:30.414064606Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\""
Feb 13 15:52:30.448941 containerd[1894]: time="2025-02-13T15:52:30.448903303Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\""
Feb 13 15:52:35.304996 containerd[1894]: time="2025-02-13T15:52:35.304924076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:35.354734 containerd[1894]: time="2025-02-13T15:52:35.354643107Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=17334056"
Feb 13 15:52:35.397316 containerd[1894]: time="2025-02-13T15:52:35.397231010Z" level=info msg="ImageCreate event name:\"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:35.420133 containerd[1894]: time="2025-02-13T15:52:35.418347910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:35.420133 containerd[1894]: time="2025-02-13T15:52:35.419537608Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"18780620\" in 4.970367273s"
Feb 13 15:52:35.420133 containerd[1894]: time="2025-02-13T15:52:35.419580943Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\""
Feb 13 15:52:35.455868 containerd[1894]: time="2025-02-13T15:52:35.455829946Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\""
Feb 13 15:52:35.632651 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 15:52:36.701006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938765797.mount: Deactivated successfully.
Feb 13 15:52:37.345621 containerd[1894]: time="2025-02-13T15:52:37.345570241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:37.346729 containerd[1894]: time="2025-02-13T15:52:37.346577684Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620592"
Feb 13 15:52:37.348180 containerd[1894]: time="2025-02-13T15:52:37.347974956Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:37.350440 containerd[1894]: time="2025-02-13T15:52:37.350402745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:37.351095 containerd[1894]: time="2025-02-13T15:52:37.351060087Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 1.895189542s"
Feb 13 15:52:37.351172 containerd[1894]: time="2025-02-13T15:52:37.351102211Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\""
Feb 13 15:52:37.378038 containerd[1894]: time="2025-02-13T15:52:37.377992135Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 15:52:37.880703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1443223734.mount: Deactivated successfully.
Feb 13 15:52:38.957315 containerd[1894]: time="2025-02-13T15:52:38.957261206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:38.958628 containerd[1894]: time="2025-02-13T15:52:38.958514698Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Feb 13 15:52:38.960849 containerd[1894]: time="2025-02-13T15:52:38.960457958Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:38.963344 containerd[1894]: time="2025-02-13T15:52:38.963299959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:38.964435 containerd[1894]: time="2025-02-13T15:52:38.964395320Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.586363614s"
Feb 13 15:52:38.964542 containerd[1894]: time="2025-02-13T15:52:38.964438914Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Feb 13 15:52:38.988666 containerd[1894]: time="2025-02-13T15:52:38.988624760Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 15:52:39.490625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount545959164.mount: Deactivated successfully.
Feb 13 15:52:39.492561 containerd[1894]: time="2025-02-13T15:52:39.490635664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:39.494776 containerd[1894]: time="2025-02-13T15:52:39.492805655Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Feb 13 15:52:39.494927 containerd[1894]: time="2025-02-13T15:52:39.494892235Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:39.499127 containerd[1894]: time="2025-02-13T15:52:39.499086464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:39.502171 containerd[1894]: time="2025-02-13T15:52:39.502014306Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 513.342221ms"
Feb 13 15:52:39.502171 containerd[1894]: time="2025-02-13T15:52:39.502055039Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 13 15:52:39.554614 containerd[1894]: time="2025-02-13T15:52:39.554567196Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Feb 13 15:52:40.117424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2642923867.mount: Deactivated successfully.
Feb 13 15:52:40.446027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 13 15:52:40.457319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:52:40.877624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:52:40.891471 (kubelet)[2630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:52:41.033401 kubelet[2630]: E0213 15:52:41.033280 2630 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:52:41.036889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:52:41.037114 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:52:43.560714 containerd[1894]: time="2025-02-13T15:52:43.559125667Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Feb 13 15:52:43.560714 containerd[1894]: time="2025-02-13T15:52:43.560646552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:43.563789 containerd[1894]: time="2025-02-13T15:52:43.563745434Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:43.565449 containerd[1894]: time="2025-02-13T15:52:43.565408841Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 4.0108026s"
Feb 13 15:52:43.565605 containerd[1894]: time="2025-02-13T15:52:43.565584744Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Feb 13 15:52:43.566471 containerd[1894]: time="2025-02-13T15:52:43.566438364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:52:47.941327 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:52:47.948553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:52:47.980395 systemd[1]: Reloading requested from client PID 2711 ('systemctl') (unit session-7.scope)...
Feb 13 15:52:47.980593 systemd[1]: Reloading...
Feb 13 15:52:48.133993 zram_generator::config[2751]: No configuration found.
Feb 13 15:52:48.300682 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:52:48.397731 systemd[1]: Reloading finished in 416 ms.
Feb 13 15:52:48.474829 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 15:52:48.475352 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 15:52:48.475911 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:52:48.483585 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:52:48.729694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:52:48.748596 (kubelet)[2810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:52:48.816299 kubelet[2810]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:52:48.816702 kubelet[2810]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:52:48.816702 kubelet[2810]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:52:48.820011 kubelet[2810]: I0213 15:52:48.819915 2810 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:52:49.127357 kubelet[2810]: I0213 15:52:49.127235 2810 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:52:49.127357 kubelet[2810]: I0213 15:52:49.127277 2810 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:52:49.128207 kubelet[2810]: I0213 15:52:49.127581 2810 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:52:49.169894 kubelet[2810]: I0213 15:52:49.168780 2810 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:52:49.169894 kubelet[2810]: E0213 15:52:49.169805 2810 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:49.192970 kubelet[2810]: I0213 15:52:49.192451 2810 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:52:49.192970 kubelet[2810]: I0213 15:52:49.192805 2810 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:52:49.194436 kubelet[2810]: I0213 15:52:49.194387 2810 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:52:49.194436 kubelet[2810]: I0213 15:52:49.194437 2810 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:52:49.194674 kubelet[2810]: I0213 15:52:49.194458 2810 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:52:49.194674 kubelet[2810]: I0213 15:52:49.194593 2810 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:52:49.194754 kubelet[2810]: I0213 15:52:49.194720 2810 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:52:49.194754 kubelet[2810]: I0213 15:52:49.194739 2810 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:52:49.194824 kubelet[2810]: I0213 15:52:49.194773 2810 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:52:49.194824 kubelet[2810]: I0213 15:52:49.194789 2810 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:52:49.200558 kubelet[2810]: W0213 15:52:49.200500 2810 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.30.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:49.200558 kubelet[2810]: E0213 15:52:49.200566 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:49.201787 kubelet[2810]: W0213 15:52:49.201734 2810 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.30.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-72&limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:49.201876 kubelet[2810]: E0213 15:52:49.201797 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-72&limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:49.204966 kubelet[2810]: I0213 15:52:49.203499 2810 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:52:49.213599 kubelet[2810]: I0213 15:52:49.212978 2810 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:52:49.216409 kubelet[2810]: W0213 15:52:49.216348 2810 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:52:49.218479 kubelet[2810]: I0213 15:52:49.218415 2810 server.go:1256] "Started kubelet"
Feb 13 15:52:49.218733 kubelet[2810]: I0213 15:52:49.218706 2810 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:52:49.220197 kubelet[2810]: I0213 15:52:49.219756 2810 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:52:49.222691 kubelet[2810]: I0213 15:52:49.222451 2810 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:52:49.224566 kubelet[2810]: I0213 15:52:49.224535 2810 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:52:49.224726 kubelet[2810]: I0213 15:52:49.224704 2810 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:52:49.229632 kubelet[2810]: E0213 15:52:49.228876 2810 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.72:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.72:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-72.1823cf71e79026a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-72,UID:ip-172-31-30-72,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-72,},FirstTimestamp:2025-02-13 15:52:49.218381473 +0000 UTC m=+0.462867399,LastTimestamp:2025-02-13 15:52:49.218381473 +0000 UTC m=+0.462867399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-72,}"
Feb 13 15:52:49.240362 kubelet[2810]: I0213 15:52:49.240316 2810 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:52:49.248513 kubelet[2810]: E0213 15:52:49.248480 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-72?timeout=10s\": dial tcp 172.31.30.72:6443: connect: connection refused" interval="200ms"
Feb 13 15:52:49.248651 kubelet[2810]: I0213 15:52:49.248605 2810 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:52:49.248730 kubelet[2810]: I0213 15:52:49.248705 2810 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:52:49.251524 kubelet[2810]: I0213 15:52:49.251218 2810 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:52:49.256813 kubelet[2810]: I0213 15:52:49.256774 2810 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:52:49.257324 kubelet[2810]: W0213 15:52:49.257196 2810 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.30.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:49.257324 kubelet[2810]: E0213 15:52:49.257323 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:49.257550 kubelet[2810]: I0213 15:52:49.257528 2810 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:52:49.273342 kubelet[2810]: I0213 15:52:49.272319 2810 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:52:49.275014 kubelet[2810]: I0213 15:52:49.274752 2810 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:52:49.275014 kubelet[2810]: I0213 15:52:49.274789 2810 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:52:49.275014 kubelet[2810]: I0213 15:52:49.274814 2810 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:52:49.275014 kubelet[2810]: E0213 15:52:49.274874 2810 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:52:49.284618 kubelet[2810]: E0213 15:52:49.284237 2810 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:52:49.284618 kubelet[2810]: W0213 15:52:49.284461 2810 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.30.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:49.284618 kubelet[2810]: E0213 15:52:49.284514 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:49.296796 kubelet[2810]: I0213 15:52:49.296194 2810 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:52:49.296796 kubelet[2810]: I0213 15:52:49.296425 2810 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:52:49.296796 kubelet[2810]: I0213 15:52:49.296446 2810 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:52:49.299135 kubelet[2810]: I0213 15:52:49.298841 2810 policy_none.go:49] "None policy: Start"
Feb 13 15:52:49.300160 kubelet[2810]: I0213 15:52:49.299792 2810 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:52:49.300160 kubelet[2810]: I0213 15:52:49.299817 2810 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:52:49.307464 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:52:49.321522 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:52:49.325647 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:52:49.336170 kubelet[2810]: I0213 15:52:49.335510 2810 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:52:49.336170 kubelet[2810]: I0213 15:52:49.335815 2810 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:52:49.341110 kubelet[2810]: E0213 15:52:49.340680 2810 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-72\" not found"
Feb 13 15:52:49.350520 kubelet[2810]: I0213 15:52:49.347591 2810 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-72"
Feb 13 15:52:49.351298 kubelet[2810]: E0213 15:52:49.350526 2810 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.72:6443/api/v1/nodes\": dial tcp 172.31.30.72:6443: connect: connection refused" node="ip-172-31-30-72"
Feb 13 15:52:49.375218 kubelet[2810]: I0213 15:52:49.375167 2810 topology_manager.go:215] "Topology Admit Handler" podUID="a50782a4619bcb871db9c9b508cb0f2f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-72"
Feb 13 15:52:49.377352 kubelet[2810]: I0213 15:52:49.377319 2810 topology_manager.go:215] "Topology Admit Handler" podUID="55a9f4d47754ab98adb27fc7e698672d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-72"
Feb 13 15:52:49.379555 kubelet[2810]: I0213 15:52:49.379469 2810 topology_manager.go:215] "Topology Admit Handler" podUID="b49ce6b3061100f724a3905bf9f62110" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-72"
Feb 13 15:52:49.392639 systemd[1]: Created slice kubepods-burstable-poda50782a4619bcb871db9c9b508cb0f2f.slice - libcontainer container kubepods-burstable-poda50782a4619bcb871db9c9b508cb0f2f.slice.
Feb 13 15:52:49.430735 systemd[1]: Created slice kubepods-burstable-pod55a9f4d47754ab98adb27fc7e698672d.slice - libcontainer container kubepods-burstable-pod55a9f4d47754ab98adb27fc7e698672d.slice.
Feb 13 15:52:49.445397 systemd[1]: Created slice kubepods-burstable-podb49ce6b3061100f724a3905bf9f62110.slice - libcontainer container kubepods-burstable-podb49ce6b3061100f724a3905bf9f62110.slice.
Feb 13 15:52:49.449001 kubelet[2810]: E0213 15:52:49.448940 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-72?timeout=10s\": dial tcp 172.31.30.72:6443: connect: connection refused" interval="400ms"
Feb 13 15:52:49.552688 kubelet[2810]: I0213 15:52:49.552348 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55a9f4d47754ab98adb27fc7e698672d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-72\" (UID: \"55a9f4d47754ab98adb27fc7e698672d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-72"
Feb 13 15:52:49.552688 kubelet[2810]: I0213 15:52:49.552410 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a50782a4619bcb871db9c9b508cb0f2f-ca-certs\") pod \"kube-apiserver-ip-172-31-30-72\" (UID: \"a50782a4619bcb871db9c9b508cb0f2f\") " pod="kube-system/kube-apiserver-ip-172-31-30-72"
Feb 13 15:52:49.552688 kubelet[2810]: I0213 15:52:49.552441 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55a9f4d47754ab98adb27fc7e698672d-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-72\" (UID: \"55a9f4d47754ab98adb27fc7e698672d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-72"
Feb 13 15:52:49.552688 kubelet[2810]: I0213 15:52:49.552468 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55a9f4d47754ab98adb27fc7e698672d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-72\" (UID: \"55a9f4d47754ab98adb27fc7e698672d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-72"
Feb 13 15:52:49.552688 kubelet[2810]: I0213 15:52:49.552499 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55a9f4d47754ab98adb27fc7e698672d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-72\" (UID: \"55a9f4d47754ab98adb27fc7e698672d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-72"
Feb 13 15:52:49.553048 kubelet[2810]: I0213 15:52:49.552539 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b49ce6b3061100f724a3905bf9f62110-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-72\" (UID: \"b49ce6b3061100f724a3905bf9f62110\") " pod="kube-system/kube-scheduler-ip-172-31-30-72"
Feb 13 15:52:49.553048 kubelet[2810]: I0213 15:52:49.552565 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a50782a4619bcb871db9c9b508cb0f2f-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-72\" (UID: \"a50782a4619bcb871db9c9b508cb0f2f\") " pod="kube-system/kube-apiserver-ip-172-31-30-72"
Feb 13 15:52:49.553048 kubelet[2810]: I0213 15:52:49.552596 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a50782a4619bcb871db9c9b508cb0f2f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-72\" (UID: \"a50782a4619bcb871db9c9b508cb0f2f\") " pod="kube-system/kube-apiserver-ip-172-31-30-72"
Feb 13 15:52:49.553048 kubelet[2810]: I0213 15:52:49.552638 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55a9f4d47754ab98adb27fc7e698672d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-72\" (UID: \"55a9f4d47754ab98adb27fc7e698672d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-72"
Feb 13 15:52:49.553627 kubelet[2810]: I0213 15:52:49.553592 2810 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-72"
Feb 13 15:52:49.554074 kubelet[2810]: E0213 15:52:49.554052 2810 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.72:6443/api/v1/nodes\": dial tcp 172.31.30.72:6443: connect: connection refused" node="ip-172-31-30-72"
Feb 13 15:52:49.715367 containerd[1894]: time="2025-02-13T15:52:49.715324772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-72,Uid:a50782a4619bcb871db9c9b508cb0f2f,Namespace:kube-system,Attempt:0,}"
Feb 13 15:52:49.744575 containerd[1894]: time="2025-02-13T15:52:49.744208611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-72,Uid:55a9f4d47754ab98adb27fc7e698672d,Namespace:kube-system,Attempt:0,}"
Feb 13 15:52:49.750119 containerd[1894]: time="2025-02-13T15:52:49.749757108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-72,Uid:b49ce6b3061100f724a3905bf9f62110,Namespace:kube-system,Attempt:0,}"
Feb 13 15:52:49.849495 kubelet[2810]: E0213 15:52:49.849460 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-72?timeout=10s\": dial tcp 172.31.30.72:6443: connect: connection refused" interval="800ms"
Feb 13 15:52:49.869183 update_engine[1871]: I20250213 15:52:49.869112 1871 update_attempter.cc:509] Updating boot flags...
Feb 13 15:52:49.927060 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (2853)
Feb 13 15:52:49.963242 kubelet[2810]: I0213 15:52:49.957563 2810 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-72"
Feb 13 15:52:49.963242 kubelet[2810]: E0213 15:52:49.963181 2810 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.72:6443/api/v1/nodes\": dial tcp 172.31.30.72:6443: connect: connection refused" node="ip-172-31-30-72"
Feb 13 15:52:50.170301 kubelet[2810]: W0213 15:52:50.169816 2810 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.30.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:50.170492 kubelet[2810]: E0213 15:52:50.170477 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:50.274660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131729797.mount: Deactivated successfully.
Feb 13 15:52:50.286800 containerd[1894]: time="2025-02-13T15:52:50.286749131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:52:50.289083 containerd[1894]: time="2025-02-13T15:52:50.289040622Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:52:50.291689 containerd[1894]: time="2025-02-13T15:52:50.291634158Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 15:52:50.292387 containerd[1894]: time="2025-02-13T15:52:50.292342706Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:52:50.294714 containerd[1894]: time="2025-02-13T15:52:50.294679628Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:52:50.298628 containerd[1894]: time="2025-02-13T15:52:50.298562205Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:52:50.300054 containerd[1894]: time="2025-02-13T15:52:50.298611474Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:52:50.308638 containerd[1894]: time="2025-02-13T15:52:50.308590051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:52:50.310215 containerd[1894]: time="2025-02-13T15:52:50.310092556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 560.245816ms"
Feb 13 15:52:50.311507 containerd[1894]: time="2025-02-13T15:52:50.311466071Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 596.058925ms"
Feb 13 15:52:50.315354 containerd[1894]: time="2025-02-13T15:52:50.315308832Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.948542ms"
Feb 13 15:52:50.416801 kubelet[2810]: W0213 15:52:50.416378 2810 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.30.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:50.416801 kubelet[2810]: E0213 15:52:50.416461 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.72:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:50.514932 kubelet[2810]: W0213 15:52:50.514611 2810 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.30.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:50.514932 kubelet[2810]: E0213 15:52:50.514688 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:50.554122 kubelet[2810]: W0213 15:52:50.554041 2810 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.30.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-72&limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:50.554122 kubelet[2810]: E0213 15:52:50.554133 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-72&limit=500&resourceVersion=0": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:50.583966 containerd[1894]: time="2025-02-13T15:52:50.578938142Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:52:50.583966 containerd[1894]: time="2025-02-13T15:52:50.579019709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:52:50.583966 containerd[1894]: time="2025-02-13T15:52:50.579036284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:50.583966 containerd[1894]: time="2025-02-13T15:52:50.579128139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:50.590342 containerd[1894]: time="2025-02-13T15:52:50.589923245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:52:50.590342 containerd[1894]: time="2025-02-13T15:52:50.590010646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:52:50.590342 containerd[1894]: time="2025-02-13T15:52:50.590031716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:50.590342 containerd[1894]: time="2025-02-13T15:52:50.590171999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:50.592382 containerd[1894]: time="2025-02-13T15:52:50.592098095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:52:50.592382 containerd[1894]: time="2025-02-13T15:52:50.592162501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:52:50.592382 containerd[1894]: time="2025-02-13T15:52:50.592189341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:50.592382 containerd[1894]: time="2025-02-13T15:52:50.592292140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:52:50.621811 systemd[1]: Started cri-containerd-e9e53f4a15fd639e5057aeaa0d5fc8d43b86e94650dd377c75b3a03dd9884f46.scope - libcontainer container e9e53f4a15fd639e5057aeaa0d5fc8d43b86e94650dd377c75b3a03dd9884f46.
Feb 13 15:52:50.652086 kubelet[2810]: E0213 15:52:50.651936 2810 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-72?timeout=10s\": dial tcp 172.31.30.72:6443: connect: connection refused" interval="1.6s"
Feb 13 15:52:50.691879 systemd[1]: Started cri-containerd-246dfbbb0e759f018e0059f2795895171ac0f981e9497fcb1fe3e1818db0b3ef.scope - libcontainer container 246dfbbb0e759f018e0059f2795895171ac0f981e9497fcb1fe3e1818db0b3ef.
Feb 13 15:52:50.696302 systemd[1]: Started cri-containerd-494fa6fa9f6ead0cc27019edee983e733ea75f2ec2e776a0a13c49be642ce132.scope - libcontainer container 494fa6fa9f6ead0cc27019edee983e733ea75f2ec2e776a0a13c49be642ce132.
Feb 13 15:52:50.774190 kubelet[2810]: I0213 15:52:50.773452 2810 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-72"
Feb 13 15:52:50.774190 kubelet[2810]: E0213 15:52:50.773907 2810 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.72:6443/api/v1/nodes\": dial tcp 172.31.30.72:6443: connect: connection refused" node="ip-172-31-30-72"
Feb 13 15:52:50.798514 containerd[1894]: time="2025-02-13T15:52:50.798455401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-72,Uid:55a9f4d47754ab98adb27fc7e698672d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9e53f4a15fd639e5057aeaa0d5fc8d43b86e94650dd377c75b3a03dd9884f46\""
Feb 13 15:52:50.811601 containerd[1894]: time="2025-02-13T15:52:50.811145769Z" level=info msg="CreateContainer within sandbox \"e9e53f4a15fd639e5057aeaa0d5fc8d43b86e94650dd377c75b3a03dd9884f46\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 15:52:50.834081 containerd[1894]: time="2025-02-13T15:52:50.833024525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-72,Uid:a50782a4619bcb871db9c9b508cb0f2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"494fa6fa9f6ead0cc27019edee983e733ea75f2ec2e776a0a13c49be642ce132\""
Feb 13 15:52:50.844474 containerd[1894]: time="2025-02-13T15:52:50.843852469Z" level=info msg="CreateContainer within sandbox \"494fa6fa9f6ead0cc27019edee983e733ea75f2ec2e776a0a13c49be642ce132\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 15:52:50.846253 containerd[1894]: time="2025-02-13T15:52:50.846117424Z" level=info msg="CreateContainer within sandbox \"e9e53f4a15fd639e5057aeaa0d5fc8d43b86e94650dd377c75b3a03dd9884f46\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3655c83b4f96a6520a9790cc0c5342fe115dd1685cee2e253ff9cf9e68633e71\""
Feb 13 15:52:50.847594 containerd[1894]: time="2025-02-13T15:52:50.847565745Z" level=info msg="StartContainer for \"3655c83b4f96a6520a9790cc0c5342fe115dd1685cee2e253ff9cf9e68633e71\""
Feb 13 15:52:50.854998 containerd[1894]: time="2025-02-13T15:52:50.854931645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-72,Uid:b49ce6b3061100f724a3905bf9f62110,Namespace:kube-system,Attempt:0,} returns sandbox id \"246dfbbb0e759f018e0059f2795895171ac0f981e9497fcb1fe3e1818db0b3ef\""
Feb 13 15:52:50.860211 containerd[1894]: time="2025-02-13T15:52:50.860068418Z" level=info msg="CreateContainer within sandbox \"246dfbbb0e759f018e0059f2795895171ac0f981e9497fcb1fe3e1818db0b3ef\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 15:52:50.876382 containerd[1894]: time="2025-02-13T15:52:50.876337567Z" level=info msg="CreateContainer within sandbox \"494fa6fa9f6ead0cc27019edee983e733ea75f2ec2e776a0a13c49be642ce132\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0cd1c24bbed8ad53a63273cffc8dbaff02a3f6f3946080f4570125ca1283f222\""
Feb 13 15:52:50.877232 containerd[1894]: time="2025-02-13T15:52:50.877202800Z" level=info msg="StartContainer for \"0cd1c24bbed8ad53a63273cffc8dbaff02a3f6f3946080f4570125ca1283f222\""
Feb 13 15:52:50.895635 containerd[1894]: time="2025-02-13T15:52:50.895514011Z" level=info msg="CreateContainer within sandbox \"246dfbbb0e759f018e0059f2795895171ac0f981e9497fcb1fe3e1818db0b3ef\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4a8c530a217204fd81cd7489aadd3b2780194ffc25de2dcd49f3718087ae1976\""
Feb 13 15:52:50.896557 containerd[1894]: time="2025-02-13T15:52:50.896525516Z" level=info msg="StartContainer for \"4a8c530a217204fd81cd7489aadd3b2780194ffc25de2dcd49f3718087ae1976\""
Feb 13 15:52:50.960605 systemd[1]: Started cri-containerd-3655c83b4f96a6520a9790cc0c5342fe115dd1685cee2e253ff9cf9e68633e71.scope - libcontainer container 3655c83b4f96a6520a9790cc0c5342fe115dd1685cee2e253ff9cf9e68633e71.
Feb 13 15:52:50.976844 systemd[1]: Started cri-containerd-4a8c530a217204fd81cd7489aadd3b2780194ffc25de2dcd49f3718087ae1976.scope - libcontainer container 4a8c530a217204fd81cd7489aadd3b2780194ffc25de2dcd49f3718087ae1976.
Feb 13 15:52:50.987430 systemd[1]: Started cri-containerd-0cd1c24bbed8ad53a63273cffc8dbaff02a3f6f3946080f4570125ca1283f222.scope - libcontainer container 0cd1c24bbed8ad53a63273cffc8dbaff02a3f6f3946080f4570125ca1283f222.
Feb 13 15:52:51.084093 containerd[1894]: time="2025-02-13T15:52:51.083744449Z" level=info msg="StartContainer for \"3655c83b4f96a6520a9790cc0c5342fe115dd1685cee2e253ff9cf9e68633e71\" returns successfully"
Feb 13 15:52:51.096252 containerd[1894]: time="2025-02-13T15:52:51.096147861Z" level=info msg="StartContainer for \"0cd1c24bbed8ad53a63273cffc8dbaff02a3f6f3946080f4570125ca1283f222\" returns successfully"
Feb 13 15:52:51.138364 containerd[1894]: time="2025-02-13T15:52:51.138226504Z" level=info msg="StartContainer for \"4a8c530a217204fd81cd7489aadd3b2780194ffc25de2dcd49f3718087ae1976\" returns successfully"
Feb 13 15:52:51.322855 kubelet[2810]: E0213 15:52:51.322818 2810 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.72:6443: connect: connection refused
Feb 13 15:52:52.379491 kubelet[2810]: I0213 15:52:52.378429 2810 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-72"
Feb 13 15:52:54.718302 kubelet[2810]: E0213 15:52:54.718255 2810 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-72\" not found" node="ip-172-31-30-72"
Feb 13 15:52:54.761056 kubelet[2810]: I0213 15:52:54.759516 2810 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-72"
Feb 13 15:52:54.803919 kubelet[2810]: E0213 15:52:54.803812 2810 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-30-72.1823cf71e79026a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-72,UID:ip-172-31-30-72,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-72,},FirstTimestamp:2025-02-13 15:52:49.218381473 +0000 UTC m=+0.462867399,LastTimestamp:2025-02-13 15:52:49.218381473 +0000 UTC m=+0.462867399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-72,}"
Feb 13 15:52:55.204613 kubelet[2810]: I0213 15:52:55.204578 2810 apiserver.go:52] "Watching apiserver"
Feb 13 15:52:55.257112 kubelet[2810]: I0213 15:52:55.257055 2810 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:52:58.502311 systemd[1]: Reloading requested from client PID 3184 ('systemctl') (unit session-7.scope)...
Feb 13 15:52:58.502334 systemd[1]: Reloading...
Feb 13 15:52:58.746976 zram_generator::config[3227]: No configuration found.
Feb 13 15:52:58.926349 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:52:59.105442 systemd[1]: Reloading finished in 602 ms.
Feb 13 15:52:59.174737 kubelet[2810]: I0213 15:52:59.174647 2810 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:52:59.176232 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:52:59.192500 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:52:59.192815 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:52:59.202459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:52:59.437525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:52:59.451620 (kubelet)[3281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:52:59.598921 kubelet[3281]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:52:59.598921 kubelet[3281]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:52:59.598921 kubelet[3281]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:52:59.599508 kubelet[3281]: I0213 15:52:59.599025 3281 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:52:59.606964 kubelet[3281]: I0213 15:52:59.606904 3281 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:52:59.606964 kubelet[3281]: I0213 15:52:59.606935 3281 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:52:59.607315 kubelet[3281]: I0213 15:52:59.607291 3281 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:52:59.612021 kubelet[3281]: I0213 15:52:59.610868 3281 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 15:52:59.645853 kubelet[3281]: I0213 15:52:59.645749 3281 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:52:59.660625 kubelet[3281]: I0213 15:52:59.660571 3281 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:52:59.661080 kubelet[3281]: I0213 15:52:59.661059 3281 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:52:59.661330 kubelet[3281]: I0213 15:52:59.661308 3281 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:52:59.661506 kubelet[3281]: I0213 15:52:59.661345 3281 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:52:59.661506 kubelet[3281]: I0213 15:52:59.661360 3281 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:52:59.661506 kubelet[3281]: I0213 15:52:59.661400 3281 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:52:59.661645 kubelet[3281]: I0213 15:52:59.661623 3281 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:52:59.662549 kubelet[3281]: I0213 15:52:59.661642 3281 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:52:59.667968 kubelet[3281]: I0213 15:52:59.667088 3281 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:52:59.670598 kubelet[3281]: I0213 15:52:59.670430 3281 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:52:59.673504 kubelet[3281]: I0213 15:52:59.673480 3281 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:52:59.673744 kubelet[3281]: I0213 15:52:59.673727 3281 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:52:59.675135 kubelet[3281]: I0213 15:52:59.675104 3281 server.go:1256] "Started kubelet"
Feb 13 15:52:59.693003 kubelet[3281]: I0213 15:52:59.691905 3281 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:52:59.695486 kubelet[3281]: I0213 15:52:59.695458 3281 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:52:59.698067 kubelet[3281]: I0213 15:52:59.697792 3281 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:52:59.699928 kubelet[3281]: I0213 15:52:59.698195 3281 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:52:59.709084 kubelet[3281]: I0213 15:52:59.709050 3281 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:52:59.714595 kubelet[3281]: I0213 15:52:59.713687 3281 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:52:59.714595 kubelet[3281]: I0213 15:52:59.714335 3281 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:52:59.714776 kubelet[3281]: I0213 15:52:59.714622 3281 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:52:59.752958 kubelet[3281]: I0213 15:52:59.752911 3281 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:52:59.757975 kubelet[3281]: I0213 15:52:59.757829 3281 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:52:59.758140 kubelet[3281]: I0213 15:52:59.757992 3281 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:52:59.759983 kubelet[3281]: E0213 15:52:59.754420 3281 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:52:59.775071 kubelet[3281]: I0213 15:52:59.775036 3281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:52:59.779119 kubelet[3281]: I0213 15:52:59.779090 3281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:52:59.779119 kubelet[3281]: I0213 15:52:59.779123 3281 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:52:59.779619 kubelet[3281]: I0213 15:52:59.779146 3281 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:52:59.779619 kubelet[3281]: E0213 15:52:59.779347 3281 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:52:59.827932 kubelet[3281]: I0213 15:52:59.827482 3281 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-72"
Feb 13 15:52:59.865864 kubelet[3281]: I0213 15:52:59.861329 3281 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-30-72"
Feb 13 15:52:59.865864 kubelet[3281]: I0213 15:52:59.861430 3281 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-72"
Feb 13 15:52:59.881256 kubelet[3281]: E0213 15:52:59.881032 3281 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:52:59.906173 kubelet[3281]: I0213 15:52:59.906140 3281 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:52:59.906173 kubelet[3281]: I0213 15:52:59.906166 3281 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:52:59.906389 kubelet[3281]: I0213 15:52:59.906187 3281 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:52:59.906389 kubelet[3281]: I0213 15:52:59.906380 3281 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 15:52:59.906471 kubelet[3281]: I0213 15:52:59.906412 3281 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 15:52:59.906471 kubelet[3281]: I0213 15:52:59.906423 3281 policy_none.go:49] "None policy: Start"
Feb 13 15:52:59.907340 kubelet[3281]: I0213 15:52:59.907316 3281 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:52:59.907431 kubelet[3281]: I0213 15:52:59.907345 3281 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:52:59.908817 kubelet[3281]: I0213 15:52:59.907551 3281 state_mem.go:75] "Updated machine memory state"
Feb 13 15:52:59.914649 kubelet[3281]: I0213 15:52:59.914466 3281 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:52:59.915070 kubelet[3281]: I0213 15:52:59.915012 3281 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:53:00.087120 kubelet[3281]: I0213 15:53:00.087073 3281 topology_manager.go:215] "Topology Admit Handler" podUID="a50782a4619bcb871db9c9b508cb0f2f" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-72"
Feb 13 15:53:00.101429 kubelet[3281]: I0213 15:53:00.088037 3281 topology_manager.go:215] "Topology Admit Handler" podUID="55a9f4d47754ab98adb27fc7e698672d" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-72"
Feb 13 15:53:00.101429 kubelet[3281]: I0213 15:53:00.088150 3281 topology_manager.go:215] "Topology Admit Handler" podUID="b49ce6b3061100f724a3905bf9f62110" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-72"
Feb 13 15:53:00.153220 kubelet[3281]: E0213 15:53:00.153165 3281 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-72\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-72"
Feb 13 15:53:00.228194 kubelet[3281]: I0213 15:53:00.223349 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55a9f4d47754ab98adb27fc7e698672d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-72\" (UID: \"55a9f4d47754ab98adb27fc7e698672d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-72"
Feb 13 15:53:00.228194 kubelet[3281]: I0213 15:53:00.223416 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55a9f4d47754ab98adb27fc7e698672d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-72\" (UID: \"55a9f4d47754ab98adb27fc7e698672d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-72"
Feb 13 15:53:00.228194 kubelet[3281]: I0213 15:53:00.223451 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b49ce6b3061100f724a3905bf9f62110-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-72\" (UID: \"b49ce6b3061100f724a3905bf9f62110\") " pod="kube-system/kube-scheduler-ip-172-31-30-72"
Feb 13 15:53:00.228194 kubelet[3281]: I0213 15:53:00.223488 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a50782a4619bcb871db9c9b508cb0f2f-ca-certs\") pod \"kube-apiserver-ip-172-31-30-72\" (UID: \"a50782a4619bcb871db9c9b508cb0f2f\") " pod="kube-system/kube-apiserver-ip-172-31-30-72"
Feb 13 15:53:00.228194 kubelet[3281]: I0213 15:53:00.223538 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55a9f4d47754ab98adb27fc7e698672d-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-72\" (UID: \"55a9f4d47754ab98adb27fc7e698672d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-72"
Feb 13 15:53:00.230492 kubelet[3281]: I0213 15:53:00.223570 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55a9f4d47754ab98adb27fc7e698672d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-72\" (UID: \"55a9f4d47754ab98adb27fc7e698672d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-72"
Feb 13 15:53:00.230492 kubelet[3281]: I0213 15:53:00.223604 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55a9f4d47754ab98adb27fc7e698672d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-72\" (UID: \"55a9f4d47754ab98adb27fc7e698672d\") " pod="kube-system/kube-controller-manager-ip-172-31-30-72"
Feb 13 15:53:00.230492 kubelet[3281]: I0213 15:53:00.223635 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a50782a4619bcb871db9c9b508cb0f2f-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-72\" (UID: \"a50782a4619bcb871db9c9b508cb0f2f\") " pod="kube-system/kube-apiserver-ip-172-31-30-72"
Feb 13 15:53:00.230492 kubelet[3281]: I0213 15:53:00.225068 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a50782a4619bcb871db9c9b508cb0f2f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-72\" (UID: \"a50782a4619bcb871db9c9b508cb0f2f\") " pod="kube-system/kube-apiserver-ip-172-31-30-72"
Feb 13 15:53:00.672731 kubelet[3281]: I0213 15:53:00.672680 3281 apiserver.go:52] "Watching apiserver"
Feb 13 15:53:00.717234 kubelet[3281]: I0213 15:53:00.717050 3281 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:53:00.878722 kubelet[3281]: E0213 15:53:00.877195 3281 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-72\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-72"
Feb 13 15:53:00.914682 kubelet[3281]: I0213 15:53:00.914005 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-72" podStartSLOduration=0.913464182 podStartE2EDuration="913.464182ms" podCreationTimestamp="2025-02-13 15:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:53:00.90607348 +0000 UTC m=+1.420296190" watchObservedRunningTime="2025-02-13 15:53:00.913464182 +0000 UTC m=+1.427686878"
Feb 13 15:53:00.954890 kubelet[3281]: I0213 15:53:00.954762 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-72" podStartSLOduration=2.95469555 podStartE2EDuration="2.95469555s" podCreationTimestamp="2025-02-13 15:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:53:00.937176994 +0000 UTC m=+1.451399705" watchObservedRunningTime="2025-02-13 15:53:00.95469555 +0000 UTC m=+1.468918259"
Feb 13 15:53:00.955577 kubelet[3281]: I0213 15:53:00.955423 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-72" podStartSLOduration=0.954864363 podStartE2EDuration="954.864363ms" podCreationTimestamp="2025-02-13 15:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:53:00.954271096 +0000 UTC m=+1.468493805" watchObservedRunningTime="2025-02-13 15:53:00.954864363 +0000 UTC m=+1.469087072"
Feb 13 15:53:01.690503 sudo[2201]: pam_unix(sudo:session): session closed for user root
Feb 13 15:53:01.712874 sshd[2197]: Connection closed by 139.178.89.65 port 41076
Feb 13 15:53:01.714219 sshd-session[2185]: pam_unix(sshd:session): session closed for user core
Feb 13 15:53:01.719415 systemd[1]: sshd@6-172.31.30.72:22-139.178.89.65:41076.service: Deactivated successfully.
Feb 13 15:53:01.722695 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:53:01.723521 systemd[1]: session-7.scope: Consumed 4.850s CPU time, 183.3M memory peak, 0B memory swap peak.
Feb 13 15:53:01.725130 systemd-logind[1870]: Session 7 logged out.
Waiting for processes to exit. Feb 13 15:53:01.726606 systemd-logind[1870]: Removed session 7. Feb 13 15:53:09.784328 kubelet[3281]: I0213 15:53:09.784274 3281 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:53:09.785424 containerd[1894]: time="2025-02-13T15:53:09.785379714Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:53:09.785882 kubelet[3281]: I0213 15:53:09.785610 3281 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:53:10.543977 kubelet[3281]: I0213 15:53:10.543040 3281 topology_manager.go:215] "Topology Admit Handler" podUID="b40d3451-a208-4d8a-9f86-9c5a4c80c198" podNamespace="kube-system" podName="kube-proxy-2k5ww" Feb 13 15:53:10.545744 kubelet[3281]: I0213 15:53:10.545708 3281 topology_manager.go:215] "Topology Admit Handler" podUID="46757bdc-3b7a-45cd-aced-95e528681413" podNamespace="kube-flannel" podName="kube-flannel-ds-7sx4b" Feb 13 15:53:10.560550 systemd[1]: Created slice kubepods-burstable-pod46757bdc_3b7a_45cd_aced_95e528681413.slice - libcontainer container kubepods-burstable-pod46757bdc_3b7a_45cd_aced_95e528681413.slice. Feb 13 15:53:10.572814 systemd[1]: Created slice kubepods-besteffort-podb40d3451_a208_4d8a_9f86_9c5a4c80c198.slice - libcontainer container kubepods-besteffort-podb40d3451_a208_4d8a_9f86_9c5a4c80c198.slice. 
Feb 13 15:53:10.622517 kubelet[3281]: I0213 15:53:10.622444 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46757bdc-3b7a-45cd-aced-95e528681413-xtables-lock\") pod \"kube-flannel-ds-7sx4b\" (UID: \"46757bdc-3b7a-45cd-aced-95e528681413\") " pod="kube-flannel/kube-flannel-ds-7sx4b" Feb 13 15:53:10.622517 kubelet[3281]: I0213 15:53:10.622496 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k45p\" (UniqueName: \"kubernetes.io/projected/46757bdc-3b7a-45cd-aced-95e528681413-kube-api-access-7k45p\") pod \"kube-flannel-ds-7sx4b\" (UID: \"46757bdc-3b7a-45cd-aced-95e528681413\") " pod="kube-flannel/kube-flannel-ds-7sx4b" Feb 13 15:53:10.622517 kubelet[3281]: I0213 15:53:10.622525 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b40d3451-a208-4d8a-9f86-9c5a4c80c198-lib-modules\") pod \"kube-proxy-2k5ww\" (UID: \"b40d3451-a208-4d8a-9f86-9c5a4c80c198\") " pod="kube-system/kube-proxy-2k5ww" Feb 13 15:53:10.622930 kubelet[3281]: I0213 15:53:10.622556 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vql9v\" (UniqueName: \"kubernetes.io/projected/b40d3451-a208-4d8a-9f86-9c5a4c80c198-kube-api-access-vql9v\") pod \"kube-proxy-2k5ww\" (UID: \"b40d3451-a208-4d8a-9f86-9c5a4c80c198\") " pod="kube-system/kube-proxy-2k5ww" Feb 13 15:53:10.622930 kubelet[3281]: I0213 15:53:10.622589 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/46757bdc-3b7a-45cd-aced-95e528681413-run\") pod \"kube-flannel-ds-7sx4b\" (UID: \"46757bdc-3b7a-45cd-aced-95e528681413\") " pod="kube-flannel/kube-flannel-ds-7sx4b" Feb 13 15:53:10.622930 kubelet[3281]: I0213 
15:53:10.622630 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/46757bdc-3b7a-45cd-aced-95e528681413-cni\") pod \"kube-flannel-ds-7sx4b\" (UID: \"46757bdc-3b7a-45cd-aced-95e528681413\") " pod="kube-flannel/kube-flannel-ds-7sx4b" Feb 13 15:53:10.622930 kubelet[3281]: I0213 15:53:10.622659 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b40d3451-a208-4d8a-9f86-9c5a4c80c198-xtables-lock\") pod \"kube-proxy-2k5ww\" (UID: \"b40d3451-a208-4d8a-9f86-9c5a4c80c198\") " pod="kube-system/kube-proxy-2k5ww" Feb 13 15:53:10.622930 kubelet[3281]: I0213 15:53:10.622686 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b40d3451-a208-4d8a-9f86-9c5a4c80c198-kube-proxy\") pod \"kube-proxy-2k5ww\" (UID: \"b40d3451-a208-4d8a-9f86-9c5a4c80c198\") " pod="kube-system/kube-proxy-2k5ww" Feb 13 15:53:10.623087 kubelet[3281]: I0213 15:53:10.622715 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/46757bdc-3b7a-45cd-aced-95e528681413-cni-plugin\") pod \"kube-flannel-ds-7sx4b\" (UID: \"46757bdc-3b7a-45cd-aced-95e528681413\") " pod="kube-flannel/kube-flannel-ds-7sx4b" Feb 13 15:53:10.623087 kubelet[3281]: I0213 15:53:10.622739 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/46757bdc-3b7a-45cd-aced-95e528681413-flannel-cfg\") pod \"kube-flannel-ds-7sx4b\" (UID: \"46757bdc-3b7a-45cd-aced-95e528681413\") " pod="kube-flannel/kube-flannel-ds-7sx4b" Feb 13 15:53:10.871210 containerd[1894]: time="2025-02-13T15:53:10.870990329Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-flannel-ds-7sx4b,Uid:46757bdc-3b7a-45cd-aced-95e528681413,Namespace:kube-flannel,Attempt:0,}" Feb 13 15:53:10.879683 containerd[1894]: time="2025-02-13T15:53:10.879637716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2k5ww,Uid:b40d3451-a208-4d8a-9f86-9c5a4c80c198,Namespace:kube-system,Attempt:0,}" Feb 13 15:53:10.933663 containerd[1894]: time="2025-02-13T15:53:10.932586249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:53:10.933663 containerd[1894]: time="2025-02-13T15:53:10.932664424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:53:10.933663 containerd[1894]: time="2025-02-13T15:53:10.932682905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:53:10.933663 containerd[1894]: time="2025-02-13T15:53:10.932776943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:53:10.982155 systemd[1]: Started cri-containerd-9054beb9c1387919b7280bbcda035c610f1bb2fbd77bfb1820262e6fe0bfa0a0.scope - libcontainer container 9054beb9c1387919b7280bbcda035c610f1bb2fbd77bfb1820262e6fe0bfa0a0. Feb 13 15:53:11.000753 containerd[1894]: time="2025-02-13T15:53:11.000347174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:53:11.000753 containerd[1894]: time="2025-02-13T15:53:11.000501766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:53:11.000753 containerd[1894]: time="2025-02-13T15:53:11.000524899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:53:11.000753 containerd[1894]: time="2025-02-13T15:53:11.000628094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:53:11.059159 systemd[1]: Started cri-containerd-c24de36a462c7417e0c59cb86e241adcf87b423ab4f3126efef7afc4118954e5.scope - libcontainer container c24de36a462c7417e0c59cb86e241adcf87b423ab4f3126efef7afc4118954e5. Feb 13 15:53:11.111894 containerd[1894]: time="2025-02-13T15:53:11.111431621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2k5ww,Uid:b40d3451-a208-4d8a-9f86-9c5a4c80c198,Namespace:kube-system,Attempt:0,} returns sandbox id \"c24de36a462c7417e0c59cb86e241adcf87b423ab4f3126efef7afc4118954e5\"" Feb 13 15:53:11.117659 containerd[1894]: time="2025-02-13T15:53:11.116898986Z" level=info msg="CreateContainer within sandbox \"c24de36a462c7417e0c59cb86e241adcf87b423ab4f3126efef7afc4118954e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:53:11.128688 containerd[1894]: time="2025-02-13T15:53:11.127477535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-7sx4b,Uid:46757bdc-3b7a-45cd-aced-95e528681413,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"9054beb9c1387919b7280bbcda035c610f1bb2fbd77bfb1820262e6fe0bfa0a0\"" Feb 13 15:53:11.130629 containerd[1894]: time="2025-02-13T15:53:11.130597323Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 15:53:11.155654 containerd[1894]: time="2025-02-13T15:53:11.155532685Z" level=info msg="CreateContainer within sandbox \"c24de36a462c7417e0c59cb86e241adcf87b423ab4f3126efef7afc4118954e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f19935d178b5ffecb281956772f35860589b10b04f0b7c97ba12cf652a2979de\"" Feb 13 15:53:11.158045 containerd[1894]: time="2025-02-13T15:53:11.156510657Z" level=info msg="StartContainer for 
\"f19935d178b5ffecb281956772f35860589b10b04f0b7c97ba12cf652a2979de\"" Feb 13 15:53:11.190350 systemd[1]: Started cri-containerd-f19935d178b5ffecb281956772f35860589b10b04f0b7c97ba12cf652a2979de.scope - libcontainer container f19935d178b5ffecb281956772f35860589b10b04f0b7c97ba12cf652a2979de. Feb 13 15:53:11.231937 containerd[1894]: time="2025-02-13T15:53:11.231875259Z" level=info msg="StartContainer for \"f19935d178b5ffecb281956772f35860589b10b04f0b7c97ba12cf652a2979de\" returns successfully" Feb 13 15:53:11.895393 kubelet[3281]: I0213 15:53:11.895352 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2k5ww" podStartSLOduration=1.895269461 podStartE2EDuration="1.895269461s" podCreationTimestamp="2025-02-13 15:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:53:11.894043847 +0000 UTC m=+12.408266557" watchObservedRunningTime="2025-02-13 15:53:11.895269461 +0000 UTC m=+12.409492170" Feb 13 15:53:13.296931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount835624267.mount: Deactivated successfully. 
Feb 13 15:53:13.364997 containerd[1894]: time="2025-02-13T15:53:13.364928987Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:13.366892 containerd[1894]: time="2025-02-13T15:53:13.366600187Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Feb 13 15:53:13.369839 containerd[1894]: time="2025-02-13T15:53:13.369067690Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:13.373875 containerd[1894]: time="2025-02-13T15:53:13.373811914Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:13.374822 containerd[1894]: time="2025-02-13T15:53:13.374780388Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.244006811s" Feb 13 15:53:13.374822 containerd[1894]: time="2025-02-13T15:53:13.374820393Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Feb 13 15:53:13.377726 containerd[1894]: time="2025-02-13T15:53:13.377687296Z" level=info msg="CreateContainer within sandbox \"9054beb9c1387919b7280bbcda035c610f1bb2fbd77bfb1820262e6fe0bfa0a0\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 15:53:13.401879 containerd[1894]: 
time="2025-02-13T15:53:13.401831443Z" level=info msg="CreateContainer within sandbox \"9054beb9c1387919b7280bbcda035c610f1bb2fbd77bfb1820262e6fe0bfa0a0\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"70d9fdabf33bbf1ae5722098bfd3f6765cebc13ca17fa62bc7136f8884728460\"" Feb 13 15:53:13.403193 containerd[1894]: time="2025-02-13T15:53:13.403158398Z" level=info msg="StartContainer for \"70d9fdabf33bbf1ae5722098bfd3f6765cebc13ca17fa62bc7136f8884728460\"" Feb 13 15:53:13.441199 systemd[1]: Started cri-containerd-70d9fdabf33bbf1ae5722098bfd3f6765cebc13ca17fa62bc7136f8884728460.scope - libcontainer container 70d9fdabf33bbf1ae5722098bfd3f6765cebc13ca17fa62bc7136f8884728460. Feb 13 15:53:13.475177 systemd[1]: cri-containerd-70d9fdabf33bbf1ae5722098bfd3f6765cebc13ca17fa62bc7136f8884728460.scope: Deactivated successfully. Feb 13 15:53:13.480393 containerd[1894]: time="2025-02-13T15:53:13.480337578Z" level=info msg="StartContainer for \"70d9fdabf33bbf1ae5722098bfd3f6765cebc13ca17fa62bc7136f8884728460\" returns successfully" Feb 13 15:53:13.546840 containerd[1894]: time="2025-02-13T15:53:13.546732218Z" level=info msg="shim disconnected" id=70d9fdabf33bbf1ae5722098bfd3f6765cebc13ca17fa62bc7136f8884728460 namespace=k8s.io Feb 13 15:53:13.546840 containerd[1894]: time="2025-02-13T15:53:13.546828085Z" level=warning msg="cleaning up after shim disconnected" id=70d9fdabf33bbf1ae5722098bfd3f6765cebc13ca17fa62bc7136f8884728460 namespace=k8s.io Feb 13 15:53:13.546840 containerd[1894]: time="2025-02-13T15:53:13.546845942Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:53:13.891972 containerd[1894]: time="2025-02-13T15:53:13.891325254Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 15:53:14.197731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70d9fdabf33bbf1ae5722098bfd3f6765cebc13ca17fa62bc7136f8884728460-rootfs.mount: Deactivated successfully. 
Feb 13 15:53:16.228089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2106659261.mount: Deactivated successfully. Feb 13 15:53:18.278033 containerd[1894]: time="2025-02-13T15:53:18.277741818Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:18.280512 containerd[1894]: time="2025-02-13T15:53:18.280418438Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Feb 13 15:53:18.283531 containerd[1894]: time="2025-02-13T15:53:18.281647386Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:18.286653 containerd[1894]: time="2025-02-13T15:53:18.286612754Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:53:18.288735 containerd[1894]: time="2025-02-13T15:53:18.288693907Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 4.397300281s" Feb 13 15:53:18.288822 containerd[1894]: time="2025-02-13T15:53:18.288739263Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Feb 13 15:53:18.293374 containerd[1894]: time="2025-02-13T15:53:18.293325408Z" level=info msg="CreateContainer within sandbox \"9054beb9c1387919b7280bbcda035c610f1bb2fbd77bfb1820262e6fe0bfa0a0\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:53:18.318093 containerd[1894]: time="2025-02-13T15:53:18.317791425Z" level=info msg="CreateContainer within sandbox \"9054beb9c1387919b7280bbcda035c610f1bb2fbd77bfb1820262e6fe0bfa0a0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"208b1da94b52aa6943726debbb1689d74c168d277ab0deef12a100d8d823c337\"" Feb 13 15:53:18.321915 containerd[1894]: time="2025-02-13T15:53:18.320884483Z" level=info msg="StartContainer for \"208b1da94b52aa6943726debbb1689d74c168d277ab0deef12a100d8d823c337\"" Feb 13 15:53:18.374170 systemd[1]: Started cri-containerd-208b1da94b52aa6943726debbb1689d74c168d277ab0deef12a100d8d823c337.scope - libcontainer container 208b1da94b52aa6943726debbb1689d74c168d277ab0deef12a100d8d823c337. Feb 13 15:53:18.451723 systemd[1]: cri-containerd-208b1da94b52aa6943726debbb1689d74c168d277ab0deef12a100d8d823c337.scope: Deactivated successfully. Feb 13 15:53:18.461219 containerd[1894]: time="2025-02-13T15:53:18.461023383Z" level=info msg="StartContainer for \"208b1da94b52aa6943726debbb1689d74c168d277ab0deef12a100d8d823c337\" returns successfully" Feb 13 15:53:18.485314 kubelet[3281]: I0213 15:53:18.485060 3281 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:53:18.504873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-208b1da94b52aa6943726debbb1689d74c168d277ab0deef12a100d8d823c337-rootfs.mount: Deactivated successfully. 
Feb 13 15:53:18.548088 kubelet[3281]: I0213 15:53:18.547559 3281 topology_manager.go:215] "Topology Admit Handler" podUID="2cea4ed9-8be1-4887-b7dc-84bbe01a3f18" podNamespace="kube-system" podName="coredns-76f75df574-btc7n" Feb 13 15:53:18.554776 kubelet[3281]: I0213 15:53:18.554730 3281 topology_manager.go:215] "Topology Admit Handler" podUID="9a5a98ea-68f7-4224-b19b-4290fa5b423e" podNamespace="kube-system" podName="coredns-76f75df574-m4qb6" Feb 13 15:53:18.579136 systemd[1]: Created slice kubepods-burstable-pod9a5a98ea_68f7_4224_b19b_4290fa5b423e.slice - libcontainer container kubepods-burstable-pod9a5a98ea_68f7_4224_b19b_4290fa5b423e.slice. Feb 13 15:53:18.586393 containerd[1894]: time="2025-02-13T15:53:18.586315445Z" level=info msg="shim disconnected" id=208b1da94b52aa6943726debbb1689d74c168d277ab0deef12a100d8d823c337 namespace=k8s.io Feb 13 15:53:18.586393 containerd[1894]: time="2025-02-13T15:53:18.586379986Z" level=warning msg="cleaning up after shim disconnected" id=208b1da94b52aa6943726debbb1689d74c168d277ab0deef12a100d8d823c337 namespace=k8s.io Feb 13 15:53:18.586393 containerd[1894]: time="2025-02-13T15:53:18.586392585Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:53:18.603567 systemd[1]: Created slice kubepods-burstable-pod2cea4ed9_8be1_4887_b7dc_84bbe01a3f18.slice - libcontainer container kubepods-burstable-pod2cea4ed9_8be1_4887_b7dc_84bbe01a3f18.slice. 
Feb 13 15:53:18.700805 kubelet[3281]: I0213 15:53:18.700668 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2959\" (UniqueName: \"kubernetes.io/projected/2cea4ed9-8be1-4887-b7dc-84bbe01a3f18-kube-api-access-q2959\") pod \"coredns-76f75df574-btc7n\" (UID: \"2cea4ed9-8be1-4887-b7dc-84bbe01a3f18\") " pod="kube-system/coredns-76f75df574-btc7n" Feb 13 15:53:18.700805 kubelet[3281]: I0213 15:53:18.700729 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a5a98ea-68f7-4224-b19b-4290fa5b423e-config-volume\") pod \"coredns-76f75df574-m4qb6\" (UID: \"9a5a98ea-68f7-4224-b19b-4290fa5b423e\") " pod="kube-system/coredns-76f75df574-m4qb6" Feb 13 15:53:18.707773 kubelet[3281]: I0213 15:53:18.707728 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2cea4ed9-8be1-4887-b7dc-84bbe01a3f18-config-volume\") pod \"coredns-76f75df574-btc7n\" (UID: \"2cea4ed9-8be1-4887-b7dc-84bbe01a3f18\") " pod="kube-system/coredns-76f75df574-btc7n" Feb 13 15:53:18.707773 kubelet[3281]: I0213 15:53:18.707782 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w79g\" (UniqueName: \"kubernetes.io/projected/9a5a98ea-68f7-4224-b19b-4290fa5b423e-kube-api-access-5w79g\") pod \"coredns-76f75df574-m4qb6\" (UID: \"9a5a98ea-68f7-4224-b19b-4290fa5b423e\") " pod="kube-system/coredns-76f75df574-m4qb6" Feb 13 15:53:18.897877 containerd[1894]: time="2025-02-13T15:53:18.897587635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m4qb6,Uid:9a5a98ea-68f7-4224-b19b-4290fa5b423e,Namespace:kube-system,Attempt:0,}" Feb 13 15:53:18.916541 containerd[1894]: time="2025-02-13T15:53:18.916093295Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-btc7n,Uid:2cea4ed9-8be1-4887-b7dc-84bbe01a3f18,Namespace:kube-system,Attempt:0,}" Feb 13 15:53:18.936477 containerd[1894]: time="2025-02-13T15:53:18.935900422Z" level=info msg="CreateContainer within sandbox \"9054beb9c1387919b7280bbcda035c610f1bb2fbd77bfb1820262e6fe0bfa0a0\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 15:53:18.992612 containerd[1894]: time="2025-02-13T15:53:18.992567789Z" level=info msg="CreateContainer within sandbox \"9054beb9c1387919b7280bbcda035c610f1bb2fbd77bfb1820262e6fe0bfa0a0\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"1ee4b9a71456c4ae7aa8c18c34c0c9a251f81da2ba713107f9f5bd2ad4bb5a82\"" Feb 13 15:53:19.000173 containerd[1894]: time="2025-02-13T15:53:19.000132036Z" level=info msg="StartContainer for \"1ee4b9a71456c4ae7aa8c18c34c0c9a251f81da2ba713107f9f5bd2ad4bb5a82\"" Feb 13 15:53:19.021627 containerd[1894]: time="2025-02-13T15:53:19.021464560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m4qb6,Uid:9a5a98ea-68f7-4224-b19b-4290fa5b423e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46347aa45bd37448ac560c3b186eae241a6e45bac0d23b5915e74f4be9daba31\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:53:19.022799 kubelet[3281]: E0213 15:53:19.022230 3281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46347aa45bd37448ac560c3b186eae241a6e45bac0d23b5915e74f4be9daba31\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:53:19.022799 kubelet[3281]: E0213 15:53:19.022309 3281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"46347aa45bd37448ac560c3b186eae241a6e45bac0d23b5915e74f4be9daba31\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-m4qb6" Feb 13 15:53:19.022799 kubelet[3281]: E0213 15:53:19.022345 3281 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46347aa45bd37448ac560c3b186eae241a6e45bac0d23b5915e74f4be9daba31\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-m4qb6" Feb 13 15:53:19.022799 kubelet[3281]: E0213 15:53:19.022435 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-m4qb6_kube-system(9a5a98ea-68f7-4224-b19b-4290fa5b423e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-m4qb6_kube-system(9a5a98ea-68f7-4224-b19b-4290fa5b423e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46347aa45bd37448ac560c3b186eae241a6e45bac0d23b5915e74f4be9daba31\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-m4qb6" podUID="9a5a98ea-68f7-4224-b19b-4290fa5b423e" Feb 13 15:53:19.036653 containerd[1894]: time="2025-02-13T15:53:19.036334667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-btc7n,Uid:2cea4ed9-8be1-4887-b7dc-84bbe01a3f18,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"392c0ce3caf24689ca6c149f975ea44a885769313ca1bdaa6d06daa1d62b792e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:53:19.037233 kubelet[3281]: E0213 15:53:19.036940 3281 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"392c0ce3caf24689ca6c149f975ea44a885769313ca1bdaa6d06daa1d62b792e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:53:19.037233 kubelet[3281]: E0213 15:53:19.037048 3281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"392c0ce3caf24689ca6c149f975ea44a885769313ca1bdaa6d06daa1d62b792e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-btc7n" Feb 13 15:53:19.037233 kubelet[3281]: E0213 15:53:19.037091 3281 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"392c0ce3caf24689ca6c149f975ea44a885769313ca1bdaa6d06daa1d62b792e\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-btc7n" Feb 13 15:53:19.037233 kubelet[3281]: E0213 15:53:19.037175 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-btc7n_kube-system(2cea4ed9-8be1-4887-b7dc-84bbe01a3f18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-btc7n_kube-system(2cea4ed9-8be1-4887-b7dc-84bbe01a3f18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"392c0ce3caf24689ca6c149f975ea44a885769313ca1bdaa6d06daa1d62b792e\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-btc7n" podUID="2cea4ed9-8be1-4887-b7dc-84bbe01a3f18" Feb 13 15:53:19.058182 systemd[1]: Started 
cri-containerd-1ee4b9a71456c4ae7aa8c18c34c0c9a251f81da2ba713107f9f5bd2ad4bb5a82.scope - libcontainer container 1ee4b9a71456c4ae7aa8c18c34c0c9a251f81da2ba713107f9f5bd2ad4bb5a82. Feb 13 15:53:19.101310 containerd[1894]: time="2025-02-13T15:53:19.101264780Z" level=info msg="StartContainer for \"1ee4b9a71456c4ae7aa8c18c34c0c9a251f81da2ba713107f9f5bd2ad4bb5a82\" returns successfully" Feb 13 15:53:20.183761 (udev-worker)[3813]: Network interface NamePolicy= disabled on kernel command line. Feb 13 15:53:20.215427 systemd-networkd[1733]: flannel.1: Link UP Feb 13 15:53:20.215439 systemd-networkd[1733]: flannel.1: Gained carrier Feb 13 15:53:21.497357 systemd-networkd[1733]: flannel.1: Gained IPv6LL Feb 13 15:53:23.748177 ntpd[1862]: Listen normally on 7 flannel.1 192.168.0.0:123 Feb 13 15:53:23.748273 ntpd[1862]: Listen normally on 8 flannel.1 [fe80::8f8:a0ff:fe87:73e%4]:123 Feb 13 15:53:23.748879 ntpd[1862]: 13 Feb 15:53:23 ntpd[1862]: Listen normally on 7 flannel.1 192.168.0.0:123 Feb 13 15:53:23.748879 ntpd[1862]: 13 Feb 15:53:23 ntpd[1862]: Listen normally on 8 flannel.1 [fe80::8f8:a0ff:fe87:73e%4]:123 Feb 13 15:53:29.783722 containerd[1894]: time="2025-02-13T15:53:29.783294635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m4qb6,Uid:9a5a98ea-68f7-4224-b19b-4290fa5b423e,Namespace:kube-system,Attempt:0,}" Feb 13 15:53:29.863815 systemd-networkd[1733]: cni0: Link UP Feb 13 15:53:29.863834 systemd-networkd[1733]: cni0: Gained carrier Feb 13 15:53:29.875579 systemd-networkd[1733]: cni0: Lost carrier Feb 13 15:53:29.875693 (udev-worker)[3929]: Network interface NamePolicy= disabled on kernel command line. 
Feb 13 15:53:29.885634 kernel: cni0: port 1(veth30185b38) entered blocking state
Feb 13 15:53:29.885756 kernel: cni0: port 1(veth30185b38) entered disabled state
Feb 13 15:53:29.884090 systemd-networkd[1733]: veth30185b38: Link UP
Feb 13 15:53:29.888499 kernel: veth30185b38: entered allmulticast mode
Feb 13 15:53:29.888918 kernel: veth30185b38: entered promiscuous mode
Feb 13 15:53:29.888981 kernel: cni0: port 1(veth30185b38) entered blocking state
Feb 13 15:53:29.890318 kernel: cni0: port 1(veth30185b38) entered forwarding state
Feb 13 15:53:29.891651 kernel: cni0: port 1(veth30185b38) entered disabled state
Feb 13 15:53:29.894136 (udev-worker)[3934]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:53:29.925025 kernel: cni0: port 1(veth30185b38) entered blocking state
Feb 13 15:53:29.925317 kernel: cni0: port 1(veth30185b38) entered forwarding state
Feb 13 15:53:29.925690 systemd-networkd[1733]: veth30185b38: Gained carrier
Feb 13 15:53:29.928785 systemd-networkd[1733]: cni0: Gained carrier
Feb 13 15:53:29.937114 containerd[1894]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018938), "name":"cbr0", "type":"bridge"}
Feb 13 15:53:29.937114 containerd[1894]: delegateAdd: netconf sent to delegate plugin:
Feb 13 15:53:29.991234 containerd[1894]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T15:53:29.991005050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:53:29.991234 containerd[1894]: time="2025-02-13T15:53:29.991167127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:53:29.991905 containerd[1894]: time="2025-02-13T15:53:29.991194355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:29.991905 containerd[1894]: time="2025-02-13T15:53:29.991307210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:30.046830 systemd[1]: run-containerd-runc-k8s.io-31bb417744c0d1668ec98170ec8f93bcb014aa54cc6a77b3466cf2c213b7d2f9-runc.KNoRT4.mount: Deactivated successfully.
Feb 13 15:53:30.064253 systemd[1]: Started cri-containerd-31bb417744c0d1668ec98170ec8f93bcb014aa54cc6a77b3466cf2c213b7d2f9.scope - libcontainer container 31bb417744c0d1668ec98170ec8f93bcb014aa54cc6a77b3466cf2c213b7d2f9.
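The netconf dumps above show the bridge plugin being handed a host-local IPAM range of 192.168.0.0/24 while the route it installs covers 192.168.0.0/17 — the Go `net.IPMask{0xff, 0xff, 0x80, 0x0}` is 255.255.128.0, i.e. a /17. A minimal sketch checking that relationship with Python's standard `ipaddress` module (values taken from the logged config):

```python
import ipaddress

# Values from the delegate netconf logged above.
node_subnet = ipaddress.ip_network("192.168.0.0/24")      # host-local IPAM range for this node
flannel_network = ipaddress.ip_network("192.168.0.0/17")  # route dst installed via cni0

# net.IPMask{0xff, 0xff, 0x80, 0x0} == 255.255.128.0 == /17
assert ipaddress.ip_network("0.0.0.0/255.255.128.0").prefixlen == 17

# The node's /24 must sit inside the /17 so traffic to other nodes'
# pod subnets is routed over the flannel overlay rather than dropped.
assert node_subnet.subnet_of(flannel_network)
print(node_subnet.num_addresses)  # pod IPs available on this node: 256
```

Each node leases a distinct /24 out of the shared /17, which is why both CoreDNS pods here receive 192.168.0.x addresses.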
Feb 13 15:53:30.146332 containerd[1894]: time="2025-02-13T15:53:30.146268754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m4qb6,Uid:9a5a98ea-68f7-4224-b19b-4290fa5b423e,Namespace:kube-system,Attempt:0,} returns sandbox id \"31bb417744c0d1668ec98170ec8f93bcb014aa54cc6a77b3466cf2c213b7d2f9\""
Feb 13 15:53:30.174897 containerd[1894]: time="2025-02-13T15:53:30.174860534Z" level=info msg="CreateContainer within sandbox \"31bb417744c0d1668ec98170ec8f93bcb014aa54cc6a77b3466cf2c213b7d2f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:53:30.206231 containerd[1894]: time="2025-02-13T15:53:30.206183616Z" level=info msg="CreateContainer within sandbox \"31bb417744c0d1668ec98170ec8f93bcb014aa54cc6a77b3466cf2c213b7d2f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"980af50b96da55ae473c965d8adea76008ed4a6b840abbca8c2ad11610ee0030\""
Feb 13 15:53:30.208884 containerd[1894]: time="2025-02-13T15:53:30.206812622Z" level=info msg="StartContainer for \"980af50b96da55ae473c965d8adea76008ed4a6b840abbca8c2ad11610ee0030\""
Feb 13 15:53:30.251414 systemd[1]: Started cri-containerd-980af50b96da55ae473c965d8adea76008ed4a6b840abbca8c2ad11610ee0030.scope - libcontainer container 980af50b96da55ae473c965d8adea76008ed4a6b840abbca8c2ad11610ee0030.
Feb 13 15:53:30.311777 containerd[1894]: time="2025-02-13T15:53:30.311213178Z" level=info msg="StartContainer for \"980af50b96da55ae473c965d8adea76008ed4a6b840abbca8c2ad11610ee0030\" returns successfully"
Feb 13 15:53:31.074707 kubelet[3281]: I0213 15:53:31.074662 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-7sx4b" podStartSLOduration=13.915335081 podStartE2EDuration="21.074625255s" podCreationTimestamp="2025-02-13 15:53:10 +0000 UTC" firstStartedPulling="2025-02-13 15:53:11.129737196 +0000 UTC m=+11.643959886" lastFinishedPulling="2025-02-13 15:53:18.289027365 +0000 UTC m=+18.803250060" observedRunningTime="2025-02-13 15:53:19.946782422 +0000 UTC m=+20.461005133" watchObservedRunningTime="2025-02-13 15:53:31.074625255 +0000 UTC m=+31.588847964"
Feb 13 15:53:31.075896 kubelet[3281]: I0213 15:53:31.074970 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-m4qb6" podStartSLOduration=21.074873673 podStartE2EDuration="21.074873673s" podCreationTimestamp="2025-02-13 15:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:53:31.074132669 +0000 UTC m=+31.588355380" watchObservedRunningTime="2025-02-13 15:53:31.074873673 +0000 UTC m=+31.589096385"
Feb 13 15:53:31.225167 systemd-networkd[1733]: cni0: Gained IPv6LL
Feb 13 15:53:31.417111 systemd-networkd[1733]: veth30185b38: Gained IPv6LL
Feb 13 15:53:33.748210 ntpd[1862]: Listen normally on 9 cni0 192.168.0.1:123
Feb 13 15:53:33.748312 ntpd[1862]: Listen normally on 10 cni0 [fe80::9007:a4ff:fe48:1ad4%5]:123
Feb 13 15:53:33.748716 ntpd[1862]: 13 Feb 15:53:33 ntpd[1862]: Listen normally on 9 cni0 192.168.0.1:123
Feb 13 15:53:33.748716 ntpd[1862]: 13 Feb 15:53:33 ntpd[1862]: Listen normally on 10 cni0 [fe80::9007:a4ff:fe48:1ad4%5]:123
Feb 13 15:53:33.748716 ntpd[1862]: 13 Feb 15:53:33 ntpd[1862]: Listen normally on 11 veth30185b38 [fe80::e0e7:b4ff:feb6:3a6e%6]:123
Feb 13 15:53:33.748372 ntpd[1862]: Listen normally on 11 veth30185b38 [fe80::e0e7:b4ff:feb6:3a6e%6]:123
Feb 13 15:53:34.782157 containerd[1894]: time="2025-02-13T15:53:34.782118881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-btc7n,Uid:2cea4ed9-8be1-4887-b7dc-84bbe01a3f18,Namespace:kube-system,Attempt:0,}"
Feb 13 15:53:34.817611 kernel: cni0: port 2(vethc84f9a0c) entered blocking state
Feb 13 15:53:34.817753 kernel: cni0: port 2(vethc84f9a0c) entered disabled state
Feb 13 15:53:34.815168 systemd-networkd[1733]: vethc84f9a0c: Link UP
Feb 13 15:53:34.818973 kernel: vethc84f9a0c: entered allmulticast mode
Feb 13 15:53:34.820028 kernel: vethc84f9a0c: entered promiscuous mode
Feb 13 15:53:34.820682 (udev-worker)[4064]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:53:34.831980 kernel: cni0: port 2(vethc84f9a0c) entered blocking state
Feb 13 15:53:34.832071 kernel: cni0: port 2(vethc84f9a0c) entered forwarding state
Feb 13 15:53:34.832778 systemd-networkd[1733]: vethc84f9a0c: Gained carrier
Feb 13 15:53:34.837915 containerd[1894]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"}
Feb 13 15:53:34.837915 containerd[1894]: delegateAdd: netconf sent to delegate plugin:
Feb 13 15:53:34.861163 containerd[1894]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":8951,"name":"cbr0","type":"bridge"}time="2025-02-13T15:53:34.861075758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:53:34.861393 containerd[1894]: time="2025-02-13T15:53:34.861366046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:53:34.861513 containerd[1894]: time="2025-02-13T15:53:34.861489655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:34.862479 containerd[1894]: time="2025-02-13T15:53:34.862411394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:53:34.903273 systemd[1]: Started cri-containerd-2159a43e8983745f5abddbbce42e7e9d84196415339757192e031dfb64151abf.scope - libcontainer container 2159a43e8983745f5abddbbce42e7e9d84196415339757192e031dfb64151abf.
Feb 13 15:53:34.958052 containerd[1894]: time="2025-02-13T15:53:34.958012191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-btc7n,Uid:2cea4ed9-8be1-4887-b7dc-84bbe01a3f18,Namespace:kube-system,Attempt:0,} returns sandbox id \"2159a43e8983745f5abddbbce42e7e9d84196415339757192e031dfb64151abf\""
Feb 13 15:53:34.964787 containerd[1894]: time="2025-02-13T15:53:34.964732615Z" level=info msg="CreateContainer within sandbox \"2159a43e8983745f5abddbbce42e7e9d84196415339757192e031dfb64151abf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:53:35.016887 containerd[1894]: time="2025-02-13T15:53:35.016837358Z" level=info msg="CreateContainer within sandbox \"2159a43e8983745f5abddbbce42e7e9d84196415339757192e031dfb64151abf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9511396a9ebeaf253d20525bd1c78023efcddd9aff3548dd2f6817246fb3c5a\""
Feb 13 15:53:35.017567 containerd[1894]: time="2025-02-13T15:53:35.017519843Z" level=info msg="StartContainer for \"a9511396a9ebeaf253d20525bd1c78023efcddd9aff3548dd2f6817246fb3c5a\""
Feb 13 15:53:35.053191 systemd[1]: Started cri-containerd-a9511396a9ebeaf253d20525bd1c78023efcddd9aff3548dd2f6817246fb3c5a.scope - libcontainer container a9511396a9ebeaf253d20525bd1c78023efcddd9aff3548dd2f6817246fb3c5a.
Feb 13 15:53:35.093726 containerd[1894]: time="2025-02-13T15:53:35.093676761Z" level=info msg="StartContainer for \"a9511396a9ebeaf253d20525bd1c78023efcddd9aff3548dd2f6817246fb3c5a\" returns successfully"
Feb 13 15:53:36.027902 systemd-networkd[1733]: vethc84f9a0c: Gained IPv6LL
Feb 13 15:53:36.107911 kubelet[3281]: I0213 15:53:36.107873 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-btc7n" podStartSLOduration=26.107624294 podStartE2EDuration="26.107624294s" podCreationTimestamp="2025-02-13 15:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:53:36.075856713 +0000 UTC m=+36.590079404" watchObservedRunningTime="2025-02-13 15:53:36.107624294 +0000 UTC m=+36.621846989"
Feb 13 15:53:38.748250 ntpd[1862]: Listen normally on 12 vethc84f9a0c [fe80::bc98:99ff:fe2d:a9fc%7]:123
Feb 13 15:53:38.748678 ntpd[1862]: 13 Feb 15:53:38 ntpd[1862]: Listen normally on 12 vethc84f9a0c [fe80::bc98:99ff:fe2d:a9fc%7]:123
Feb 13 15:53:46.964981 systemd[1]: Started sshd@7-172.31.30.72:22-139.178.89.65:44014.service - OpenSSH per-connection server daemon (139.178.89.65:44014).
Feb 13 15:53:47.204510 sshd[4228]: Accepted publickey for core from 139.178.89.65 port 44014 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:53:47.205461 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:53:47.213721 systemd-logind[1870]: New session 8 of user core.
Feb 13 15:53:47.224513 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:53:47.476362 sshd[4230]: Connection closed by 139.178.89.65 port 44014
Feb 13 15:53:47.477063 sshd-session[4228]: pam_unix(sshd:session): session closed for user core
Feb 13 15:53:47.490619 systemd[1]: sshd@7-172.31.30.72:22-139.178.89.65:44014.service: Deactivated successfully.
Feb 13 15:53:47.492435 systemd-logind[1870]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:53:47.495058 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:53:47.497613 systemd-logind[1870]: Removed session 8.
Feb 13 15:53:52.513356 systemd[1]: Started sshd@8-172.31.30.72:22-139.178.89.65:44026.service - OpenSSH per-connection server daemon (139.178.89.65:44026).
Feb 13 15:53:52.749341 sshd[4263]: Accepted publickey for core from 139.178.89.65 port 44026 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:53:52.750206 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:53:52.769484 systemd-logind[1870]: New session 9 of user core.
Feb 13 15:53:52.783277 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:53:53.036616 sshd[4265]: Connection closed by 139.178.89.65 port 44026
Feb 13 15:53:53.038524 sshd-session[4263]: pam_unix(sshd:session): session closed for user core
Feb 13 15:53:53.042166 systemd[1]: sshd@8-172.31.30.72:22-139.178.89.65:44026.service: Deactivated successfully.
Feb 13 15:53:53.044918 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:53:53.047613 systemd-logind[1870]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:53:53.049218 systemd-logind[1870]: Removed session 9.
Feb 13 15:53:58.085018 systemd[1]: Started sshd@9-172.31.30.72:22-139.178.89.65:57102.service - OpenSSH per-connection server daemon (139.178.89.65:57102).
Feb 13 15:53:58.250833 sshd[4298]: Accepted publickey for core from 139.178.89.65 port 57102 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:53:58.252805 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:53:58.262161 systemd-logind[1870]: New session 10 of user core.
Feb 13 15:53:58.270320 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:53:58.493636 sshd[4300]: Connection closed by 139.178.89.65 port 57102
Feb 13 15:53:58.495709 sshd-session[4298]: pam_unix(sshd:session): session closed for user core
Feb 13 15:53:58.499880 systemd-logind[1870]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:53:58.500960 systemd[1]: sshd@9-172.31.30.72:22-139.178.89.65:57102.service: Deactivated successfully.
Feb 13 15:53:58.503384 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:53:58.504540 systemd-logind[1870]: Removed session 10.
Feb 13 15:54:03.539373 systemd[1]: Started sshd@10-172.31.30.72:22-139.178.89.65:57114.service - OpenSSH per-connection server daemon (139.178.89.65:57114).
Feb 13 15:54:03.715683 sshd[4337]: Accepted publickey for core from 139.178.89.65 port 57114 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:54:03.716628 sshd-session[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:03.724004 systemd-logind[1870]: New session 11 of user core.
Feb 13 15:54:03.728729 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 15:54:03.938130 sshd[4339]: Connection closed by 139.178.89.65 port 57114
Feb 13 15:54:03.938852 sshd-session[4337]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:03.944094 systemd[1]: sshd@10-172.31.30.72:22-139.178.89.65:57114.service: Deactivated successfully.
Feb 13 15:54:03.946666 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 15:54:03.948928 systemd-logind[1870]: Session 11 logged out. Waiting for processes to exit.
Feb 13 15:54:03.950152 systemd-logind[1870]: Removed session 11.
Feb 13 15:54:03.975438 systemd[1]: Started sshd@11-172.31.30.72:22-139.178.89.65:57120.service - OpenSSH per-connection server daemon (139.178.89.65:57120).
Feb 13 15:54:04.142747 sshd[4351]: Accepted publickey for core from 139.178.89.65 port 57120 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:54:04.143676 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:04.153109 systemd-logind[1870]: New session 12 of user core.
Feb 13 15:54:04.158206 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 15:54:04.420927 sshd[4353]: Connection closed by 139.178.89.65 port 57120
Feb 13 15:54:04.421921 sshd-session[4351]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:04.427730 systemd-logind[1870]: Session 12 logged out. Waiting for processes to exit.
Feb 13 15:54:04.430074 systemd[1]: sshd@11-172.31.30.72:22-139.178.89.65:57120.service: Deactivated successfully.
Feb 13 15:54:04.432221 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 15:54:04.434893 systemd-logind[1870]: Removed session 12.
Feb 13 15:54:04.464164 systemd[1]: Started sshd@12-172.31.30.72:22-139.178.89.65:57136.service - OpenSSH per-connection server daemon (139.178.89.65:57136).
Feb 13 15:54:04.642558 sshd[4362]: Accepted publickey for core from 139.178.89.65 port 57136 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:54:04.644387 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:04.649254 systemd-logind[1870]: New session 13 of user core.
Feb 13 15:54:04.657276 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 15:54:04.877909 sshd[4364]: Connection closed by 139.178.89.65 port 57136
Feb 13 15:54:04.880186 sshd-session[4362]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:04.889208 systemd[1]: sshd@12-172.31.30.72:22-139.178.89.65:57136.service: Deactivated successfully.
Feb 13 15:54:04.895798 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 15:54:04.898308 systemd-logind[1870]: Session 13 logged out. Waiting for processes to exit.
Feb 13 15:54:04.901375 systemd-logind[1870]: Removed session 13.
Feb 13 15:54:09.919667 systemd[1]: Started sshd@13-172.31.30.72:22-139.178.89.65:53064.service - OpenSSH per-connection server daemon (139.178.89.65:53064).
Feb 13 15:54:10.117974 sshd[4397]: Accepted publickey for core from 139.178.89.65 port 53064 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:54:10.118729 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:10.129318 systemd-logind[1870]: New session 14 of user core.
Feb 13 15:54:10.135413 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 15:54:10.431392 sshd[4399]: Connection closed by 139.178.89.65 port 53064
Feb 13 15:54:10.433203 sshd-session[4397]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:10.437241 systemd-logind[1870]: Session 14 logged out. Waiting for processes to exit.
Feb 13 15:54:10.438519 systemd[1]: sshd@13-172.31.30.72:22-139.178.89.65:53064.service: Deactivated successfully.
Feb 13 15:54:10.440806 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 15:54:10.441834 systemd-logind[1870]: Removed session 14.
Feb 13 15:54:10.461583 systemd[1]: Started sshd@14-172.31.30.72:22-139.178.89.65:53076.service - OpenSSH per-connection server daemon (139.178.89.65:53076).
Feb 13 15:54:10.626028 sshd[4410]: Accepted publickey for core from 139.178.89.65 port 53076 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:54:10.627495 sshd-session[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:10.632288 systemd-logind[1870]: New session 15 of user core.
Feb 13 15:54:10.640304 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 15:54:11.232515 sshd[4418]: Connection closed by 139.178.89.65 port 53076
Feb 13 15:54:11.234904 sshd-session[4410]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:11.240662 systemd[1]: sshd@14-172.31.30.72:22-139.178.89.65:53076.service: Deactivated successfully.
Feb 13 15:54:11.243488 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 15:54:11.245688 systemd-logind[1870]: Session 15 logged out. Waiting for processes to exit.
Feb 13 15:54:11.247411 systemd-logind[1870]: Removed session 15.
Feb 13 15:54:11.284913 systemd[1]: Started sshd@15-172.31.30.72:22-139.178.89.65:53092.service - OpenSSH per-connection server daemon (139.178.89.65:53092).
Feb 13 15:54:11.498387 sshd[4443]: Accepted publickey for core from 139.178.89.65 port 53092 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:54:11.500661 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:11.518714 systemd-logind[1870]: New session 16 of user core.
Feb 13 15:54:11.527343 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 15:54:13.548965 sshd[4447]: Connection closed by 139.178.89.65 port 53092
Feb 13 15:54:13.552211 sshd-session[4443]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:13.561104 systemd-logind[1870]: Session 16 logged out. Waiting for processes to exit.
Feb 13 15:54:13.562695 systemd[1]: sshd@15-172.31.30.72:22-139.178.89.65:53092.service: Deactivated successfully.
Feb 13 15:54:13.567275 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 15:54:13.583386 systemd-logind[1870]: Removed session 16.
Feb 13 15:54:13.591611 systemd[1]: Started sshd@16-172.31.30.72:22-139.178.89.65:53100.service - OpenSSH per-connection server daemon (139.178.89.65:53100).
Feb 13 15:54:13.754028 sshd[4463]: Accepted publickey for core from 139.178.89.65 port 53100 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:54:13.756581 sshd-session[4463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:13.764608 systemd-logind[1870]: New session 17 of user core.
Feb 13 15:54:13.772165 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:54:14.270794 sshd[4465]: Connection closed by 139.178.89.65 port 53100
Feb 13 15:54:14.273687 sshd-session[4463]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:14.281771 systemd-logind[1870]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:54:14.283129 systemd[1]: sshd@16-172.31.30.72:22-139.178.89.65:53100.service: Deactivated successfully.
Feb 13 15:54:14.288295 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:54:14.290137 systemd-logind[1870]: Removed session 17.
Feb 13 15:54:14.308553 systemd[1]: Started sshd@17-172.31.30.72:22-139.178.89.65:53104.service - OpenSSH per-connection server daemon (139.178.89.65:53104).
Feb 13 15:54:14.493595 sshd[4474]: Accepted publickey for core from 139.178.89.65 port 53104 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:54:14.494756 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:14.503730 systemd-logind[1870]: New session 18 of user core.
Feb 13 15:54:14.512214 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:54:14.789224 sshd[4476]: Connection closed by 139.178.89.65 port 53104
Feb 13 15:54:14.791203 sshd-session[4474]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:14.796914 systemd-logind[1870]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:54:14.797591 systemd[1]: sshd@17-172.31.30.72:22-139.178.89.65:53104.service: Deactivated successfully.
Feb 13 15:54:14.801622 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:54:14.803508 systemd-logind[1870]: Removed session 18.
Feb 13 15:54:19.829566 systemd[1]: Started sshd@18-172.31.30.72:22-139.178.89.65:40400.service - OpenSSH per-connection server daemon (139.178.89.65:40400).
Feb 13 15:54:20.029992 sshd[4508]: Accepted publickey for core from 139.178.89.65 port 40400 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:54:20.033182 sshd-session[4508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:20.055149 systemd-logind[1870]: New session 19 of user core.
Feb 13 15:54:20.072294 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:54:20.331391 sshd[4510]: Connection closed by 139.178.89.65 port 40400
Feb 13 15:54:20.332295 sshd-session[4508]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:20.338832 systemd[1]: sshd@18-172.31.30.72:22-139.178.89.65:40400.service: Deactivated successfully.
Feb 13 15:54:20.342600 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:54:20.344634 systemd-logind[1870]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:54:20.346584 systemd-logind[1870]: Removed session 19.
Feb 13 15:54:25.371758 systemd[1]: Started sshd@19-172.31.30.72:22-139.178.89.65:33606.service - OpenSSH per-connection server daemon (139.178.89.65:33606).
Feb 13 15:54:25.541223 sshd[4546]: Accepted publickey for core from 139.178.89.65 port 33606 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:54:25.543484 sshd-session[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:25.552657 systemd-logind[1870]: New session 20 of user core.
Feb 13 15:54:25.560191 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:54:25.843032 sshd[4548]: Connection closed by 139.178.89.65 port 33606
Feb 13 15:54:25.843635 sshd-session[4546]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:25.850611 systemd-logind[1870]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:54:25.851722 systemd[1]: sshd@19-172.31.30.72:22-139.178.89.65:33606.service: Deactivated successfully.
Feb 13 15:54:25.855675 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:54:25.857748 systemd-logind[1870]: Removed session 20.
Feb 13 15:54:30.875499 systemd[1]: Started sshd@20-172.31.30.72:22-139.178.89.65:33612.service - OpenSSH per-connection server daemon (139.178.89.65:33612).
Feb 13 15:54:31.058156 sshd[4601]: Accepted publickey for core from 139.178.89.65 port 33612 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:54:31.059420 sshd-session[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:31.080513 systemd-logind[1870]: New session 21 of user core.
Feb 13 15:54:31.084194 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 15:54:31.336472 sshd[4603]: Connection closed by 139.178.89.65 port 33612
Feb 13 15:54:31.338508 sshd-session[4601]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:31.348901 systemd-logind[1870]: Session 21 logged out. Waiting for processes to exit.
Feb 13 15:54:31.349360 systemd[1]: sshd@20-172.31.30.72:22-139.178.89.65:33612.service: Deactivated successfully.
Feb 13 15:54:31.355031 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 15:54:31.356551 systemd-logind[1870]: Removed session 21.
Feb 13 15:54:36.375568 systemd[1]: Started sshd@21-172.31.30.72:22-139.178.89.65:34178.service - OpenSSH per-connection server daemon (139.178.89.65:34178).
Feb 13 15:54:36.592986 sshd[4635]: Accepted publickey for core from 139.178.89.65 port 34178 ssh2: RSA SHA256:nI/XXSxRjPl4WK5zIl4IIln7LmeKOaKrwYZMVq9W3UY
Feb 13 15:54:36.594150 sshd-session[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:54:36.601820 systemd-logind[1870]: New session 22 of user core.
Feb 13 15:54:36.610162 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 15:54:36.832120 sshd[4637]: Connection closed by 139.178.89.65 port 34178
Feb 13 15:54:36.833941 sshd-session[4635]: pam_unix(sshd:session): session closed for user core
Feb 13 15:54:36.838327 systemd-logind[1870]: Session 22 logged out. Waiting for processes to exit.
Feb 13 15:54:36.840122 systemd[1]: sshd@21-172.31.30.72:22-139.178.89.65:34178.service: Deactivated successfully.
Feb 13 15:54:36.842915 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 15:54:36.845699 systemd-logind[1870]: Removed session 22.